
Saturday, July 24, 2010
Books: Inside Larry and Sergey's Brain

Friday, July 23, 2010
Books: Inside Steve's Brain

Monday, May 31, 2010
Solaris - Useful commands
1. Allow root to log in from remote terminals
vi /etc/default/login
Insert a # before the CONSOLE=/dev/console line to comment it out.
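For reference, the change just comments out the CONSOLE entry. A minimal non-interactive sketch of the same edit (assuming the stock file layout, and keeping a backup since Solaris sed has no in-place option):
cp /etc/default/login /etc/default/login.bak
sed 's/^CONSOLE=/#CONSOLE=/' /etc/default/login.bak > /etc/default/login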
2. Enable the NFS server service
The NFS server service depends on a slew of other services, and enabling all of them manually would be tedious. The svcadm command makes this simple with a single command:
svcadm -v enable -r network/nfs/server
The -v option makes the command output verbose details about the services enabled. You can use the -t option (..enable -rt network…) to enable these services temporarily (so that they will not be automatically enabled when the system reboots). By default, enabling a service will enable it permanently (persistent across reboots until it is disabled).
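To verify the result, the svcs command can show both the dependencies that were brought online and the state of the NFS server itself; for example:
svcs -d network/nfs/server   # list the services network/nfs/server depends on, with their states
svcs -l network/nfs/server   # long listing showing state, dependencies and the service log file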
3. Enable Backspace key
If you hit the Backspace key, the terminal prints ^H instead of erasing. To enable Backspace, type stty erase followed by a space, press the Backspace key, and then press Enter.
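To keep the setting across logins, the same command can be added to the login profile; a minimal sketch, assuming a Bourne/ksh-style login shell and that stty accepts the two-character ^H notation:
stty erase ^H                            # fix the current session
echo 'stty erase ^H' >> $HOME/.profile   # apply it at every login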
VMware - Windows XP setup cannot find any hard disk drives during installation
- When installing Windows XP in a virtual machine on VMware ESX, Windows setup fails to detect hard disks in the virtual machine
- You receive the following error:
Setup did not find any hard disk drives installed in your computer.
When installing Windows XP in a virtual machine, setup cannot find the hard drives because no compatible disk controller driver ships on the Windows XP setup disc. You must supply the correct driver during setup to proceed with the installation. To supply the correct driver:
- When creating the new virtual machine, select the BusLogic option for the Virtual SCSI Controller mode.
- Download the VMware SCSI driver floppy image from http://download3.vmware.com/software/vmscsi-1.2.0.4.flp and upload it to the datastore (see the download sketch after these steps).
- Attach or insert the Windows XP installation media and connect it to the virtual machine.
- Power on the virtual machine and open a console view of the virtual machine.
- Click in the console to assign keyboard control to the virtual machine.
- When the blue Windows setup screen appears, press F6 when prompted.
- When prompted for additional drivers, press S.
- Attach VMware SCSI driver floppy image to virtual floppy drive.
- Press Enter to select the VMware SCSI Controller driver, and then Enter again to continue setup.
- Complete Windows XP setup normally from this point.
- After setup has completed the first phase of installation and restarts the virtual machine, you need to disconnect or unassign the virtual floppy drive or the virtual machine may attempt to boot from the floppy image.
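As a small convenience, the driver floppy image referenced in the steps above can be fetched from the command line before uploading it with the datastore browser; a sketch assuming a workstation with wget installed:
wget http://download3.vmware.com/software/vmscsi-1.2.0.4.flp   # download the driver floppy image named above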
Thursday, May 13, 2010
Avere - FXT Series - Tiered Storage appliance
The FXT Series nodes are available in two models, the FXT 2300 and FXT 2500. Each FXT Series node contains 64 GB of read-only DRAM and 1 GB of battery-backed NVRAM. The FXT 2300, list priced at $52,000, contains 1.2 TB of 15,000 rpm SAS drives. The FXT 2500, at $72,000, contains 3.5 TB of SAS disk. The nodes can scale out under a global namespace. CEO Ron Bianchini said Avere has tested up to 25 nodes in a cluster internally, and the largest cluster running at a beta testing site contains eight nodes, though there is no technical limit on the number of nodes a cluster can support.
The clustered NAS system can be attached to third-party NFS NAS systems for archival and backup storage. "Any NFS Version 3 or above connecting over TCP is compatible," Bianchini said. Bianchini was CEO at Spinnaker Networks when NetApp bought the clustered NAS company in 2003.
Avere customers can set a data-retention schedule using a slider in the user interface to tell the FXT system how closely to synchronize the third-party SATA NFS device (which Avere calls "mass storage"). If it's set to zero, the FXT Series will ensure that writes to the mass storage device have been completed before acknowledging a write to the application. The slider can be pushed up to four hours, meaning mass storage will be up to four hours out of synch with the primary FXT Series.
Bianchini said two of eight beta sites are running with the retention policy set to zero. "The downside is that you don't get the same scale with writes as you do with reads" because the system has to wait for the SATA-based filer to respond before committing writes to primary storage, he said. "The environments using it this way aren't doing a lot of writes."
Bianchini said the FXT Series' proprietary algorithms assess patterns in application requests for blocks of storage within files -- including whether they call for sequential or random writes or reads -- and then assign blocks to appropriate storage tiers for optimal performance. In Version 1.0 of the product, the primary tiers are DRAM for read-only access to "hot" blocks, NVRAM for random writes, and SAS drives for sequential reads and writes. The NVRAM tier is used as a write buffer for the SAS capacity so that random writes to disk perform faster. Avere plans to add Flash for random read performance, but not in the first release.
Along with automatic data placement within each node, the cluster load balances across all nodes automatically according to application demand for data. "If one block gets super hot on one of the nodes, another node will look at the other blocks in its fastest tier, and if one is a copy, it will throw out the older one and make a second copy of the hot block" to speed performance, Bianchini said. "As the data cools, it will back down to one copy as needed."
Avere's system is another approach to automating data placement on multiple tiers of storage, an emerging trend as storage systems mix traditional hard drives with solid-state drives (SSDs). Compellent Technologies Inc.'s Data Progression feature on its Storage Center SAN may be the closest to Avere's approach, though on Compellent's product data is migrated over much longer periods of time according to user-set policy rather than on the fly and automatically.
Gear6 - CacheFX - NAS Caching Appliance
Gear6's CacheFX appliances sit in the network and contain solid-state drives. They are compatible only with NFS-based NAS devices and can shorten response times for frequently used application data by storing it in memory. The caching concept isn't unique -- individual disk systems, especially high-end arrays, do it as well -- but the appliance model can centralize management of performance and load balancing across NAS systems.
Although Gear6 calls its new G100 appliance an entry-level appliance, the 11U device's $149,000 price is hardly entry level. It is a smaller, less expensive version of Gear6's previous models, which start at $350,000 and come in 21U and 42U configurations.
According to Gear6 director of marketing Jack O'Brien, the new model will also be better suited to performance-intensive applications with relatively moderate amounts of data, such as Oracle RAC. "If performance requirements aren't particularly stringent, the appliance also offers an opportunity to simplify the storage environment," O'Brien added. In late January, Gear6 released new management software for the appliances that includes an I/O monitoring feature to track traffic and proactively identify bottlenecks.
Dataram - XcelaSAN - SAN Acceleration appliance
XcelaSAN is a caching appliance that sits between a Fibre Channel (FC) switch and storage array. XcelaSAN automatically brings the most frequently used blocks of data in an FC SAN into DRAM and then NAND to speed performance. It works with any vendor's FC SAN, according to Dataram chief technologist Jason Caulkins. Unlike disk array-based solid-state drives (SSDs), the XcelaSAN isn't intended to be persistent storage. The appliance moves data to back-end hard disk drives.
The product is the first NAND product the memory controller maker has rolled out, although Caulkins said the 42-year-old company designed a kind of proto-SSD with a disk drive interface in 1976.
After selling main memory only for approximately three decades, Dataram acquired a company called Cenatek Inc. in 2008. Cenatek designed, built, manufactured and sold PCI-based solid-state drives, Caulkins said, and began devising a new product to compete in the growing solid-state storage market. XcelaSAN is the result of that acquisition, combining Cenatek's standalone direct-attach SSD IP with Dataram's memory controllers and DRAM in a 2U network device.
The product holds 128 GB of RAM cache and 360 GB of Flash, and can be clustered into high-availability pairs and stacked for capacity scaling. Each appliance costs $65,000. Dataram claims the device can perform at 450,000 IOPS or 3 GBps of throughput. It's similar in architecture to Gear6's NFS read caching device, but supports block storage and write caching as well. Another similar product is NetApp Inc.'s FlexCache, which is also focused on NFS and NetApp storage, although it can theoretically be combined with NetApp's V-Series storage virtualization gateway to front heterogeneous storage.
Caulkins argued that Dataram's block-based approach, delivered in a caching appliance rather than array-based SSDs, is the most efficient use of Flash. EMC Corp. and others argue that the presence of an SSD makes the entire network loop between server and storage array faster, while Fusion-io Inc. takes another tack, claiming Flash makes the most sense as close to the server bus as possible. "It's not cost-effective to put all of your data on SSDs," Caulkins said. "It's better to be able to immediately impact performance without changing files, moving data or figuring out what to put on the SSD."