Monday, May 31, 2010

Solaris - Useful commands

1. "Not on system console" error message when root logs in remotely

vi /etc/default/login
insert # before CONSOLE=/dev/console
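
After the edit, the CONSOLE line should simply be commented out. A quick check (the second line below is the expected output; this assumes the stock Solaris file layout):

grep CONSOLE /etc/default/login
#CONSOLE=/dev/console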



2. Solaris 10 - enable NFS server

The NFS server service is dependent on a slew of other services. Manually enabling all of these services would be tedious. The svcadm command makes this simple with one command:

svcadm -v enable -r network/nfs/server

The -v option makes the command output verbose details about the services enabled. You can use the -t option (..enable -rt network…) to enable these services temporarily (so that they will not be automatically enabled when the system reboots). By default, enabling a service will enable it permanently (persistent across reboots until it is disabled).
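
To confirm that the service and its dependencies actually came online, and to share a directory over NFS (a minimal sketch; /export/home is just a placeholder path):

svcs -l network/nfs/server       # show the state and dependencies of the NFS server service
svcs -x                          # explain any services that failed to come online
share -F nfs -o rw /export/home  # share a directory for the current session
shareall                         # re-share everything listed in /etc/dfs/dfstab

For a share that survives reboots, add the corresponding share line to /etc/dfs/dfstab instead of running it by hand.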

3. Enable Backspace key

If you hit the Backspace key on your keyboard, you will get ^H instead of erasing a character. To fix this, type "stty erase ", press the Backspace key (right after the word erase), and then press Enter.
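
In practice the command looks like this (the ^H is the single control character produced by pressing the Backspace key, not a caret followed by an H):

stty erase ^H

Adding the same line to ~/.profile makes the setting persist across logins; most stty implementations also accept the quoted two-character form, e.g. stty erase '^?', if the terminal sends DEL instead of ^H.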

VMware - Windows XP setup cannot find any hard disk drives during installation

  • When installing Windows XP in a virtual machine on VMware ESX, Windows setup fails to detect hard disks in the virtual machine
  • You receive the following error:

    Setup did not find any hard disk drives installed in your computer.

    When installing Windows XP in a virtual machine, setup is unable to find the hard drives because no compatible disk controller driver is shipped on the Windows XP setup disc. You must supply the correct driver during setup to proceed with installation.

    To ensure you supply the correct driver:

    1. When creating the new virtual machine, select the BusLogic option for the Virtual SCSI Controller mode.
    2. Download the VMware SCSI driver floppy image from http://download3.vmware.com/software/vmscsi-1.2.0.4.flp and upload it to a datastore (see the command sketch after this list).
    3. Attach or insert the Windows XP installation media and connect it to the virtual machine.
    4. Power on the virtual machine and open a console view of the virtual machine.
    5. Click in the console to assign keyboard control to the virtual machine.
    6. When the blue Windows setup screen appears, press F6 when prompted.
    7. When prompted for additional drivers, press S.
    8. Attach VMware SCSI driver floppy image to virtual floppy drive.
    9. Press Enter to select the VMware SCSI Controller driver, and then Enter again to continue setup.
    10. Complete Windows XP setup normally from this point.
    11. After setup has completed the first phase of installation and restarts the virtual machine, you need to disconnect or unassign the virtual floppy drive or the virtual machine may attempt to boot from the floppy image.
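
    For step 2, one way to get the driver floppy image onto a datastore is from a management workstation with command-line tools (a sketch only: the host name esx-host and datastore name datastore1 are placeholders, and uploading through the vSphere Client datastore browser works just as well):

    wget http://download3.vmware.com/software/vmscsi-1.2.0.4.flp
    scp vmscsi-1.2.0.4.flp root@esx-host:/vmfs/volumes/datastore1/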


Thursday, May 13, 2010

Avere - FXT Series - Tiered Storage appliance

Avere Systems Inc.'s FXT Series of tiered, clustered network-attached storage (NAS) appliances, with automated block-level storage placement across RAM, nonvolatile memory (NVRAM), Flash, Serial Attached SCSI (SAS) and SATA tiers, began shipping on Oct. 15.

The FXT Series nodes are available in two models, the FXT 2300 and FXT 2500. Each FXT Series node contains 64 GB of read-only DRAM and 1 GB of battery-backed NVRAM. The FXT 2300, list priced at $52,000, contains 1.2 TB of 15,000 rpm SAS drives. The FXT 2500, at $72,000, contains 3.5 TB of SAS disk. The nodes can scale out under a global namespace. CEO Ron Bianchini said Avere has tested up to 25 nodes in a cluster internally and the largest cluster running at a beta testing site contains eight nodes, though there's no technical limitation on the number of nodes the cluster can support.

The clustered NAS system can be attached to third-party NFS NAS systems for archival and backup storage. "Any NFS Version 3 or above connecting over TCP is compatible," Bianchini said. Bianchini was CEO at Spinnaker Networks when NetApp bought the clustered NAS company in 2003.

Avere customers can set a data-retention schedule using a slider in the user interface to tell the FXT system how closely to synchronize the third-party SATA NFS device (which Avere calls "mass storage"). If it's set to zero, the FXT Series will ensure that writes to the mass storage device have been completed before acknowledging a write to the application. The slider can be pushed up to four hours, meaning mass storage will be up to four hours out of synch with the primary FXT Series.

Bianchini said two of eight beta sites are running with the retention policy set to zero. "The downside is that you don't get the same scale with writes as you do with reads" because the system has to wait for the SATA-based filer to respond before committing writes to primary storage, he said. "The environments using it this way aren't doing a lot of writes."

Bianchini said the FXT Series' proprietary algorithms assess patterns in application requests for blocks of storage within files -- including whether they call for sequential or random writes or reads -- and then assign blocks to appropriate storage tiers for optimal performance. In Version 1.0 of the product, the primary tiers are DRAM for read-only access to "hot" blocks, NVRAM for random writes, and SAS drives for sequential reads and writes. The NVRAM tier is used as a write buffer for the SAS capacity to make random writes to disk perform faster. Avere plans to add Flash for random read performance, but not with the first release.

Along with automatic data placement within each node, the cluster load balances across all nodes automatically according to application demand for data.

"If one block gets super hot on one of the nodes, another node will look at the other blocks in its fastest tier, and if one is a copy, it will throw out the older one and make a second copy of the hot block" to speed performance, Bianchini said. "As the data cools, it will back down to one copy as needed."

Avere's system is another approach to automating data placement on multiple tiers of storage, an emerging trend as storage systems mix traditional hard drives with solid-state drives (SSDs). Compellent Technologies Inc.'s Storage Center SAN's Data Progression may be the closest to Avere's approach, though data is migrated over much longer periods of time according to user-set policy on Compellent's product rather than on the fly and automatically.


Gear6 - CacheFX - NAS Caching Appliance

Gear6's CacheFX appliances sit in the network and contain solid-state drives. They are compatible only with NFS-based NAS devices and can shorten response times for frequently used application data by storing it in memory. This concept isn't unique to individual disk systems, especially high-end arrays, but the appliance model can centralize management of performance and load balancing across NAS systems.

Although Gear6 calls its new G100 appliance an entry-level appliance, the 11U device's $149,000 price is hardly entry level. It is a smaller, less expensive version of Gear6's previous models, which start at $350,000 and come in 21U and 42U configurations.

According to Gear6 director of marketing Jack O'Brien, the new model will also be better suited to performance-intensive applications with relatively moderate amounts of data, such as Oracle RAC.

"If performance requirements aren't particularly stringent, the appliance also offers an opportunity to simplify the storage environment," O'Brien added. In late January, Gear6 released new management software for the appliances that includes an I/O monitoring feature to track traffic and proactively identify bottlenecks.

Dataram - XcelaSAN - SAN Acceleration appliance

Dataram Corp.'s XcelaSAN is a new storage-area network (SAN) acceleration appliance that uses Flash and DRAM memory to cache active blocks for improved performance; it can also front any disk array without requiring changes to the storage or server environment.

XcelaSAN is a caching appliance that sits between a Fibre Channel (FC) switch and storage array. XcelaSAN automatically brings the most frequently used blocks of data in an FC SAN into DRAM and then NAND to speed performance. It works with any vendor's FC SAN, according to Dataram chief technologist Jason Caulkins. Unlike disk array-based solid-state drives (SSDs), the XcelaSAN isn't intended to be persistent storage. The appliance moves data to back-end hard disk drives.

The product is the first NAND product the memory controller maker has rolled out, although Caulkins said the 42-year-old company designed a kind of proto-SSD with a disk drive interface in 1976.

After roughly three decades of selling only main memory, Dataram acquired a company called Cenatek Inc. in 2008. Cenatek designed, built, manufactured and sold PCI-based solid-state drives, Caulkins said, and Dataram began devising a new product to compete in the growing solid-state storage market.

XcelaSAN is the result of that acquisition, combining Cenatek's standalone direct-attach SSD IP with Dataram's memory controllers and DRAM into a 2U network device. The product holds 128 GB of RAM cache and 360 GB of Flash, and can be clustered into high-availability pairs and stacked for capacity scaling. Each appliance costs $65,000. Dataram claims the device can perform at 450,000 IOPS or 3 GBps throughput.

It's similar in architecture to Gear6's NFS read caching device, but supports block storage and write caching as well. Another similar product is NetApp Inc.'s FlexCache, which is also focused on NFS and NetApp storage, although it can theoretically be combined with NetApp's V-Series storage virtualization gateway to front heterogeneous storage.

Caulkins argued that Dataram's block-based approach, combined with a caching appliance rather than array-based SSDs, is the most efficient use of Flash. EMC Corp. and others argue that the presence of an SSD makes the entire network loop between server and storage array faster, while Fusion-io Inc. takes another tack, claiming Flash makes the most sense as close to the server bus as possible.

"It's not cost-effective to put all of your data on SSDs," Caulkins said. "It's better to be able to immediately impact performance without changing files, moving data or figuring out what to put on the SSD."

Storspeed - SP5000 - NAS caching and monitoring appliance

Storspeed Inc. came out of stealth in October 2009 with its SP5000 network-attached storage (NAS) caching appliance, designed to speed storage systems and report on their performance at a granular level without disrupting applications.

The SP5000 caching appliance is a 2U device containing 80 GB of DRAM and four drive slots for solid-state disks (SSDs). "We didn't want to go the Fusion-io or Gear6 route of using Flash on a card because we want to take advantage of the latest SSD technologies as they come out," said Mark Cree, Storspeed's CEO/president and founder.

Each SP5000 contains a 10 Gigabit Ethernet (10 GbE) switch and can be clustered up to six nodes with the first release. Inside is a Field Programmable Gate Array (FPGA) that gives the box the horsepower to do deep packet inspection on each packet of data sent over the local-area network (LAN) to any NFS or CIFS-connected NAS device, enabling users to set caching policies for particular application workloads, file types and individual virtual machines. The deep packet inspection also allows for detailed reporting on performance characteristics of the storage network.

The design can theoretically scale out to 256 nodes, Cree said, and Storspeed's future roadmap includes clusters of nine, 12 and 24 nodes. "There's probably no reason to go beyond 24 nodes," he added. Cree said Storspeed's internal testing showed a six-node cluster performing at up to 2 million IOPS and up to 4.2 GBps throughput.

Cree claims Storspeed's differentiation is that it requires no changes to mount points within applications and performs faster per appliance than Avere's and Gear6's systems because it uses FPGAs and proprietary ASICs for processing rather than commodity processors. If the device fails, its internal Ethernet switch keeps applications' access to the back-end storage arrays intact.

Wednesday, May 12, 2010

EMC - Introduces Private Cloud V-PLEX

EMC specifically released two products at EMC World 2010 in Boston on Monday, V-Plex Local and V-Plex Metro. Both are available now, with prices starting at $77,000 for the on-premises solution and $26,000 for a subscription version of the product.

Both products consist of appliances, called engines, each with dual quad-core processors, a 32 GB cache, an 8 Gbps Fibre Channel (FC) connection and an internal InfiniBand network. V-Plex uses technology obtained from YottaYotta about three years ago, though what was once YottaYotta's own operating system in its storage virtualization products has now been ported to Linux.

V-Plex Local can hold up to four V-Plex engines and 8,000 storage volumes for non-disruptive data migrations between EMC and non-EMC arrays in one data center.

V-Plex Metro offers the ability to connect two V-Plex Local storage clusters across separate locations (up to 100 km/60 miles apart) and treat them like a single pool of storage. In 2011, EMC also plans to extend this concept to a larger regional solution (V-Plex Geo), and then eventually offer a global solution (V-Plex Global), making four flavors of V-Plex in all.

EMC also provided a couple of diagrams to help conceptualize V-Plex. The first shows that EMC sees this as a multi-platform, multi-vendor strategy (rather than an integrated, vertical strategy). The second shows how the primary goal of V-Plex is to enable seamless movement of virtual machines, applications, and data across different storage arrays and data centers.

Tuesday, May 11, 2010

What is LDAP?

The information below is taken from http://www.gracion.com/server/whatldap.html.

LDAP, Lightweight Directory Access Protocol, is an Internet protocol that email and other programs use to look up information from a server.

Every email program has a personal address book, but how do you look up an address for someone who's never sent you email? How can an organization keep one centralized up-to-date phone book that everybody has access to?

That question led software companies such as Microsoft, IBM, Lotus, and Netscape to support a standard called LDAP. "LDAP-aware" client programs can ask LDAP servers to look up entries in a wide variety of ways. LDAP servers index all the data in their entries, and "filters" may be used to select just the person or group you want, and return just the information you want. For example, here's an LDAP search translated into plain English: "Search for all people located in Chicago whose name contains "Fred" that have an email address. Please return their full name, email, title, and description."
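
As a rough illustration, that plain-English query maps onto a single LDAP search filter. With OpenLDAP's ldapsearch it might look like the following (the server address, base DN, and the use of the l (locality) and cn (common name) attributes are assumptions for the example):

ldapsearch -x -H ldap://ldap.example.com -b "dc=example,dc=com" \
    "(&(l=Chicago)(cn=*Fred*)(mail=*))" cn mail title description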

LDAP is not limited to contact information, or even information about people. LDAP is used to look up encryption certificates, pointers to printers and other services on a network, and provide "single signon" where one password for a user is shared between many services. LDAP is appropriate for any kind of directory-like information, where fast lookups and less-frequent updates are the norm.

As a protocol, LDAP does not define how programs work on either the client or server side. It defines the "language" used for client programs to talk to servers (and servers to servers, too). On the client side, a client may be an email program, a printer browser, or an address book. The server may speak only LDAP, or have other methods of sending and receiving data—LDAP may just be an add-on method.

If you have an email program (as opposed to web-based email), it probably supports LDAP. Most LDAP clients can only read from a server. Search abilities of clients (as seen in email programs) vary widely. A few can write or update information, but LDAP does not include security or encryption, so updates usually require additional protection such as an encrypted SSL connection to the LDAP server.

LDAP also defines:

  • Permissions: set by the administrator to allow only certain people to access the LDAP database, and optionally keep certain data private.
  • Schema: a way to describe the format and attributes of data in the server. For example, a schema entered in an LDAP server might define a "groovyPerson" entry type, which has attributes of "instantMessageAddress" and "coffeeRoastPreference". The normal attributes of name, email address, etc., would be inherited from one of the standard schemas, which are rooted in X.500 (see below).
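
To make the schema idea concrete, a hypothetical entry of that "groovyPerson" type could be loaded with OpenLDAP's ldapadd roughly like this (every name below is illustrative, and the server would first need a groovyPerson object class defined in its schema):

ldapadd -x -D "cn=admin,dc=example,dc=com" -W <<'EOF'
dn: cn=Fred Smith,ou=people,dc=example,dc=com
objectClass: groovyPerson
cn: Fred Smith
mail: fred@example.com
instantMessageAddress: fred@im.example.com
coffeeRoastPreference: dark
EOF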

LDAP was designed at the University of Michigan to adapt a complex enterprise directory system (called X.500) to the modern Internet. X.500 is too complex to support on desktops and over the Internet, so LDAP was created to provide this service "for the rest of us."

LDAP servers exist at three levels: There are big public servers, large organizational servers at universities and corporations, and smaller LDAP servers for workgroups. Most public servers from around year 2000 have disappeared, although directory.verisign.com exists for looking up X.509 certificates. The idea of publicly listing your email address for the world to see, of course, has been crushed by spam.

While LDAP didn't bring us the worldwide email address book, it continues to be a popular standard for communicating record-based, directory-like data between programs.