Saturday, July 24, 2010

Books: Inside Larry and Sergey's Brain


Inside Larry and Sergey's Brain by Richard L. Brandt © 2009


0. Introduction: The World's Librarians
Good luck. I've been trying to do that for some years. - Google CEO Eric Schmidt after being told the title of this book

0.1 Google is Ethical
0.2 Google Uses New Business Tactics
0.3 Google Stands Out
0.4 Google Has Unique Strengths
0.5 Google Sometimes Looks Evil

1. Arbiters of Cyberspace
Human salvation lies in the hands of the creatively maladjusted. -Martin Luther King, Jr.

1.1 Leftist
1.2 The Tinkerer
1.3 The Refusenik
1.4 The Math Prodigy
1.5 The Shire

2. Accidental Entrepreneurs
Eighty percent of success is showing up. -Woody Allen

2.1 Finding Hidden Meaning
2.2 There Will Never be Another Yahoo
2.3 Who wants a Search Engine?
2.4 Finding Funding

3. Controlled Chaos
Innovators and men of genius have almost always been regarded as fools at the beginning (and very often at the end) of their careers. -Fyodor Dostoyevsky

The place where optimism most flourishes is the lunatic asylum. -Havelock Ellis

3.1 The Stanford Brain Pool
3.2 Strange Management
3.3 No Experience Necessary
3.4 Two-class Culture
3.5 Shrinking Benefits

4. Larry and Sergey's Corporate Vision
He ne'er is crown'd with immortality, who fears to follow where airy voices lead. -John Keats

4.1 Just another Stanford Thing
4.2 Simplicity in a Complex World
4.3 Focus on the User (Duh)
4.4 Controlling Chaos
4.5 Difficult Partners

5. Advertising for the Masses
6. A Heartbreaking IPO of Staggering Genius
7. The China Syndrome: Google as Big Brother
8. What About Privacy?
9. The Ruthless Librarians
10. The Google Cloud
11. Google, the Telephone Company?
12. Thinking Beyond Search: The World's Problems, Real and Fanciful

Friday, July 23, 2010

Books: Inside Steve's Brain


Inside Steve's Brain by Leander Kahney © 2008


Introduction
1. Focus: How Saying "No" Saved Apple
"I'm looking for a fixer-upper with a solid foundation. Am willing to tear down walls, build bridges, and light fires. I have great experience, lots of energy, a bit of that 'vision thing' and I'm not afraid to start from the beginning." - Steve Jobs's resume at Apple's .Mac website

1.1 The Fall of Apple
1.2 Enter the iCEO
1.3 Steve's Survey
1.4 Apple's Assets
1.5 Getting "Steved"
1.6 Dr. No
1.7 Personal Focus

2. Despotism: Apple's One-Man Focus Group
"We made the buttons on the screen look so good you'll want to lick them." - Steve Jobs, on Mac OS X's user interface, Fortune, January 24, 2000

2.1 What's NeXT?
2.2 "You're a Bunch of Idiots"
2.3 No Detail Too Small
2.4 Simplifying the UI
2.5 Introducing OS X
2.6 Jobs's Design Process
2.7 Deceptive Simplicity

3. Perfectionism: Product Design and the Pursuit of Excellence
"Be a yardstick of quality. Some people aren't used to an environment where excellence is expected." - Steve Jobs

3.1 Jobs's Pursuit of Perfection
3.2 In the Beginning
3.3 Jobs Gets Design Religion
3.4 The Macintosh, Jobs's "Volkscomputer"
3.5 Unpacking Apple
3.6 The Great Washing Machine Debate
3.7 Jonathan Ive, the Designer
3.8 A Penchant for Prototyping
3.9 Ive's Design Process
3.10 Attention to Detail: Invisible Design
3.11 Materials and Manufacturing Processes

4. Elitism: Hire Only A Players, Fire the Bozos
"In our business, one person can't do anything anymore. You create a team of people around you." - Steve Jobs, Smithsonian Institution Oral and Video Histories

4.1 Pixar: Art Is a Team Sport
4.2 The Original Mac Team
4.3 Small Is Beautiful
4.4 Jobs's Job
4.5 Pugilistic Partners
4.6 "Think Different"
4.7 Out-advertise the Competition
4.8 One More Thing: Coordinated Marketing Campaigns
4.9 The Secret of Secrecy
4.10 Personality Plus

5. Passion: Putting a Ding in the Universe
"I want to put a ding in the universe." - Steve Jobs

5.1 Ninety Hours a Week and Loving It
5.2 The Hero/Asshole Rollercoaster
5.3 A Wealth of Stock Options
5.4 Dangling the Carrot and the Stick
5.5 One of the Great Intimidators
5.6 Working with Jobs: There's Only One Steve

6. Inventive Spirit: Where Does the Innovation Come From?
"Innovation has nothing to do with how many R&D dollars you have. When Apple came up with the Mac, IBM was spending at least 100 times more on R&D. It's not about money. It's about the people you have, how you're led, and how much you get it." - Steve Jobs, in Fortune, November 9, 1998

6.1 An Appetite for Innovation
6.2 Product vs. Business Innovation: Apple Does Both
6.3 Where Does the Innovation Come From?
6.4 Jobs's Innovation Strategy: The Digital Hub
6.5 Products as Gravitational Force
6.6 Pure Science vs. Applied Science
6.7 The Seer - and Stealer
6.8 The Creative Connection
6.9 Flexible Thinking
6.10 An Apple Innovation Case Study: The Retail Stores
6.11 Enriching Lives Along the Way
6.12 Cozying on Up to the Genius Bar

7. Case Study: How It All Came Together with the iPod
"Software is the user experience. As the iPod and iTunes prove, it has become the driving technology not just of computers but of consumer electronics." - Steve Jobs

7.1 Revisiting the Digital Hub
7.2 Jobs's Misstep: Customers Wanted Music, Not Video
7.3 How the iPod Got Its Name: "Open the Pod Bay Door, Hal!"

8. Total Control: The Whole Widget
"I've always wanted to own and control the primary technology in everything we do." - Steve Jobs

8.1 Jobs as a Control Freak
8.2 Controlling the Whole Widget
8.3 The Virtues of Control Freakery: Stability, Security, and Ease-of-Use
8.4 The Systems Approach
8.5 The Return of Vertical Integration
8.6 The Zune and Xbox
8.7 What Consumers Want

Monday, May 31, 2010

Solaris - Useful commands

1. "Not on system console" error messages

To allow root to log in from terminals other than the system console, comment out the CONSOLE line in /etc/default/login:

vi /etc/default/login
insert # before the line CONSOLE=/dev/console
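The same edit can be scripted instead of done in vi. A minimal sketch, run here against a scratch copy (on a real system the target is /etc/default/login, edited as root; the redirect-and-rename form is used because Solaris sed has no -i option):

```shell
# Work on a scratch copy; on a real system the target is /etc/default/login.
f=/tmp/login.sample
printf 'CONSOLE=/dev/console\n' > "$f"

# Comment out the CONSOLE line so root logins are accepted from
# terminals other than the system console.
sed 's/^CONSOLE=/#CONSOLE=/' "$f" > "$f.new" && mv "$f.new" "$f"

cat "$f"   # → #CONSOLE=/dev/console
```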



2. Solaris 10 - enable NFS server

The NFS server service is dependent on a slew of other services. Manually enabling all of these services would be tedious. The svcadm command makes this simple with one command:

svcadm -v enable -r network/nfs/server

The -v option makes the command print verbose details about each service it enables. You can use the -t option (svcadm enable -rt network/nfs/server) to enable these services temporarily, so they will not be re-enabled automatically when the system reboots. By default, enabling a service enables it permanently (persistent across reboots until it is disabled).

3. Enable Backspace key

If pressing the Backspace key prints ^H instead of erasing, the erase character is not set correctly. To fix it, enter the command "stty erase", press the Backspace key after the word erase, and then press Enter.
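To make the setting persist across logins, the same command can go in the shell startup file. A small sketch using a scratch file (on a real system, append the line to ~/.profile; '^H' is stty's standard circumflex notation for Ctrl-H, so the control character need not be embedded literally):

```shell
# Append the fix to a scratch profile; the real target is ~/.profile.
profile=/tmp/profile.sample
: > "$profile"

# POSIX stty accepts the two-character form '^H' for Ctrl-H.
echo "stty erase '^H'" >> "$profile"

cat "$profile"
```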

VMware - Windows XP setup cannot find any hard disk drives during installation

  • When installing Windows XP in a virtual machine on VMware ESX, Windows setup fails to detect hard disks in the virtual machine
  • You receive the following error:

    Setup did not find any hard disk drives installed in your computer.

    When installing Windows XP in a virtual machine, setup is unable to find the hard drives because no compatible disk controller driver is shipped on the Windows XP setup disc. You must supply the correct driver during setup to proceed with the installation.

    To supply the correct driver:

    1. When creating the new virtual machine, select the BusLogic option for the Virtual SCSI Controller mode.
    2. Download the VMware SCSI driver floppy image from http://download3.vmware.com/software/vmscsi-1.2.0.4.flp and upload it to the Datastore.
    3. Attach or insert the Windows XP installation media and connect it to the virtual machine.
    4. Power on the virtual machine and open a console view of the virtual machine.
    5. Click in the console to assign keyboard control to the virtual machine.
    6. When the blue Windows setup screen appears, press F6 when prompted.
    7. When prompted for additional drivers, press S.
    8. Attach VMware SCSI driver floppy image to virtual floppy drive.
    9. Press Enter to select the VMware SCSI Controller driver, and then Enter again to continue setup.
    10. Complete Windows XP setup normally from this point.
    11. After setup has completed the first phase of installation and restarts the virtual machine, you need to disconnect or unassign the virtual floppy drive or the virtual machine may attempt to boot from the floppy image.


Thursday, May 13, 2010

Avere - FXT Series - Tiered Storage appliance

Avere Systems Inc.'s FXT Series of tiered clustered network-attached storage (NAS) appliances, with automated block-level storage tiering across RAM, nonvolatile memory (NVRAM), Flash, Serial Attached SCSI (SAS) and SATA tiers, began shipping on Oct. 15.

The FXT Series nodes are available in two models, the FXT 2300 and FXT 2500. Each FXT Series node contains 64 GB of read-only DRAM and 1 GB of battery-backed NVRAM. The FXT 2300, list priced at $52,000, contains 1.2 TB of 15,000 rpm SAS drives. The FXT 2500, at $72,000, contains 3.5 TB of SAS disk. The nodes can scale out under a global namespace. CEO Ron Bianchini said Avere has tested up to 25 nodes in a cluster internally and the largest cluster running at a beta testing site contains eight nodes, though there's no technical limitation on the number of nodes the cluster can support.

The clustered NAS system can be attached to third-party NFS NAS systems for archival and backup storage. "Any NFS Version 3 or above connecting over TCP is compatible," Bianchini said. Bianchini was CEO at Spinnaker Networks when NetApp bought the clustered NAS company in 2003.

Avere customers can set a data-retention schedule using a slider in the user interface to tell the FXT system how closely to synchronize the third-party SATA NFS device (which Avere calls "mass storage"). If it's set to zero, the FXT Series will ensure that writes to the mass storage device have been completed before acknowledging a write to the application. The slider can be pushed up to four hours, meaning mass storage will be up to four hours out of synch with the primary FXT Series.

Bianchini said two of eight beta sites are running with the retention policy set to zero. "The downside is that you don't get the same scale with writes as you do with reads" because the system has to wait for the SATA-based filer to respond before committing writes to primary storage, he said. "The environments using it this way aren't doing a lot of writes."

Bianchini said the FXT Series' proprietary algorithms assess patterns in application requests for blocks of storage within files -- including whether they call for sequential or random reads or writes -- and then assign blocks to appropriate storage tiers for optimal performance. In Version 1.0 of the product, the primary tiers are DRAM for read-only access to "hot" blocks, NVRAM for random writes, and SAS drives for sequential reads and writes. The NVRAM tier is used as a write buffer for the SAS capacity to make random writes to disk perform faster. Avere plans to add Flash for random read performance, but not with the first release.

Along with automatic data placement within each node, the cluster load balances across all nodes automatically according to application demand for data.

"If one block gets super hot on one of the nodes, another node will look at the other blocks in its fastest tier, and if one is a copy, it will throw out the older one and make a second copy of the hot block" to speed performance, Bianchini said. "As the data cools, it will back down to one copy as needed."

Avere's system is another approach to automating data placement on multiple tiers of storage, an emerging trend as storage systems mix traditional hard drives with solid-state drives (SSDs). Compellent Technologies Inc.'s Storage Center SAN's Data Progression may be the closest to Avere's approach, though data is migrated over much longer periods of time according to user-set policy on Compellent's product rather than on the fly and automatically.


Gear6 - CacheFX - NAS Caching Appliance

Gear6's CacheFX appliances sit in the network and contain solid-state drives. They are compatible only with NFS-based NAS devices and can shorten response times for frequently used application data by storing it in memory. This concept isn't unique to individual disk systems, especially high-end arrays, but the appliance model can centralize management of performance and load balancing across NAS systems.

Although Gear6 calls its new G100 appliance an entry-level appliance, the 11U device's $149,000 price is hardly entry level. It is a smaller, less expensive version of Gear6's previous models, which start at $350,000 and come in 21U and 42U configurations.

According to Gear6 director of marketing Jack O'Brien, the new model is also better suited to performance-intensive applications with relatively moderate amounts of data, such as Oracle RAC.

"If performance requirements aren't particularly stringent, the appliance also offers an opportunity to simplify the storage environment," O'Brien added. In late January, Gear6 released new management software for the appliances that includes an I/O monitoring feature to track traffic and proactively identify bottlenecks.

Dataram - XcelaSAN - SAN Acceleration appliance

Dataram Corp.'s XcelaSAN is a new storage-area network (SAN) acceleration appliance that uses Flash and DRAM memory to cache active blocks for improved performance; it can also front any disk array without requiring changes to the storage or server environment.

XcelaSAN is a caching appliance that sits between a Fibre Channel (FC) switch and storage array. XcelaSAN automatically brings the most frequently used blocks of data in an FC SAN into DRAM and then NAND to speed performance. It works with any vendor's FC SAN, according to Dataram chief technologist Jason Caulkins. Unlike disk array-based solid-state drives (SSDs), the XcelaSAN isn't intended to be persistent storage. The appliance moves data to back-end hard disk drives.

The product is the first NAND product the memory controller maker has rolled out, although Caulkins said the 42-year-old company designed a kind of proto-SSD with a disk drive interface in 1976.

After selling main memory only for approximately three decades, Dataram acquired a company called Cenatek Inc. in 2008. Cenatek designed, built, manufactured and sold PCI-based solid-state drives, Caulkins said, and began devising a new product to compete in the growing solid-state storage market.

XcelaSAN is the result of that acquisition, combining Cenatek's standalone direct-attach SSD IP with Dataram's memory controllers and DRAM into a 2U network device. The product holds 128 GB of RAM cache and 360 GB of Flash, and can be clustered into high-availability pairs and stacked for capacity scaling. Each appliance costs $65,000. Dataram claims the device can perform at 450,000 IOPS or 3 GBps throughput.

It's similar in architecture to Gear6's NFS read caching device, but supports block storage and write caching as well. Another similar product is NetApp Inc.'s FlexCache, which is also focused on NFS and NetApp storage, although it can theoretically be combined with NetApp's V-Series storage virtualization gateway to front heterogeneous storage.

Caulkins argued that Dataram's block-based approach, combined with a caching appliance rather than array-based SSDs, is the most efficient use of Flash. EMC Corp. and others argue that the presence of an SSD makes the entire network loop between server and storage array faster, while Fusion-io Inc. takes another tack, claiming Flash makes the most sense as close to the server bus as possible.

"It's not cost-effective to put all of your data on SSDs," Caulkins said. "It's better to be able to immediately impact performance without changing files, moving data or figuring out what to put on the SSD."