Change for the Better
Leading edge technology to watch
Bill Weaver, Ph.D.
NIST physicist David Wineland adjusts an ultraviolet laser beam used to manipulate ions in a high-vacuum "ion trap" that can "teleport" the quantum state of one atom to another. ©Geoffrey Wheeler
Our new undergraduate degree curriculum, the Integrated Science, Business and Technology (ISBT) Program, recently produced its first alumni. One of our graduates interned in the Change Management department of a leading pharmaceutical company. What a great title for a team charged with incorporating new technologies, regulations, upgrades, and best practices into its systems. As espoused by the ISBT Program, technology management teams must evaluate nascent scientific discoveries across multiple disciplines and employ the business management skills required to leverage these innovations into technological products. A crucial objective of our program is to remove any innate misocainea, the "hatred of new ideas," and replace it with the entrepreneurial principle that "change is an opportunity to create competitive advantage."
Although the paint is barely dry on our latest courses, it is time to survey the scientific and technical landscape in search of leading edge discoveries and developments for infusion into our program for the upcoming year. The academic ivory tower can too often isolate its residents from the real world. However, with care, its vantage point can spy broad trends and currents within the frequently turbulent sea of change.
Small things matter
Teleportation takes place inside an ion trap made of gold electrodes deposited onto alumina. The trap area is the horizontal opening near the center of the image. Photo courtesy of NIST
In the world of electronics, "smaller is better." Small devices fit in your hand and are lighter to lug around. Shorter transmission lengths make them faster, and lower resistances permit them to run cooler and longer. Device miniaturization has long been used as a competitive advantage, but there is a limit to how small is "small." For instance, the miniature keyboards and tiny screens of hand-held computing devices can promote cramped fingers and strained eyes. There is also a limit to how small the wires and resistors inside integrated circuits can shrink. Here the problem is not one of ergonomics but of quantum physics. While big "classical" objects have analog energies, small "quantum" objects have digital ones. The numerous electrons in the capacitor of a memory circuit produce a collective analog voltage representing a bit value of "1" or "0." Individual atoms and photons have specific, digital energy levels or quantum states that can be used to represent "1" or "0," known as "quantum bits" or "Qubits." At first glance, Qubits look custom-made for digital computing; no analog-to-digital conversion necessary. The actual technology is more involved.
The difference between energy levels is very small, and Qubits have a habit of switching between 1 and 0 spontaneously unless their environment is strictly controlled. An additional characteristic is exclusive to quantum states: Qubits can have the value of 1 and 0 simultaneously, an aspect known as "quantum superposition." This trait sounds bizarre in the classical world but is a revolutionary advantage for "quantum computing." Just as algebraic variables can represent a set of values with a single letter, an 8-Qubit "Qubyte" can represent 256 values in a single computation cycle. Independent of word length, large sets can be evaluated in the time it takes for the first round of a For Loop.
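The counting argument above can be made concrete with a toy state-vector model. This is only a classical bookkeeping sketch, not a quantum simulation: it simply shows that an 8-Qubit register in uniform superposition carries an amplitude for all 256 bit patterns at once, and that those amplitudes form a valid probability distribution. The function name is invented for illustration.

```python
from itertools import product

def uniform_superposition(n):
    """Toy model of an n-Qubit register: one amplitude per classical
    bit pattern, all weighted equally (a Hadamard-like spread)."""
    amp = 0.5 ** (n / 2)  # 1/sqrt(2**n)
    return {bits: amp for bits in product("01", repeat=n)}

state = uniform_superposition(8)              # a "Qubyte"
print(len(state))                             # 256 basis states at once
print(round(sum(a * a for a in state.values()), 10))  # probabilities sum to 1.0
```

A single operation applied to this register acts on all 256 values simultaneously, which is the source of the speedup the paragraph describes.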
Even more exciting is the property of "quantum entanglement." Separate Qubits can be "entangled" with each other and, regardless of physical separation, the remote entangled Qubit acquires the same value as the local one, forming the basis of a quantum information networking system. Practical solutions will involve both atomic and photonic Qubits, requiring technology to convert between each type. This past October, scientists at the Georgia Institute of Technology provided such a solution to transfer entangled information from atoms into photons and back again. Some rather large developments are on the horizon.
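The correlated-readout behavior described above can be mimicked, very loosely, with a classical stand-in. To be clear, real entanglement cannot be reproduced classically; this sketch only illustrates the observable effect the paragraph names: measuring either member of the pair fixes the other to the same value, no matter where it is. All names here are invented for illustration.

```python
import random

def make_entangled_pair():
    """Classical stand-in for a Bell-like pair: the first measurement,
    local or remote, fixes a shared value that both then report."""
    shared = {"value": None}

    def measure():
        if shared["value"] is None:
            shared["value"] = random.choice("01")
        return shared["value"]

    return measure, measure  # the "local" and "remote" Qubit

local, remote = make_entangled_pair()
print(local() == remote())  # True: the remote reading matches the local one
```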
Totally tubular
Increasing in size from the atomic quantum scale, we find molecular structures composed of multiple atoms. Nanotubes are layers of graphitic carbon rolled into nm-diameter tubes ranging in length from nanometers to millimeters. These short, strong, conductive cylinders can act like their meter-long automobile radio antenna cousins to receive and transmit terahertz-frequency electromagnetic radiation having wavelengths of a few hundred nanometers, more commonly known as visible light. In March of last year, scientists at Brookhaven National Laboratory and IBM announced their ability to cause a single carbon nanotube to emit infrared radiation. While working to increase the efficiency and frequency of the devices to produce visible light, the developers suggest they may find their way into flat panel displays such as computer monitors and televisions, lighting applications and optical communications.
More recently, physicists at Boston College have developed a nanotube array receiver capable of converting light into electricity. It does so at nearly 80 percent efficiency, more than doubling the 30 percent efficiency of conventional solar cells. These arrays look very similar to the rods and cones of the retina and may form the foundation of new, innovative light sensors. The researchers also are examining broadband optical transmission systems using terahertz carrier frequencies. Combined with Brookhaven's emitter, nanotubes may also serve as a practical foundation for optical computing.
Robots in disguise
Fuel liner analysis conducted for the NASA Engineering Safety Center and Return to Flight program (data, Cetin Kiris; image Timothy Sandstrom, NASA Ames Research Center)
Integrated circuits, such as general-purpose central processing units (CPUs), have undeniably changed the computing landscape. Much to its chagrin, IBM did not realize the importance of the software/hardware synergy when it agreed to let Microsoft handle the "incidental" operating system of its first PC. Having a CPU that executes generic software commands gives designers the freedom to develop varied and optimized designs. Unfortunately, the freedom to design changeable code comes at the cost of execution speed. Once an algorithm has been developed and optimized, such as one for a digital signal processing (DSP) application, the overhead of loading the instructions into the CPU and porting values back and forth between local caches and main memory presents a hard limit to how fast the code can execute. To overcome this limit, code jockeys have taken their optimized code and delivered it to an application-specific integrated circuit (ASIC) foundry to have their design hardwired into a custom electronic circuit. This approach is expensive relative to the CPU solution and unchangeable once created, but worth the cost and inconvenience to obtain the fastest execution speeds available.
Simulation of large-scale cosmological structures representing the formation of the universe. Scientists believe the universe is 13.9 billion years old.
Companies such as Xilinx, Inc. have recognized the need for a hybrid solution, one that delivers the speed of an ASIC while maintaining the upgradeability of a CPU. These transformable integrated circuits are known as field-programmable gate arrays (FPGAs). The Xilinx "Virtex-4" FPGA offers up to 200,000 logic cells, embedded PowerPC processors, up to 10 Mb of embedded RAM, a 500 MHz clock, and up to 11.1 Gbps transceivers to input data and receive calculated results. Software vendors including National Instruments (NI) provide toolkits such as "LabVIEW FPGA" to compile their code and write it directly onto the NI Reconfigurable I/O (RIO) FPGA module, achieving near-ASIC speeds while using a high-level programming language. The resulting embedded LabVIEW code does not need to access a main processor to execute, allowing parallel threads configured in independent logic-cell sections to run truly in parallel. This toolkit permits the actual FPGA embedded code to be debugged and optimized in hardware without expensive chip-foundry services.
On track
Five years ago, the ISBT program decided to highlight the technology of the Uniform Code Council (UCC) uniform product code (UPC) symbol or "barcode." Developed a little over 30 years ago, the concept of tagging merchandise with a machine-readable code has permitted superstores such as Wal-Mart and Home Depot to manage the inventory and sale of hundreds of thousands of products. It also permits our students to study the electronics and software required to create a barcode scanner and the business systems that utilize it for supply chain management.
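One piece of the barcode technology our students study fits in a few lines: the UPC-A check digit that lets a scanner catch misreads. The digits in the odd positions are weighted by 3, the even positions by 1, and the twelfth digit is chosen so the total is a multiple of 10. A minimal sketch:

```python
def upc_check_digit(first11: str) -> int:
    """Compute the UPC-A check digit for the first 11 digits:
    odd positions weighted 3, even positions weighted 1."""
    digits = [int(c) for c in first11]
    total = 3 * sum(digits[0::2]) + sum(digits[1::2])
    return (10 - total % 10) % 10

def is_valid_upc(code12: str) -> bool:
    """Validate a full 12-digit UPC-A code against its check digit."""
    return upc_check_digit(code12[:11]) == int(code12[-1])

print(upc_check_digit("03600029145"))  # -> 2
print(is_valid_upc("036000291452"))    # -> True
```

A single-digit scanning error changes the weighted total, so the check digit no longer matches and the scanner rejects the read.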
The barcode is poised to give way to evolving Radio Frequency Identification (RFID) devices. RFID transponders were used during World War II in the Identification Friend or Foe (IFF) systems aboard Allied aircraft and are currently used in Lockheed's EZ-Pass toll collection system and Mobil's SpeedPass pay-at-the-pump system. Wal-Mart has recently decided to require its vendors to equip their merchandise with RFID tracking tags. Zebra Technologies, a leading supplier of RFID and barcode systems, is providing low-cost RFID equipment. Even greater than the innovative technology enabling the deployment of RFID on individual products is the software and systems development necessary to utilize it.
Sounding very futuristic, RFID systems will permit shoppers to push their items through the checkout lane without removing them from the cart, while the RFID tag embedded in their debit card automatically pays for their purchases. Delivery trucks, highway freight haulers and rail cars will pass through scanning checkpoints to verify the contents, correlate the manifest with the measured mass of the vehicle for security, and allow the shipper to track the location of merchandise during transit. Once home, refrigerators and cabinets will scan each item to manage home inventory and alert the purchaser to expired food and contamination recalls. Ovens and microwaves will scan each item and configure themselves for optimal and safe food preparation. For each of these conveniences to come to fruition, marketing, networking, database and business-to-business systems will need to be designed, developed and deployed. For those programmers lamenting the transfer of payroll systems to off-shore data-entry shops, take heart: we will be plenty busy inventing systems made possible by the worldwide adoption of RFID tracking technology.
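The walk-through checkout scenario above is, at its core, a lookup-and-total problem: every tag in the cart is read at once and resolved against a product catalog. A minimal sketch of that back-end logic, with tag IDs, product names and prices all invented for illustration:

```python
# Hypothetical product catalog keyed by RFID tag ID (all values invented).
CATALOG = {
    "E200-3411": ("Milk, 1 gal", 3.49),
    "E200-9087": ("Bread", 2.19),
    "E200-5542": ("Coffee, 12 oz", 7.99),
}

def scan_cart(tag_ids):
    """Resolve every tag read in one pass and total the purchase,
    skipping any tag not found in the catalog."""
    items = [CATALOG[tag] for tag in tag_ids if tag in CATALOG]
    total = round(sum(price for _, price in items), 2)
    return items, total

items, total = scan_cart(["E200-3411", "E200-5542", "E200-9087"])
print(total)  # -> 13.67
```

The hard part, as the paragraph notes, is not this lookup but the surrounding marketing, networking, database and business-to-business systems.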
Divide and conquer
Combining massive networking technology and interoperable computation systems, "Grid Computing" is an evolving form of distributed computing that involves the coordination and sharing of computing, application, data, storage and network resources across changing and geographically dispersed organizations. Coincidentally, in 1973, the same year the barcode was developed, the Xerox Palo Alto Research Center (PARC) created the first Ethernet network and the first distributed computing application: PARC scientists wrote a program that moved from machine to machine using idle resources. With the wide availability of the Internet, the "distributed.net" project known as "dnet" was launched in 1997 and became the first Internet distributed computing effort. It was followed closely in 1999 by the SETI@home project, which brought over two million systems to the analysis of radio telescope signals.
Utilizing technology developed by these two pioneering applications, United Devices and Intel joined forces with Oxford University on the Cancer Research Project to screen candidate cancer drugs and perform research on anthrax and smallpox. This past November, IBM and United Devices joined forces to build a new global research grid called the "World Community Grid." Its first task is the Human Proteome Folding Project, which assists in the search for novel and effective treatments for diseases like cancer, HIV/AIDS, SARS and malaria. In addition to the targeted research itself, these efforts will advance networking and distributed computation technology.
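All of these grid projects share one pattern: a coordinator splits a large job into independent work units, idle machines each process a unit, and the results are merged. A minimal sketch of that pattern, using threads as stand-ins for the grid's member machines and a trivial counting task as a stand-in for signal or protein analysis:

```python
from concurrent.futures import ThreadPoolExecutor

def work_unit(chunk):
    """Stand-in analysis task: count values matching a criterion."""
    return sum(1 for x in chunk if x % 7 == 0)

def run_grid(data, n_workers=4):
    """Split the job into independent work units, dispatch them to
    workers, and merge the partial results."""
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers - 1)]
    chunks.append(data[(n_workers - 1) * size:])  # remainder goes to the last worker
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(work_unit, chunks))

print(run_grid(list(range(1, 10_001))))  # -> 1428 multiples of 7
```

Because the work units are independent, the answer is the same no matter how many workers share the job, which is what lets a grid scale across millions of volunteer machines.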
Talk fast
To address the need for high-speed communication between computing components ranging from embedded systems, to personal computers and servers, to network equipment and supercomputers, the membership-based non-profit HyperTransport Consortium was formed recently to convert the HyperTransport interconnection technology from a proprietary system into a widely adopted, royalty-free industry-standard I/O technology. Led by founding members AMD, Alliance Semiconductor, Apple Computer, Broadcom Corp., Cisco Systems, NVIDIA, PMC-Sierra, Sun Microsystems and Transmeta, the standard currently supports up to 32-bit-wide links having a bandwidth of up to 11.2 GBps and a data throughput of up to 22.4 GBps. The technology is advancing the design of high-performance computing applications such as supercomputers and cluster computing. Because it is compatible with current PCI and PCI-X subsystems, it is easily incorporated into PCs, workstations and servers. The standard's enhanced low-power Low Voltage Differential Signaling (LVDS) uses two wires for each signal to overcome problems with the single-ended signaling of high-speed parallel buses, including bounced signals, interference and cross-talk. These features make it ideal for a wide range of embedded systems, from consumer electronics, printers and copiers, to industrial controls.
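Why two wires per signal beat one can be shown numerically. In differential signaling, the bit is encoded as the voltage *difference* between the pair, so interference that strikes both wires equally (common-mode noise) subtracts away at the receiver. The voltage values below are illustrative, not taken from the HyperTransport specification:

```python
import random

def transmit(bit, noise):
    """Encode a bit as a differential pair around a common-mode level.
    Common-mode noise hits both wires identically."""
    v = 0.6 if bit else -0.6                      # illustrative swing
    return (1.2 + v + noise, 1.2 - v + noise)     # (wire+, wire-)

def receive(pair):
    """Recover the bit from the voltage difference; the noise cancels."""
    plus, minus = pair
    return 1 if (plus - minus) > 0 else 0

bits = [1, 0, 1, 1, 0]
noise = [random.uniform(-0.5, 0.5) for _ in bits]  # per-bit interference
received = [receive(transmit(b, n)) for b, n in zip(bits, noise)]
print(received == bits)  # -> True
```

A single-ended receiver comparing one wire against a fixed threshold would be corrupted by the same noise bursts; the differential receiver never sees them.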
The big picture
When it comes to the future of tackling large models and simulations, the NASA Advanced Supercomputing (NAS) Division is providing solutions. NAS supercomputers are currently working on simulations of aerodynamic studies, ocean circulation, whole-earth climate modeling and flow studies. The Columbia supercomputer at NAS is based on 20 SGI Altix 3700 superclusters, each with 512 processors, for a total of 10,240 1.5-GHz Intel Itanium 2 processors. It boasts 20 terabytes of total system RAM, 440 terabytes of Fibre Channel RAID online storage and an archive storage capacity of 10 petabytes. In addition to running NASA simulations, Columbia is available to the national science and engineering community. Simulation results produced by the NAS facility can be visualized on the "hyperwall," a seven-by-seven cluster of flat panel screens, each driven by its own dual-processor computer and high-end graphics card. Each of the 49 computers can display, process and share data, so a single image can be displayed across all screens or configured in individual cells like a massive spreadsheet. Systems such as these require an army of highly trained technical team members to maintain, program and administer them and to interpret their results. As their availability becomes more commonplace, additional simulation and modeling applications will permit companies to design and test their products virtually, reducing costs and increasing speed to market.
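Tiling one image across the hyperwall's seven-by-seven grid comes down to integer division: each global pixel maps to a panel and a local coordinate within it. A minimal sketch, with the per-panel resolution assumed for illustration (the actual hyperwall panel resolution is not stated above):

```python
GRID = 7                        # the hyperwall is a 7x7 cluster of screens
PANEL_W, PANEL_H = 1280, 1024   # assumed per-panel resolution (illustrative)

def locate(x, y):
    """Map a global pixel of the full image to
    (screen row, screen column, local x, local y)."""
    return (y // PANEL_H, x // PANEL_W, x % PANEL_W, y % PANEL_H)

# The pixel just past the first panel's right edge lands on screen (0, 1):
print(locate(1280, 0))     # -> (0, 1, 0, 0)
print(locate(4000, 3000))  # -> (2, 3, 160, 952)
```

Because each of the 49 computers needs only its own tile of the image, the panels can render their regions in parallel, or ignore the mapping entirely and show 49 independent cells.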
Summing it up
In conclusion, anyone who believes innovation is stagnant in a world that is simply going through the motions of an industrial, consumer-based economy has not taken the time to lift their head from their own grindstone. Opportunity and competitive advantage loom at every turn. We simply need the courage to change.
References
1. See www.Qubit.org for more information.
2. J. R. Friedman, V. Patel, W. Chen, S. K. Tolpygo and J. E. Lukens, "Quantum Superposition of Distinct Macroscopic States," Nature, 406, 43 (2000).
3. See, for example, plato.stanford.edu/entries/qt-entangle
4. D. N. Matsukevich and A. Kuzmich, Science, 306, 663 (2004).
5. See, for example, nanotube.msu.edu
Bill Weaver is an assistant professor in the Integrated Science, Business and Technology Program at La Salle University. He may be contacted at firstname.lastname@example.org.