Looking to capitalize on the ability of organic semiconductors to host mobile ions, a Cornell University materials scientist proposes bonding mobile ions to the surfaces of these semiconductors instead of doping them. Circuitry made from organic polymers bonded to mobile ions can be optimized in ways that are impossible for doped semiconductors, opening the door to new functionality.
Monday, September 25, 2006
With optical processing migrating from exotic gallium arsenide devices to inexpensive silicon, Intel Corp. demonstrated a research chip earlier this year that achieved the world's first Raman lasing in a silicon waveguide. An all-silicon device, it had dynamically tunable wavelengths but was not very scalable, since it required an off-chip laser as an optical pump. Now, Intel is describing a scalable on-chip indium phosphide laser bonded to an all-silicon waveguide. Such an on-chip laser could supply the missing link between optics and electronics by performing both functions on the same photonic chips.
Monday, September 18, 2006
Ever wonder how blurry surveillance video images can be admissible as evidence in court? Software tools like Sarnoff Corp.'s VideoDetective mine the hidden data in such images to reconstruct their details clearly in still shots. But such tools are affordable only to large corporations and government agencies. Now a service called Sarensix permits private contractors to "farm out" the forensic evidence they gather from surveillance videos. The service was created by Sarnoff (Princeton, N.J.), a government contractor that fabricates custom ICs and ultrasmall video systems and software. Using data fused from video, infrared and other sensors, Sarnoff's security systems guard government installations and assist troops in the field. VideoDetective combines information from many frames of video to reconstruct sharp, telling still images from indistinct footage. Smaller customers can use Sarensix to have their surveillance videos processed in VideoDetective by a Sarnoff-trained professional.
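The core idea of pulling a sharp still out of indistinct video is that noise differs from frame to frame while the scene does not, so aligning and averaging many frames recovers detail no single frame shows. The sketch below is a minimal illustration of that principle only; the `register_shift` and `fuse_frames` names, the brute-force integer-shift search, and all parameters are assumptions for illustration, not Sarnoff's actual VideoDetective algorithm.

```python
import numpy as np

def register_shift(ref, frame, max_shift=3):
    """Find the integer (dy, dx) shift that best aligns `frame` to `ref`
    by exhaustive search over a small window (minimum squared error)."""
    h, w = ref.shape
    m = max_shift
    best, best_err = (0, 0), np.inf
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            a = ref[m:h - m, m:w - m]
            b = frame[m + dy:h - m + dy, m + dx:w - m + dx]
            err = np.mean((a - b) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

def fuse_frames(frames, max_shift=3):
    """Align every frame to the first one and average the stack,
    suppressing the per-frame noise that obscures any single still."""
    ref = frames[0].astype(np.float64)
    acc = ref.copy()
    for f in frames[1:]:
        f = f.astype(np.float64)
        dy, dx = register_shift(ref, f, max_shift)
        # undo the estimated camera shift before accumulating
        acc += np.roll(f, (-dy, -dx), axis=(0, 1))
    return acc / len(frames)
```

Averaging N aligned frames cuts the noise variance by roughly a factor of N, which is why a still fused from many frames can show detail that is invisible in each individual frame. A production system would add subpixel registration and motion handling on top of this.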
Monday, September 11, 2006
Yes, we're safer--but. It's the answer, with a caveat, to the question that will be on everyone's mind today. But from a technologist's perspective, there are other questions to ask about where we stand five years after the 9/11 attacks. The country has spent billions on technology upgrades to detect and defuse new threats. Have we invested wisely? Are the technologies being deployed effectively? What more can be done? After 9/11, there was an explosion of research and development in sensor technologies, several of which have been deployed. But other technologies are languishing in red tape, according to analysts.
As chip dimensions shrink, picometer variability among nanoscale dimensions and the uneven distribution of dopants stand in the way of further miniaturization. Use of a precisely designed organic molecule as the memory storage element could provide one solution, because the molecules could be mass-produced to be identical. Recently, the IBM Research Laboratory (Zurich, Switzerland) demonstrated one such molecule, which it claims can be electrically switched between two stable states to store a bit.
Monday, September 04, 2006
By mimicking the way a fly's brain interprets images coming in through its eyes, an algorithm created by a researcher at Australia's University of Adelaide lets digital cameras "see" more clearly. Today, all cameras must be adjusted to capture only part of the available dynamic range. Scenes that involve large differences in brightness between their shadows and highlights are particularly difficult to capture. The photographer can adjust the camera to capture either shadows or highlights, but cannot optimally capture both simultaneously. The human eye is similarly hampered, but it compensates by quickly adjusting the diameter of the pupil when scanning a scene--making it larger to take in shadow details, then smaller for taking in highlight details--so that people do not often notice that they can't view both simultaneously. Insect eyes, on the other hand, appear to be able to record both shadows and highlights at the same time. At the University of Adelaide, postdoctoral research fellow Russell Brinkworth tested this theory by directly recording images from the brain cells of a fly, then crafting an algorithm to mimic the observed behaviors. The result is an algorithm that can accept inputs from a camera's sensor, process them and recover information that would otherwise be lost, enabling the camera to record clear scenes with detail in both the shadowed areas and the highlights.
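The article does not spell out Brinkworth's algorithm, but one standard way to model the kind of local adaptation insect photoreceptors perform is divisive normalization: each pixel is scaled by the mean brightness of its neighborhood, so dim and bright regions both keep their local contrast. The sketch below assumes that approach purely for illustration; the `box_blur` and `adapt` names and the `radius` and `eps` parameters are invented here, not taken from the research.

```python
import numpy as np

def box_blur(img, radius):
    """Local mean via a separable box filter along rows, then columns."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def adapt(img, radius=8, eps=0.05):
    """Divisive local adaptation: normalize each pixel by its
    neighborhood mean so shadows and highlights both retain detail.
    `eps` keeps the division stable in near-black regions."""
    local = box_blur(img, radius)
    out = img / (local + eps)
    return out / out.max()  # rescale to [0, 1] for display
```

A global exposure adjustment can only trade shadow detail against highlight detail; because the gain here varies across the image, both regions end up with usable contrast after normalization, which is the effect attributed to the fly's visual processing.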
Digitizing three-dimensional objects today is a tedious process requiring the user to trace an object's outlines using a tethered stylus. And even after laboriously running the stylus over every nook and cranny, the user captures only the object's shape; its other properties cannot be determined. Now researchers with the Virtual Reality Lab of the State University of New York at Buffalo have created a thimble-like fingertip digitizer that not only eliminates the stylus but also captures an object's material properties (hardness, homogeneity, texture) as well as its shape. Further, the digitizer can double as a universal input device, allowing a machine to interpret a user's gestures.