Engineering Quadrangle, Olden Street
Princeton, NJ 08544
Phone: 609.258.3500
Fax: 609.258.3745


Previous Projects - 1998

Author: Gyu-Sang Chang - Princeton University

Title: Feasibility of Building a Tabletop X-ray Microscope, Advisor - Prof. S. Chou

There are several advantages to x-ray microscopy over other forms of microscopy, among them better theoretical resolution than visible-light microscopy and better penetration of samples than electron microscopy. Unfortunately, the nature of x-rays places significant physical constraints on the design of an x-ray microscope. Specifically, because the index of refraction for x-rays is nearly unity in most media, a zone plate (a plate patterned with concentric diffraction rings) must be used to focus the x-ray beam onto the target. The wavelengths of interest require the feature sizes of these zone plates to be in the nanometer range. The difficulty of manufacturing a zone plate with such tiny features contributes to the high cost of building an x-ray microscope. The fabrication process pioneered in the Nanostructure Lab allows zone plates to be manufactured inexpensively. Unfortunately, an economical source of x-rays in the proper energy region has not yet been fully developed. The ability to manufacture zone plates at low cost, along with the development of a cheap source of sufficiently intense x-rays, could make inexpensive, tabletop x-ray microscopes a reality.
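The connection between x-ray wavelength and nanometer zone-plate features can be made concrete with a short sketch using the standard Fresnel zone-plate relation r_n = sqrt(n·λ·f). All numerical values below (a 2.4 nm "water window" wavelength, a 1 mm focal length, 500 zones) are illustrative assumptions, not figures from this report.

```python
import math

def zone_radius(n, wavelength, focal_length):
    """Radius of the n-th Fresnel zone: r_n = sqrt(n * lambda * f)."""
    return math.sqrt(n * wavelength * focal_length)

def outermost_zone_width(n_zones, wavelength, focal_length):
    """Width of the outermost zone, which sets the resolution limit."""
    return (zone_radius(n_zones, wavelength, focal_length)
            - zone_radius(n_zones - 1, wavelength, focal_length))

# Illustrative (assumed) values: soft x-rays in the "water window",
# a 1 mm focal length, and a 500-zone plate.
wavelength = 2.4e-9   # 2.4 nm
focal_length = 1e-3   # 1 mm
n_zones = 500

dr = outermost_zone_width(n_zones, wavelength, focal_length)
resolution = 1.22 * dr  # Rayleigh criterion: ~1.22 * outermost zone width
print(f"outermost zone width: {dr * 1e9:.1f} nm")
print(f"diffraction-limited resolution: {resolution * 1e9:.1f} nm")
```

Even with a modest zone count, the outermost zone width comes out in the tens of nanometers, which is why zone-plate fabrication is the bottleneck the paragraph describes.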


Author: Gwendylin Chen - Dartmouth

Title: Thermal Cooling of Light-Emitting Flat-Panel Displays, Advisor - Prof. J. Sturm

During the first week of the program, we were oriented to the Engineering Quad and the Princeton Summer Institute program. Alongside our program was the Princeton Materials Institute, sponsored by the Materials Science Department. My first two weeks were spent setting up equipment and repeating the experiments done by last year's summer student. OLEDs (organic light-emitting diodes) are built on glass coated with a layer of ITO (indium tin oxide). ITO conducts but has appreciable resistance, so when a current passes across the OLED, heat is generated. Our hypothesis was that once an OLED reaches a large enough size, the heat generated will melt it. Our objective was to find a way to cool the OLED so that larger OLEDs can be built and used safely. I cut various sizes of OLED substrates, from 1 cm squares to 20 cm squares, and soldered thin strips of indium solder along two opposite sides of each piece. The solder was always on the ITO side of the glass, so that the applied voltage would spread evenly through the OLED. I then clamped the pieces with rubber clamps and applied 220 W/m² to each OLED according to its size. After about 15 to 20 minutes, the OLED temperature rises less rapidly and comes to a plateau. I measured the temperature in the middle of each OLED, at the sides, and at the corners. The temperatures at the sides and corners were lower than in the middle, with the difference ranging from 3 degrees in the small OLEDs to about 8 degrees in the larger ones. Seeing the temperature difference grow with OLED size, we tried cooling the OLEDs by lowering the side temperature in addition to normal convection and radiation. Within a reasonable price range, aluminum and copper had the highest thermal conductivity, so I bought aluminum and copper flashing from hardware stores and the roofing facilities maintenance department.
I cut the thin metal sheets into fins that framed the OLED, leaving 3-5 millimeters extra for soldering the fins onto the OLEDs. Soldering the metal onto the OLED was a big problem: I couldn't use regular solder because the OLED glass would not tolerate that much heat, so I had to use indium solder. However, indium solder did not adhere well to aluminum or copper. Soldering worked for fins up to 5 cm in width, but for anything larger the solder would not support the weight of the fin. Next, I looked for an adhesive with good thermal conductivity. The available adhesive that fit my criteria turned out to be hard to apply and required 8 hours of curing time. Toward the end of the summer, I finally decided to try regular epoxy. My hypothesis was that if I applied a thin enough layer of epoxy, thermal transfer would not be a problem. I tested this on the 3 cm squares and found the hypothesis confirmed: the temperature difference between the metal close to the line of epoxy and the OLED close to the line of epoxy was zero to one degree Celsius. Given the accuracy of the thermocouple and measurement errors, this difference was negligible. I did not have enough time to finish epoxying fins onto OLEDs after finding that regular epoxy works just as well as soldering, but this could be a perfect project for the next Princeton Summer Institute student.
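The plateau behavior described above is what a simple lumped-capacitance model predicts: the temperature rise saturates once the applied power density is balanced by convective and radiative losses. The sketch below uses the 220 W/m² figure from the report, but the heat-transfer coefficient and glass properties are illustrative assumptions, not measured values.

```python
import math

# Lumped-capacitance sketch of the OLED heating curve.
q = 220.0     # applied power density, W/m^2 (from the report)
h = 15.0      # combined convection + radiation coefficient, W/(m^2 K), assumed
rho = 2500.0  # glass density, kg/m^3 (typical value, assumed)
c_p = 840.0   # glass specific heat, J/(kg K) (typical value, assumed)
d = 1e-3      # glass thickness, m (assumed)

delta_T_plateau = q / h          # steady-state rise above ambient
tau = rho * c_p * d / h          # thermal time constant, seconds

def temp_rise(t):
    """Temperature rise above ambient after t seconds of heating."""
    return delta_T_plateau * (1.0 - math.exp(-t / tau))

print(f"plateau rise: {delta_T_plateau:.1f} K, time constant: {tau / 60:.1f} min")
print(f"rise after 20 min: {temp_rise(20 * 60):.1f} K")
```

With these assumed numbers the temperature rise is within a percent of its plateau after roughly 15-20 minutes, matching the qualitative behavior the report observes.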


Author: Nick Crudele - Polytechnic University

Title: Two-Dimensional Quantum Semiconductor Structures, Advisor - Prof. D. Tsui

During the summer of 1998 I took part in a research internship program known as the Princeton Summer Institute at Princeton University. I had the honor of joining Professor Daniel C. Tsui's research group, which allowed me to study the physics and fabrication of AlGaAs/GaAs-based reduced-dimensional semiconductor heterostructures, with emphasis on their low-temperature physics. The most rewarding aspect of this program was the way it tied theoretical assertions to practical matters. For example, to verify the occurrence of Shubnikov-de Haas oscillations, one must prepare the sample and experimental apparatus in an appropriate manner. During this internship, a series of steps were taken which I believe led to a coherent plan for learning. First, I read a range of publications on low-dimensional physics before the program formally began; the literature covered both theoretical and experimental issues. Once I formally began my internship on June 14, I was introduced to the methodology by which samples were prepared. Having prepared the samples, it was then necessary to ensure proper functionality of each sample before running time-consuming and costly cryogenic tests at 4He (4.2 K) and 3He (300 mK) temperatures. Finally, transport measurements were taken by sweeping a magnetic field perpendicular to the sample. The results of these measurements allowed each sample to be characterized by its resistivity, carrier density, and mobility. In a nutshell, the measurements taken from the sample are tied to various aspects of the Hall effect (i.e., integral and fractional). Underlying the simple plots produced over the internship, mainly those of the magneto-resistance and the Hall resistance, are magnificent and elegant correlations.
It is noted that the system responds as a whole, since each plateau in the Hall resistance corresponds to a well-developed dip in the magneto-resistance known as a Shubnikov-de Haas oscillation. Although the Hall effect is one of the main foci of the group, it was necessary to delve into the field in search of theory that would complement this phenomenon. For example, all of the contacts on my samples were laid out in the van der Pauw geometry; therefore, the same configuration that was used to measure the magneto-resistance was also used to calculate the resistivity through the van der Pauw method. Aside from processing and testing the samples, an apparatus, a dipping probe, was designed and constructed to produce a magnetic field on the order of 5 Tesla for running quicker and cheaper tests than the closed-cycle He refrigerator allows. Unlike the He refrigerator, which can reach 300 mK, the dipping probe operates primarily at the liquid-helium temperature of 4.2 K. The major pitfall, which brought my research to a complete halt, was the failure of the magnet in the dipping probe: building another one consumed a great deal of time and, more importantly, further testing could not take place. Since the dipping probe is so simple, time-saving, and cost-effective for running tests, serious thought should be given to extending it so that lower temperatures can be achieved, perhaps as a project for next summer.
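As a sketch of the van der Pauw calculation mentioned above, the sheet resistance R_s can be recovered from two four-terminal resistance measurements R_A and R_B by numerically solving the van der Pauw relation exp(-πR_A/R_s) + exp(-πR_B/R_s) = 1. The solver below uses bisection; the 100 Ω input is an illustrative value, not data from the report.

```python
import math

def van_der_pauw_sheet_resistance(r_a, r_b, tol=1e-9):
    """Solve exp(-pi*R_A/R_s) + exp(-pi*R_B/R_s) = 1 for R_s by bisection."""
    def f(r_s):
        return (math.exp(-math.pi * r_a / r_s)
                + math.exp(-math.pi * r_b / r_s) - 1.0)
    # f is monotonically increasing in r_s: negative for tiny r_s,
    # approaching +1 for large r_s, so a wide bracket always works.
    lo, hi = 1e-6, 1e9
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Symmetric check: with R_A = R_B = R the relation reduces to
# R_s = pi * R / ln 2.
r = 100.0  # ohms, illustrative measurement
r_s = van_der_pauw_sheet_resistance(r, r)
print(f"sheet resistance: {r_s:.2f} ohm/sq "
      f"(closed form: {math.pi * r / math.log(2):.2f})")
```

From R_s and the Hall measurements taken in the same contact geometry, the resistivity, carrier density, and mobility quoted in the report follow directly.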


Author: Christine Coldwell - Princeton University

Title: Subsystem Design Issues in OTDM Networks, Advisor - Prof. P. Prucnal

For my NSF research this summer I worked in Professor Prucnal’s Lightwave Communications Lab at Princeton University. My research focused on several subsystem design issues pertaining to the construction of a 100 Gbps optical time-division multiplexed (OTDM) network. In a time-division multiplexed network, precise timing and synchronization are essential for optimum performance, since the different transmitted channels of binary-encoded data are each assigned a time slot and interleaved together. Precise timing in a fiber optic network is inextricably linked to precise fiber lengths. For instance, in a 100 Gbps network, each channel has a time slot that is 10 ps wide, which means the fiber lengths must be accurate on the order of microns. One aspect of my research involved precisely measuring the time delay of different fiber components in order to determine what fiber lengths were required to interconnect the components in the network. The technique I used to make this precision timing measurement involved viewing the output of a pulsed laser source on a digital oscilloscope and marking one of the pulses as a reference. The unknown fiber length to be measured was then inserted between the laser and the oscilloscope, and the shift in the reference pulse was measured. By first using a low-repetition-rate laser at 1 MHz, a rough estimate of the fiber delay was determined to within 100 ps. A higher-repetition-rate picosecond pulsed laser at 2.5 GHz was then used to determine the delay to within picoseconds. The second aspect of my research focused on determining the sensitivity of different receivers in detecting picosecond pulses. Currently, most receivers are designed to detect NRZ laser light, in which the presence of light, indicating a binary "one," occupies the entire time slot. However, since the 100 Gbps OTDM network is designed to use a pulsed laser source, the time slots, which are 10 ps wide, contain a pulse of laser light that is only about 1 ps wide.
Lucent’s OC-48 receiver and Hewlett-Packard’s OC-48 receiver were evaluated. The Lucent receiver did not perform well with picosecond pulses as the input because of bandwidth-limiting electronics that follow the avalanche photodetector. The Hewlett-Packard receiver, however, demonstrated remarkable sensitivity, most likely because of its higher-bandwidth detector and electronics. With incoming laser pulses at 2.5 Gbps and an average input power of -34 dBm to the Hewlett-Packard OC-48 receiver, there was only one error in a sequence of 10^13 bits. Using a bit error rate tester to measure error ratios corresponding to different input powers, I constructed a plot of the error ratio versus the average input power to the receiver. Using the analog output of the receiver and a digital oscilloscope, I also compiled a collection of eye diagrams corresponding to the different error ratios. This evaluation proved that the Hewlett-Packard OC-48 receiver met the specifications for the 100 Gbps optical network.
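The link between the timing measurements above and physical fiber length can be sketched numerically. The group index of 1.468 below is a typical assumed value for standard single-mode fiber near 1550 nm, not a number taken from the report.

```python
# Convert between propagation delay and fiber length.
C = 299_792_458.0   # speed of light in vacuum, m/s
N_GROUP = 1.468     # assumed group index of standard single-mode fiber

def delay_to_length(delay_s):
    """Fiber length corresponding to a measured propagation delay."""
    return delay_s * C / N_GROUP

def length_to_delay(length_m):
    """Propagation delay through a given fiber length."""
    return length_m * N_GROUP / C

# A 1 ps delay shift corresponds to a fraction of a millimeter of fiber,
# and a full 10 ps channel slot to about 2 mm.
print(f"1 ps of delay  = {delay_to_length(1e-12) * 1e3:.3f} mm of fiber")
print(f"10 ps slot     = {delay_to_length(10e-12) * 1e3:.2f} mm of fiber")
```

This is why the two-stage measurement (coarse at 1 MHz, fine at 2.5 GHz) matters: picosecond-level delay accuracy translates directly into sub-millimeter tolerances on the interconnecting fiber lengths.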


Author: Irina Medvedev - George Mason University

Title: Signal Processing for Wireless Communications: Multi-User Detection in Code-Division Multiple-Access (CDMA) Channels, Advisor - Prof. H. V. Poor

One of the many problems in wireless communications is multiple-access interference. This problem is encountered when two or more users access the same channel, and it is due to the imperfect, non-orthogonal spread-spectrum signature waveforms of the users. The solution is multi-user detection in code-division multiple-access channels. Because the spread-spectrum signature waveforms are not orthogonal, the standard hard decision on the matched-filter outputs is no longer sufficient. Thus, a different decision algorithm that uses the outputs of the matched-filter receiver together with knowledge of the signature waveforms must be implemented. Two of the decision algorithms investigated were the Expectation-Maximization (EM) algorithm and the Hidden Parameter EM (HPEM) algorithm. After simulating the two iterative algorithms in Matlab, it was observed that, when trying to identify a certain user's bits, the HPEM algorithm produced a lower probability of error than the EM algorithm. Various initial-stage decisions, such as the conventional receiver, the decorrelator receiver, and the soft-decorrelator receiver, were also considered. It was concluded that the HPEM receiver with a soft-decorrelator initial stage produced the best results, i.e., the lowest probability of error. After determining the receiver with the best performance, the dependence of the probability of error on the number of users (K), the signal-to-noise ratio (SNR), and the cross-correlation of the signature waveforms (rho) was investigated. It was determined that in the equi-correlated, equi-power case, there exists some rho, dependent on the number of users, at which the decorrelator decisions produce a lower probability of error than the HPEM decisions. In addition, it was observed that this breakdown rho, at which the HPEM receiver performs worse than the decorrelator receiver, decreased as the number of users increased and as the SNR decreased.
Thus, for a given number of users and SNR, the performance of both algorithms worsened as rho increased. The HPEM algorithm, however, degraded at a faster rate than the decorrelator, which is why the HPEM algorithm breaks down at some rho. One possible improvement is to use signature waveforms that produce a negative cross-correlation coefficient rho. It was observed that with a negative rho, the performance of the HPEM receiver improved significantly over the decorrelator for a large number of users; for a small number of users, the performance of the two algorithms was about the same. Research in multi-user detection is ongoing and far from complete. Multi-user detection is just one of many problems undergoing improvement in the world of wireless communications.
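As a toy version of the detection problem described above (not the report's actual Matlab simulation, which used the EM and HPEM algorithms), the sketch below compares the conventional matched-filter detector with the decorrelator initial stage for a two-user synchronous channel with cross-correlation rho. All parameters are illustrative assumptions.

```python
import math
import random

# Two-user synchronous CDMA: matched-filter outputs y = R*b + n, where
# R = [[1, rho], [rho, 1]] and the noise covariance is sigma^2 * R.
random.seed(0)
rho = 0.4        # cross-correlation of the signature waveforms (assumed)
sigma = 0.5      # noise standard deviation (assumed)
trials = 20_000

conv_err = dec_err = 0
for _ in range(trials):
    b = [random.choice((-1, 1)), random.choice((-1, 1))]
    # Correlated Gaussian noise with covariance sigma^2 * R.
    g1, g2 = random.gauss(0, 1), random.gauss(0, 1)
    n1 = sigma * g1
    n2 = sigma * (rho * g1 + math.sqrt(1 - rho ** 2) * g2)
    y1 = b[0] + rho * b[1] + n1
    y2 = rho * b[0] + b[1] + n2
    # Conventional receiver: hard decision directly on the matched-filter output.
    if (y1 > 0) != (b[0] > 0):
        conv_err += 1
    # Decorrelator: apply R^{-1} before the hard decision.
    det = 1 - rho ** 2
    z1 = (y1 - rho * y2) / det
    if (z1 > 0) != (b[0] > 0):
        dec_err += 1

print(f"conventional BER ~ {conv_err / trials:.4f}")
print(f"decorrelator BER ~ {dec_err / trials:.4f}")
```

At this moderate positive rho the decorrelator removes the interference at the cost of some noise enhancement and still wins; as rho grows, that noise enhancement (proportional to 1/(1 - rho^2)) is what drives the breakdown behavior described above.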


Author: Jessica Nelson - University of Illinois

Title: Integrated Circuit Fabrication for Undergraduate Laboratory Processes, Advisor - D. Marcy

The undergraduate laboratory included in ELE 208 consists of the fabrication of an integrated semiconductor chip containing several devices, including solar cells and transistors. The changes addressed this summer were dry etching using a plasma etching system, fabrication of NMOS chips in place of the traditional PMOS chips, the self-aligned process, and polysilicon gates. Progress was made on all of these changes this summer. In Experiment 1, NMOS chips processed with dry etching showed favorable threshold voltages. In Experiment 2, the NMOS chip using the self-aligned process and an evaporated silicon gate tested well. Experiment 3 used a statistical method to identify the pressure and power that etch oxide most effectively with the highest selectivity. It was concluded that although progress was made on all aspects of the processes mentioned above, it is impossible to make any permanent alterations to the laboratory process in a single summer.


Author: Krishnan Padmanabhan - University of Michigan

Title: Reliability of OLEDs and Triazoles as Electron Transport Layers, Advisor - Prof. S. Forrest

Through the course of my internship this summer, I worked on both research software and experimental analysis of organic light-emitting devices (OLEDs). OLEDs are displays created by applying thin films of organic materials to a substrate; when a voltage is applied across them, they emit light. In the first portion of my work, I wrote software to aid research on the reliability of OLEDs. This entailed learning Visual Basic in order to provide a graphical user interface for individuals in the research lab, and to allow my code to be integrated into an existing program that performed the actual reliability test. The current setup consists of a photodiode taking in data from the sample and processing this information through hardware specifically designed for this research. Once the data is taken into a computer, which happens as the OLED is being tested, my program allows the data to be plotted and viewed as the researcher wishes. It also provides the capability to print and save professional, detailed graphs for presentations and publications. This software was of interest to the individuals researching the reliability of OLEDs because of the increased efficiency it brings to determining the quality of the devices produced and documenting their work. The second portion of my work dealt with electron transport layers (ETLs) of OLEDs. I examined a group of prospective ETLs, known as triazoles, exploring their properties as thin films and as ETLs in actual devices. This work entailed the use of a bell jar and an ellipsometer, and included exposure to a clean room. I began this portion of my work by creating thin films (500 angstroms) on silicon substrates for each of the seven proposed ETLs by way of high-vacuum evaporation. After each of the seven films was created, I examined it using an ellipsometer housed within a clean room.
The ellipsometer provided approximate thickness values, which could be used to calibrate the density of the film for each material individually. Upon completing the thickness calculations, I examined the structure of each film using a microscope. A crystalline lattice, shown as a dotted view under the microscope, reduced the chances of the material working successfully as an ETL, while an amorphous structure, indicated by a smooth view under the microscope, indicated more probable success for the organic when laid on ITO. Of the seven samples with which I started, only four had amorphous structures. I proceeded by making an OLED for each of these samples, using one amorphous organic as the ETL in each. I also created a control OLED, which consisted of all the layers necessary to create an OLED except the ETL. Currently, these various samples are being compared through testing of their respective devices. The data I have recorded, in combination with the test results, will allow a conclusion as to which of the triazoles perform well as an ETL and can therefore be used for further study.


Author: Mark Palmeri - Duke University

Title: Wet Process Deposition of Silver Cathodes for Electroluminescent Device Fabrication, Advisor - Prof. J. Sturm

Fabrication of cathodes for organic light-emitting devices (OLEDs) typically involves evaporation of Mg:Ag contacts onto an organic polymer consisting of PVK/PBD/C6. It is desired to simplify cathode fabrication to an all-wet process that can be applied toward ink-jet printing of OLEDs. Silver cathodes have been successfully deposited onto blended organic layers utilizing a redox reaction between Ag(NH3)2OH and sodium potassium tartrate tetrahydrate (Rochelle salt), with the incorporation of a phase-transfer catalyst (tetrabutylammonium tetrafluoroborate) into the organic polymer. These devices yielded a luminance of 87 cd/m², with V_on = 20 V and a quantum efficiency of 0.02%.


Author: James Rice - George Mason University

Title: Analog VLSI performing Digital Signal Processing Algorithms, Advisor - Prof. H. V. Poor

Code-division multiple-access (CDMA) receiver formats provide an efficient method of communication for cellular systems. With the use of digital signal processing algorithms, many of the errors occurring in the received signal may be corrected or removed. These algorithms vary in implementation complexity, causing varying amounts of delay. One method of reducing these delays would be to implement the signal processing algorithms using analog VLSI technology instead of the usual programmable digital signal processing technology. The use of VLSI circuits would reduce the delays to only those caused by the parasitic resistance and capacitance of the VLSI components. The algorithm used in this research is an Expectation-Maximization algorithm with a complexity of O(K^2), where K is the number of users in the system.


Author: Peter Yeh - Yale University

Title: Roller NanoImprint Lithography, Advisor - Prof. S. Chou

Nanoimprint lithography (NIL) is a revolutionary lithographic technique that offers sub-10 nm feature sizes, high throughput, and low cost, a combination currently impossible with conventional lithography methods. It relies on physical deformation of the resist rather than a chemical change, as in traditional lithography techniques. Nanoimprint lithography has demonstrated 6 nm feature size, 70 nm pitch, vertical and smooth sidewalls, and nearly 90° corners. Further experimental study indicates that the ultimate resolution of nanoimprint lithography could be below 5 nm. Two types of nanoimprint lithography are possible: flat nanoimprint lithography (NIL) and roller nanoimprint lithography (R-NIL). Compared with flat NIL, R-NIL offers better uniformity, requires less force, and can repeat a mask continuously over a large substrate. Two methods for R-NIL were developed: (1) rolling a cylinder mold over a flat, solid substrate; (2) laying a flat mold directly on a substrate and driving a smooth roller over it. Using our current roller nanoimprint system, sub-100 nm resolution in pattern transfer has been achieved.


Contents copyright © 2002
Princeton University
Department of Electrical Engineering
All rights reserved.