Search Constraints
Filtering by:
Campus: Sacramento
Department: Electrical and Electronic Engineering
Search Results

21. A fuzzy approach for cell counting in poorly-illuminated images applied to a cell-phone microscope
- Creator:
- Rahimzadeh Soumesaraei, Mehdi
- Description:
- A blood cell count is a common diagnostic tool in medicine, and one way to obtain such a count is from an image of a blood smear. Researchers at the Center for Biophotonics Science and Technology (CBST) at the University of California, Davis have developed an attachment that converts a cell phone into a microscope. The images provided by this cell-phone microscope suffer from several artifacts, such as radial distortion and non-uniform illumination. The goal is a smart-phone software application that performs the image processing and pattern recognition needed to return an approximate blood count. In this work, prototype software has been developed on a personal computer (PC) that performs the complete image-processing and pattern-recognition procedure to provide an approximate red blood cell count. To perform the count, images of a blood sample taken with a smart phone are transferred to a PC for processing. Radial distortion correction and cropping of the defocused area of the image are done as pre-processing steps in preparation for robust cell recognition. Adaptive multi-level segmentation is performed as the second step to transform the image into a fuzzy scene, followed by the red cell recognition step. A fuzzy approach is taken for red cell recognition; the approach presented in this work uses fuzzy sets rather than fuzzy logic. The adaptive image fuzzification and fuzzy criterion functions proposed in this thesis outperform conventional counting methods. The proposed approach is robust against the fuzziness of the image caused by the poor quality of a cell-phone image taken under non-laboratory conditions. The recognition process in this application is a blind search method that is independent of manual calibration and learning. Most of this work has been dedicated to enhancing the cell recognition algorithm even for poorly-illuminated images. This work focuses on red blood cell counting; however, the concept can be extended to counting other blood smear components, such as white blood cells and platelets. The algorithm was tested on seven blood smear images, and the average values for precision and recall are 95.6 percent and 95.4 percent, respectively. (An illustrative sketch of the fuzzy counting idea follows this record.)
- Resource Type:
- Thesis
- Campus:
- Sacramento
- Department:
- Electrical and Electronic Engineering
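
The thesis's exact fuzzification and criterion functions are not given in the abstract and are not reproduced here. The following is a minimal Python sketch of the general idea the abstract describes: compensate for non-uniform illumination, map pixels to a fuzzy membership in [0, 1], and count connected regions whose area is plausible for a red blood cell. The membership function, filter size, threshold, and area bounds are illustrative assumptions, not the author's values.

```python
import numpy as np
from scipy import ndimage

def count_cells(gray, cell_area_range=(80, 400)):
    """Rough fuzzy-set-style cell counter for a poorly illuminated image.

    gray: 2-D float array in [0, 1]. The area bounds are hypothetical.
    """
    # Estimate the slowly varying illumination field and remove it,
    # a stand-in for the thesis's adaptive pre-processing.
    background = ndimage.uniform_filter(gray, size=51)
    flat = gray - background

    # Map each pixel to a fuzzy membership in "cell interior":
    # pixels darker than the local background get membership near 1.
    spread = flat.std() + 1e-9
    membership = 1.0 / (1.0 + np.exp(flat / spread))   # illustrative sigmoid

    # Defuzzify with a mid-level cut and count connected components
    # whose size is plausible for a red blood cell.
    labels, n = ndimage.label(membership > 0.5)
    sizes = np.bincount(labels.ravel())[1:]            # component areas, background excluded
    lo, hi = cell_area_range
    return int(np.sum((sizes >= lo) & (sizes <= hi)))
```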

- Creator:
- Rahimi-Ardabily, Ali
- Description:
- This thesis discusses the different strategies used to perform peak load shaving by means of distributed generation and energy storage systems from the utility's perspective. Peak load shaving, sometimes referred to as load leveling or peak shifting, consists of the schemes used to eliminate the peaks and valleys in the load profile. This practice offers vast benefits to utilities in generation cost, line loss reduction, and voltage support, which are discussed further in the thesis. Prior work on peak load shaving has mainly focused on approaches such as linear and dynamic programming, and on heuristic approaches such as particle-swarm optimization. The algorithm proposed here is based on a simple approach that compares the load profile with its average over a certain period and shares the charge/discharge among the energy storage devices according to defined weighting factors. In particular, the thesis focuses on the use of Battery Energy Storage Systems (BESS) and distributed generation to accomplish this task. Results show that the proposed algorithm offers a simple, fast, and effective way to perform peak load shaving without the heavy computational burden often needed in other methods. As a result, it can easily be implemented in the utility's main substation for controlling the charge/discharge of storage devices throughout the distribution system. (A sketch of the average-comparison dispatch rule follows this record.)
- Resource Type:
- Thesis
- Campus:
- Sacramento
- Department:
- Electrical and Electronic Engineering
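
The thesis's full control scheme is not reproduced here; the sketch below only illustrates the rule the abstract describes, comparing each interval's load with the period average and splitting the charge/discharge among storage units by fixed weighting factors. The function name, the per-unit power and energy limits, and the simple energy bookkeeping are illustrative assumptions.

```python
import numpy as np

def peak_shave(load, weights, p_max, e_max, dt=1.0):
    """Average-comparison peak-shaving sketch.

    load    : 1-D array of demand per interval (kW)
    weights : per-storage-unit sharing factors, assumed to sum to 1
    p_max   : per-unit charge/discharge power limit (kW)
    e_max   : per-unit energy capacity (kWh)
    Returns the shaved net load seen by the substation.
    """
    load = np.asarray(load, dtype=float)
    weights = np.asarray(weights, dtype=float)
    soc = np.full(weights.shape, 0.5 * e_max)    # start half full (assumption)
    avg = load.mean()                            # average over the period
    net = np.empty_like(load)

    for t, demand in enumerate(load):
        surplus = demand - avg                   # >0: peak, discharge; <0: valley, charge
        share = np.clip(weights * surplus, -p_max, p_max)
        # Respect each unit's stored energy (discharge) or headroom (charge).
        share = np.minimum(share, soc / dt)
        share = np.maximum(share, -(e_max - soc) / dt)
        soc -= share * dt
        net[t] = demand - share.sum()
    return net
```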

- Creator:
- Sanchez, Saul
- Description:
- Traumatic brain injury (TBI) is a serious health problem that can lead to permanent disability or death. A TBI may cause two major types of intracranial hemorrhage: subdural hematoma (SDH) and epidural hematoma (EDH), of which subdural hematomas are the more common. Acute SDH/EDHs are associated with a high mortality rate and thus require immediate surgical treatment. Complications due to an SDH/EDH include seizures, temporary or permanent numbness, dizziness, headaches, coma, and death. The Glasgow Coma Scale (GCS) is the most commonly used method of diagnosis to determine whether a person needs to be hospitalized to test for the presence of an SDH/EDH. The current technologies for detecting an SDH/EDH, computed tomography (CT) scans and magnetic resonance imaging (MRI), require the patient to be hospitalized. Lawrence Livermore National Laboratory is currently developing a portable device that uses micro-power impulse radar (MIR) to aid in the rapid detection of SDH/EDHs. The device, which is currently undergoing clinical trials, has successfully detected a large EDH. This thesis describes a phantom study performed to determine whether an intracranial hematoma as small as 1 cc can be detected using the device. If a small hematoma is diagnosed, the device would allow constant monitoring for further volume growth. A benchtop experiment used porcine brain tissue, blood, and the upper portion of a human skull to simulate a human head; a latex pouch containing blood simulated an intracranial hematoma. The data obtained showed that the hematoma detector was able to detect an SDH as small as 1 cc. The hematoma was then increased incrementally in volume to observe its effect on the return signal, and it was observed that as the volume increased, the detected return signal amplitude was altered in a non-linear manner.
- Resource Type:
- Thesis
- Campus:
- Sacramento
- Department:
- Electrical and Electronic Engineering

- Creator:
- Alharbi, Kamal Abdullah
- Description:
- The resolution of a microscope is limited by diffraction, the aperture, and the optical lens. Superresolution (SR) methods improve resolution beyond the diffraction limit. Structured illumination (SI) is an SR method in which several non-redundant low-resolution (LR) images of the same object are acquired and fused to produce a high-resolution (HR) image. In this thesis, an alternative method is developed and evaluated for fusing LR images obtained using SI to produce HR images. The method advocates the use of the L1 norm with total variation (TV) regularization to address the shortcomings of existing image reconstruction based on Wiener-like deconvolution, and it is applicable to the reconstruction of grayscale images. The work also justifies some practical assumptions that greatly reduce the computational complexity and memory requirements of the proposed method. The work introduces the Peak Signal to Standard Error of the Estimate Ratio (PSSEER) as a quantitative measure of image quality. Subjective and objective evaluations are consistent in showing that L1/TV optimization resolves more detail than Wiener-like deconvolution, and the proposed method performs better both in the absence of noise and in the presence of either Gaussian or Poisson noise. (A sketch of a PSSEER-style metric follows this record.)
- Resource Type:
- Thesis
- Campus:
- Sacramento
- Department:
- Electrical and Electronic Engineering
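
PSSEER is defined in the thesis itself and its definition is not given in this abstract, so the sketch below simply assumes a PSNR-like form in which the standard error of the estimate replaces the root-mean-square error. Treat the formula, the degrees-of-freedom correction, and the choice of peak value as assumptions rather than the author's definition.

```python
import numpy as np

def psseer_db(reference, estimate, model_params=2):
    """Hypothetical PSSEER: peak signal over the standard error of the estimate, in dB.

    The standard error of the estimate is taken as sqrt(SSE / (n - model_params)),
    by analogy with regression; this is an assumed reading of the acronym.
    """
    ref = np.asarray(reference, dtype=float).ravel()
    est = np.asarray(estimate, dtype=float).ravel()
    n = ref.size
    sse = np.sum((ref - est) ** 2)
    see = np.sqrt(sse / max(n - model_params, 1))
    peak = ref.max()                      # assumed peak signal value
    return 20.0 * np.log10(peak / (see + 1e-12))
```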

- Creator:
- Mehciz, Burton
- Description:
- The affordability, small size, and minimal power requirements of microelectromechanical systems (MEMS) accelerometers make them common in portable applications. The challenge with MEMS accelerometers is that they are subject to stochastic and deterministic errors. This work examines the benefits and limitations of averaging-based fusion techniques for homogeneous MEMS accelerometers. A gradient descent technique and two linear least squares approximation methods are also developed for accelerometer calibration. A new calibration method, called the moments technique, has low computational requirements and demonstrates accuracy similar to the gradient descent calibration method when accelerometer measurement biases are small. Predictive models of compound accelerometer measurement improvement are also developed. They exhibit good agreement with experimentally established trends and support the idea that simple compound accelerometer systems can be used to improve the quality of acceleration measurements in certain applications. However, this study also reveals an increasing trend in the relative error between the predicted and observed compound measurement improvement as more accelerometers are combined. (A sketch of averaging-based fusion follows this record.)
- Resource Type:
- Thesis
- Campus:
- Sacramento
- Department:
- Electrical and Electronic Engineering
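
The following is a minimal sketch of the averaging-based fusion the abstract examines, together with the textbook prediction that averaging N independent sensors reduces the noise standard deviation by roughly sqrt(N). The function names, array layout, and independence assumption are illustrative; the thesis's calibration methods (gradient descent, least squares, moments) are not reproduced.

```python
import numpy as np

def fuse_average(samples):
    """Average simultaneous readings from N homogeneous accelerometers.

    samples: array of shape (N, T, 3) -- N sensors, T time steps, 3 axes (assumed layout).
    Returns the fused (T, 3) measurement.
    """
    samples = np.asarray(samples, dtype=float)
    return samples.mean(axis=0)

def predicted_noise_reduction(n_sensors):
    """Idealized improvement factor for independent, identically distributed noise:
    the fused noise standard deviation is sigma / sqrt(N)."""
    return np.sqrt(n_sensors)
```

As the abstract notes, this idealized prediction degrades as more real sensors are combined, since their errors are not perfectly independent.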

- Creator:
- Silva, William
- Description:
- Current Mode Logic (CML) buffers are based on the MOS differential amplifier circuit. Since CML buffers use a differential circuit topology, they are less vulnerable to common-mode noise than standard CMOS buffers. CML buffers can also operate at higher frequencies than standard CMOS buffers, which makes them the optimum choice as output drivers for high-speed integrated circuits. The process of designing a tapered CML buffer chain is explored in this paper, including all important design issues. For this project, a tapered CML buffer chain was designed for a 6-bit interpolating flash analog-to-digital converter operating at 1 GHz. (A sketch of tapered buffer-chain sizing follows this record.)
- Resource Type:
- Project
- Campus:
- Sacramento
- Department:
- Electrical and Electronic Engineering
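
The report's CML-specific sizing procedure is not reproduced here; the sketch below shows only the generic tapered-chain arithmetic that such designs build on, where each stage is scaled by a constant factor set by the ratio of load capacitance to input capacitance. The capacitance values and stage count in the example are placeholders.

```python
def taper_chain(c_in, c_load, n_stages):
    """Constant taper factor and per-stage scale for an n-stage buffer chain.

    Each stage drives the next with the same fan-out f = (c_load / c_in)**(1/n).
    Returns (f, [scale of stage 1..n] relative to the first stage).
    """
    f = (c_load / c_in) ** (1.0 / n_stages)
    scales = [f ** k for k in range(n_stages)]
    return f, scales

# Example with placeholder values: 5 fF input, 500 fF load, 4 stages.
f, scales = taper_chain(5e-15, 500e-15, 4)
print(f"taper factor ~ {f:.2f}, stage scales: {[round(s, 2) for s in scales]}")
```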

- Creator:
- Liao, Zhongchao
- Description:
- The conventional diffraction limit defines a finite range of spatial frequencies that can be transmitted through a microscope. To reveal more information about the objects observed with a microscope, techniques that go beyond this limit need to be developed. Structured illumination microscopy (SIM), one such method, uses patterns of excitation light to encode otherwise unobservable information into the observed image. Although the method is well developed, its procedure is complicated: after the unobservable information is encoded into the observed image, the superresolution information components need to be separated, shifted, and reassembled, and these steps have never been clearly explained. In this project, a computer algorithm for the linear structured illumination microscopy technique is developed. To implement this algorithm, multiple images of an object are taken with different phases and orientations of sinusoidally patterned illumination, and the superresolution information components are then extracted from these images. The procedures of separation, shifting, and reassembly of the superresolution information components are presented, explained, and verified, and a block diagram of the whole structured illumination procedure is given. The results of the conventional microscope and the structured illumination algorithm are generated and compared. When applied to test objects, the performance of the algorithm is found to be in agreement with theoretical predictions, thus verifying the theory and the implementation. The block diagram and the explanation of the separation, shifting, and reassembly steps can serve as instructions for implementing this method, and this project report is intended as a useful reference for researchers seeking to understand it. (A sketch of the separation, shifting, and reassembly steps follows this record.)
- Resource Type:
- Project
- Campus:
- Sacramento
- Department:
- Electrical and Electronic Engineering
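
The project's full 2-D algorithm is not reproduced here; the sketch below illustrates, for one illumination orientation, the three steps the abstract names: separating the mixed frequency components from three phase-shifted images, shifting the side components by the illumination frequency, and reassembling them on a common frequency grid. The assumed illumination model, exactly known phases and shift, and the plain summation used for reassembly (no OTF weighting or apodization) are simplifying assumptions.

```python
import numpy as np

def separate_and_reassemble(images, phases, shift_px):
    """Linear-SIM reconstruction sketch for one illumination orientation.

    images   : three (N, N) frames taken under sinusoidal illumination
               1 + cos(2*pi*p.r + phi_m), one per phase (assumed model).
    phases   : the three illumination phases, assumed known exactly.
    shift_px : illumination frequency as an integer pixel offset (dy, dx)
               on the FFT grid, assumed known.
    Returns the reassembled spectrum; no OTF weighting is applied.
    """
    spectra = np.stack([np.fft.fftshift(np.fft.fft2(im)) for im in images])

    # Separation: each observed spectrum mixes the centre band and two side
    # bands with phase-dependent weights, so three phases give a 3x3 linear
    # system that is solved for every frequency sample at once.
    M = np.array([[1.0, 0.5 * np.exp(1j * p), 0.5 * np.exp(-1j * p)]
                  for p in phases])
    bands = np.linalg.solve(M, spectra.reshape(3, -1))
    centre, side_a, side_b = bands.reshape(spectra.shape)

    # Shifting: move the side bands by +/- the illumination frequency so their
    # content sits at its true position (signs depend on the FFT convention).
    dy, dx = shift_px
    side_a = np.roll(side_a, (dy, dx), axis=(0, 1))
    side_b = np.roll(side_b, (-dy, -dx), axis=(0, 1))

    # Reassembly: plain summation of the three bands on the common grid.
    return centre + side_a + side_b
```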

- Creator:
- Kravchuk, Lyubov and Pierce, Jazmin
- Description:
- In recent years, increasing amounts of renewable generation have sought to interconnect to the electrical power grid around the world. While renewable interconnections, such as wind, solar, biomass, or geothermal, are favorable for the environment because they decrease greenhouse gas emissions, studies are needed to ensure that the reliability of the power system is not degraded as a result of these new interconnections. This project analyzes the transient stability of a system containing the different types of wind turbine generators that are commercially available today.
- Resource Type:
- Project
- Campus:
- Sacramento
- Department:
- Electrical and Electronic Engineering

- Creator:
- Bhalodia, Jay Maheshkumar
- Description:
- The goal of this project is to design and implement an FPGA realization of the Daubechies lifting scheme for audio compression and reconstruction, using a top-down hierarchical design approach. The implemented algorithm performs compression and reconstruction over three levels. The Daubechies lifting scheme is a recursive process in which the output of each level serves as the input to the next. Implementing this project was challenging, as it required substantial research effort for both the software algorithm and the hardware interface. The complete audio compression system was implemented first in MATLAB and then in the Verilog hardware description language, and was emulated on the Cyclone II FPGA of the Altera DE2 kit. The algorithm generates satisfactory decompressed audio output. (A sketch of a lifting-scheme decomposition follows this record.)
- Resource Type:
- Project
- Campus:
- Sacramento
- Department:
- Electrical and Electronic Engineering
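
The project's Daubechies lifting coefficients and Verilog implementation are not reproduced here; the sketch below only illustrates the lifting structure the abstract describes (split, predict, update, repeated recursively for several levels), using the simpler Haar wavelet as a stand-in for the Daubechies filters. The Haar substitution and the three-level default mirror the abstract's level count but not its filters.

```python
def haar_lifting_forward(x):
    """One lifting step with the Haar wavelet (stand-in for Daubechies).

    Split the signal into even/odd samples, predict the odd samples from the
    even ones, then update the even samples to preserve the running average.
    Assumes an even-length input.
    """
    even, odd = x[0::2], x[1::2]
    detail = [o - e for o, e in zip(odd, even)]            # predict step
    approx = [e + d / 2.0 for e, d in zip(even, detail)]   # update step
    return approx, detail

def multilevel(x, levels=3):
    """Recursive decomposition: each level's approximation feeds the next.
    Assumes len(x) is divisible by 2**levels."""
    details = []
    approx = list(x)
    for _ in range(levels):
        approx, d = haar_lifting_forward(approx)
        details.append(d)
    return approx, details
```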

- Creator:
- Gohil, Naikur Bharatkumar
- Description:
- In the last couple of decades, the number of vehicles has increased drastically. With this increase, it is becoming difficult to track each vehicle for law enforcement and traffic management purposes. License plate recognition is increasingly used for automatic toll collection, traffic monitoring, and law enforcement. Many techniques have been proposed for plate detection, each with its own advantages and disadvantages. The basic step in license plate detection is localization of the number plate. The approach used in this project is histogram based, which has the advantage of being simple and therefore fast. Initially, license plate localization is implemented in MATLAB and verified for functionality. Once the functionality is verified, the algorithm is implemented on the EVM320DM6437 Texas Instruments (TI) Digital Video Development Starter Kit (DVDSK). By implementing the algorithm on the kit, the need for a computer is eliminated, making a portable implementation of the application possible. In this approach, a digital camera captures video and feeds it to the kit; the kit processes each frame individually and provides the coordinates of the location with the maximum probability of containing a number plate. This information is later used to recognize the actual characters of the license plate. The plate detection algorithm is implemented in C, and compiled and debugged using Code Composer Studio (CCS), an IDE provided by TI. Successful demonstrations were given by downloading the algorithm onto TI's EVM320DM6437 DVDSK. (A sketch of histogram-based plate localization follows this record.)
- Resource Type:
- Project
- Campus:
- Sacramento
- Department:
- Electrical and Electronic Engineering
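
The project's exact histogram processing is not reproduced here; the sketch below shows one common reading of a histogram-based localizer: compute horizontal-edge strength, build row and column sums (histograms) of that edge map, and take the band where the sums peak as the candidate plate region. The smoothing window and the band-growing threshold are illustrative assumptions, and the function name is hypothetical.

```python
import numpy as np

def localize_plate(gray, smooth=15, keep_ratio=0.5):
    """Return (row_lo, row_hi, col_lo, col_hi) of a candidate plate region.

    gray: 2-D grayscale frame as a float array. The parameters are placeholders.
    """
    # Horizontal-edge strength: plates contain dense vertical strokes, so the
    # left-right intensity difference is large inside the plate.
    edges = np.abs(np.diff(gray.astype(float), axis=1))

    def band(profile):
        # Smooth the 1-D histogram, then keep the contiguous band around the
        # peak where values stay above keep_ratio * peak.
        kernel = np.ones(smooth) / smooth
        prof = np.convolve(profile, kernel, mode="same")
        peak = int(np.argmax(prof))
        thresh = keep_ratio * prof[peak]
        lo = peak
        while lo > 0 and prof[lo - 1] >= thresh:
            lo -= 1
        hi = peak
        while hi < prof.size - 1 and prof[hi + 1] >= thresh:
            hi += 1
        return lo, hi

    row_lo, row_hi = band(edges.sum(axis=1))   # row histogram
    col_lo, col_hi = band(edges.sum(axis=0))   # column histogram
    return row_lo, row_hi, col_lo, col_hi
```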