Search Constraints
Filtering by:
Campus: Sacramento
Department: Computer Science
Search Results
- Creator:
- Vadlamannati Lakshmi, Venkata Sai Raja Bharath
- Description:
- A graph database represents data and the relationships among data using the graph model; it uses graph structures with nodes, edges, and properties for semantic queries. A graph database has two defining elements: 1) the node, which represents an entity, and 2) the arc, or relationship, which is the connection between two nodes. Graph databases offer many benefits, such as performance and flexibility. The most widely used graph database is Neo4j, which is used by many organizations around the world, such as Wal-Mart and Lufthansa. Although graph database systems have many advantages, they can be improved with features that have long been available in relational database systems; one such improvement is active rules. Relational database systems use active rules for constraint management, especially for complicated application-level logic. This project incorporates active rules into a graph database, focusing on using active rules to specify business logic: once a rule is defined, the database reacts to the predefined event and executes the business logic as necessary. The project focuses on the language model of the rule system and implements a prototype rule-execution system. (An illustrative event-condition-action sketch follows this record.)
- Resource Type:
- Project
- Campus:
- Sacramento
- Department:
- Computer Science
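
The project's rule language and prototype are not reproduced in this listing; the following is a minimal event-condition-action sketch over a toy in-memory property graph, written in Python. The class names (Rule, GraphDB) and the node-created event are hypothetical illustrations, not the project's actual API.

```python
# Minimal sketch of event-condition-action (ECA) rules over a toy in-memory
# property graph. All class and event names here are illustrative assumptions,
# not the project's actual rule language.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Rule:
    event: str                                   # e.g. "NODE_CREATED:Order"
    condition: Callable[[dict], bool]            # predicate over the node's properties
    action: Callable[["GraphDB", dict], None]    # business logic to run


@dataclass
class GraphDB:
    nodes: List[dict] = field(default_factory=list)
    edges: List[tuple] = field(default_factory=list)
    rules: Dict[str, List[Rule]] = field(default_factory=dict)

    def add_rule(self, rule: Rule) -> None:
        self.rules.setdefault(rule.event, []).append(rule)

    def create_node(self, label: str, **props) -> dict:
        node = {"label": label, **props}
        self.nodes.append(node)
        # Fire any active rules registered for this event.
        for rule in self.rules.get(f"NODE_CREATED:{label}", []):
            if rule.condition(node):
                rule.action(self, node)
        return node


# Business logic: flag large orders by linking them to a "Review" node.
def flag_large_order(db: GraphDB, order: dict) -> None:
    review = db.create_node("Review", reason="large order")
    db.edges.append((order, "NEEDS", review))


db = GraphDB()
db.add_rule(Rule("NODE_CREATED:Order",
                 condition=lambda n: n.get("total", 0) > 1000,
                 action=flag_large_order))
db.create_node("Order", total=2500)
print(len(db.edges))  # 1 -> the rule fired and attached a Review node
```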
- Creator:
- Wani, Gaurav Dilip
- Description:
- In the 21st century it has become the norm for companies to use virtual assistants such as Siri, Google Assistant, or Alexa to resolve many problems faced by customers. These technologies are powered by natural language processing and are good at understanding queries asked by humans. These assistants are examples of speech-to-text conversion models, in which human speech is converted into text and processed in the cloud. Text-to-speech conversion, on the other hand, has attracted attention from researchers and practitioners because the text must first be normalized before it is provided as input to the system. Companies have developed text-to-speech synthesis (TTS) and automatic speech recognition (ASR) systems; the biggest challenge is developing and testing grammars for the various normalization rules. We address that challenge by converting input tokens into meaningful words. Text normalization is the process of converting written text into its spoken, human-understandable form. The aim of this project is to design and implement DeepNarrator, a novel system that extends existing TTS libraries to support the conversion of alphanumeric text into meaningful words. In this study, we used the dataset provided by Google for the text normalization challenge on Kaggle, which has 16 classes and more than 9 million rows of data for training and testing. DeepNarrator consists of four modules: vector extraction, token classification, text conversion, and speech generation. In vector extraction, input tokens are converted into 100-dimensional vectors. Multiple approaches were employed to classify input tokens into classes: the GRU-based approach provided 98.03% testing accuracy, while the LSTM-based approach yielded 98.14% accuracy. Based on the predicted class, the input text is converted into its spoken form using regular expressions, and audio files are generated using the Google Text-to-Speech API. (A minimal normalization sketch follows this record.)
- Resource Type:
- Project
- Campus:
- Sacramento
- Department:
- Computer Science
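
The project's GRU/LSTM classifiers are not reproduced here; as a rough illustration of the "classify, then convert with regular expressions" step described above, the sketch below uses hand-written regexes in place of the learned classifier. The token classes and expansion rules are simplified assumptions covering only a couple of cases.

```python
# Toy text-normalization sketch: classify a token with regexes, then expand it
# to its spoken form. This stands in for the learned GRU/LSTM classifier and
# covers only a couple of classes (CARDINAL, YEAR); everything else is PLAIN.
import re

UNITS = ["zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"]
TEENS = ["ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
         "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"]

def two_digits(n: int) -> str:
    if n < 10:
        return UNITS[n]
    if n < 20:
        return TEENS[n - 10]
    return TENS[n // 10] + ("" if n % 10 == 0 else " " + UNITS[n % 10])

def classify(token: str) -> str:
    if re.fullmatch(r"(1[6-9]|20)\d\d", token):
        return "YEAR"
    if re.fullmatch(r"\d{1,2}", token):
        return "CARDINAL"
    return "PLAIN"

def normalize(token: str) -> str:
    cls = classify(token)
    if cls == "CARDINAL":
        return two_digits(int(token))
    if cls == "YEAR":
        return two_digits(int(token[:2])) + " " + two_digits(int(token[2:]))
    return token

print(" ".join(normalize(t) for t in "the meeting is on 12 May 2023".split()))
# -> "the meeting is on twelve May twenty twenty three"
```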
- Creator:
- Shiroor, Shekhar Vikas
- Description:
- This project implements a generalized neural network agent that plays different video games using a reinforcement learning algorithm. The project uses OpenAI's simulated video game environment, Gym, for training and testing the proposed solution. To train the agent, different neural network models are used, including convolutional neural networks, recurrent neural networks, and a combination of both. Python and TFLearn (with a TensorFlow backend) are used to implement the project. The results show that the proposed solution works well for an average of two to three games; however, performance degrades when the network is trained on four or more video games. Although the 'TopK' metric (added to the proposed solution to increase the network's ability to play multiple video games) yields a dramatic increase in training and validation accuracy, the networks are still unable to play a variety of video games with a good degree of precision. To improve performance, deeper neural network models such as VGG-19 could be utilized in the future, given that the hardware resources required by such models are available (e.g., a GPU with larger global memory). (A small top-k accuracy sketch follows this record.)
- Resource Type:
- Project
- Campus:
- Sacramento
- Department:
- Computer Science
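
The abstract mentions a "TopK" metric but does not define it; the snippet below shows one common reading, top-k accuracy over predicted action scores, computed with NumPy. The array shapes, number of actions, and value of k are illustrative assumptions, not the project's exact setup.

```python
# Illustrative top-k accuracy over action predictions (NumPy only). A prediction
# counts as correct if the true action is among the k highest-scoring actions.
# The project's exact 'TopK' definition may differ; this is one common reading.
import numpy as np

def top_k_accuracy(scores: np.ndarray, labels: np.ndarray, k: int = 3) -> float:
    # scores: (n_samples, n_actions) raw scores or probabilities
    # labels: (n_samples,) integer indices of the true action
    top_k = np.argsort(scores, axis=1)[:, -k:]          # indices of the k best actions
    hits = (top_k == labels[:, None]).any(axis=1)       # is the true label among them?
    return float(hits.mean())

rng = np.random.default_rng(0)
scores = rng.random((1000, 6))      # e.g. 6 discrete actions in a Gym environment
labels = rng.integers(0, 6, 1000)
print(top_k_accuracy(scores, labels, k=1), top_k_accuracy(scores, labels, k=3))
```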
- Creator:
- Muriki, Satya Tejaswi
- Description:
- Online reviews are highly influential, with consumers preferring the advice of other consumers to information provided through advertisements. Hence, review consistency and trustworthiness have become an urgent need for both individuals and businesses. The aim of this project is to provide a platform for online users to write, search, and view reviews. This is done by developing a decentralized tool that prevents online reviews from being changed or modified after submission. Another important feature of the tool is that it addresses spam through review filtration methods. A unique feature is the use of a Merkle tree for secure verification of reviews. By adding the MetaMask browser extension to their system, users can utilize the services without running a full Ethereum node. The tool is implemented using Solidity and blockchain technology. The blockchain stores the user reviews, making them immutable; reviews are also hashed, so any alteration can be detected. The transparency of blockchain technology helps reduce suspicion, as its integrity is monitored and maintained by immutable blocks. (A Merkle-tree verification sketch follows this record.)
- Resource Type:
- Project
- Campus:
- Sacramento
- Department:
- Computer Science
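
The Solidity contract itself is not included in this listing; as a rough Python illustration of the Merkle-tree idea mentioned above, the sketch below builds a Merkle root over review hashes and shows that tampering with any review changes the root. The hash choice (SHA-256) and helper names are assumptions for illustration.

```python
# Toy Merkle-tree sketch for review verification (SHA-256, Python only).
# The actual project uses Solidity/Ethereum; this just illustrates the idea:
# any change to a stored review changes its leaf hash and therefore the root.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

reviews = [b"great product", b"arrived late", b"would buy again"]
root = merkle_root(reviews)

# Tampering with any review changes the root, so verification fails.
tampered = [b"great product", b"arrived on time", b"would buy again"]
print(merkle_root(reviews) == root)     # True
print(merkle_root(tampered) == root)    # False
```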
- Creator:
- Poosarla, Akshay
- Description:
- Skeletal bone age assessment is a common clinical practice for analyzing and assessing the biological maturity of pediatric patients. The process generally involves taking an X-ray of the left hand, fingers, and wrist, followed by image analysis. The current process involves manually comparing the radiological scan with standard reference images and estimating the skeletal age; the analysis is crucial in determining whether a child is prone to certain diseases. This manual process is very time consuming and has a high probability of misjudgment in predicting the skeletal age. Recent developments in the field of neural networks, however, provide an opportunity to automate the process. In this project, we use convolutional neural network methods and image processing techniques to fully automate the prediction of a patient's skeletal bone age from X-ray images. The Radiological Society of North America has collected a dataset of 12,600 hand images of boys and girls from Colorado Children's Hospital and Stanford Children's Hospital and made it available for research purposes; this dataset is used to train and build a convolutional neural network model in this study. Each sample in the dataset consists of an image of a left hand and wrist together with a CSV record containing the corresponding age in months and the gender. The purpose of this project is to automate the current manual process and develop a tool that assists doctors and acts as a decision support system in predicting skeletal age. Along with this, we are developing a user-friendly and highly available online system that helps doctors predict bone age accurately. (A small CNN-regression sketch follows this record.)
- Resource Type:
- Project
- Campus:
- Sacramento
- Department:
- Computer Science
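
The project's actual network architecture is not specified in the abstract; the following is a minimal Keras sketch of a convolutional regression model that predicts age in months from a hand X-ray. The layer sizes, input resolution, and choice of Keras are assumptions made for illustration only.

```python
# Minimal sketch of a CNN that regresses bone age (in months) from an X-ray.
# Layer sizes, input resolution, and the choice of Keras are illustrative
# assumptions; the project's actual model may differ.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_bone_age_model(input_shape=(256, 256, 1)) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1),                      # predicted age in months (regression)
    ])
    model.compile(optimizer="adam", loss="mae", metrics=["mae"])
    return model

model = build_bone_age_model()
model.summary()
# Training would look like: model.fit(x_train, ages_in_months, validation_split=0.1)
```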
- Creator:
- Rawlani, Bhagyashree
- Description:
- Convolutional neural networks are generally assumed to be the best neural networks for classifying images. In November 2017, Geoffrey Hinton et al. introduced another approach known as the capsule network, claiming, based on tests performed on datasets such as MNIST and CIFAR-10, that capsule networks are better than convolutional neural networks. The aim of this project is to test capsule networks on a distracted-driver dataset and, at the same time, to test a convolutional network on the same dataset, allowing the performance of both types of network to be compared. Hinton et al. believed that capsule networks can outperform convolutional neural networks in terms of the number of training images required to generalize the model, and they also claimed that capsule networks are better at classifying images while respecting the hierarchy between features of an image. This project helps test those claims. A further goal was to create a network that detects distracted drivers, which would help reduce accidents. After conducting experiments on both networks, the capsule network did train efficiently with a small number of training images, but its accuracy was insufficient to claim a working distracted-driver detection system. When the convolutional network was trained with a larger number of images, it gave better accuracy than the capsule network and was also much faster in terms of computation time. (A sketch of the capsule "squash" nonlinearity follows this record.)
- Resource Type:
- Project
- Campus:
- Sacramento
- Department:
- Computer Science
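
A full capsule network with dynamic routing is too long to reproduce here; as a small taste of what distinguishes capsules from ordinary convolutional layers, the NumPy sketch below implements the "squash" nonlinearity from Sabour, Frosst, and Hinton's 2017 paper, which scales a capsule's output vector to a length between 0 and 1 while preserving its direction. The example vectors are arbitrary.

```python
# The capsule "squash" nonlinearity (Sabour, Frosst & Hinton, 2017), in NumPy.
# It shrinks short vectors toward zero and long vectors toward unit length,
# so a capsule's length can be read as the probability that its entity exists.
import numpy as np

def squash(s: np.ndarray, axis: int = -1, eps: float = 1e-8) -> np.ndarray:
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm) / np.sqrt(sq_norm + eps)
    return scale * s

capsules = np.array([[0.1, 0.0, 0.0],      # short vector -> length stays small
                     [3.0, 4.0, 0.0]])     # long vector  -> length approaches 1
out = squash(capsules)
print(np.linalg.norm(out, axis=-1))        # ~[0.0099, 0.9615]
```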
- Creator:
- Natesh, Adarsh
- Description:
- In this work, we present a novel visual analytic tool intended to improve the Facebook experience: it produces an interactive visual presentation of a user's newsfeed to help the user understand bias trends within their social group. The tool provides steps to compare posts from different parts of the political spectrum, to validate the sources of posts in the newsfeed, and to present the propagation speed of the information seen. The spread of false information is prevalent on many social media platforms; fake news and other invalid content are published for a variety of reasons, such as trolling, clickbait, or smearing an adversary. In this work, we focus on the Facebook platform, which has a false-information problem affecting its users. Depending on the user's social group, the user may see a specific post based on the mutual interests of the group, and many news articles from different sources may be propagated, liked, and shared on the basis of a sensationalized title. To develop our tool for visualizing the validity of posts based on the source of their content, we collect and aggregate the user's Facebook feed data through the Graph API. The feed data is organized by ID, content, number of shares, time of posting, associated media files, and source, and the processed information is filtered into its respective categories and fields. Next, we statistically analyze the feed and present the results in a novel visualization that summarizes the different political leanings seen in the posts. We designed and developed this visualization using JavaScript, D3.js, Python, and the Graph API. We present a summary view that aggregates all the data, with the ability to zoom in on a specific post to examine its "genuine" factor. Data from every post is analyzed and classified into categories such as Family, Health, Education, Entertainment, News, and Digital. In this project, the emphasis is on feeds related to politics, showcasing five key factors of each post and finally representing the amount of truth in the post through visualization. We consider minute details of each post and apply suitable algorithms to visualize the data, and we use feedback from the user to provide more control and to help the user be proactive about the information in their newsfeed. (A feed-aggregation sketch follows this record.)
- Resource Type:
- Project
- Campus:
- Sacramento
- Department:
- Computer Science
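
The Graph API credentials and the D3.js front end cannot be reproduced here; the Python sketch below illustrates only the aggregation and categorization step described above, using a hard-coded sample of feed records and a naive keyword classifier. Field names, category keywords, and the sample posts are assumptions for illustration.

```python
# Toy aggregation/categorization step for newsfeed posts. The records below are
# hard-coded stand-ins for Graph API feed data; the keyword lists and category
# names are illustrative assumptions, not the project's actual classifier.
from collections import Counter

CATEGORY_KEYWORDS = {
    "Politics":      ["election", "senate", "policy", "vote"],
    "Health":        ["vaccine", "diet", "hospital"],
    "Entertainment": ["movie", "trailer", "concert"],
}

feed = [
    {"id": "1", "message": "New policy announced before the election", "shares": 120,
     "created_time": "2019-04-01T10:00:00", "source": "example-news.com"},
    {"id": "2", "message": "Official trailer for the summer movie", "shares": 45,
     "created_time": "2019-04-01T11:30:00", "source": "studio.example"},
]

def categorize(post: dict) -> str:
    text = post.get("message", "").lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(word in text for word in keywords):
            return category
    return "Other"

counts = Counter(categorize(p) for p in feed)
top_shared = max(feed, key=lambda p: p["shares"])
print(dict(counts))                         # {'Politics': 1, 'Entertainment': 1}
print(top_shared["id"], top_shared["source"])
```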

- Creator:
- Clarke-Lauer, Matthew
- Description:
- The establishment of a secure channel of communication between two parties is a primary goal of modern cryptography. In an ideal world, the secure channel is a dedicated, untappable, and impenetrable method of transmitting data between a sender and receiver; an adversary would be unable to see or modify any information crossing the channel. Unfortunately, that is not possible in the real world, as most communication is done over channels that are susceptible to eavesdropping and interference. To overcome these problems, cryptography attempts to recreate the secure channel in order to provide confidentiality and authenticity for all communication between two parties. Pseudorandom functions provide an ideal starting point for designing cryptographic primitives that meet these goals. Work by Phillip Rogaway and Mihir Bellare has shown the usefulness of a random oracle as a starting point for developing asymmetric-key cryptographic primitives. This paper expands on that concept by providing algorithms that meet the goals of confidentiality and authenticity, using a conceptual pseudorandom function to construct a set of symmetric-key cryptographic primitives. Owing to its flexibility and simplicity, the conceptual pseudorandom function is used to construct a symmetric-key cipher and a message authentication code that are both simple and efficient while meeting the requirements of confidentiality and authenticity. From those two primitives, an authenticated encryption scheme is built to provide the guarantees of the secure channel. The paper also provides the design and implementation of SCU-PRF, a pseudorandom function created by combining the Salsa20 stream cipher and the VHASH universal hash function. SCU-PRF is designed for efficiency, requiring little more computation than its base components, Salsa20 and VHASH. Using the algorithms built from the conceptual PRF, a complete implementation of the protocols is created with SCU-PRF. When tested, SCU-PRF proved to be an efficient and flexible pseudorandom function with a performance profile well suited to constructing a high-efficiency secure channel. (An encrypt-then-MAC sketch follows this record.)
- Resource Type:
- Thesis
- Campus:
- Sacramento
- Department:
- Computer Science
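
The SCU-PRF construction (Salsa20 + VHASH) is not available in the standard library, so the sketch below substitutes HMAC-SHA256 as the underlying PRF and shows the generic encrypt-then-MAC pattern the thesis builds on. It is an illustration of the general construction under those substitutions, not the thesis's implementation; the key derivation and message framing are simplified assumptions.

```python
# Generic encrypt-then-MAC sketch built from a PRF. HMAC-SHA256 stands in for
# the thesis's SCU-PRF (Salsa20 + VHASH); key handling and framing here are
# simplified assumptions, not the thesis's actual protocol.
import hmac, hashlib, os

def prf(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:                       # counter mode over the PRF
        out += prf(key, nonce + counter.to_bytes(8, "big"))
        counter += 1
    return out[:length]

def encrypt(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    tag = prf(mac_key, nonce + ct)                 # MAC over nonce || ciphertext
    return nonce + ct + tag

def decrypt(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, prf(mac_key, nonce + ct)):
        raise ValueError("authentication failed")  # reject forged/modified messages
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))

enc_key, mac_key = os.urandom(32), os.urandom(32)
blob = encrypt(enc_key, mac_key, b"secure channel test")
print(decrypt(enc_key, mac_key, blob))
```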

- Creator:
- Minton, Suzanne Louise
- Description:
- Statement of Problem: The problem of missing data in statistical analysis is one that the field of social research has failed to adequately address, despite its potential to significantly affect results and subsequent substantive conclusions. The purpose of this study is to evaluate the practical application of missing data techniques in reaching substantive sociological conclusions on the basis of statistical analyses with incomplete data sets. The study compares three different methods for handling incomplete data: multiple imputation, direct maximum likelihood, and listwise deletion. Sources of Data: The comparisons are conducted via a reexamination of a multiple regression analysis of the ECLS-K 1998-99 data set by Downey and Pribesh (2004), who reported the results of their study on the effects of teacher and student race on teachers' evaluations of students' classroom behavior, using multiple imputation to handle missing data. Conclusions Reached: After comparing the three methods for handling incomplete data, this study comes to the general conclusion that multiple imputation and direct maximum likelihood produce equivalent results and arrive at the same substantive sociological conclusions. The study also found that direct maximum likelihood shared more similarities with listwise deletion than with multiple imputation, which may be the result of differences in data handling by this author and by Downey and Pribesh. In general, both direct maximum likelihood and listwise deletion produced increased significance levels, and therefore a greater number of statistically significant variables, when compared with the multiple imputation results. Still, all three methods produced essentially equivalent results. The importance of taking method choice and missing data into careful consideration before performing a statistical analysis and drawing substantive conclusions is also stressed. (A small comparison sketch follows this record.)
- Resource Type:
- Project
- Campus:
- Sacramento
- Department:
- Computer Science
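
The thesis's actual analysis (multiple imputation and direct maximum likelihood on the ECLS-K data) is not reproduced here; the following is a minimal sketch, on synthetic data, of how listwise deletion and a simple single imputation can yield different regression fits. The package choices (pandas, scikit-learn), the synthetic data, and the use of mean imputation as a stand-in for multiple imputation are all assumptions for illustration.

```python
# Minimal sketch comparing listwise deletion with a simple mean imputation on a
# toy regression. Mean imputation is only a stand-in for the multiple imputation
# used in the thesis; the data below is synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.impute import SimpleImputer

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)
df = pd.DataFrame({"x1": x1, "x2": x2, "y": y})

# Introduce missingness in x2 for roughly 30% of rows.
mask = rng.random(n) < 0.3
df.loc[mask, "x2"] = np.nan

# Listwise deletion: drop any row with a missing value.
complete = df.dropna()
m1 = LinearRegression().fit(complete[["x1", "x2"]], complete["y"])

# Simple mean imputation, then refit on the full sample.
imputed = df.copy()
imputed[["x1", "x2"]] = SimpleImputer(strategy="mean").fit_transform(imputed[["x1", "x2"]])
m2 = LinearRegression().fit(imputed[["x1", "x2"]], imputed["y"])

print("listwise deletion coefficients:", m1.coef_, "n =", len(complete))
print("mean-imputation coefficients:  ", m2.coef_, "n =", len(imputed))
```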

- Creator:
- Onitsuka, Lynne Midori
- Description:
- The purpose of this project is to develop a data analysis tool for assessment. There are three tiers of users: public, data entry, and faculty. The public can view non-sensitive assessment information. In addition to accessing public information, the data entry tier has the ability to insert, delete, or modify information; this includes preparing rubric forms and entering the raw data associated with those forms. The faculty tier can access not only public information but also any sensitive information, including interviews, evaluations, surveys, industrial visits, and other forms of raw data. In addition, faculty have access to a menu of available analytical methods, which are used to analyze the data resulting from implementation of a specified rubric and to provide associated reports. The project is designed as a web-based application, which provides an efficient and flexible environment for entering data and using the analysis tool, as well as a means to effectively reach a large audience. The project uses a three-tier web application architecture: the presentation layer is the user interface available via the internet; the business layer is the PHP processing done in the background, which sends and receives requests between the presentation layer and the data layer; and the data layer is a MySQL database. Overall, the application utilizes a LAMP (Linux, Apache, MySQL, PHP) open-source web platform. As a measure of security, only the owner and the web daemon have access to the files that comprise this project. (A small access-tier sketch follows this record.)
- Resource Type:
- Project
- Campus:
- Sacramento
- Department:
- Computer Science
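
The project itself is implemented in PHP against MySQL; as a language-neutral illustration of the three user tiers described above, here is a small Python sketch of per-tier permission checks. The tier names follow the abstract, but the specific permission sets are assumptions for illustration.

```python
# Toy sketch of the three user tiers (public, data entry, faculty) and what
# each may do. The permission sets here are illustrative assumptions; the real
# project enforces access in its PHP business layer against a MySQL database.
PERMISSIONS = {
    "public":     {"view_public"},
    "data_entry": {"view_public", "insert", "update", "delete", "prepare_rubric"},
    "faculty":    {"view_public", "view_sensitive", "run_analysis", "view_reports"},
}

def can(role: str, action: str) -> bool:
    return action in PERMISSIONS.get(role, set())

for role in PERMISSIONS:
    print(role, "-> view_sensitive:", can(role, "view_sensitive"),
          "| insert:", can(role, "insert"))
```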