We use the setting of universal algebra to introduce a new class of objects called logics, with the aim of generalizing the structure of familiar binary logic to a family of finite and countably infinite multi-valued logics. In Section 2, we explore several concepts and results parallel to those of more familiar algebraic structures and, as an example, give an independent proof of the first isomorphism theorem for logics. In Section 3, we review some basic notions from category theory, which offer another lens through which to view logics, and then prove several categorical results about them. In Section 4, we discuss the implications of logics for various areas of mathematics, providing only a skeletal outline for the sake of brevity. In the final section, we discuss the implications of logics for formal languages and natural deduction, providing a framework through which one might create a generalized propositional logic.
This study examined the behavior of a bullying epidemic on different social network structures using ideas from network theory, graph theory, and stochastic epidemic modeling. Three aspects of the bullying epidemic were investigated: the predictors of the epidemic's duration, the impact of different initial conditions, and the impact of different network structures. Overall, the more connected the network and the stronger the connections between individuals, the longer the bullying epidemic lasted. Introducing a more popular student to the population as the first bully also led to a longer epidemic. These results suggest that teachers could educate students on the negative consequences of bullying, which could weaken the connections between susceptible students and the bully and thus decrease the impact of the bullying epidemic in the classroom.
In this paper, we describe a method for computing the dimension of linear systems on a graph, which are related to linear systems of divisors on tropical curves. Tropical geometry is a discrete version of algebraic geometry in which a tropical curve can be represented by a metric graph. By reducing an algebraic curve to a graph, computing the dimension of a linear system becomes a geometric problem: specifically, the dimension can be computed as the distance to a surface in n-dimensional space under the taxi-cab metric. Finally, we present examples computing the dimensions of linear systems of divisors on 2-vertex and 3-vertex graphs.
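As a hypothetical illustration of the geometric reformulation above, the core computation reduces to finding a taxi-cab (L1) distance from a point to a set in n-dimensional space. The function names and the finite surface sample below are illustrative assumptions, not taken from the paper:

```python
def taxicab_distance(p, q):
    """L1 (taxi-cab) distance between two points in R^n."""
    return sum(abs(a - b) for a, b in zip(p, q))

def distance_to_set(p, surface_points):
    """Minimum taxi-cab distance from point p to a finite sample of a surface."""
    return min(taxicab_distance(p, q) for q in surface_points)

# Example: distance from the origin to a small sample of points in R^3.
sample = [(1, 2, 0), (0, 3, 1), (2, 2, 2)]
print(distance_to_set((0, 0, 0), sample))  # -> 3
```

In the paper's setting the surface is determined by the divisor and the graph; here it is just a finite point sample to keep the sketch self-contained.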
This paper details the design and implementation of my web application, “Globehub”. The application was made with the intention of being a comprehensive visual aid for delivering diverse news to users. It was developed using JavaScript, HTML, and CSS for the frontend and UI, and Node.js and MongoDB for the backend, along with a variety of open-source software and tools for other non-programming tasks. It is my hope that this paper will serve as a reference on how to approach creating your own application, web-based or not.
Persistent data structures allow large and complex data structures to be copied and manipulated inexpensively. The persistent representation of data offers opportunities to implement certain algorithms and programming patterns more elegantly and more efficiently. Few persistent data structure libraries, however, are designed with an emphasis on speed and performance compared to their mutable cousins. We describe and present a C library for a persistent graph data structure, which uses array compression techniques and balanced wide-fanout tries to enable persistence without sacrificing performance. Compared to a competitive C++ mutable graph library, our random read performance is consistently only 30-40% slower while using up to 30% fewer bytes in memory, with the benefit of highly space-efficient persistence.
Topological Data Analysis (TDA) is a growing field in applied mathematics. TDA encompasses a wide variety of topological methods which can be applied to the problem of analyzing large or noisy point-cloud data sets. One of the most important of these methods is persistent homology, which characterizes the shape of a data set by finding holes of various dimensions in the data. This paper covers the algebraic construction of simplicial homology groups, gives efficient methods for computing them, and provides some intuition into what these groups mean. It then extends these ideas to a data analysis setting by discussing the algebraic theory behind persistent homology and giving some examples computed in the R package TDA, which efficiently computes the persistent homology of data.
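As a toy illustration of the boundary-matrix computations behind simplicial homology (this small example is mine, not drawn from the paper), the Betti numbers of a hollow triangle follow from the ranks of its boundary matrices:

```python
import numpy as np

# Boundary matrix of the hollow triangle with vertices {a, b, c} and
# oriented edges {ab, bc, ca}; rows index vertices, columns index edges.
d1 = np.array([
    [-1,  0,  1],   # a
    [ 1, -1,  0],   # b
    [ 0,  1, -1],   # c
])

rank_d1 = np.linalg.matrix_rank(d1)
n_vertices, n_edges = d1.shape

b0 = n_vertices - rank_d1      # Betti 0: connected components
b1 = (n_edges - rank_d1) - 0   # Betti 1: loops; no 2-simplices, so rank(d2) = 0
print(b0, b1)  # -> 1 1
```

One component and one loop, as expected for a triangle with no filled-in face; persistent homology tracks how such numbers change as a scale parameter grows.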
This thesis discusses the context and gives an overview of modern public key cryptography. RSA encryption, the most common modern cryptosystem, relies on the difficulty of the factoring problem for its security. In essence, given two large privately known prime numbers, the encryptor makes public the product of these two primes, and to crack RSA one must determine the unique factorization of this product. Examined here are the most commonly used factorization algorithms, with an in-depth look at their time complexities and a focus on the quadratic sieve method. Our research analyzed these algorithms' efficiencies to determine under which conditions each algorithm performs best. Further work implemented these algorithms in Java.
This paper will undertake an investigation into attitudes towards mathematics. Data from 27 Denver Public High Schools shows that students in DPS high schools score on average 15 points higher on reading/writing standardized tests than on math. The study speculates that this stems from attitudinal differences towards the subjects. This study proposes a mathematical model that hypothesizes that the school's environmental make-up, students' home environment, and students' performance on standardized tests affect students' attitudes toward mathematics. The model will be implemented in the software package SMART-PLS, which uses a partial least squares regression algorithm to estimate the latent variable "Attitudes Towards Mathematics" and investigate what factors affect math education. The study shows that Attitudes Towards Math affects performance on Standardized Testing, and that Home Environment affects both students' Attitudes Towards Math and their Standardized Testing Performance.
This paper examines the propagation of rumors over various social network structures. We create two modified compartmental SIR models by bridging concepts from graph theory, network theory, and epidemic modeling. We incorporate these models in three complex networks: Complete Graph, Small World, and Preferential Attachment. Then, we conduct numerical simulations through modeling software to study the speed, intensity, duration, and extent of the spreading rumor through each structure. Finally, we compare the results between the simulations and pinpoint the underlying characteristics of each network. Our results show that large centralized hubs are more effective in rumor spreading than small, dispersed, but highly connected communities. We also find that increasing the infection rate or creating more connections within a network both lead to faster spreading and an overall larger final rumor size.
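A minimal sketch of a stochastic SIR-style rumor simulation on a complete graph, in the spirit of the models described above; the function, parameter values, and update rule here are illustrative assumptions, not the paper's actual models:

```python
import random

def simulate_rumor(n=50, beta=0.05, gamma=0.02, steps=200, seed=0):
    """SIR-style rumor spread on a complete graph of n individuals.

    S = ignorant, I = spreader, R = stifler. Each step, every spreader
    converts each ignorant to a spreader with probability beta, then
    becomes a stifler with probability gamma.
    """
    random.seed(seed)
    state = ['S'] * n
    state[0] = 'I'  # a single initial spreader
    for _ in range(steps):
        spreaders = [i for i, s in enumerate(state) if s == 'I']
        for i in spreaders:
            for j in range(n):
                if state[j] == 'S' and random.random() < beta:
                    state[j] = 'I'
            if random.random() < gamma:
                state[i] = 'R'
    return state.count('S'), state.count('I'), state.count('R')

print(simulate_rumor())
```

Swapping the complete graph for a Small World or Preferential Attachment adjacency structure would restrict each spreader's contacts to its neighbors, which is where the network-dependent behavior studied in the paper appears.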
We investigate thread-level concurrency in several common desktop applications. We find that the majority of active periods (periods of uninterrupted CPU activity for a single thread) are relatively short, while the few long active periods account for most of the active time. The shortest 90% of the active periods only account for roughly 12.75% of total active time. We speculate that this is generally true for most applications, and that there might be some way to take advantage of this fact in the scheduler. Due to the difficulties in catching, testing, and fixing concurrency bugs, we propose modifying the thread scheduler to reduce the risk of concurrency bugs where possible. Our simulations show that our modification may work well for certain applications depending on the level of CPU demand.
In this expository paper, I introduce fractals and ways to identify them, such as self-similarity and non-integer dimension. I build the concepts necessary for the proof of the Collage Theorem, including basic topological concepts on metric spaces, closed sets, and the convergence of sequences. With these definitions, we can establish what it means for a metric space to be complete. Once we have a complete metric space, we can prove the contraction mapping theorem and use a similar technique to prove the Collage Theorem. The Collage Theorem then yields a method for programming a computer to generate fractals. I discuss two types of algorithms for plotting the points of an attractor that resembles a fractal-like image. Finally, I implement these algorithms in Python to generate the fractals on my own computer.
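One attractor-plotting algorithm of the kind described above is the chaos game: repeatedly apply a randomly chosen contraction, and the orbit accumulates on the attractor. A minimal sketch, using the Sierpinski triangle as the example (the vertices and parameters are my own illustrative choices):

```python
import random

def chaos_game(n_points=10000, seed=0):
    """Chaos-game algorithm: iterate randomly chosen contractions.

    Each step moves the current point halfway toward a randomly chosen
    vertex; the resulting orbit fills out the Sierpinski triangle.
    """
    random.seed(seed)
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
    x, y = 0.5, 0.5  # arbitrary starting point inside the triangle
    points = []
    for _ in range(n_points):
        vx, vy = random.choice(vertices)
        x, y = (x + vx) / 2, (y + vy) / 2  # contraction toward a vertex
        points.append((x, y))
    return points

pts = chaos_game()
```

Plotting `pts` with any 2D scatter tool reveals the familiar self-similar triangle; by the contraction mapping ideas above, the orbit converges to the attractor regardless of the starting point.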
The theory of fractal geometry is a relatively new concept in mathematics. However, it is a concept humans are very familiar with. Most commonly, we think about fractals as they relate to objects found in nature. For example, fractal analysis can be used to characterize human structures such as cities. We will focus on one way of defining and analyzing cities: calculating their fractal dimension. This thesis will connect fractal dimension to city analysis via area and length. We will draw some conclusions regarding the social implications of city life based on the theory of connectivity, fractal theory, and the parallel between connectedness and fractality. These questions will be open ended and allow for further research into the science of cities.
Researchers are using deep neural networks and reinforcement learning to train agents to perform complex tasks at unprecedented levels, in many cases better than humans. However, some tasks remain quite time-consuming to pursue with current methods. We therefore explore a method to augment deep reinforcement learning that has the potential to speed up learning and improve overall performance: transfer learning. Specifically, we use human-guided transfer learning, where the source tasks are programmed by a human. We find that transfer learning provides a substantial learning boost to the agent in the task we have chosen, and we provide evidence that the potential benefits of transfer learning, particularly human-guided transfer learning, may be worth the costs.