14th IEEE Integrated STEM Education Conference — 9 AM - 5 PM EST, Saturday, March 9

Onsite Venue - McDonnell and Jadwin Halls, Princeton University, NJ - Virtual Attendees - Enter Zoom Room

Session Introduction

Introduction

Mar 9 Sat, 9:00 AM — 9:30 AM EST

Session Keynote-1

Keynote Speaker 1:

Mar 9 Sat, 9:30 AM — 9:50 AM EST

Curating Engaging Experiences to Foster Student STEM Identity

Melissa Thompson (Albert Einstein Distinguished Educator Fellow for the Department of Defense)

This talk does not have an abstract.
Speaker
Speaker biography is not available.

Session Keynote-2

Keynote Speaker 2:

Mar 9 Sat, 9:50 AM — 10:10 AM EST

The Uayki Connectivity System: Closing STEM Education Gaps in Remote, Underserved Communities

Karim Rifai Burneo (founder of the Uayki Foundation and Uayki Technologies)


Session Keynote-3

Keynote Speaker 3:

Mar 9 Sat, 10:10 AM — 10:30 AM EST

Promoting STEM for humanity - Recognising and utilising opportunities at IEEE HTB

Lwanga Herbert (IEEE HTB)


Session Full-01

Full Paper Track 01 — Virtual

Mar 9 Sat, 10:30 AM — 11:00 AM EST

Designing Educational Games to Teach Ethical Hacking Course in High School (Grades 9-12)

Shafi Parvez Mohammed, Gahangir Hossain and Syed Yaseen Quadri Ameen (University of North Texas, USA); Steven Keosouvanh and Mikyung Shin (West Texas A&M University, USA)

In today's fast-paced digital era, teenagers are deeply engaged in the online world, making them susceptible to potential cyber risks. While the internet offers numerous advantages, it also presents cybersecurity challenges. This paper introduces a unique three-stage ethics model for high school students, categorizing hacking types. By blending standard hacking techniques with interactive, hands-on learning using the "Password Predictor Game," students can effectively learn to safeguard their online presence. The "Password Predictor Game" has been found to positively impact students' cybersecurity knowledge and behaviors, as evidenced by both quantitative data and qualitative feedback. The game's instant feedback feature proves particularly effective, leading to a reduction in students using identical passwords. However, future versions of the game could benefit from incorporating more complex scenarios and a broader spectrum of cybersecurity topics, as suggested by some focus group participants. Moreover, our study underscores the significance of early cybersecurity education and emphasizes its practical relevance in the real world.
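The instant-feedback mechanic the abstract credits with reducing password reuse can be illustrated with a minimal strength check. This is a hypothetical sketch, not the authors' game code; the rules and message strings are invented:

```python
import re

def password_feedback(password):
    """Return instant-feedback tips for a candidate password.

    A simplified stand-in for the kind of per-attempt checks a game
    like the "Password Predictor Game" might run.
    """
    tips = []
    if len(password) < 12:
        tips.append("Use at least 12 characters.")
    if not re.search(r"[A-Z]", password):
        tips.append("Add an uppercase letter.")
    if not re.search(r"\d", password):
        tips.append("Add a digit.")
    if not re.search(r"[^A-Za-z0-9]", password):
        tips.append("Add a symbol.")
    if password.lower() in {"password", "123456", "qwerty"}:
        tips.append("Avoid common passwords.")
    return tips

print(password_feedback("Str0ng!Passphrase"))  # [] -- passed every check
```

An empty list means the password passed every check; otherwise the tips are shown to the player immediately, which is the feedback loop the study found effective.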

The University Environment as a Catalyst for Marriage: A Data Analysis Study

Brendon Munashe Mahere and Malak Fadili (Al Akhawayn University in Ifrane, Morocco); Imane Fakir and Yousra Chtouki (Al Akhawayn University, Morocco); Fatima Zahra Belhadj (Al Akhawayn University in Ifrane, Morocco)

The social aspect of student life in higher education is an important indicator that may, to a certain extent, affect other variables such as academic performance, choice of major/minor, choice of academic institution, dropout rate, and transfer rate (transfer to another institution). Data analysis is a powerful way to learn more about the social component of student lives and helps identify patterns. Python data analysis libraries allow easy visualization of these patterns and help spot trends that may not be predictable. Such information retrieved from data allows institutions to make more informed decisions and better cater to students' needs. University marriage specifically receives many conflicting opinions, and it raises many questions, including positive and negative real experiences. Many male and female students globally choose to assume this responsibility and combine marriage and study. Nonetheless, some of them feel that they cannot balance marriage and academic life. Some feel that school years are the most suitable time to find a partner, while others believe differently. Are university-born marriages more common than we may think? Can an academic institution be a catalyst for marriage? In this paper, we use Python libraries to analyse data for a case study within the context of Al Akhawayn University in Ifrane (AUI). The goal is to show how some of the features of Python can be used to expose hidden aspects of a given topic of study, in this case the student. We applied quantitative and qualitative research methods to the question of marriage between AUI students. Our results reveal that 33.6% of the alumni respondents married a former schoolmate from AUI, and 32.1% of currently enrolled students reported that they have already met a future partner.
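A percentage such as the 33.6% figure above boils down to a simple aggregation over survey responses. The schema and toy data below are invented for illustration (the paper's actual analysis would likely use Python libraries such as pandas):

```python
from collections import Counter

def share_married_schoolmate(responses):
    """Percentage of respondents who reported marrying a schoolmate.

    `responses` is a list of dicts like {"married_schoolmate": True},
    an invented schema standing in for the survey data in the paper.
    """
    counts = Counter(bool(r["married_schoolmate"]) for r in responses)
    total = counts[True] + counts[False]
    return 100 * counts[True] / total

# Toy survey: 2 of 5 respondents married a former schoolmate.
toy = [{"married_schoolmate": v} for v in (True, False, True, False, False)]
print(share_married_schoolmate(toy))  # 40.0
```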

Session Full-02

Full Paper Track 02 — Virtual

Mar 9 Sat, 10:30 AM — 11:00 AM EST

Promotion of Bolivian higher education in science and engineering by STEM engagement activities through the participation in NASA HERC

Fabio Diaz Palacios, Alejandro Nuñez Arroyo, Karen Vidaurre Torrez, Osvaldo Quinteros Terrazas, Marcelo Velasquez Enriquez and Mariana Molina Montes (Universidad Catolica Boliviana & CIDIMEC, Bolivia)

The goal of STEM engagement is to drive a systematic approach to influencing students across a spectrum of learning. In a developing country like Bolivia, incorporating STEM education into the educational system can improve its outcomes at different levels. Information acquisition and intrinsic interest, as well as further research and technology development, can be achieved by implementing new approaches focused on stimulating the thirst for knowledge in young people. This paper helps to understand the current educational system of Bolivia and how participation in international competitions such as the NASA Human Exploration Rover Challenge (HERC), through engagement activities, can have a positive impact in promoting higher education related to the fields of space, science, and engineering. It also shows how new degree programs such as Mechatronics Engineering contribute to this new educational process, presenting the activities, challenges, and methods developed along the way. Index terms: Aerospace, Bolivia, Education, HERC, Mechatronics, NASA, STEM, engagement.

Impact of Generative AI Adoption in Academia and How It Influences Ethics, Cognitive Thinking, and AI Singularity

Daniel Marimekala (Irving A. Robbins Middle School, USA); John Lamb (Pace University, USA)

Generation Alpha, born between 2010 and 2024, sees and experiences more of AI and adapts to it quickly, and advancements in technology will have less impact on them than on Generation X (1965-1980), Generation Y (Millennials, born 1980 to 1994), or Generation Z (1995-2009). The reason is that Gen Alpha constantly uses electronic gadgets for gaming, learning, and social media. They take to generative AI more quickly than Gen X, Gen Y, or Gen Z, especially in academia, where Gen Alpha is encouraged to use generative AI such as ChatGPT for homework, assignments, and research. This is a good approach for those who are struggling to complete their work or who have no idea how to complete their homework, assignment, or research. On the other hand, generative AI tools such as ChatGPT are slowly becoming so much a part of the system that students lean on them instead of doing research or thinking critically. As a result, behavioral changes develop over time: impatience for answers to problems, panic while solving a problem, anxiety, low self-esteem in problem solving, and low confidence. There is also a positive side to generative AI when viewed from a different angle: it helps individuals by guiding them toward possible answers, and anyone can ask questions using prompt engineering and get responses. But the fact of the matter is, how authentic is the response from generative AI? Can an author quote the responses he or she received from ChatGPT, and how can we avoid plagiarism? How can we reduce bias? How can we avoid AI singularity?

Session Full-03

Full Paper Track 03 — Virtual

Mar 9 Sat, 10:30 AM — 11:00 AM EST

Game-based Assessment for Computational Thinking: A Systematic Review

Qi Luo (Macao Polytechnic University, Macao & Heyuan Polytechnic, China); Shuhan Zhang (Macao Polytechnic University, Macao)

Computational thinking (CT) has become an indispensable skill in the 21st century and has permeated K-12 education at all levels. The evaluation of CT has become a crucial concern for educators and researchers. Currently, while extensive research has investigated the assessment of students' CT conceptual understanding, the measurement of their thinking processes is less explored. Game-based assessment appears to address this issue by analyzing students' behaviors and reactions during gameplay, and it has become an emerging assessment tool in CT education. This drives the need for a comprehensive overview of the field. To fill this gap, this study conducts a systematic literature review on the assessment of CT through games. It explores CT skills assessed through games, focusing on target groups, game forms, and benefits. A total of fifteen studies were examined. Findings indicate that algorithmic thinking, problem decomposition, pattern recognition, and abstraction were commonly assessed in games. Different ways of assessing CT skills in games were identified, involving progression level, log data, and manual coding. The benefits of CT game-based assessments were broadly reported, such as enjoyable experiences for students, instant feedback, easy administration for teachers and researchers, and richer information on students' cognitive processes. However, limited work has been done on exploring the use of digital games for younger age groups, and the psychometric qualities of the tools were rarely reported in the reviewed studies. The study sheds light on research and teaching practices using games and interactive assessment platforms in CT education.

MBTI Prediction Study Using Word2Vec

Isabelle Lee (USA)

As internet usage increases, the amount of data, such as text and images, being created is growing dramatically. Nowadays, we live our lives by sharing and obtaining information through various online platforms. Since individuals primarily share their lives and information through text, text not only contains information but also conveys emotions and psychological states. Additionally, personality type analyses like MBTI are used to understand an individual's psychological state. However, MBTI requires a significant number of survey questions and struggles to capture rapidly changing emotions. This study utilizes Twitter-based data to train an MBTI prediction model and analyzes the personalities revealed in the results through short texts. Furthermore, by analyzing written content, it can determine current emotional states and identify words and sentence patterns predominantly used by each MBTI type. Through this research, rapid psychological analysis can be conducted using short texts, and user patterns can be predicted. Because the analysis can run on ordinary conversations and text exchanges, it can reduce the cost of psychological and personality tests.
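The core Word2Vec idea, representing a short text by the average of its word vectors and classifying the result, can be sketched as follows. The two-dimensional "embeddings", the E/I centroids, and the nearest-centroid classifier are all invented simplifications standing in for trained Word2Vec vectors and the paper's (unspecified) prediction model:

```python
import math

# Toy 2-d "embeddings" standing in for trained Word2Vec vectors.
WORD_VECS = {
    "party": (1.0, 0.2), "friends": (0.9, 0.1), "talk": (0.8, 0.3),
    "book": (-0.9, 0.2), "quiet": (-1.0, 0.1), "alone": (-0.8, 0.0),
}

# Hypothetical per-type centroids (E vs I along the first axis).
CENTROIDS = {"E": (0.9, 0.2), "I": (-0.9, 0.1)}

def text_vector(text):
    """Mean of the word vectors of known words in the text."""
    vecs = [WORD_VECS[w] for w in text.lower().split() if w in WORD_VECS]
    n = len(vecs)
    return tuple(sum(v[i] for v in vecs) / n for i in range(2))

def predict_type(text):
    """Assign the MBTI axis whose centroid is nearest to the text vector."""
    v = text_vector(text)
    return min(CENTROIDS, key=lambda t: math.dist(v, CENTROIDS[t]))

print(predict_type("quiet book alone"))  # I
```

A real pipeline would train the embeddings with a library such as gensim on the Twitter corpus and fit a proper classifier over all four MBTI axes.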

Session Full-04

Full Paper Track 04 — Virtual

Mar 9 Sat, 10:30 AM — 11:00 AM EST

A-level Mathematics Curriculum Design for Cultivating Top Innovators

Dingyuan Xu (Beijing RCF Experimental School, China); Xiang Gong (Princeton International School of Mathematics and Science, USA); Min Wang (Beijing Chaoyang RCF Dongba School, China)

The cultivation of top innovators through curriculum reform in high schools is a topic of great interest. The A-Level Center of Beijing RCF Experimental School has embarked on an innovative approach by establishing experimental groups and integrating multiple A-level courses. This article focuses on curriculum integration and presents the concepts, principles, steps, and insights regarding the curriculum design of A-level courses for cultivating top innovators. The steps involved include 1) establishing curriculum integration goals based on the requirements of A-level courses and the mathematics entrance examinations of the University of Oxford and the University of Cambridge, 2) investigating student profiles, including academic performance, motivation, learning needs, and preferences, 3) systematically organizing course content by creating three "arteries" (pure math, mechanics, and statistics) that connect nine main "hubs" (nine A-Level courses) and 62 points of interest (62 important topics), and 4) implementing classroom activities such as presentations and discussions to enhance understanding. Additionally, this article proposes future development directions, including further exploration of the value of the curriculum in fostering individuals, research on the integration of various curriculum forms, promoting collaboration with the school counselor's office, and conducting systematic investigations into the mechanisms of cultivating top innovators.

Technological Tools as a Motivational Strategy in English Language Learning among University Students

Cristina Paez Quinde (Pontificia Universidad Catolica del Ecuador Sede Ambato & Instituto Superior Tecnológico España, Ecuador); Estefania Ojeda (Pontificia Universidad Católica del Ecuador Sede Ambato, Ecuador)

This article investigates the impact of technological tools as a motivational strategy in the teaching and learning process of the English language among university students. In a globalized context, the importance of English language proficiency has grown, emphasizing the need to keep students interested and engaged in their learning. In response to this situation, educational technologies emerge as a potential solution. These tools provide an interactive and engaging way of learning, incorporating elements of gamification, personalized and immediate feedback, and flexibility in accessing resources. These features drive students' intrinsic motivation, granting them greater control over their learning process. Furthermore, the article explores how technologies can enhance grammatical accuracy and pronunciation and encourage practice in authentic situations through exposure to real English media. However, it emphasizes that the adoption of these tools should be balanced and ethical, complementing traditional instruction rather than replacing it. The article also considers challenges and limitations related to the use of technologies in English language learning, such as disparities in technological access and a potential reduction in social interaction. The research underscores the importance of careful planning and the appropriate selection of tools that align with pedagogical goals and student needs.

Technological Resources as a Strategy in Student Learning at the Polytechnic Training Center

Cristina Paez Quinde (Pontificia Universidad Catolica del Ecuador Sede Ambato & Instituto Superior Tecnológico España, Ecuador); Alvaro Monar (Pontificia Universidad Católica del Ecuador, Ecuador)

This scientific article focuses on the identification of technological resources used as strategies in student learning at the Polytechnic Training Center. The objective was to investigate how these technological resources can contribute to the teaching and learning process effectively. Through a mixed data collection approach that included student surveys and teacher interviews, various technological tools and applications used in the educational environment were analyzed. The results revealed significant benefits of integrating these resources, such as increased interactivity, personalized learning, access to updated information, and a more practical and relevant approach for students. While challenges were identified, such as the digital divide and the need for teacher training, the results supported the idea that technological resources can be powerful strategies to enhance student learning at the Polytechnic Training Center. In conclusion, this study provides a clear and evidence-based insight into the importance of technological resources as strategies in student learning. These findings support the need to continue exploring and implementing suitable technological resources in educational environments to promote effective and enriching learning.

Using the Swin-Transformer for Real & Fake Data Recognition in PC-Model

Jiyoon Park (Branksome Hall Asia, Korea (South))

Recently, due to the rapid development of generative AI technologies, the use of AI-generated images has increased significantly, making the distinction between real and fake images crucial. Generated images may be used in various ways, such as data training and fast image generation, but a potential for misuse, such as deepfakes or spreading false information, still exists. This study explores a novel model using the architecture of the Swin-Transformer to distinguish between real images and fake images generated by CNNs (Convolutional Neural Networks) and GANs (Generative Adversarial Networks). The Swin-Transformer, a successor to the Vision Transformer (ViT), applies the structure of the Transformer, which has shown outstanding performance in natural language processing, to the field of images and demonstrates excellent pixel-level segmentation performance. Distinguishing real from fake images requires detailed pixel-level analysis, in which the Swin-Transformer exhibits higher accuracy. Improving the performance of distinguishing between real and fake images is expected to set limits on indiscriminate image generation, with further effects such as preventing the indiscriminate use of AI images through program-based discrimination and legal sanctions.
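The Swin-Transformer's characteristic operation is partitioning a feature map into non-overlapping local windows so that self-attention is computed only within each window. Real implementations do this with tensor reshapes; the plain-Python sketch below mirrors only that partitioning step, not the paper's full model:

```python
def window_partition(feature_map, window):
    """Split an H x W grid into non-overlapping window x window blocks.

    Mirrors, with plain lists, the reshape the Swin-Transformer uses so
    that attention is restricted to each local window.
    """
    h, w = len(feature_map), len(feature_map[0])
    assert h % window == 0 and w % window == 0, "grid must divide evenly"
    windows = []
    for top in range(0, h, window):
        for left in range(0, w, window):
            block = [row[left:left + window]
                     for row in feature_map[top:top + window]]
            windows.append(block)
    return windows

# A 4x4 grid of pixel indices split into four 2x2 windows.
grid = [[r * 4 + c for c in range(4)] for r in range(4)]
wins = window_partition(grid, 2)
print(len(wins))   # 4
print(wins[0])     # [[0, 1], [4, 5]]
```

Restricting attention to such windows (and shifting them between layers) is what gives the architecture its pixel-level resolution at manageable cost.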

Session Full-05

Full Paper Track 05 — Virtual

Mar 9 Sat, 10:30 AM — 11:00 AM EST

Bridging IoT Education Through Activities: A Game-Oriented Approach with Real-time Data Visualization

Nurzaman Ahmed (Donald Danforth Plant Science Center, USA); Flavio Esposito (Saint Louis University, USA); Nadia Shakoor (Donald Danforth Plant Science Center, USA)

The rapid evolution of the Internet of Things (IoT) has underscored the importance of comprehensive educational strategies to impart IoT concepts and applications to a diverse audience. Given IoT's pervasive impact, there is a pressing need for effective education in this area. Currently, there is a significant gap between existing educational strategies for IoT and the dynamic, engaging approaches needed to captivate a diverse audience, particularly young learners. The challenge lies in developing a methodology that not only educates but also motivates students, e.g., from Grade 2 to Grade 12. To address this need, we developed an innovative, activity-based educational framework, integrating interactive and immersive learning methods, aimed at simplifying complex IoT concepts with smart-agriculture applications in mind for early learners. We outline this novel pedagogical approach, detailing how specific IoT components are taught through targeted activities. The paper should serve as a guide for educators implementing this framework and encourage readers to recognize the importance of adopting new teaching strategies for IoT. Through the implementation of this framework, exemplified in a case study of a plant-care game, we have observed increased engagement and understanding of IoT concepts among our target students. These findings indicate the effectiveness of our approach in real-world educational settings.
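The plant-care game described above pairs live sensor readings with immediate feedback. A minimal sketch of that loop, with simulated soil-moisture readings and invented thresholds and messages (not the authors' game), might look like:

```python
import random

def plant_status(moisture, low=30.0, high=70.0):
    """Map a soil-moisture reading (percent) to game feedback."""
    if moisture < low:
        return "water me!"
    if moisture > high:
        return "too wet"
    return "happy"

def simulate(readings=5, seed=42):
    """Simulate a stream of sensor readings as a stand-in for live IoT data."""
    rng = random.Random(seed)
    out = []
    for _ in range(readings):
        m = round(rng.uniform(0, 100), 1)
        out.append((m, plant_status(m)))
    return out

for moisture, status in simulate():
    print(f"moisture={moisture:5.1f}% -> {status}")
```

In the classroom setting the simulated readings would be replaced by data from a real moisture sensor, and the status string by the game's visualization.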

Variational Autoencoders Using Convolutional Neural Networks for Highly Advanced Cyber Threats

Anita Priyadarshini Durai Pandian (Aarhus University, Denmark)

In the ever-evolving landscape of cybersecurity, the detection of highly advanced cyber threats demands innovative approaches. This abstract introduces a novel framework that harnesses the power of Variational Autoencoders (VAEs) and Convolutional Neural Networks (CNNs) for advanced cyber threat detection. The proposed system is designed to learn, model, and uncover subtle anomalies within network traffic data, enabling the identification of complex and sophisticated cyberattacks. In this framework, a VAE is trained to learn a probabilistic latent representation of network data, allowing it to capture essential features and structures inherent in both normal and malicious traffic. The VAE serves as the foundation for a subsequent CNN-based anomaly detection model, which utilizes the latent representations to identify patterns, outliers, and cyber threats. By integrating these two neural network architectures, the system leverages the VAE's data compression and representation learning capabilities, while the CNN excels in pattern recognition and anomaly detection. The deployment of this advanced system within network infrastructure empowers organizations to continuously monitor and adapt to emerging threats. Its performance is evaluated using established metrics, ensuring the detection of cyber threats with high precision and recall. Additionally, the model can be integrated with external threat intelligence sources to enhance its threat detection capabilities. The VAE-CNN framework presents a promising approach for addressing the ever-increasing complexity of cyber threats, providing an adaptable, intelligent, and data-driven solution for the protection of critical digital assets and network infrastructure. Its ability to uncover highly advanced cyber threats has the potential to significantly enhance the security posture of organizations in the face of evolving cybersecurity challenges.
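The detection principle behind the framework, flag traffic whose reconstruction diverges from what the model learned as normal, can be sketched with a deliberately trivial "autoencoder": the mean feature vector of normal traffic. This is a conceptual stand-in only; the paper's system learns a latent representation with a VAE and detects patterns with a CNN:

```python
import math

def fit_profile(normal_samples):
    """'Train' a trivial stand-in for the VAE: the mean feature vector
    of normal traffic. A real system would learn a latent encoder."""
    n = len(normal_samples)
    dim = len(normal_samples[0])
    return [sum(s[i] for s in normal_samples) / n for i in range(dim)]

def reconstruction_error(sample, profile):
    """Distance between a sample and its 'reconstruction' (the profile)."""
    return math.dist(sample, profile)

def is_anomaly(sample, profile, threshold):
    return reconstruction_error(sample, profile) > threshold

# Invented 2-feature traffic samples: three normal, then two probes.
normal = [[1.0, 0.1], [1.1, 0.0], [0.9, 0.2]]
profile = fit_profile(normal)
print(is_anomaly([5.0, 3.0], profile, threshold=1.0))   # True
print(is_anomaly([1.05, 0.1], profile, threshold=1.0))  # False
```

The threshold plays the role of the precision/recall trade-off the abstract mentions: lowering it catches subtler anomalies at the cost of more false alarms.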

Session Poster-01

Poster 01 — Poster Virtual

Mar 9 Sat, 10:30 AM — 11:00 AM EST

Detecting Food Allergies through Scratch Testing and Blood Tests

Shreya Dutt (MCMSNJ, USA)

Food allergies are on the rise and becoming increasingly prevalent. By 2019, about 6 million children in the U.S. had been reported to have at least one food allergy, which amounts to 2 kids in every classroom. The cause of food allergies has long been debated, with recent research determining that though genetics could be a contributing factor, about two thirds of children with a food allergy do not have a parent with one. Because food allergies have greatly increased in the last generation, researchers believe there may be a correlation between food allergies and the climate change that has also occurred in the past few decades. In order to determine the cause of food allergies and find a way to treat or prevent them, they first have to be diagnosed. There are two main ways to detect food allergies: scratch testing and allergen-specific Immunoglobulin E (IgE) blood testing. A scratch test is a simple skin test where drops of specific allergens are placed on the skin, and the skin is lightly scratched to expose the person to the allergen. A person allergic to an allergen will develop a bump at the skin site within 20 minutes. The bumps are traced and visually compared to determine the level of allergy. The IgE blood test measures the level of IgE associated with allergic reactions in the blood. This helps detect an allergy to the particular allergen being tested for. When someone is exposed to an allergen, such as peanuts, dairy, or tree nuts, the person's body may perceive it as an antigen and produce a particular IgE that binds to specialized mast cells and basophils in the person's skin, GI tract, or respiratory system. The next time the body is exposed to the particular allergen, the IgE antibodies trigger the mast cells to release histamine, which causes allergic reactions or anaphylaxis.
A specific IgE level test can measure the level of response to different allergens in a person. Though there is room for improvement in accuracy, both forms of testing used together are effective in diagnosing allergies and determining levels of allergy for the top allergens and many more. These methods of testing can assist in facilitating more research into the causes and treatment of food allergies, creating a brighter future for the next generations of children.

The Feasibility of Coffee Grounds and Coconut-based Antimicrobial Exfoliant

Diego Lorenzo C. Donato (Philippines); Juris Roi D. Orpilla, Adrian Gabriel H. De Guzman and Matthew Jonathan T. Hallasgo (La Salle Green Hills, Philippines)

From the first quarter of 2022 to 2023, the Philippine coconut and coffee bean industries saw production increases of 1.88% and 1.3% respectively, due to the rapid modernization of the country's agricultural sector; however, this growth has also increased the production of agricultural byproducts, which has had adverse effects on people, the environment, and the socio-economic progress of the Philippines. In addressing this issue, this paper studies the feasibility of producing an antimicrobial exfoliant out of agricultural byproducts, namely coconut husk, coconut shell, coconut oil, and coffee grounds, with xanthan gum as an additive. Given the importance of observing the Sustainable Development Goals (SDGs) of the United Nations, our product will ensure its production process and benefits align with those goals, specifically 3, 6, 8, 11, 12, 13, 14, and 17. First, the production process is environmentally friendly, which makes it safe for bodies of water and the life below them. Additionally, the benefits include improved skin health and well-being; the optimised use of agricultural wastes in the country; the use of biodegradable materials in the product; decreased amounts of agricultural byproducts in landfills; improved air quality; and an improved economy for the agricultural sector in developing countries such as the Philippines. For this study, the independent variable is the coffee grounds and coconut-based antimicrobial exfoliant, while the dependent variable is its effectiveness and feasibility, tested through four experiments: "Agar Disk Diffusion" for antibacterial properties, "Modified Tape Stripping" for abrasion properties, a "pH Test" to indicate whether it is safe for human use, and a "Spreadability Test" to determine its ease of application on skin.
The results of the study showed that an exfoliant made from agricultural waste products is feasible and comparable to exfoliants available on the market; however, it is recommended that future researchers advance the study of natural products, especially agricultural byproducts, as ingredients for making medicinal or cosmetic products more efficiently.

Deep Learning Approach to Early Detection of Rheumatoid Arthritis

Saket Pathak (Silver Creek High School, USA)

Rheumatoid arthritis (RA) is a chronic inflammatory disorder affecting areas such as the hands and feet. According to MedicalNewsToday, roughly 1.3 million people in the US have RA, representing 0.6 to 1% of the population. Artificial intelligence (AI) is the ability of machines to perform tasks that typically require human intelligence and is becoming more widespread in areas such as healthcare. Detection of the more common osteoarthritis has been performed using AI before, and RA detection methods are starting to emerge, too. However, these detection methods use X-rays and protein scans, which take time and money. Since arthritis is a disorder of the joints, automating its detection from images could be done in a new, revolutionary way. To accomplish this, two image datasets were used: the first of healthy hands with no arthritis symptoms, and the second containing images of nodules, bumps on the hand that are a symptom of RA. The model was created using Jupyter Notebook, TensorFlow, Keras, and Python 3.9, with the data going through preprocessing, scaling, and splitting for faster training. A deep learning model, a convolutional neural network, was trained using Keras's model.fit. The accuracy reached 99.48%; overall, the model could classify between the two datasets. The conclusion is that classifying RA from just a scan of someone's hand could, in the future, allow for a faster diagnosis of arthritis once the approach is perfected.
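The preprocessing, scaling, and splitting steps mentioned above can be sketched in plain Python. The tiny "images" and the exact split logic are invented for illustration; the paper's pipeline uses TensorFlow/Keras utilities for the same purpose:

```python
import random

def scale(image):
    """Scale 8-bit pixel values to the [0, 1] range before training."""
    return [[p / 255.0 for p in row] for row in image]

def train_test_split(samples, labels, test_frac=0.2, seed=0):
    """Shuffle and split a labeled dataset into train and test portions."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    train, test = idx[:cut], idx[cut:]
    return ([samples[i] for i in train], [labels[i] for i in train],
            [samples[i] for i in test], [labels[i] for i in test])

# Ten tiny 2x2 "images" with alternating labels, invented for illustration.
images = [[[0, 128], [255, 64]] for _ in range(10)]
labels = [0, 1] * 5
xtr, ytr, xte, yte = train_test_split([scale(im) for im in images], labels)
print(len(xtr), len(xte))  # 8 2
```

After a split like this, the training portion would be fed to the CNN via model.fit and the held-out portion used to measure accuracy.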

Exploring Cybercrimes on Roblox

Maya Patwardhan (Germantown Academy, USA)

Roblox is an online universe in which users can create and choose from many different games to play. In 2023, the platform averaged 214 million users per month and generated USD 7 million per day. Some popular games on Roblox include Adopt Me!, Mega Easy Obby, Arsenal, Brookhaven RP, and Welcome to Bloxburg. However, is this online platform safe to use? What types of cybercrimes occur through Roblox? To find information about these crimes, I used the Google Search Engine to collect articles. I searched using an initial keyword of "roblox crimes". This resulted in numerous articles containing data about the various crimes committed, which I used again as keywords in more searches. Altogether, I found 16 articles on news, security, and magazine websites. I found the following types of crimes happening on Roblox: hacking, beaming, data breaches, ransom, malware, scams, spreading hate in game, and in-person violence. User accounts can be hacked; in one case, cybercriminals hacked multiple accounts and changed the profile information to say "Ask your parents to vote for Trump". Beaming is when an account is hacked and valuable in-game items, Robux, or limited-edition items are stolen and then sold. Data breaches occur when sensitive information about individuals is stolen. This information contains details like names, usernames, phone numbers, email addresses, IP addresses, home addresses, and dates of birth. Cybercriminals can then hold this information hostage and ask for a ransom to prevent it from being leaked online. Users can be tricked into downloading malware (malicious software) on their devices; 9.5% of malicious files are spread via Roblox. Scams are also prevalent on Roblox. One example is when fake websites claim to give users free Robux and steal information. In another case, users paid small amounts (approximately 60 cents) for nonexistent prizes.
Roblox is also used to recreate famous terrorist attacks that users can experience, which results in the spread of hate. Sometimes, predators pose as users and become friends with their targets (children) in game. They later encourage these victims to meet in person, leading to in-person violence, such as assaults. Collectively, these crimes can cause depression in kids, leak user information online, con users out of their money, and infect their devices. To make sure you or someone you know is safe in these online games, it is necessary to educate yourself on all the crimes that could happen when playing in and outside of the game.
Speaker
Speaker biography is not available.

Session Poster-02

Poster 02 — Poster Virtual

Conference
10:30 AM — 11:00 AM EST
Local
Mar 9 Sat, 10:30 AM — 11:00 AM EST

Development of a Coconut Coir Diaper

Gabriel L. Legaspi, Marco Juhann G. Ortega, Gabriel Antonio S. Prospero, Freya Yggdrasil Soltura and Samantha Q. Tachado (La Salle Green Hills, Philippines)

Extensive research revealed that in 2022, the Philippines was the fourth top waste generator in Southeast Asia and considered a top contributor to ocean pollution. Healthcare waste alone, generated from June 2020 to April 2022, weighed around 1,400 metric tons every day, according to the Environmental Management Bureau (EMB). The amount of garbage produced overall in the country is increasing due to rapid population growth and urbanization. Due to a lack of resources, the government is unable to execute efficient waste management, which in turn leads to environmental and health problems. Non-biodegradable diapers contribute greatly to this healthcare waste, creating a need to explore biodegradable materials such as organic natural fibers like coconut coir. This research aims to assess whether the diaper satisfies the criteria set by the researchers regarding durability and absorbency. In contrast to previous studies, this paper utilizes coconut coir fiber as the main material for diapers. The diaper underwent multiple treatments and tests, including intensive chemical sterilization, a gravimetric test, and a rate-of-absorption test. The research reveals that the prototype can closely replicate the qualities of a commercial diaper in terms of absorbency and durability. Using a t-test, the statistical analysis shows no significant difference between the prototype and the commercial diaper. In addition, producing coconut coir diapers is a cost-effective approach, as coconut coir is readily available in the country. Not only does this coconut coir diaper pave the way for repurposing agricultural waste, potentially alleviating waste-related issues, but it also addresses the severe waste management problems that communities are currently facing.
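The two-sample comparison mentioned in the abstract can be sketched as follows: a Welch two-sample t statistic computed on made-up absorbency readings (the numbers and function name are illustrative assumptions, not the study's data or code):

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df

# Hypothetical absorbency readings (mL retained) -- illustrative only
prototype  = [118, 122, 115, 120, 119]
commercial = [121, 124, 117, 122, 120]

t, df = welch_t(prototype, commercial)
print(round(t, 3), round(df, 1))
```

A |t| well below the critical value at the resulting degrees of freedom would correspond to the abstract's finding of no significant difference.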
Speaker
Speaker biography is not available.

Enhancing the Desiccation Tolerance of Arabidopsis thaliana with Proteins from Ramazzotius varieornatus

Deven R Butani (USA)

Drought has posed a great threat to crops around the world. The decrease in water availability within soils greatly reduces crop yield and productivity, negatively affecting the food crop supply. This has caused major problems for the agricultural industry and the millions who rely on these crops for food, especially with climate change exacerbating droughts, making dry periods longer, more frequent, and more severe. Tardigrades are microscopic organisms renowned for their resistance to various extreme conditions. One of their most remarkable abilities is surviving desiccation, or drying. To protect against water scarcity, tardigrades utilize intrinsically disordered proteins specific to their species, known as Tardigrade-Disordered Proteins (TDPs). One family of these proteins, the Cytoplasmic/Cytosolic Abundant Heat Soluble (CAHS) proteins, is located within the cytoplasm and primarily protects a tardigrade's cells from desiccation. This research aims to introduce CAHS protein-expressing genes into thale cress so that the plants can survive and thrive post-desiccation, as tardigrades have proved these mechanisms to be crucial to their survival and well-being after such an event. If the plants show increased health and yield after dry periods, genetically engineering plants with tardigrade proteins could prove extremely beneficial to crop productivity within the agricultural industry.
Speaker
Speaker biography is not available.

Balloon Car

Arden Upadya (Morristown Beard School, USA)

I created a car that moves by itself after air is blown into a balloon attached to the car, to demonstrate certain aspects of physics. The car consists of a Gatorade bottle as the body, four Gatorade bottle caps as the wheels, three straws, two skewers, and a balloon. First, two straws are attached to the bottom of the bottle and the two skewers are put through the straws. Next, a hole is made in each bottle cap, and the caps are put on the ends of the skewers so that they can still move freely. Then, a hole is made in the top of the bottle, and a straw is put in that hole and pointed toward the back of the bottle. Lastly, a balloon is attached to the straw going through the hole and secured with a rubber band. After the balloon is inflated, the car moves forward until the balloon deflates, and sometimes a little longer after there is no air inside. The car is able to move because the air escaping the balloon propels it forward. The energy stored in the inflated balloon is potential energy, which is converted to kinetic energy once the car starts moving; energy is neither created nor destroyed, only converted into different forms. This experiment also relates to Newton's Laws of Motion. Newton's First Law of Motion is displayed as the stationary car does not move until it is acted on by the air; similarly, once in motion, it does not stop until the air runs out and friction brings it to a halt. Newton's Second Law of Motion is seen in how the amount of force, or air, put in results in a different amount of acceleration and total distance. Newton's Third Law of Motion is shown as the action (air rushing backward out of the balloon) produces an equal and opposite reaction (the car being pushed forward). These important principles of physics can be observed through the motion of a balloon-powered car in the form of a fun science project!
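The force and energy ideas above can be explored numerically. The following toy simulation applies Newton's second law (a = F/m) with a constant thrust while the balloon still holds air, then friction alone until the car stops; the thrust, friction, mass, and burst-duration values are illustrative guesses, not measurements from the project:

```python
def simulate(thrust=0.3, friction=0.1, mass=0.15, burst=1.0, dt=0.01):
    """Return total distance travelled (m): thrust phase, then coasting."""
    v, x, t = 0.0, 0.0, 0.0
    while True:
        # Net force: thrust while air remains, minus friction while moving
        f = (thrust if t < burst else 0.0) - (friction if v > 0 else 0.0)
        a = f / mass                  # Newton's second law
        v = max(v + a * dt, 0.0)      # friction cannot push the car backwards
        x += v * dt
        t += dt
        if t >= burst and v == 0.0:   # car has stopped after the air ran out
            return x

print(round(simulate(), 2))
```

Raising `thrust` or lowering `friction` increases the total distance, mirroring the second-law observation in the abstract.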
Speaker
Speaker biography is not available.

Motion Planning Control of a Qbot2 – Using a Neural Network Controller

Saami Ali (Cold Spring Harbor High School, USA)

This project investigates the trajectory tracking motion control problem for a QBot 2, an autonomous wheeled mobile robot. The robot operates as part of a wireless control system, in which the control signal is transmitted to the robot wirelessly. In a wireless control system, perturbations caused by the wireless channel can interfere with the feedback signal, causing errors in the system's tracking response. This work focuses specifically on the effect of perturbations caused by the uncertain, time-varying delays inherent in wireless communication links. To that end, the delays in the feedback signal are modeled by determining the changes they induce in the closed-loop behavior. Finally, a control methodology to eliminate the tracking error caused by these delays is developed. The tracking control methods generally used in wheeled mobile robots do not compensate for these uncertainties. In this work, an adaptive neural network control methodology is proposed for the robot. The approach combines a neural network-based kinematic controller with model reference adaptive control. The kinematic controller parameters will be updated online using artificial neural networks to force the tracking error of the robot to converge to zero. The goal is to provide both simulation and hardware implementation to illustrate the convergence of the proposed control scheme.
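A much-simplified version of the kinematic tracking idea can be sketched as follows. This uses a plain proportional controller on a unicycle model driving toward a goal point, not the paper's adaptive neural network scheme; the gains and time step are assumed values:

```python
import numpy as np

def step(pose, goal, kv=0.5, kw=2.0, dt=0.05):
    """One Euler step of a unicycle model under proportional tracking control."""
    x, y, th = pose
    dx, dy = goal[0] - x, goal[1] - y
    rho = np.hypot(dx, dy)                      # distance (tracking) error
    alpha = np.arctan2(dy, dx) - th             # heading error
    alpha = (alpha + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi]
    v, w = kv * rho, kw * alpha                 # proportional control law
    new_pose = (x + v * np.cos(th) * dt,
                y + v * np.sin(th) * dt,
                th + w * dt)
    return new_pose, rho

pose, goal = (0.0, 0.0, 0.0), (1.0, 1.0)
for _ in range(400):
    pose, err = step(pose, goal)
print(err < 0.05)  # tracking error shrinks toward zero
```

In the paper's method, the fixed gains `kv` and `kw` would instead be adapted online by the neural network to cope with wireless delays.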
Speaker
Speaker biography is not available.

Session Poster-03

Poster 03 — Poster Virtual

Conference
10:30 AM — 11:00 AM EST
Local
Mar 9 Sat, 10:30 AM — 11:00 AM EST

Pi Song: Discover the Harmony of Numbers and Notes

Julia Lu (Pioneer Valley Chinese Immersion Charter School, USA)

"Pi Song" is a project that embodies an interdisciplinary endeavor, uniting the subjects of Science, Technology, Engineering, Arts, and Mathematics (STEAM) to convert the digits of Pi into a harmonious auditory experience. The project's main innovation is that it transforms the precision of mathematics into a melodious experience, illustrating the intrinsic beauty of mathematical concepts through musical expression. From a scientific perspective, we construct a unique musical instrument from Lego components integrated with an ultrasonic sensor. The ultrasonic sensor measures distance and then plays musical notes ranging from A to G# (A, B, C#, D, E, F#, G#), allowing us to play various melodies. This functionality lets the instrument "see" where objects are, translating numerical distances into specific musical notes, with distances segmented into seven distinct ranges corresponding to the notes A through G#. From an engineering perspective, we used a LEGO NXT at the core and engineered an instrument out of Legos that plays music; every note is fine-tuned so that the mathematical precision of Pi carries through to the music. Technologically, a Python script was developed to transform the first 100 digits of Pi into a sequence of musical notes, turning raw numbers into a score for the senses. We then used Flat.io to turn the notes translated by Python into a music score with actual note values. Mathematically, the challenge was to assign musical notes to the digits of Pi, which was addressed by converting base-10 digits into base 7 to accommodate all possible values. This approach not only solved the issue of representing digits beyond G# (the seventh note) but also introduced a novel method of encoding numbers into music. Lastly, from a musical perspective, the Pi Song leverages different types of notes in rhythm with different values. 
In our music piece, we used quarter notes (worth 1 beat) and eighth notes (worth ½ a beat) because we wrote base-10 digits as base-7 numbers, which can result in two digits. Each such pair represents one digit of Pi, so we made the two eighth notes worth 1 beat together; every other note is a quarter note, worth one beat on its own. This methodological choice ensured the piece's adherence to the temporal structure, with every note imbued with the essence of Pi. The Pi Song project ends with a performance that synergistically combines a classical violin with the custom-built Lego instrument, offering a multi-sensory experience of Pi through music. This innovative project not only shows the creative fusion of STEAM disciplines but also serves as an experiment exploring the educational and aesthetic potential of translating mathematical phenomena into the universal language of music.
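The base-7 digit-to-note scheme described above can be sketched in Python; the exact digit-to-note assignment and the rhythm handling here are assumptions for illustration, not the project's actual script:

```python
NOTES = ["A", "B", "C#", "D", "E", "F#", "G#"]  # assumed mapping: digit 0-6 -> note

def digit_to_notes(d):
    """Map one base-10 digit of Pi to (note, beats) pairs via base 7.
    Digits 0-6 become a single quarter note (1 beat); digits 7-9 need two
    base-7 digits, rendered as a pair of eighth notes (0.5 beats each)."""
    if d < 7:
        return [(NOTES[d], 1.0)]
    return [(NOTES[d // 7], 0.5), (NOTES[d % 7], 0.5)]

pi_digits = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]  # first digits of Pi
score = [note for d in pi_digits for note in digit_to_notes(d)]
print(score[:4])  # [('D', 1.0), ('B', 1.0), ('E', 1.0), ('B', 1.0)]
```

Note how every digit of Pi contributes exactly one beat, whether it becomes one quarter note or a pair of eighth notes.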
Speaker
Speaker biography is not available.

Pronunciation correction service for individuals with hearing impairment: noncontact connection with volunteers

JungHyun Clair Park (Chadwick International, Korea (South))

Pronunciation plays a crucial role in shaping first impressions during initial conversations, and unclear pronunciation is often wrongly construed as being closely linked to intelligence. Consequently, individuals with hearing impairments who struggle with pronunciation are susceptible to such misunderstandings. This paper aims to address this issue by describing a prototype web service that provides remote assistance with pronunciation correction for individuals with hearing impairments. The service allows individuals with hearing impairments to upload recorded audio files, which are then reviewed by non-hearing-impaired volunteers. Feedback is then provided by comparing the submitted pronunciation with standard pronunciation. Prior research has explored various methods to address similar issues, including mimicking mouth movements and automatic recognition using AI. This study analyzed these programs, considering the characteristics of the Korean language, and applied the most suitable platform and technology to create a service tailored to individuals with hearing impairments. The web service emphasizes accessibility, enabling individuals to receive pronunciation correction assistance without constraints of time, resources, or location. The development process relied on Figma for web design and coding as the primary technologies for the web user interface.
Speaker
Speaker biography is not available.

Block4py: make logic with blocks and then do text coding!

Christina Cho (Phillips Academy Andover, USA); Seunghoon Ryu (Seoul International School, Korea (South)); Wonjae Choi (Chadwick International School, Korea (South))

In many schools in Korea, students learn block coding, such as Scratch, before learning text coding. When first learning logic, block coding, free of the burden of syntax, is considered the optimal method. However, when students later learn text coding, the focus shifts to memorizing syntax, and the skills acquired from block coding are not effectively carried over. The main reason is that text coding involves dealing with data like numbers or characters, rather than moving sprites as in block coding. Therefore, a more effective approach to learning text coding could involve providing content in block coding that can handle numbers or characters, creating logic for problem-solving in block coding first, and then learning the Python syntax corresponding to the blocks. This approach could make learning text coding easier and more enjoyable for many students. In line with this approach, a website for learning Python has been created (https://block4py.org). The site presents problems, allows users to first create logic with block coding, explains the corresponding Python syntax for the blocks, and guides users to perform text coding in Python. The website is currently at the minimum viable product stage, and feedback is being gathered from friends and other students. Based on this feedback, the plan is to address any issues and publish the site as an easy-to-learn platform for everyone.
Speaker
Speaker biography is not available.

White Line Detection System for Safe Crosswalk Pedestrian Movement of Visually Impaired Individuals

Joonwoo Bae (Seoul International School, Korea (South))

This research explores the development of an assistive device using smart glasses to enhance the mobility of visually impaired individuals while walking and crossing roads independently. Leveraging the camera and sensor functions embedded in smart glasses, a computer vision system was devised to help the visually impaired cross roads accurately. The system uses the smart glasses' camera to capture images every 1-2 seconds, transmitting them to a smartphone. The smartphone, equipped with a YOLO tiny model, identifies the white line on the crosswalk floor and triggers a voice alarm on the smart glasses. This technology effectively alerts visually impaired individuals if they deviate from the correct path. The approach involves collecting images of white lines and training the system on them to enhance its line-detection accuracy. The resulting system offers real-time guidance for visually impaired individuals, significantly improving their ability to navigate road crossings. Future endeavors include the development of a smartphone app incorporating the white line detection algorithm to further assist individuals with visual impairments.
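A much-simplified stand-in for the detection step can be sketched as follows. Instead of the YOLO tiny model the abstract describes, this flags a frame when enough bright (white) pixels appear in the lower half of the image; the brightness and area thresholds are illustrative assumptions:

```python
import numpy as np

def white_line_present(gray, bright=200, min_fraction=0.05):
    """Heuristic check: is a white crosswalk line visible in the lower image?"""
    h = gray.shape[0]
    lower = gray[h // 2:]                  # region where the line would appear
    frac = (lower >= bright).mean()        # fraction of bright pixels
    return frac >= min_fraction

# Synthetic 100x100 grayscale frame: dark road with a white stripe near the bottom
frame = np.full((100, 100), 40, dtype=np.uint8)
frame[80:90, :] = 255
print(white_line_present(frame))  # True -> would trigger the voice alert
```

A learned detector such as YOLO replaces this hand-set threshold with a model trained on the collected white-line images, which is far more robust to lighting and viewpoint changes.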
Speaker
Speaker biography is not available.

Session Poster-04

Poster 04 — Poster Virtual

Conference
10:30 AM — 11:00 AM EST
Local
Mar 9 Sat, 10:30 AM — 11:00 AM EST

Beware the Hype around Information Technology!

Hamza Shoufan (Amity International School Abu Dhabi, United Arab Emirates)

Today, living in the Information Age, we use IT for almost everything in our lives. We use many versions of it, each designed for a specific function. We use social media to communicate wirelessly with people on the other side of the globe, apps like Word and PowerPoint to create content, and CAD and CAM technologies to design and manufacture. However, many technologies initially promise great success but never mature. So, how should we behave when a new technology comes out? The Hype Cycle is a good starting point. Gartner's Hype Cycle for Emerging Technologies has five stages. The first is the ‘Innovation Trigger,' when people become excited about the new technology's release. The second is the ‘Peak of Inflated Expectations,' when people's expectations for the new technology reach their peak. Third is the ‘Trough of Disillusionment,' when the technology begins to encounter challenges and failures and public interest declines. The ‘Slope of Enlightenment' is when people start to recognise the technology's real potential and form realistic expectations. Finally, the ‘Plateau of Productivity' is when the technology becomes mainstream and is widely accepted and used by organisations and businesses. Currently, trending technologies, with generative AI as the main example, sit at the pinnacle of the second stage: the ‘Peak of Inflated Expectations.' The big secret lies in the name of that stage: the word inflated. Think about people's thoughts on ChatGPT right now. You would probably expect to hear things like ‘it will change the world completely' and ‘it will become essential for survival.' Thinking realistically, one would realise that these expectations are exaggerated, or, as the name says, inflated. Inflated expectations are mostly fuelled by enthusiasm from the media and initial users. 
The more people use the technology, the more insight they gain into its shortcomings, which lowers their expectations and may lead them to reduce usage or quit entirely. This is a big blow to smaller developers because of lost revenue, but bigger developers such as OpenAI are less affected, as they can spend more money to further develop and integrate more features into their IT systems. This interaction between users' experience, expectations, and usage behaviour on one hand, and the developers' investment, development, and optimisations on the other, helps improve the technology and raise people's expectations to a reasonable level again, allowing the developers to guide it to the ‘Plateau of Productivity' phase. In conclusion, we should not set excessively high expectations for new IT technologies just because of media coverage and initial users' reviews. One should rate these technologies realistically and fairly, as they probably will not meet the high standards set by people caught up in the hype. As students, we should not expect ChatGPT to solve every homework assignment we have, and above all, we should not treat it as a tool for doing such assignments for us.
Speaker
Speaker biography is not available.

Predicting Grants for Hurricane Affected Homeowners Using Machine Learning Methods

Sumukh Venkatesh (USA)

In recent years, the escalating frequency and intensity of hurricanes have become a pressing concern due to the impacts of climate change. While homeowners from all walks of life have been affected by these increased damages, minority and low-income homeowners bear a disproportionate amount of the damage. Due to extensive hurricane damage, homeowners often receive grants aimed at assisting in the recovery and rebuilding process. These grants can encompass compensation, additional support for lower-income homeowners, elevation funds, and provisions for individual mitigation measures. Leveraging individual-level records sourced from the Louisiana Division of Administration via ProPublica, this research aims to predict the total amount in grants that homeowners receive and examines the variables that have the greatest impact on the model, isolating those that can indicate possible bias in the distribution of aid. I believe that machine learning techniques can predict the grant allocation, making the system more practical and effective for homeowners. The research used the following algorithms: XGBoost, Random Forest, Support Vector Machine, K-Nearest Neighbors, and Logistic Regression. The Random Forest algorithm yielded the best performance, with an R-squared value of 0.893 for the final amount of grants received. By examining data and applying machine learning models, this study enhances understanding of post-disaster grant distribution, aiding decision-making for disaster relief organizations and policymakers. Furthermore, this research allows homeowners to predict whether they will be able to meet their housing needs after hurricane damage and exposes possible inequities in the grant allocation process.
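The fit-and-score setup can be sketched on synthetic data as follows. A plain least-squares fit evaluated with R-squared stands in for the study's Random Forest (used here only to keep the example dependency-free), and the feature names and coefficient values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Invented stand-ins for predictors such as damage, income, and elevation
X = rng.normal(size=(n, 3))
true_w = np.array([5000.0, -1200.0, 800.0])
y = X @ true_w + 20000 + rng.normal(0, 500, n)   # synthetic grant amounts (USD)

Xb = np.hstack([X, np.ones((n, 1))])             # add an intercept column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)       # least-squares fit
pred = Xb @ w

# R-squared: fraction of variance in grant amounts explained by the model
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(round(r2, 3))
```

In the study itself, the same R-squared metric is computed on held-out data for tree-based models, which can capture the nonlinear interactions a linear fit misses.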
Speaker
Speaker biography is not available.

Enhancing In-Cabin Monitoring Performance using Unity Eyes Generated Data

Raymond R Kim (Korea International School, Korea (South))

Research in autonomous driving has been gaining increasing attention since the introduction of electric vehicles. Autonomous vehicles are required to conform to the levels set by the Society of Automotive Engineers (SAE), which has made driver monitoring a legal requirement. At the currently allowed level, the car must be in the park gear to access its infotainment system; to advance to the next level of autonomy, where the gear state is not monitored, the driver's state must be monitored instead. Facial scanning is the first step in determining the driver's current state, especially for drowsiness. However, the required data is difficult to acquire, as using such data would be an invasion of privacy. This paper aims to overcome the challenge of acquiring training data by generating the data with Unity Eyes, enabling enhanced performance. A model with a ResNet50 backbone achieved 66.0% accuracy when trained with a limited real dataset, whereas the same model trained with generated Unity Eyes data achieved 85.3% accuracy. Our ablation study showed that using Unity Eyes data is also more effective than known pre-trained models. This study demonstrates the effectiveness of generated data in situations where large-scale data collection is impossible and suggests potential future applications in a variety of studies.
Speaker
Speaker biography is not available.

Using the Swin-Transformer for Real & Fake Data Recognition in PC-Model

Jiyoon Park (Branksome Hall Asia, Korea (South))

Recently, due to the rapid development of generative AI technologies, the use of AI-generated images has increased significantly, making the distinction between real and fake images crucial. Generated images can be used in beneficial ways, such as data training and fast image generation, but the potential for misuse, such as deepfakes or the spread of false information, still exists. This study explores a novel model using the Swin Transformer architecture to distinguish between real images and fake images generated by CNN-based (Convolutional Neural Network) and GAN-based (Generative Adversarial Network) methods. The Swin Transformer, a successor to the Vision Transformer (ViT), applies the structure of the Transformer, which has shown outstanding performance in natural language processing, to the image domain and demonstrates excellent pixel-level segmentation performance. Distinguishing real from fake images requires detailed pixel-level analysis, in which the Swin Transformer exhibits higher accuracy. Improving the performance of distinguishing between real and fake images is expected to set limits on indiscriminate image generation, with further effects such as preventing the indiscriminate use of AI images through program-based discrimination and legal sanctions.
Speaker
Speaker biography is not available.

Session Poster-17

Poster 17 — Poster On-site

Conference
10:30 AM — 3:15 PM EST
Local
Mar 9 Sat, 10:30 AM — 3:15 PM EST

Improving Sensing and Data Collection of Research for Lucid Dreaming

Alessandra V Manganaro (Winchester High School, Winchester, MA, USA)

Lucid dreams occur when the subject is aware that they're dreaming while still asleep. It is a state of REM sleep, typically characterized by heightened activity in the frontoparietal region of the brain (associated with self-reflection and memory), comparable to that during wakefulness. Growing attention, particularly in neuroscience, has recently been paid to this topic because of its links to therapy for some neurological disorders, as well as its potential to unlock or enhance cognitive and creative abilities. Yet lucid dreaming remains relatively understudied due to the difficulty of collecting adequate data from subjects in a lab setting. These difficulties include the challenge of getting subjects to reliably induce lucid dreams, disruption by unfamiliar surroundings, and the lack of individuals who could be considered ‘proficient enough' in the skill for it to be accurately studied. Lucid dreams can be induced using various cognitive exercises, usually after disruption of the REM stage (like waking up some hours after going to bed), by taking certain drugs like galantamine, or by specialized devices. Devices like the Remee Lucid Dream Mask have been created to help subjects achieve lucidity using visual and auditory patterns associated with being asleep, allowing the user to recognize them during the REM phase. However, these products are expensive and have been shown to be largely ineffective, disrupting sleep more than achieving the goal. With the recent popularity of wearable life-sign monitoring devices, mostly aimed at physical fitness, non-invasive wearable brain monitoring devices are also being commercialized. For instance, the startup Neurable created specialized headphones, incorporating electroencephalogram (EEG) sensors with accuracy comparable to clinical-use equipment, with the intent of playing selected music based on the user's measured EEG state to promote and boost concentration. 
This poster aims to give an introductory overview of recent reputable peer-reviewed results on the topic of lucid dreaming and to point to some available wearable brain-sensing devices and their characteristics. It is conceivable to combine the precedents mentioned above to create a more dependable device that aids subjects in achieving lucidity, while making equipment such as polysomnography gear or electrode bands, and the knowledge needed to use it, accessible to subjects in the comfort of their own homes. This could aid academic researchers in the oneirological field by enabling access to a larger and richer volume of data, both inside and outside the clinical environment. A greater understanding of lucid dreaming can support future projects, including gaining a better grasp of consciousness, understanding the role and cognitive contributions of dreams in waking life, and better aiding patients who have become unresponsive after brain injuries.
Speaker
Speaker biography is not available.

Using Prompt Engineering to Enhance STEM Education

Max Z Li (The Pingry School, USA)

With the advent of large language models (LLMs) such as ChatGPT, Gemini, and LLaMA, AI will forever change how education works. Many have been quick to point out the potential for using these language models for academic dishonesty, which is a tangible problem. However, there is large potential for legitimate use in education. In a review article published in the European Journal of Education in 2023, Zhang and Tur concluded that "ChatGPT has the potential to revolutionize K-12 education through the provision of personalized learning opportunities, enhance learner motivation and involvement". Complex topics in STEM can be difficult for anyone to understand. AI-enabled personalized and interactive learning can help students get interested in STEM and learn at their own pace and capability. These models can be used as educational aids, as they provide the unique capability of responding to questions in natural language, making material much easier for a K-12 student to digest step by step. However, there is a gap between the student and the LLM: prompts given to an LLM need to be well designed to be used effectively for education. To use LLMs more appropriately for educational purposes, we propose a tool to fully utilize the educational potential of LLMs and reduce their use for academic dishonesty. The tool would have a student register by giving the grade that they're in as well as any topics they'd like to learn more about. Using prompt engineering techniques, the tool can prompt LLMs to produce educational content such as AI-generated quizzes and overviews, as well as simplify complex topics further to aid understanding. For example, if I had trouble with the Pythagorean theorem, the tool would generate a well-designed prompt for the LLM, such as "I am a 9th grade student learning the Pythagorean theorem and you are my teacher. Give me an overview of the topic as well as a practice quiz ..." 
With detailed prompts, an LLM can provide the resources and explanations that a student needs to learn a topic effectively. Due to the conversational nature of these models, students can also easily ask follow-up questions about a topic to understand it further. The tool could also add more guidance to the prompt, such as forbidding the LLM from directly providing answers to homework/test problems. The tool can also search for and provide examples and figures from other sources. The AI-enabled tool, effectively a virtual mentor, could help propel STEM education further and make STEM more interesting to students, as it can explain complex topics in a way that students understand. By using LLMs in education, we can help students understand a topic through interactive practice instead of just memorizing facts and putting them on a sheet of paper. We will demo the tool and results with our poster.
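The prompt-builder described above could be sketched as follows; the template wording, function name, and parameters are assumptions for illustration, not the authors' exact design:

```python
def build_prompt(grade, topic, want_quiz=True):
    """Assemble a structured teaching prompt from the student's registration
    info, including a guardrail against doing the student's work directly."""
    parts = [
        f"I am a grade {grade} student learning about {topic} "
        "and you are my teacher.",
        "Explain the topic step by step at my level.",
    ]
    if want_quiz:
        parts.append("Then give me a short practice quiz with an answer key.")
    # Guardrail clause aimed at reducing use for academic dishonesty
    parts.append("Do not directly solve my homework or test problems for me.")
    return " ".join(parts)

print(build_prompt(9, "the Pythagorean theorem"))
```

The assembled string would then be sent to the LLM on the student's behalf, so the student never has to craft (or circumvent) the prompt themselves.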
Speaker
Speaker biography is not available.

Comparing Single-Cell Modality Prediction Performance Across Different Machine Learning Models

Dabin A Chae (Manhattan High School, USA)

Every cell of an organism contains the same genetic information, yet each cell is differentiated during development by a process known as gene expression. Genes are expressed to form specific cell types, each with unique traits, such as skin and nerve cells. Gene expression begins when mRNA is produced (transcribed) from open, accessible regions of DNA strands. This new strand of mRNA is then translated into various proteins, which perform many functions within the cell. However, these processes are interconnected; the level of proteins regulates gene production and expression through post-translational modifications, which in turn can inhibit the opening of DNA for transcription and reduce the number of mRNA strands created. Today's machine learning techniques aim to understand the flow of information from DNA to RNA to protein in this regulatory cycle, which can provide insight into the origin of diseases. Yet most measurements of cellular systems consist of a heterogeneous population of cell types. For example, a tumor sample taken from a patient may contain cancerous cells in addition to skin cells, benign cells, and other cell types that are not of interest. Analyzing these samples risks generalizing and masking the significance of individual cells. Single-cell datasets are used to understand the specific genomic information regarding the modalities of each cell type. However, collecting such data is resource-intensive, and cells can only be measured once, leading to sparse and noisy datasets. In addition, the modalities - DNA, RNA, and protein - are represented differently from each other, meaning we cannot simply merge them to create one standardized dataset. Relating the modalities to each other can help scientists picture the regulatory cycle of gene expression, but this requires more data or a model that can accurately predict one modality from another. 
In this study, we create and test several predictive model architectures that predict surface protein levels from gene expression. Each architecture contains relatively few parameters compared to those found in the Kaggle and OpenProblems competitions, so that we could determine which type of model performs best without regard to hyperparameter tuning, number of layers, learning rates, and so on. We employ CITE-seq data, a method that simultaneously measures protein and mRNA expression, taken from three healthy donors across 7 different cell types and containing 22,500 different gene expression levels. Machine learning models were trained on the gene expression levels of two donors to predict the protein levels of the third donor, and we evaluated them by Mean Squared Error (MSE). Of the six models tested, the Lasso and Neural Network models had the best prediction performance, with MSEs of 3.15999 and 3.13334 respectively. Compared to the LightGBM (MSE ≈ 3.26653) and Attention-based (MSE ≈ 4.76837) models, these are relatively simple models widely used in regression tasks that do not require large amounts of training data. The results indicate the potential of utilizing nonparametric approaches to overcome the sparsity of single-cell datasets and uncover the underlying biological characteristics of converting genotypes to phenotypes.
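The cross-donor protocol described above can be sketched in a few lines of numpy. This is a toy illustration, not the study's code: the data is synthetic, the dimensions are far smaller than the 22,500 genes used, and a ridge-regularized linear map stands in for the Lasso and neural-network models (the exact configurations are not given in the abstract).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for CITE-seq data: gene-expression matrices (cells x genes)
# and protein matrices (cells x proteins) for three donors.
n_genes, n_proteins = 50, 10
W_true = rng.normal(size=(n_genes, n_proteins))

def make_donor(n_cells):
    X = rng.normal(size=(n_cells, n_genes))
    Y = X @ W_true + 0.1 * rng.normal(size=(n_cells, n_proteins))
    return X, Y

X1, Y1 = make_donor(200)
X2, Y2 = make_donor(200)
X3, Y3 = make_donor(200)   # held-out donor

# Train on donors 1 and 2: ridge-regularized least squares, a simple
# linear baseline analogous to the study's Lasso model.
X_train = np.vstack([X1, X2])
Y_train = np.vstack([Y1, Y2])
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_genes),
                    X_train.T @ Y_train)

# Evaluate on the unseen donor by mean squared error, as in the study.
mse = np.mean((X3 @ W - Y3) ** 2)
print(f"held-out donor MSE: {mse:.4f}")
```

Holding out an entire donor, rather than random cells, is what makes the evaluation test generalization across individuals.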
Speaker
Speaker biography is not available.

IBM Platform's Role in Resolving Adaptability Issues in Online Education Through AI Machine Learning

Jingxi Wang (Amer, USA)

0
According to Oxford College, online learning has grown almost 900% since it was first introduced in 2000. In recent years, it has become not only a trustworthy way to receive traditional schooling but also a source of additional courses outside of school. Despite this growth, the effectiveness of online education relative to traditional in-person instruction remains a critical issue. It is essential to continue improving online education systems so that the growing reliance on online educational platforms is well-placed. My hypothesis is that AI/Machine Learning techniques can be used to highlight the shortcomings of online learning, showing possible ways to improve it to match traditional brick-and-mortar schools. This study utilized data from a Kaggle repository incorporating 17 features - region of residence, age of subject, time spent on online class per day, medium for online class, time spent on self-study, time spent on fitness, hours slept every night, time spent on social media, preferred social media platform, time spent on TV, number of meals per day, change in weight, health issues, activities to relieve stress, aspect most missed, time utilized, and connection to family - to investigate each student's satisfaction with online schooling. The research involved 1182 students of different age groups from schools across the Delhi National Capital Region, utilizing the IBM platform to deploy a variety of algorithms for the creation of predictive models. Random forest, logistic regression, and decision tree classifiers, with and without enhancements, were employed. They achieved moderate accuracy levels, above 50 percent.
Additionally, each algorithm highlighted feature significance: subject age (100%), aspect most missed outside of online education (98%), time spent in online classes (73%), activities done to relieve stress (65%), and time spent self-studying (46%) being identified as the most crucial in the random forest classifier model. Extensive research was conducted on the notable features, and strong correlations were identified among them, demonstrating high accuracy in predicting satisfaction with online education across all algorithms. To provide a comprehensive comparison, the study experiments with altering the amount of data folds and presents Receiver Operating Characteristic (ROC) curves, F1 scores, and confusion matrices. A complete analysis on machine learning results, including methods for improved accuracy, will be presented on the poster. Also included is a thorough look at the results through an academic perspective, and how the features can be incorporated to improve the quality of online education. Furthermore, attention will be given to the methodology utilized on IBM Watson, underscoring the advantages of cloud-based platforms for creating AI/ML based predictive models.
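The feature-importance percentages above come from tree-based splitting. As a minimal sketch of the idea behind them (not the study's IBM Watson pipeline), the snippet below computes the weighted Gini impurity decrease a decision tree gains by splitting on one feature; the data and threshold candidates are hypothetical.

```python
# How a tree-based model like a random forest scores a feature: the
# weighted decrease in Gini impurity achieved by splitting on it.
# Toy data: feature = hours/day in online class, label = satisfied (1/0).

def gini(labels):
    """Gini impurity of a binary label list."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)          # fraction of positives
    return 2 * p * (1 - p)

def impurity_decrease(feature, labels, threshold):
    """Weighted Gini decrease for splitting `feature` at `threshold`."""
    left  = [y for x, y in zip(feature, labels) if x <= threshold]
    right = [y for x, y in zip(feature, labels) if x > threshold]
    n = len(labels)
    after = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
    return gini(labels) - after

hours     = [1, 2, 2, 3, 5, 6, 7, 8]       # hours/day in online class
satisfied = [1, 1, 1, 1, 0, 0, 0, 0]       # 1 = satisfied with online learning

best = max(((t, impurity_decrease(hours, satisfied, t)) for t in set(hours)),
           key=lambda kv: kv[1])
print(f"best split at {best[0]} hours, Gini decrease {best[1]:.3f}")
```

A random forest averages such impurity decreases over many trees and normalizes them, which is how rankings like "subject age (100%)" arise.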
Speaker
Speaker biography is not available.

Analyzing the Health of Lithium-ion Batteries through Heat Distribution and Thermal Modeling

Rohit Karthickeyan (John P Stevens High School, USA); Sushanth Balaraman (Edison High School, USA)

0
For decades, the primary focus in battery health assessment has been on metrics such as voltage levels and current flow. ThermoBatt, however, shifts the lens toward thermal attributes, a domain less explored but equally vital. ThermoBatt encompasses two innovative models: the first, a machine learning algorithm, predicts the State of Health (SOH) and Remaining Useful Life (RUL) of batteries by analyzing factors such as ambient temperature and usage cycles. The second, a real-time temperature distribution model, uses temperature data within charge/discharge cycles to simulate thermal behavior. This approach necessitates several assumptions, underscoring the pioneering nature of our exploration. ThermoBatt aims to deepen our understanding of how heat generation and distribution influence battery health and longevity. By bridging this knowledge gap, our work illuminates the interconnectedness of thermal dynamics with battery efficiency and endurance, paving the way for advancements in battery technology and sustainable energy solutions.
Speaker
Speaker biography is not available.

Exploring Cybersecurity Through Authenticating Wireless Communication for Mini Tank Robots

Andrew Y. Lu (Oyster River High School, USA)

6
Controlling robots wirelessly is a significant advance. With Bluetooth, we can direct robots to work in places people cannot access. But when robots rely on wireless connections, other people can intercept and interfere with them. In this poster, I introduce a summer STEM camp hosted by the University of New Hampshire and present my first cybersecurity exploration experience. In this program, I learned the principles of Bluetooth and saw the security vulnerabilities of wireless communication through hands-on projects. Bluetooth is a short-range wireless technology for exchanging data between nearby devices. All the programs were run on a Ks0428 keyestudio Mini Tank Robot V2. All control logic and operation algorithms were implemented and debugged in the Arduino Integrated Development Environment (IDE). I used a BLE scanner to connect to the Bluetooth communication module on the Mini Tank Robot. I also learned how to use Bluetooth to send messages and how to code the robot to perform different actions upon receiving them. I observed how an outside attacker could interfere with the robot through the Bluetooth module. If we don't add authentication to the receiving end, people can hack into the system and take control. The most basic, introductory form of authentication could be easily cracked by brute force. To examine more secure methods, I tried three security mechanisms on the Mini Tank Robot: (1) security-message-based authentication, (2) instant one-time-passcode-based authentication, and (3) symmetric-key-cryptography-based authentication. This eye-opening STEM experience inspires me to explore more cybersecurity issues in robot design. I would like to share my experience with other students who are also interested in STEM and technology.
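Mechanism (3), symmetric-key authentication, can be sketched with Python's standard library. This is an illustrative challenge-response design, not the code that ran on the robot: the key, function names, and command strings are hypothetical, and the robot issues a fresh random challenge per command so a captured tag cannot be replayed.

```python
import hmac, hashlib, secrets

# Controller and robot share a secret key; the robot only executes a
# command whose tag is a valid HMAC over (challenge + command).
SHARED_KEY = b"pre-provisioned-secret"   # stored on both ends (hypothetical)

def robot_challenge():
    return secrets.token_bytes(16)       # fresh nonce per command

def controller_sign(challenge, command):
    return hmac.new(SHARED_KEY, challenge + command, hashlib.sha256).digest()

def robot_verify(challenge, command, tag):
    expected = hmac.new(SHARED_KEY, challenge + command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)   # constant-time comparison

challenge = robot_challenge()
tag = controller_sign(challenge, b"FORWARD")
print(robot_verify(challenge, b"FORWARD", tag))   # genuine command: accepted
print(robot_verify(challenge, b"STOP", tag))      # forged command: rejected
```

Unlike a fixed passcode, a keyed digest over a random challenge resists both brute-force guessing of short codes and replay of sniffed traffic.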
Speaker
Speaker biography is not available.

Artificial Intelligence-Based Traffic Signal Control for Urban Transportation Systems

Minghan He and Pablito Lake (Rutgers Preparatory School, USA)

0
Optimizing traffic signal control is crucial for the smooth operation of urban transportation systems. The challenge is to minimize vehicle delays and emissions, which requires a deep understanding of traffic dynamics at intersections. Traditional algorithms determine the number of vehicles stopped at an intersection by subtracting the flow of non-right-turning vehicles entering from the flow of vehicles exiting. Methods based on this do not take the full dynamics of traffic behavior into consideration, so their performance is limited. Acknowledging the significance of this fundamental aspect, our research introduces a novel algorithm. This approach goes beyond conventional methods, integrating the dynamics of starting and stopping cars, with the goal of surpassing the limitations of previous solutions. Our work stems from the recognition of the pivotal role that efficient traffic signal control plays in urban transportation systems. With a focus on minimizing average vehicle delay within a specific timeframe, we aspire to make a meaningful contribution to the establishment of a sustainable and intelligent traffic management system. The motivation behind the project lies in the pursuit of a harmonious balance between rapid vehicle throughput and reduced environmental impact. Our approach involves designing a variety of solutions by blending traditional traffic engineering principles with our new algorithm. In the proposed model, historical traffic data are used to train traffic predictions for future timeslots. Instantaneous intersection dynamics are also fed into the model to trigger parameter updates and fresh predictions. This learn-and-predict AI process improves the model's adaptation to changes and tolerates local errors occurring in a short timeframe without sacrificing overall performance. Prototyping plays a crucial role in testing and refining our approach.
Simulations using VISSIM software with AI methods have confirmed the effectiveness of our model. Additionally, community engagement activities, such as workshops and demonstrations, offer valuable real-world insights, influencing the development of our methodology. The prototype and simulations demonstrate promising results, highlighting a reduction in both average vehicle delay and the number of start-stop cycles. These findings align with our objectives of promoting efficient traffic flow and minimizing emissions. The positive feedback received during community engagement activities further confirms the potential real-world impact of our approach. The proposed algorithm can be refined based on additional real-world data. Collaborating with local authorities for potential implementation at select intersections is a key objective. The next phase will involve leveraging advanced deep learning algorithms to iteratively improve and evolve our model, ensuring it remains at the forefront of traffic signal optimization.
Speaker
Speaker biography is not available.

Solar's Future: Spin-coating Fabrication of Perovskite Solar Cells & Characterization of Effect of Interface Addition

Bowen Hou (USA)

0
Perovskite solar cells (PSCs) have the potential to convert more solar energy into electricity than ever before, as they break traditional silicon solar panels' Shockley-Queisser limit. However, because of their extreme fragility and a difficult fabrication process that usually requires a nitrogen glovebox, commercial production of PSCs has not yet been industrialized on a large scale. This research aims to improve fabrication by completing the process in ambient air (humidity above 50%) and investigating the effect of adding an interface layer to protect the PSC from rapid degradation. The solar cells in the control and interface-added groups will be further characterized to determine their efficiency and surface morphology.
Speaker
Speaker biography is not available.

Mirror Posture Detection Using Roll, Pitch, and Yaw Angles and an Error Equation

Chongwei Dai (PRISMS Research, USA)

0
Indoor exercise, which can enhance muscle strength and cardiovascular function, is becoming increasingly popular, but people continue to suffer injuries due to incorrect exercise postures. Direct observation by professional trainers has its limitations. Therefore, newer technologies enable a new approach: a mirror that detects postures. The mirror is designed to detect incorrect postures while the user exercises, achieved through an input of selected body exercises using coordinates and an algorithm that detects errors in posture. The customers for the mirror encompass a broad spectrum of individuals seeking personalized rehabilitation, including those working from home, rehabilitation patients, and individuals passionate about their health and well-being. Additionally, the mirror extends its appeal to corporate wellness programs, further diversifying the customer base. The mirror's unique focus on rehabilitation sets it apart from traditional fitness mirrors, appealing to those with specific pain points in the rehabilitation process. The product addresses the challenges associated with sedentary lifestyles and the need for efficient rehabilitation solutions for individuals working from home. Rehabilitation patients, a crucial target segment, find value in the mirror's ability to simplify the complex rehabilitation process, providing motivation, expert monitoring, and efficient at-home exercises.
Speaker
Speaker biography is not available.

Rolling Across the Continents: Phylogenetic Relationships of the Isopoda

Evan Kang (Princeton High School, USA)

0
Isopods are a highly diverse group of crustaceans, having colonized habitats from the ocean floor to treetops in tropical forests. Terrestrial isopods comprise a major portion of this diversity, with approximately 5,000 species, yet their evolutionary relationships have not been widely examined. With genetic sequencing techniques becoming more widely available, our group of students in the Princeton High School Research Program set out to determine how isopods have evolved and diverged since the Cretaceous Period, when the earliest terrestrial isopod fossils were set in amber. Our primary focus has been a portion of the cytochrome c oxidase subunit I gene, which is frequently used to differentiate species via DNA barcoding. DNA extraction was conducted with a Quick-DNA Tissue/Insect Miniprep Kit (Zymo Research), followed by polymerase chain reaction (PCR) and Sanger sequencing, after which sequences were compared using the online program ClustalW2 to generate a phylogenetic tree. Preliminary results suggest that much of the accepted phylogeny of terrestrial Isopoda needs revision, because many taxonomic classifications based on morphology do not align with the results of our genetic investigation. This suggests that our current understanding of isopod evolution is incomplete and that further genetic investigation is warranted: sequencing additional gene fragments and comparing living species with samples of fossil isopods could establish a molecular clock indicating when different groups of terrestrial isopods diverged.
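The first step behind a sequence-based phylogeny like this one is a pairwise distance matrix over aligned sequences. As a minimal sketch (the genus names and the short fragments below are hypothetical placeholders, not the study's COI data), this computes p-distances, the fraction of aligned sites at which two sequences differ:

```python
# Pairwise p-distances from pre-aligned sequence fragments; tree-building
# programs such as ClustalW2 start from comparisons of this kind.

aligned = {
    "Armadillidium": "ATGGCACTTAGC",
    "Porcellio":     "ATGGCTCTTAGT",
    "Oniscus":       "ATGACTCTAAGT",
}

def p_distance(a, b):
    """Fraction of aligned sites at which two sequences differ."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    return sum(x != y for x, y in zip(a, b)) / len(a)

names = sorted(aligned)
for i, n1 in enumerate(names):
    for n2 in names[i + 1:]:
        print(f"{n1} vs {n2}: {p_distance(aligned[n1], aligned[n2]):.3f}")
```

Clustering taxa by such distances (e.g., neighbor joining) yields the phylogenetic tree; a molecular clock then calibrates those distances against fossil dates.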
Speaker
Speaker biography is not available.

Dynamic Duos: Investigating the Composition of Powerful Pairs in Basketball with Network Analysis

Neel Iyer (High School, USA)

0
With the rise of analytics in basketball, research has started to focus on team chemistry via novel player roles that dynamically emerge within teams. According to Fewell et al. (2012), basketball teams can be represented as networks and explored to find relationships between individual players and team chemistry. Related research (Hedquist, 2022) also challenges the ability of traditional player roles (such as point guards, shooting guards, etc.) to capture the essence of players' roles on a team. As a result, traditional views of team composition are limited and don't provide enough insight to managers optimizing for team dynamics. This paper examines the composition of high-performing duos to better capture a nuanced view of player importance from a team perspective. The methodology consisted of creating network diagrams (players as nodes, passes as edges) for 14 of the 16 playoff teams from the 2021-2022 NBA season using NetworkX. Our data was sourced from the NBA API and SportsReference. Once the networks were created, we computed a weighted centrality measure for each player, taking into account the player's betweenness centrality (a measure of impact on the flow through the team), player efficiency rating (a common statistical measure of performance), and assist ratio (measuring indirect contribution to the team). With these measures, we ranked the players, selecting the player with the highest weighted centrality and the neighbor with the next highest centrality measure. We called these pairs high-performing duos. We then used k-means clustering to identify the broad player roles predominant in these duos. Our findings show that while 50% of duos consisted of a high-value player (evaluated using Value Over Replacement Player) with strong assist ratios and shooting abilities, 75% of duos were characterized by the presence of an agile support player.
This demonstrates that examining individual high performers does not provide as nuanced a view as taking their dynamics with support players into account. Examining duos is one way to gain insight into the team dynamics that may exist within team networks. Findings from this research can be used by managers and analysts looking to better understand and estimate player contributions and importance from the perspective of team dynamics.
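The weighted centrality combination can be sketched in pure Python. The player stats and the equal weighting below are hypothetical (the paper does not state its exact weights); the point is the mechanics: normalize each metric, average them, and take the top pair.

```python
# Combine betweenness centrality, player efficiency rating (PER), and
# assist ratio into one score, then pick the highest-scoring duo.
players = {
    #        betweenness  PER   assist_ratio  (illustrative values)
    "A": dict(bet=0.30, per=25.1, ast=0.22),
    "B": dict(bet=0.22, per=19.4, ast=0.31),
    "C": dict(bet=0.10, per=15.0, ast=0.12),
    "D": dict(bet=0.05, per=12.3, ast=0.08),
}

def normalize(values):
    """Min-max scale a {player: value} dict to [0, 1]."""
    lo, hi = min(values.values()), max(values.values())
    return {k: (v - lo) / (hi - lo) for k, v in values.items()}

bet = normalize({k: p["bet"] for k, p in players.items()})
per = normalize({k: p["per"] for k, p in players.items()})
ast = normalize({k: p["ast"] for k, p in players.items()})

# Equal weights here; any weighting of the three metrics works the same way.
score = {k: (bet[k] + per[k] + ast[k]) / 3 for k in players}
ranked = sorted(score, key=score.get, reverse=True)
duo = ranked[:2]
print("high-performing duo:", duo)
```

In the paper the second member is the top player's *neighbor* in the passing network; this sketch omits the adjacency check for brevity.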
Speaker
Speaker biography is not available.

Trichotillomania Video Detection and Reduction Therapy

Rachel Guhathakurta (USA)

0
Trichotillomania is a hair-pulling disorder involving irresistible urges to pull hair from the scalp, eyebrows, eyelashes, and other areas. It ranges in severity from a mild nervous habit to being physically, emotionally, and socially debilitating. Reinforcement training can be effective in stopping unwanted behaviors. This paper outlines the creation of a program that uses machine learning and TensorFlow to indicate to users that they are hair-pulling. The system is especially effective because it holds users accountable through video detection instead of relying on self-reported hair pulls. Images of hair pulling in seven different locations on the head (top of the head, hairline, left of the head, etc.) were divided into seven folders, with 900 images in each folder. Within each subset of data, lighting and backgrounds were diversified. Postures one, five, and six were detected with 75.0, 83.9, and 81.2 percent accuracy, respectively. This technology can apply to numerous other damaging habits, such as nail biting and scratching, providing an additional approach to destructive-habit reduction therapies.
Speaker
Speaker biography is not available.

DIY pH Indicator

Dia G Sharma (Middle School, USA)

0
pH is a huge part of people's lives all over the world. Ensuring the water you drink isn't too acidic or too basic is crucial for your health. My project will review basic information about pH and its importance, the problems associated with it, and how to test the pH of different liquids. There is an extraordinarily simple way to make your own pH indicator that signals pH through color changes. It requires only two ingredients commonly found at any local supermarket - red cabbage and water! I will teach how to make a pH indicator and show the impact it can have on countless lives throughout the globe.
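The color signals of a cabbage-juice indicator follow the well-known behavior of its anthocyanin pigment. As a small illustrative sketch (the boundary values below are approximate, not measured in this project), the mapping from pH to expected color can be written as a simple lookup:

```python
# Approximate red-cabbage indicator colors by pH range; boundaries are
# illustrative values for the anthocyanin color change, not exact.
CABBAGE_SCALE = [
    (2.0, "red"),       # strongly acidic (e.g., lemon juice)
    (4.0, "pink"),      # weakly acidic (e.g., dilute vinegar)
    (7.0, "purple"),    # near neutral (e.g., pure water)
    (9.0, "blue"),      # weakly basic (e.g., baking soda solution)
    (11.0, "green"),    # basic
    (14.0, "yellow"),   # strongly basic
]

def indicator_color(ph):
    """Return the approximate cabbage-juice color for a given pH."""
    for upper, color in CABBAGE_SCALE:
        if ph <= upper:
            return color
    raise ValueError("pH must be between 0 and 14")

for ph in (2.5, 7.0, 10.0):
    print(ph, "->", indicator_color(ph))
```

Reading a color against a scale like this is exactly how the homemade indicator turns a kitchen experiment into a usable water test.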
Speaker
Speaker biography is not available.

Using UV Lights to Extend the Lives of Strawberries

Leo Dobrinsky (USA)

0
In 2021, 9.2 million tons of strawberries were produced globally; of those, 5.8 million were wasted. This is no surprise considering that, unlike other fruits, strawberries have a shelf life of only up to seven days (if handled properly). This does not include the time it takes to transfer them to the store, and then to the consumer. My solution is to extend the life of strawberries using ultraviolet light. Typically, stores try to extend the life of strawberries through temperature and humidity control. This is not very successful, which is why I have decided to use UV-C light, which kills harmful bacteria, mold, and yeast. Not only will this prolong the life of strawberries, but it will also preserve their freshness, flavor, and nutrients. In many developing countries, people who do not have refrigerators would benefit from this project even more. I also plan to use UV-B light, which has similar properties to UV-C light but a different wavelength. This will not only add extra storage life to the strawberries but also give them extra nutrients (the US Department of Agriculture demonstrated this by enhancing the quality of cabbage). My approach is multifaceted: I plan to combine different UV lights, visible light, and environmental factors such as humidity and temperature to experiment toward the best scenario for strawberry preservation. This might create a protocol for the future of strawberry storage. Through this work, I hope that supermarkets will have delicious strawberries to store and sell long after harvest. Thus, my project will reduce the multi-billion-dollar waste of both money and food, lower the cost of fruit for consumers and farmers alike, and minimize environmental damage to our planet. This is not just about saving strawberries; it is about creating a model for a more sustainable food system.
Speaker
Speaker biography is not available.

Development of an Alzheimer's Resource Website for Young Students, with Information and Python Functions for Data Manipulation, Machine Learning, and Brain Image Manipulation

Anabel Sha (Poolesville High School, USA); Amy Watanabe (Montgomery Blair High School, USA)

1
Early-onset Alzheimer's disease (EOAD) is a rare but devastating form of Alzheimer's disease that affects younger adults, generally 60 years of age or less. It is thought to affect between 220,000 and 640,000 Americans, beginning between ages 45 and 64. Most such cases do not run in families and hence can appear unexpectedly in any adult and impact the family in subtle ways. We set out to create a resource for high school students to understand what the disease is, what its symptoms are, and what resources exist to help their loved ones. Additionally, we have developed a library of Python functions to analyze publicly available data sources, create machine learning models, and display and analyze brain images. We hope to continue developing this resource and convert it into a public website.
Speaker
Speaker biography is not available.

AI-powered firefighting robot to manage high-risk situations while improving standard fire response time - Robot FireX

Siyona Lathar (School, USA)

2
The purpose of the firefighting robot is to detect fires in nearby areas and assist firefighters in dangerous situations by helping them quickly extinguish the fire. It can also help prevent household fires, which occur in an average of 358,500 homes each year (NFPA). More than 3,000 Americans die in fires each year (FEMA). Fire Response Time (FRT) is crucial in such situations, helping save lives and improve the chances of overall damage control. NFPA (National Fire Protection Association) Standard 1710 establishes an 80-second "turnout time" and a 240-second "travel time," which together make 5 minutes and 20 seconds. FRT depends on how soon a fire is reported and on when the Standard 1710 clock is triggered to meet these deadlines. Many fires are reported by affected residents at a very late stage, or are reported by people outside the premises because no one is inside. This is where I saw Artificial Intelligence, robotics, and powerful sensor-based vision capabilities coming into play to create 'Robot FireX'. The idea is to create a reliable system that can accurately locate and sense any of three signals: excessive heat, smoke, or the sound of a fire alarm. Recognizing any of these signals is the crucial first step for 'Robot FireX': it captures the signal, immediately sends an alert to a mobile number, and simultaneously starts moving toward the fire to extinguish it with water or carbon dioxide spray. The goal is to put out the fire immediately and as effectively as possible, minimize property damage, and reduce the number of lives lost each year in fire incidents. 'Robot FireX' can be designed in various sizes depending on the setting, such as homes or industrial sites.
Additional AI-specific capabilities can be activated in 'Robot FireX' to bring further refinements, so that it can work in unpredictable environments, spot high risks like gas leaks, and send improved alerts such as live videos, images of the accident, and GPS coordinates of the site to the fire department. It can also become a crucial aid when used by a first response team. When used by firefighters, it can prove to be an extremely valuable tool for evaluating the entire scene and eliminating threats before bringing the situation under control.
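The detect-any-of-three-signals logic described above can be sketched as a simple threshold rule. The sensor names, threshold values, and action strings below are hypothetical placeholders, not a real hardware API:

```python
# Minimal sketch of Robot FireX's detection-and-response logic, assuming
# simple threshold sensors (illustrative values).
HEAT_THRESHOLD_C = 60.0      # excessive heat
SMOKE_THRESHOLD = 0.3        # normalized smoke-sensor reading
ALARM_DB_THRESHOLD = 85.0    # loudness of a fire-alarm sound

def detect_fire(temp_c, smoke_level, sound_db):
    """Trigger if ANY of the three signals crosses its threshold."""
    return (temp_c >= HEAT_THRESHOLD_C
            or smoke_level >= SMOKE_THRESHOLD
            or sound_db >= ALARM_DB_THRESHOLD)

def respond(temp_c, smoke_level, sound_db):
    """On detection: alert a phone number, then move toward the fire."""
    if not detect_fire(temp_c, smoke_level, sound_db):
        return "patrolling"
    actions = ["send SMS alert", "drive toward fire", "spray extinguishant"]
    return "; ".join(actions)

print(respond(temp_c=25.0, smoke_level=0.05, sound_db=40.0))  # no fire
print(respond(temp_c=75.0, smoke_level=0.05, sound_db=40.0))  # heat trigger
```

Using OR across independent sensors maximizes sensitivity, which matters when the goal is to beat the 5-minute-20-second standard response window; false alarms can then be filtered by the alerted human.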
Speaker
Speaker biography is not available.

A Mathematical Proof of a Card Trick and its Algorithmic Applications in Computer Science

Rishi Balaji (Stanford Online High School, USA)

0
The goal of this project is to use mathematical principles to show how a 'magic' card trick works. The trick involves a specific number of cards transitioning between a deck and a grid-like layout. It then uses a set of repeated steps to move the spectator's card to the middle of the deck, so the performer can reveal the center card to the audience. The paper uses similar math-based ideas to provide a generalization of the trick, proving that it should work for any number of cards, given some restricting guidelines. Using the card trick as a basis, the project further expands on concepts inspired by it for more practical applications in fields like computer science, such as sorting and transitioning arrays between one and two dimensions, mirroring the process in the card trick. This shows that even something as simple and ordinary as a card trick can open new possibilities in more advanced topics, which can be useful in areas such as STEM.
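The repeated deal-and-gather step can be simulated directly. This sketch assumes the classic 21-card version of the trick (the abstract does not name the exact variant): deal the deck row-wise into 3 columns, restack with the spectator's column in the middle, and repeat three times; the chosen card then always sits at the exact middle of the deck (index 10 of 21), which the loop below checks for every possible starting position.

```python
# Simulation of the 21-card trick's core step.
def gather(deck, chosen_card, n_cols=3):
    """One round: deal into columns, restack with the chosen column middle."""
    cols = [deck[i::n_cols] for i in range(n_cols)]          # row-wise deal
    chosen = next(i for i, c in enumerate(cols) if chosen_card in c)
    others = [c for i, c in enumerate(cols) if i != chosen]
    return others[0] + cols[chosen] + others[1]              # chosen in middle

deck = list(range(21))
for card in deck:                       # whichever card the spectator picks...
    d = deck
    for _ in range(3):                  # ...three deal-and-gather rounds...
        d = gather(d, card)
    assert d.index(card) == 10          # ...it ends at the exact middle
print("all 21 starting positions converge to index 10")
```

This is also a compact example of the 1D-to-2D array transitions the abstract mentions: `deck[i::3]` reads the flat deck as the columns of a 7x3 grid, and concatenation flattens it back.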
Speaker
Speaker biography is not available.

Protecting Shorelines with Triply Periodic Minimal Surface (TPMS) Inspired Breakwaters

Alex Yang and Michael Wen (USA)

1
Breakwaters have been used for millennia to reduce wave impact. Breakwaters are coastal structures that disrupt waves by reducing their energy and their abrasive impact on the shoreline. The force generated by waves gradually erodes shorelines. Traditional breakwaters have proven useful for protecting shorelines, yet their drawbacks, including their impact on the surrounding ecological system, difficulty of maintenance, and interference with fish migration, cannot be ignored. Breakwater designs have remained relatively static, with many comprising mound- or wall-based configurations. This study aims to innovate on existing breakwater architecture by exploring the use of Triply Periodic Minimal Surface (TPMS) structures as breakwaters. TPMS shapes are three-dimensional periodic manifolds chosen for their mathematical simplicity, mechanical strength [1], cost-effectiveness, and ecological friendliness. This research employs Computational Fluid Dynamics (CFD) simulation methods to explore the effectiveness of different TPMS structures in reducing the amplitude and group velocity of incoming waves. The effectiveness of each structure is compared with that of other TPMS structures with modified design parameters, as well as with traditional breakwater designs of identical height and volume, namely a commonly deployed lattice design [3]. OpenFOAM software is used as the primary computational tool to simulate wave impact, with olaFlow [4] as the primary solver. MSLattice [2] is employed to create the TPMS structures. This investigation aims to explore the feasibility of TPMS breakwaters and give rise to a new generation of breakwater architecture incorporating TPMS structures.
[1] O. Al-Ketan, D.-W. Lee, R. Rowshan, and R. K. Abu Al-Rub, "Functionally graded and multi-morphology sheet TPMS lattices: Design, manufacturing, and mechanical properties," Journal of the Mechanical Behavior of Biomedical Materials, vol. 102, 2020.
[2] O. Al-Ketan and R. K. Abu Al-Rub, "MSLattice: A free software for generating uniform and graded lattices based on triply periodic minimal surfaces," Material Design & Processing Communications, vol. 3, 2020, doi: 10.1002/mdp2.205.
[3] B. Dang, V. Nguyen-Van, P. Tran, M. Wahab, J. Lee, K. Hackl, and H. Nguyen-Xuan, "Mechanical and hydrodynamic characteristics of emerged porous Gyroid breakwaters based on triply periodic minimal surfaces," Apr. 2022.
[4] P. Higuera, "CFD for waves," Jun. 2018.
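As a sketch of the geometry underlying these structures, a TPMS such as the gyroid is defined as the zero level set of a triply periodic implicit function; generators like MSLattice sample such functions on a grid. The grid resolution below is illustrative, not taken from the study:

```python
import numpy as np

def gyroid(x, y, z):
    """Implicit gyroid: the surface is the zero level set of this field."""
    return (np.sin(x) * np.cos(y)
            + np.sin(y) * np.cos(z)
            + np.sin(z) * np.cos(x))

# Sample one periodic cell on a coarse grid; the surface lies where the
# field changes sign.
n = 32
t = np.linspace(0, 2 * np.pi, n)
X, Y, Z = np.meshgrid(t, t, t, indexing="ij")
field = gyroid(X, Y, Z)

# The gyroid splits the cell into two congruent regions, so roughly half
# of the sampled points have a negative field value.
inside = np.mean(field < 0)
print(f"fraction of cell with negative field: {inside:.2f}")
```

Thickening this level set (keeping points with |field| below a small offset) yields the solid sheet lattice whose porosity lets water through while dissipating wave energy.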
Speaker
Speaker biography is not available.

2D to 3D spaces using straight lines

Juliette Hancock (Goetz Middle School, Jackson, NJ, USA); Jeanine Hancock (Goetz Middle School, Jackson, NJ, USA)

0
2D to 3D Curves using Straight Lines. Authors: Juliette Hancock, Jeanine Hancock, Valentina Sandoval, and Caleb Sandoval.
Our project explores how to create curves using straight lines in two-dimensional and three-dimensional spaces.
Two-dimensional parabolic curve: In two dimensions, we use pencil and paper to create parabolic curves from straight lines. A parabolic curve is a U-shaped curve formed by straight lines connecting equally spaced points. We then sew colored string along the parabolic curves to create string art.
Three-dimensional hyperbolic paraboloid: We plan to expand our two-dimensional projects into three dimensions by creating two different types of hyperbolic paraboloids. A hyperbolic paraboloid is a saddle-shaped structure that has both convex and concave curves. In the first project, we use long coffee stirrers to create a hyperbolic paraboloid sculpture embedded in a tetrahedron, or triangular pyramid. In the other, we use sliceforms to create hyperbolic paraboloids.
Applications: Hyperbolic paraboloids appear in everyday life - in food, bridges, roofs, and apparel.
● Pringles potato chips use the hyperbolic paraboloid shape to stack perfectly in a cylinder, which protects the chips from breaking and uses less shelf space. It also gives consumers more chips per container.
● In architecture, many structures use hyperbolic paraboloids. A church in Jackson, NJ (our hometown), St. Aloysius, as cited in Architect Magazine, has a hyperbolic paraboloid roof. This form is often an inexpensive solution to long-span roof requirements, such as in sports arenas. The roof of the church has elegant, fluid lines like those of a fabric tent. The "tent" of St. Aloysius church is made from standing-seam metal panels. It is a beautiful structure to view from inside and outside (convex and concave).
● In apparel, we also see hyperbolic paraboloids on two sides of a tricorn hat (pirate hat) and in a nun's wimple.
What we learned: We learned what hyperbolic paraboloids are, how to create them in various ways, and their practical applications. We discovered the difference between a two-dimensional parabolic curve and a three-dimensional hyperbolic paraboloid, and we learned the definitions of convex and concave in relation to hyperbolic paraboloids.
Speaker
Speaker biography is not available.

Spatial Tissue Differentiation in Bioprinted Organ Constructs

Tyler Wu (USA)

0
Bioprinting combines 3D printing with cell biology and materials science to create tissues and organs from scratch, which can then be used in transplantation and drug testing. Much like how a 3D printer deposits a plastic filament into a 3D structure, a bioprinter deposits a cell-laden bioink to form an organ structure. Upon fabrication, cells must differentiate into specific cell types for the organ to function. Current research focuses on differentiating stem cells into specific types of tissues. However, this approach overlooks that organs are not made of a single tissue type; they consist of a consortium of different tissues working together to complete a specific function. While knowledge of tissue differentiation can serve as a foundation for bioprinting, it is crucial to expand and apply this knowledge to the level of entire organs—to realize a future of readily accessible 3D printed organs, it is imperative to be able to control the spatial distribution of tissues. To achieve such a future, this project summarizes the different strategies used to direct spatial cell differentiation as well as important mechanical, chemical, and electrical bioink properties that can be manipulated. By reviewing current studies related to controlling cell differentiation in bioprinted constructs and evaluating the advantages and limitations of each technique, the aim is to identify shortcomings in current technology and to provide recommendations for areas of further focus. Novel methods are required to manipulate cells effectively, refine tissue organization, and control cell differentiation, and by regulating the distribution of specific cell types within an organ, it becomes feasible to fabricate organs with enhanced functionality.
Speaker
Speaker biography is not available.

Community Building: The Importance of STEAM

Sowmya Natarajan (Georgetown Day School, USA)

0
In 2022, I wrote an IEEE paper on my experience tutoring two young girls in math and on the importance of women in STEAM. Building on three years of mentoring them, this paper discusses the lessons I learned from that tutoring and how they were applied in creating and holding two major STEAM camps/festivals supporting minority communities: youth in Washington DC who were primarily African-American, and members of the Navajo Nation in Farmington, New Mexico. The paper finally explores the power of the arts to build capacity and create a learning environment that supports students on their educational journey in STEAM.
Speaker
Speaker biography is not available.

The Math Behind Machine Learning

Vas MV Grabarz (USA)

0
Systems of equations and matrices go hand-in-hand when representing data and its transformations. For example, each row of a matrix represents a data point, while each column holds an attribute of the data. In machine learning, vectors are a natural way to represent observations, and vector operations are a viable means of imitating neural networks. This poster covers the math topics behind artificial intelligence and works through examples of linear algebra concepts in Python, revealing the mathematical machinery that usually goes unnoticed and showing the sheer importance of linear algebra in the realm of machine learning.
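As a small illustration of the point above (our own example, not the poster's code), one dense neural-network layer is just a matrix product over a data matrix whose rows are observations and whose columns are attributes:

```python
import numpy as np

# Rows of X are data points, columns are attributes; one layer of a
# neural network is a matrix product followed by a nonlinearity.
X = np.array([[1.0, 2.0],      # observation 1: two attributes
              [3.0, 4.0],      # observation 2
              [5.0, 6.0]])     # observation 3
W = np.array([[0.5, -1.0],
              [0.25, 1.0]])    # weights: 2 inputs -> 2 hidden units
b = np.array([0.1, 0.0])       # biases

hidden = np.maximum(0.0, X @ W + b)   # ReLU(XW + b): one dense layer
print(hidden.shape)                   # (3, 2): one row per data point
```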
Speaker
Speaker biography is not available.

The Next Level of Video Game Cheating

Isaac Newell and Jacob F Hackman (Holy Ghost Prep, USA)

0
Anti-cheat systems have undergone significant advancements, driven by the escalating arms race between developers and cheaters. Riot Games' Vanguard stands as a prime example of this evolution, employing kernel-level monitoring to detect and prevent cheats in the company's popular titles. However, as anti-cheat technology becomes more sophisticated, so do the methods of circumvention. AI-powered external cheats have emerged as a formidable challenge, using machine learning algorithms and external hardware to adapt to and evade detection mechanisms. These cheats leverage intricate patterns and behaviors to mimic legitimate player actions, making them harder to identify and mitigate. We aim to create a basic cheat for a video game using AI and an external microcontroller. To achieve this, we will train an AI algorithm to recognize and locate targets within the game environment, such as enemy players. Once a target is identified, the information is communicated to the microcontroller, which turns it into simulated movements indistinguishable from those of a real mouse, allowing the player's crosshair to aim automatically at the detected targets. Another possibility is to simulate mouse movements through software on the computer, although this carries an additional chance of detection. This combination of AI-powered target detection and mouse manipulation creates a cheat that can provide a significant advantage in gaming scenarios. Moreover, by not interacting with game memory and by using an external device to send mouse movements, such cheats can be nearly impossible to detect and counteract, even for advanced anti-cheat systems.
Speaker
Speaker biography is not available.

The impact of sleep deprivation on cognitive function in jumping spiders

Anja Gatzke and Erin Kim (Princeton High School, USA)

0
Chronic sleep deprivation is known to be damaging to cognitive functions, specifically to memory consolidation in the hippocampus. However, this research project aims to discover the immediate effects of sleep deprivation on cognitive function in jumping spiders. It is hypothesized that sleep deprivation will cause a decline in cognitive function as a result of an increase in beta-amyloid protein plaques, due to dysfunction in breaking down the amyloid precursors involved in the development of nerve cells, leading to declines in memory (Blumberg et al., 2022). Jumping spiders were used because they have circadian rhythms similar to humans', and they were sleep deprived through light and sound disturbances throughout the night. To test cognitive function, two methods were used and reaction time was recorded. The first test involved simulating a predator in the jumping spider's habitat, and the second tested spatial memory and reasoning by taking the spiders out of their normal containers for five minutes. In test one, the average reaction time increased from 2.72 seconds to 14.81 seconds after a night of sleep deprivation. Similar data were found in the second test, with the average time taken to return to the web increasing from 121.05 seconds in the control to 235.07 seconds after sleep deprivation. Overall, it was determined that sleep deprivation, even in small quantities, was harmful not only to the cognitive function of jumping spiders but to their development as well. The implications of this research serve as a means to determine how a single night of sleep deprivation impacts cognition, providing the field with more information on just how harmful sleep deprivation is even in small quantities.
Speaker
Speaker biography is not available.

How much screen time should kids have?

Zuko A Ranganathan (Hart Magnet School, Stamford CT, USA)

0
These days, a big topic of discussion in many families is how much screen time kids should get, and how much parents should control their kids' screen time. This is quite a tricky question, because kids are quite attracted to gadgets, and while these gadgets can help kids in their social and academic lives in various ways, they can also hurt their cognitive development. In this poster, I will talk about the pros and cons of screen time for kids. I will explore what the appropriate amount of screen time is for kids of different ages. Finally, I will give some tips for kids to use their gadgets in a fun, but safe, way.
Speaker
Speaker biography is not available.

K-12 Poster Session: Pull-Up Nets

Sanaa Jones (USA)

0
The first time I saw a pull-up net was when I started looking at topics for this conference. I watched a video of pull-up nets in motion and was amazed by how a flat 2D figure could be pulled together to become a 3D shape. For this project, I want to explore the world of pull-up nets! I want to start by creating pull-up nets for cubes, pyramids, and triangular and rectangular prisms. Next, I would like to analyze how many different nets can be created for a cube and other 3D shapes to see if there is a pattern for creating pull-up nets. Finally, I would like to create pull-up nets for the five platonic solids.
Speaker
Speaker biography is not available.

Da Vinci Bridge: Past, Present, and Future

Richard H Evans (USA)

0
In 1502, Leonardo da Vinci responded to a request to provide a bridge design that connected Istanbul with Galata. Even though his design was not selected, his bridge concept has become very popular, and many researchers and organizations have replicated Leonardo's design to determine its viability. Through my design, I will discuss the strength of such a bridge and whether this concept should be considered in future bridge designs. I will build my version of a da Vinci bridge and demonstrate its strength by placing objects on it, and I will discuss why this bridge concept is able to sustain considerable weight. I will also revisit the decision made in 1502. Should they have selected da Vinci's bridge design to connect Istanbul with Galata? Should we consider elements of the da Vinci bridge design in future bridge designs? I will explore various answers to these questions.
Speaker
Speaker biography is not available.

Improving the C++ Experience with Transpilers

Stephen E Hellings (Holy Ghost Preparatory School, USA)

0
Identification of Problem: C++ is a popular programming language aimed at performant and extensible programming. Because of its number of features, modern C++ grows more complex as time goes on. Rationale: Although it offers capabilities that cater to advanced users, it remains increasingly complex and hard to interpret for those beginning to use it. Approach: A new programming dialect, compiled to C++ through a "transpiler", is based on another programming language and relies on an Abstract Syntax Tree (AST). It was tested by multiple people who use C++ on a daily or frequent basis, as well as by multiple new programmers who do not know C++ or are beginning to learn it. Additional Information: The language the dialect is based on, Python, provides a syntax more comfortable for beginner programmers. Extending the functionality of Python with the features of C++ in a basic syntax provides a comfortable experience for new programmers who also seek to learn the concepts of C++. Results: All advanced testers who tried the experimental program reported no issues and stated that it provides all the features necessary for their use. All beginners in the experiment reported that it flattens the learning curve of C++ and provides a comfortable programming experience. Additional Information: The dialect and transpiler, "Kurakura", has a dedicated website, "https://kurakura.firebirds.win/", where a pre-release version is available to the public; a more stable version used by internal members is periodically released to the public.
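The pipeline the abstract describes, parsing source into an AST and then emitting C++, can be sketched in a few lines with Python's `ast` module. This toy handles only one narrow case (an integer function returning an expression) and is our own illustration, not the actual Kurakura implementation:

```python
import ast

# Parse a Python-like snippet into an AST, then emit C++ for the narrow
# case of a function whose body is a single return of an expression.
SRC = "def add(a, b):\n    return a + b"

def emit_cpp(tree: ast.Module) -> str:
    fn = tree.body[0]
    assert isinstance(fn, ast.FunctionDef)
    args = ", ".join(f"int {a.arg}" for a in fn.args.args)
    ret = fn.body[0]
    assert isinstance(ret, ast.Return)
    body = ast.unparse(ret.value)   # `a + b` is valid in both languages
    return f"int {fn.name}({args}) {{ return {body}; }}"

print(emit_cpp(ast.parse(SRC)))
# -> int add(int a, int b) { return a + b; }
```

A real transpiler would walk the whole tree with `ast.NodeVisitor` and map types and statements case by case; the sketch only shows the parse-then-emit shape.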
Speaker
Speaker biography is not available.

Artificial intelligence approach for predicting class I major histocompatibility complex epitope presentation and neo-epitope immunogenicity

Kathryn Jung (USA)

0
T cells help eliminate pathogens present in infected cells and help B cells make better and different kinds of antibodies to protect against extracellular microbes and toxic molecules. Because T cells cannot see the inside of cells to identify ones that ingested pathogens or are synthesizing viral or mutant proteins, antigen presentation systems evolved, displaying on the cell surface information about various antigens synthesized or ingested in cells. These systems provide a way to monitor the major subcellular compartments where pathogens are present and report their presence to the appropriate T cells. Endogenously synthesized antigens in the cytosol of all cells are presented to CD8+ T cells as peptides bound to major histocompatibility complex (MHC) class I molecules, thereby allowing identification and elimination of infected or cancerous cells by the CD8+ lymphocytes. Thus, identification of non-genetically encoded peptides, or neo-epitopes, eliciting an adaptive immune response is important for developing patient-specific cancer vaccines. However, the experimental process of validating candidate neo-epitopes is very resource-intensive, and a large portion of candidates are found to be non-immunogenic, making the identification of successful neo-epitopes difficult and time-consuming. A recent study showed that the BigMHC method, composed of seven pan-allelic deep neural networks trained on peptide-MHC eluted ligand data from mass spectrometry assays and transfer-learned on data from assays of antigen-specific immune response, significantly improves the prediction of epitope presentation on a test set of 45,409 MHC ligands among 900,592 random negatives compared with four other state-of-the-art classifiers. It also showed that, after transfer learning on immunogenicity data, the precision of BigMHC in identifying immunogenic neo-epitopes is greater than that of several other state-of-the-art models, making BigMHC effective in clinical settings. 
I noticed that a multi-allelic dataset from MHC-Flurry 2.0, consisting of MHC class I peptides each with a bag of six alleles, is used in the BigMHC method; in the single-allelic data, each peptide is associated with only one allele. However, even the single-allelic data can contain duplicates: two identical peptides may appear, one belonging to one allele and the other to a different allele. I set out to examine such duplicates with my custom code and found that 3.142% of the single-allelic data are duplicates, raising the possibility that the BigMHC method's test results are unaffected by the duplicates. As expected, there were no notable differences based on my trained models. This examination and result raise another possibility: implementing multiple instance learning (MIL) may be advantageous for immunogenicity prediction because it considers the multiple MHC alleles associated with a given peptide, as observed in the multi-allelic dataset. In single-instance learning, by contrast, each peptide is associated with a single label (here, whether it elicits an active immune response), which may not fully capture the complexity of MHC-peptide interactions given the high polymorphism of MHC class I molecules. If the approach succeeds, MIL will further enhance the accuracy and reliability of the BigMHC method, making it potentially even more beneficial in clinical settings.
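The duplicate check described above can be sketched with pandas (the peptides and alleles below are arbitrary examples, not the study's data, and the duplicate rate shown is for this toy table only):

```python
import pandas as pd

# Count rows whose peptide sequence recurs under a different allele --
# the kind of single-allelic duplicate discussed in the abstract.
df = pd.DataFrame({
    "peptide": ["SIINFEKL", "SIINFEKL", "GILGFVFTL", "NLVPMVATV"],
    "allele":  ["HLA-A*02:01", "HLA-B*07:02", "HLA-A*02:01", "HLA-A*02:01"],
})
dup_mask = df.duplicated(subset="peptide", keep=False)  # flag every copy
rate = 100.0 * dup_mask.sum() / len(df)
print(f"{rate:.3f}% of rows share a peptide with another allele")
```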
Speaker
Speaker biography is not available.

A New Statistical Measure of NFL Talent

Ezra Sol Lerman (USA)

0
What does it take to create a statistical approach for measuring the relative performance of pro football athletes? What can be improved upon from the latest advances in football statistics and analytics? This project will be focused on developing a new way to compare and contrast on-field performances in the NFL to help differentiate between levels of players. The goal is a method of interpreting statistics that better shows how valuable players are to their teams and how they perform relative to other players. The approach will blend ideas from existing advanced analytics with new algorithms that incorporate even more facets of the game. I plan to study various articles about how professional statisticians have developed their own advanced algorithms, to get an idea of the process behind creating such models and how they can be improved. Ultimately, I may focus on one particular position for this project, but over time I would want to expand the research to encompass all positions in the game. It would be exciting if the techniques developed could eventually be used by teams to guide their draft and free agency decisions, combining statistics across the entire team to predict best fit and elevate the performance of the whole team. I will also explore how artificial intelligence algorithms can enhance accuracy and predictive power.
Speaker
Speaker biography is not available.

Enhanced Low-Power, Low-cost, and Very High Accuracy Smart Parking Solution for Urban Areas

Vivek Pragada (Central Bucks South High School, USA)

0
The United Nations projects that 70% of the global population will be living in urban areas by 2050. This will further exacerbate the already challenging issue of urban parking, where it is currently estimated that 45% of total traffic congestion is caused by drivers looking for parking. In our prior work, we proposed a cross-sensor-based urban parking solution consisting of a smart parking server (SPS) and smart parking units (SPUs), where each SPU utilizes a magnetometer sensor and an LPWA connectivity module. This was accomplished by configuring multiple thresholds in each of the SPUs, such as an occupancy threshold, adjacency threshold, and opposite threshold, corresponding to various automobile makes/models. While these thresholds helped significantly in the accurate determination of parking spot occupancy, our further analysis revealed that configuring them accurately, so as to support various automobile makes and models including electric vehicles (EVs) that tend to have lower ferrous content in their chassis, is challenging; yet configuring appropriate occupancy and adjacency thresholds is critical to achieving high accuracy. To reduce the complexity of determining and configuring thresholds sensitive to various automobile makes and models, and to handle all practical parking events, we have developed an enhanced framework that demonstrates higher accuracy while dramatically reducing sensitivity to different automobile makes and models, including EVs. In this enhanced approach, each SPU is configured with only a single threshold T. T is chosen to be less than the change that would be caused by a vehicle with the lowest ferrous content being parked in an adjacent spot. 
The interference from a car parking in an adjacent spot is much greater than most environmental fluctuations, allowing T to sit between these two values: just high enough to capture anything that could correspond to a parking event. The built-in redundancy of the system enables the surrounding SPUs to "correct" a spurious reading, as they would not detect any change greater than T. Whenever an SPU's reading changes by some Δx > T within a specified duration Δt, it sends Δx to the SPS. All computation and deduction can be done server-side, as will be illustrated with several case analyses, enabling each SPU to be extremely simple. Because no additional processing is needed at the SPU, power consumption and cost are reduced, making the proposed approach much more efficient than current methods involving onboard filtering, processing, and sensor-level determination. The built-in redundancy of the system can also lessen the effects of an SPU malfunction. This enhanced framework further enables the magnetometer to be extremely low power, because it only requires a minimal sampling rate to check every Δt seconds: rather than analyzing a complex signal for each parking event, it only needs to look at the overall change (displacement) in magnitude over the duration Δt. This greatly reduces the power required and helps prevent false readings from momentary spikes in magnetic flux.
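The single-threshold, server-side scheme can be sketched as follows (our own illustration; the threshold value, units, and occupancy rule are assumptions, not the authors' code):

```python
# Each SPU reports only a signed change dx whenever |dx| exceeds the single
# threshold T within a window Δt; all deduction happens at the SPS.
T = 5.0  # assumed magnetometer-change threshold, just above ambient noise

class SmartParkingServer:
    def __init__(self):
        self.field = {}          # spot id -> accumulated field change

    def report(self, spot, dx):
        if abs(dx) > T:          # SPUs only transmit when this holds
            self.field[spot] = self.field.get(spot, 0.0) + dx

    def occupied(self, spot):
        # a sustained net displacement above T marks the spot as occupied
        return self.field.get(spot, 0.0) > T

sps = SmartParkingServer()
sps.report("A1", +12.0)          # car arrives over spot A1
sps.report("A2", +6.0)           # weaker reading: interference from A1
sps.report("A2", -6.0)           # the transient cancels out server-side
print(sps.occupied("A1"), sps.occupied("A2"))  # True False
```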
Speaker
Speaker biography is not available.

Detecting Elementary Particles With a Homemade Cloud Chamber

Judah Lerman (Princeton Middle School, USA)

0
It is amazing that the universe contains elementary particles that are too small to see, not even with the most powerful microscopes. But how do we know such particles really exist and how can we prove this at home without expensive science equipment? I've seen a cloud chamber in a science museum that illuminated the pathway of tiny particles bombarding the earth from outer space. For this project, I will explore how to build a cloud chamber at home to detect the presence of elementary particles and record the evidence. What types of particles can be detected? What makes a quality cloud chamber, and how does a homemade cloud chamber compare to professional ones at museums and science labs? What are some of the applications of cloud chambers, and how do they help us understand the universe we live in?
Speaker
Speaker biography is not available.

Eco-Friendly Remediation of PFOA Contamination using BTs-ZVI (Banana Peel, Tapioca - Zero Valent Iron)

Emily Jooah Lee (The Lawrenceville School, USA)

0
Perfluoroalkyl and polyfluoroalkyl substances (PFASs) have become a significant environmental concern due to their widespread use and persistence. This study addresses the emerging issue of PFAS contamination, focusing on perfluorooctanoic acid (PFOA), a particularly troublesome compound. PFASs are found not only in drinking water, where they adsorb onto microplastics, but also in various cosmetic products, presenting a multifaceted exposure risk. Despite ongoing regulatory developments by agencies such as the U.S. Environmental Protection Agency (USEPA), the prevalence of PFASs, especially PFOA, in drinking water remains alarming. New Jersey, in particular, stands out as a hotspot for contamination, affecting over 500,000 individuals. In response to this critical issue, our research aims to propose a sustainable and efficient method for the removal of PFOA from drinking water. Traditional treatment technologies have proven ineffective against PFAS removal, necessitating the exploration of advanced oxidation processes. PFOA, classified as a "forever chemical" due to its persistent nature, poses a unique challenge for degradation. Previous attempts using microbial species demonstrated limited success, highlighting the need for alternative methods. The research focuses on the application of advanced oxidation processes, specifically UV irradiation under varying conditions, as a promising avenue for PFOA removal. The study employs a systematic approach to optimize the efficiency of UV-based oxidation, considering factors such as irradiation intensity, duration, and environmental conditions. Preliminary findings suggest the potential of this method to address the challenges posed by PFOA persistence and resistance to conventional treatment strategies. This research contributes to the growing body of knowledge on PFAS removal techniques and underscores the importance of developing sustainable solutions to combat emerging environmental contaminants. 
As the demand for effective treatment technologies rises, our findings aim to inform future strategies for mitigating the impact of PFAS contamination on drinking water quality.
Speaker
Speaker biography is not available.

Analyzing the Influence of Low-Frequency Induced Vibrations on the Tensile Strength of 3D Printed Materials

Ayati Vyas (San Jose State University, USA); Shreyas Ravada (Monta Vista High School, USA); Sohail Zaidi (San Jose State University, USA)

0
3D printing has evolved into a mature technology finding widespread industrial applications. The predominant method, fused deposition modeling (FDM), involves layer-by-layer deposition of melted thermoplastic to achieve the desired component shape. Common printing materials include polylactic acid (PLA), acrylonitrile butadiene styrene (ABS), polyethylene terephthalate glycol (PETG), and thermoplastic polyurethane (TPU). While these materials possess excellent thermal and mechanical properties for producing high-quality specimens, there is still room for improvement in both efficiency and overall strength. Experiments indicate that minimizing layer thickness and raster width enhances the tensile strength of printed material. Additionally, 3D printing is susceptible to external vibrations, which can lead to failed prints with undesirable wavy patterns known as "ringing". In contrast, a 2018 study demonstrated a three-order-of-magnitude increase in material flow rate when high-amplitude ultrasonic vibrations were applied to the ejecting nozzle. The objective of the current study is to validate the concept that deliberately induced vibrations during 3D printing will impact tensile strength. The proposed hypothesis suggests that low-frequency induced vibrations will decrease porosity, consequently increasing the overall tensile strength of the material. To conduct the research, a Tronxy X5SA 3D printer was utilized, with its printing stage modified to incorporate a vibrating mechanism. An Ocity vibration rumble motor (B07FL7HQ7Y) was mounted on the stage holding the ejecting nozzle and operated between 2000-3000 rpm at 3-6 V; vibrating frequencies were measured between 3 and 6 Hz. Dog-bone specimens conforming to the ASTM Type I standard were printed from PLA and ABS plastic with infill levels of 100%, 75%, 50%, and 30%. Specimens were printed with and without vibrations, using variations in infill level and vibration as the main parameters to evaluate the impact on tensile strength. 
To accurately determine the porosity of each specimen, the Archimedes approach was adopted: specimens were submerged in water, and the displaced liquid volume was measured along with the weights of the dry and wet specimens. Preliminary experimental results support our hypothesis. It was found that, for both PLA and ABS materials, increased vibration frequency (from 3 to 5 Hz) reduced porosity by 3-4% for all infill levels except the 100% infill case. Dog-bone specimens were tested for tensile strength, and information on each specimen, including area, maximum load at rupture, and strain percentage at the break point, was collected. Results indicate that for a 100% infill specimen with a 3 Hz induced frequency, a 17.20% increase in maximum stress was observed; for a 60% infill specimen, the corresponding increase was about 15.7%. Further analysis is in progress, and the final presentation will include in-depth results for this investigation.
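The Archimedes porosity estimate can be sketched in a few lines (all numbers, including the assumed solid PLA density, are illustrative and are not the study's measurements):

```python
# Bulk volume comes from the displaced water; solid volume comes from the
# dry mass divided by the solid material's density. Porosity is the gap.
RHO_PLA = 1.24     # g/cm^3, a typical solid PLA density (assumed value)

def porosity(dry_mass_g, displaced_volume_cm3):
    bulk_volume = displaced_volume_cm3       # volume of the whole specimen
    solid_volume = dry_mass_g / RHO_PLA      # volume occupied by plastic
    return 1.0 - solid_volume / bulk_volume  # void (pore) fraction

# Hypothetical specimen: 9.3 g dry, displacing 10 cm^3 of water
print(f"{100 * porosity(9.3, 10.0):.1f}% porosity")  # 25.0% porosity
```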
Speaker
Speaker biography is not available.

Unlocking the Potential: Cloud-Based IBM Platform's role in Advancing Machine Learning Models for Early Heart Disease Detection

Advika Arya (American High School, USA); Sohail Zaidi (San Jose State University, USA)

0
According to the World Health Organization, cardiovascular disease stands as the leading cause of death. In recent years, an increasing number of individuals have been affected by heart problems, leading to a surge in heart disease. The conventional method for diagnosing heart disease involves coronary angiography, a precise yet invasive procedure. Our hypothesis suggests that the integration of AI/machine learning techniques can enhance heart disease predictions, improving healthcare by detecting an individual's risk without resorting to invasive procedures. This study utilized data from the UC Irvine repository incorporating 13 features: age, sex, chest pain type, resting blood pressure, serum cholesterol, fasting blood sugar, resting electrocardiographic results, maximum heart rate, oldpeak, exercise-induced angina, slope of the peak exercise segment, number of major vessels, and thal. The analysis involved 303 patients from several hospitals, leveraging the IBM platform to deploy multiple algorithms for developing predictive models. Snap logistic regression, an extra trees classifier, and logistic regression with and without enhancements were employed. Achieving accuracy levels all above 80 percent, each algorithm highlighted the percentage contribution of significant features to model predictions. For instance, chest pain (100%), thal (97%), exercise-induced angina (92%), number of major vessels (87%), oldpeak (67%), maximum heart rate (62%), and age (47%) were identified as pivotal in the extra trees classifier model. Strong correlations among various features in predicting heart disease with high accuracy were observed across all algorithms. The study also explores variations in results when changing the number of folds in the data, presenting ROC curves, F1 scores, and confusion matrices for comparative analysis. A comprehensive discussion of the machine learning results, including strategies for improving accuracy, will be presented. The methodology employed on the IBM Watson platform will be detailed, emphasizing the advantages of utilizing cloud-based platforms for developing AI/ML-based predictive models.
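The workflow can be sketched with scikit-learn standing in for the IBM platform (the data below is synthetic; the real study used the 303-patient UCI records, and the fold count `cv` is the parameter varied in the study):

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 303-patient, 13-feature UCI dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(303, 13))               # 13 clinical features
y = (X[:, 0] + X[:, 2] > 0).astype(int)      # toy "disease" label

clf = ExtraTreesClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)    # vary cv to change the folds
clf.fit(X, y)
print(round(scores.mean(), 2))               # cross-validated accuracy
print(clf.feature_importances_.shape)        # (13,): one weight per feature
```

`feature_importances_` is the scikit-learn analogue of the per-feature percentage contributions the abstract reports for the extra trees model.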
Speaker
Speaker biography is not available.

Evaluation of Inter-Process Communications in System-on-Chip Computers by FAST-DDS

Connor Wu (Marriotts Ridge High School & Johns Hopkins University Applied Physics Laboratory, USA)

0
System-on-chip (SOC) computers enable seamless interprocess communication (IPC), facilitating the Internet of Things (IoT) in exchanging data across devices like smartphones, security systems, automotive systems, and digital cameras. This technology streamlines connections between applications, allowing efficient data exchange. Despite its advantages, occasional latency spikes within these systems can delay data reception. Consequently, evaluating IPC on SOC computers becomes crucial to understanding the correlation between the chosen transport mechanism and latency values. In this poster, I present my findings on how the transport layer affects latency. The latency values were collected by writing a C++ application with Fast-DDS as the networking library, and a Python script using matplotlib generates a latency-versus-transport graph. The program works by first starting the subscriber, which reads a configuration file to determine the transport to use, the starting frequency, the frequency increment, the ending frequency, and the number of samples to collect for each frequency. After the publisher finishes initializing, it reads the same configuration file and sends the configured number of samples. Once the data is collected, the Python script generates the graph comparing the transport layers and their latency values. At the current stage of this project, publishers and subscribers can exchange data with one another. In the future, we plan to expand the application to receive information from various sensors, and to embark on further investigations into Quality of Service exploration: the Fast-DDS library exposes many Quality of Service parameters that can be configured to tune reliability and other communication characteristics, giving users more control. 
We hope users will find it easy to extend this project and use it for real-time analytics.
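The post-processing step can be sketched as follows (stand-in latency numbers; the real samples come from the Fast-DDS C++ application, and the real script plots them with matplotlib rather than printing):

```python
import statistics

# Latency samples in microseconds, keyed by transport (hypothetical values).
samples = {
    "shared_memory": [12, 11, 13, 12, 40],   # one latency spike included
    "udp":           [55, 60, 58, 57, 61],
    "tcp":           [70, 72, 69, 140, 71],
}
for transport, xs in samples.items():
    # Report the median rather than the mean so that the occasional
    # latency spikes noted above do not dominate the per-transport summary.
    print(transport, statistics.median(xs), max(xs))
```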
Speaker
Speaker biography is not available.

Development of a Heatsink with embedded thermosyphons for Passive Cooling of High-Power LED Panels

Ayush Guha (Dublin High School, USA); Ayaan M Raza (Bellarmine College Preparatory, USA); Sohail Zaidi (San Jose State University, USA)

0
High-energy LED panels find diverse applications, ranging from indoor to space agriculture. While LED panels are generally efficient, they tend to produce significant heat, impacting their effectiveness and posing a risk of permanent damage. Active cooling methods, such as fans, not only consume excessive energy but are prone to failure, potentially reducing the panel's overall lifespan. This research explores a traditional passive cooling technique that integrates a heatsink with embedded thermosyphons operating at low pressure. The thermosyphons evaporate the fluid, which then condenses at the condenser end, releasing heat to the environment. The condensed liquid returns to the evaporator section due to gravity. In this study, an effort is made to combine these two passive techniques by designing a heat sink with embedded thermosyphons. To incorporate thermosyphons within each 10mm X 10mm fin, special design arrangements were implemented. A total of 144 rectangular pin-fins, each with a 3mm embedded hole, were attached to a vapor chamber filled with R134a refrigerant at a low pressure. At elevated temperatures, the fluid activates the thermosyphon process, effectively transferring heat away from the LED panel. The lower vapor chamber is sealed, and the LED panel is affixed beneath it. To enhance the heat conduction and minimize air pockets between surfaces, thermal paste is applied. The temperature data is collected using 16 k-type thermocouples attached to the tips and bases of 8 different pins around the heat sink. The LED panel is turned on, and the temperature readings are recorded through a multiplexer PCB connected to a Raspberry Pi. Initially, the temperature data for the LED panel surface was recorded with the cooling fans, which were later removed to establish baseline temperature data. Experimental results reveal that without cooling fans, the LED panel's surface temperature reached 120oC, while with the cooling fans, it reduced to approximately 40oC. 
The LED panel was then attached to the bottom surface of the heat sink to record temperatures along the heatsink and its thermosyphon-embedded fins. The data show a percentage temperature change along the fins ranging from 6.3% to 12.8%, depending on the fin's location along the periphery of the heatsink. Theoretical temperatures along solid fins were modeled in MATLAB; the experimental top-to-bottom temperature difference for the thermosyphon fins was more than 15 times lower than the theoretical difference predicted for solid pins. This, coupled with the temperature variation along the fins, suggests that the thermosyphon process within the vapor chamber was activated at higher temperatures. Efficient cooling is achieved by transferring heat from the base of the LED panel to the condenser section of the thermosyphon. The LED surface temperature with thermosyphons in operation measured around 42 °C, closely aligning with the target temperature achieved with the cooling fans. These experiments were repeated for accuracy, and the comprehensive results will be presented at the upcoming conference.
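The theoretical solid-fin temperatures referenced above follow the classical one-dimensional fin equation. The sketch below illustrates the idea for a single pin; the material properties, fin length, convective coefficient, and boundary conditions are assumed for illustration, since the paper's actual MATLAB model parameters are not given.

```python
import math

# Adiabatic-tip fin model for one solid aluminum pin (illustrative values only).
k = 205.0      # thermal conductivity of aluminum, W/(m*K) (assumed)
h = 15.0       # convective coefficient, W/(m^2*K) (assumed)
L = 0.05       # fin length, m (assumed)
w = 0.01       # 10 mm square cross-section, as in the paper
P = 4 * w      # fin perimeter, m
A = w * w      # cross-sectional area, m^2
T_base, T_inf = 120.0, 25.0   # base and ambient temperatures, degC (assumed)

m = math.sqrt(h * P / (k * A))
theta_b = T_base - T_inf
# theta(x)/theta_b = cosh(m*(L - x)) / cosh(m*L); at the tip, x = L:
T_tip = T_inf + theta_b * math.cosh(0.0) / math.cosh(m * L)
drop = T_base - T_tip
print(f"base-to-tip temperature drop for a solid fin: {drop:.1f} degC")
```

Comparing this predicted drop with the measured tip-to-base differences is how a more-than-15x reduction for the thermosyphon fins can be established.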
Speaker
Speaker biography is not available.

STEM Approach to enhance Robot-Human interaction through AI Large Language Models and Reinforcement Learning

Siddhartha Shibi (Washington High School & Intelliscience Training Institute, USA); Sohail Zaidi (San Jose State University, USA)

Humanoid robots, with their wide-ranging capabilities, have transformed many fields, with applications ranging from household assistance to advertising. As these technologies age, however, their sensors, motors, and cameras become outdated, making earlier humanoids feel obsolete. This project takes a STEM approach to enhancing these robots by tackling the most crucial issue such humanoids face: the adequacy of their human-robot interactions. This study explores the promise of integrating Large Language Models (LLMs) such as Google PaLM 2 and ChatGPT to supplement the capabilities of such robots, as well as bringing Chain-of-Thought (CoT) reasoning to their responses. The subject of this project is the humanoid robot Pepper, by SoftBank Robotics, a popular robot designed to interact with humans; however, due to its weak natural language processing (NLP) capabilities, it struggles to articulate adequate responses in human-robot conversation. For instance, the robot could easily answer simple questions such as "What is your name?" or "What are you?", yet struggled to respond adequately to queries such as "Who is the president of the United States?" or "When is the next World Cup?". Through LLM integration, such questions are handled by much-improved LLMs in place of the robot's previous built-in responses. This demonstration is shown in the video uploaded at https://youtu.be/hF7aRlQmnqs?feature=shared. Our approach targeted the robot's main weak points: its ability to respond to asked questions and to remember prior questions and conversation. By intercepting the robot's own NLP Dialog Module, each prompt is routed through a chat adapter, which stores conversations in a chat database for context and forwards them to the LLM of choice. 
This approach, implemented in Android Studio as a dedicated application, addresses contextual reasoning by pulling from the chat database, and provides responses limited only by the chosen AI/ML model. The project involved integrating ChatGPT/PaLM 2 into Pepper's existing system to enable the generation of more natural and engaging responses. Beyond this integration, further work is in progress: the aim is to develop a way for the robot to simultaneously extract other situational data from conversation, such as facial and tonal expressions, incorporating human feedback so that responses can be further fine-tuned. Alongside this work-in-progress integration of Reinforcement Learning from Human Feedback (RLHF), the effectiveness of the approach was evaluated through a user study comparing the robot with and without LLM integration. The results indicated that integrating LLMs into the robot's NLP system significantly improved its ability to generate coherent responses, leading to more natural human-robot interactions. Overall, this presentation will demonstrate the potential of using LLMs to enhance the NLP capability of humanoid robots like Pepper. We believe the proposed approach can pave the way for developing more intelligent human-robot interactions in the future.
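The chat-adapter pattern described above can be sketched in a few lines: intercepted prompts are appended to a running history (standing in for the chat database) and forwarded, with that context, to an LLM of choice. The class and the `echo_llm` stub below are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of an LLM chat adapter with conversation memory.
class ChatAdapter:
    def __init__(self, llm_call):
        self.llm_call = llm_call   # e.g. a wrapper around ChatGPT or PaLM 2
        self.history = []          # stands in for the chat database

    def ask(self, prompt):
        # Build the context window from prior turns, then forward the prompt.
        context = "\n".join(f"{who}: {text}" for who, text in self.history)
        reply = self.llm_call(context, prompt)
        self.history.append(("user", prompt))
        self.history.append(("robot", reply))
        return reply

# Stubbed LLM so the sketch runs without any API access:
def echo_llm(context, prompt):
    return f"(answer to: {prompt})"

adapter = ChatAdapter(echo_llm)
print(adapter.ask("Who is the president of the United States?"))
```

Swapping `echo_llm` for a real API client is the only change needed to route Pepper's intercepted prompts to an actual model.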
Speaker
Speaker biography is not available.

Integrating Machine Learning Techniques to Improve Pneumonia Diagnostics by Analyzing Chest X-ray Scans

Manasvi Pinnaka (IntelliScience Institute, USA); Sohail Zaidi (San Jose State University, USA)

Pneumonia is a respiratory infection that causes over a million hospitalizations and 50,000 deaths every year, making it the fourth most common cause of mortality overall. Pneumonia diagnostics are complicated: physicians rely first on chest X-rays, followed by other clinical tests, including those based on blood and sputum samples, to confirm pneumonia. The recent COVID-19 pandemic has only increased the number of cases of this disease, with the virus attacking the airways and gas-exchange regions of the lungs and leading to these prominent respiratory infections. Large amounts of data are now available that can aid diagnostic capabilities for this disease. Since this enormous quantity of data can only be efficiently evaluated with computers and statistical techniques, automating the diagnostic process for pneumonia is extremely beneficial. Artificial intelligence enables a transition from traditional diagnostic tools to a more machine-driven approach that can significantly improve pneumonia diagnoses in terms of cost, time, and accuracy. However, different radiologists can interpret the same chest X-ray in different ways, which makes this diagnostic method highly subjective, and that subjectivity carries over when machine learning models are trained on radiologists' chest X-ray evaluations. The objective of the current work is to explore the impact of this subjectivity on the accuracies of such machine learning models. The chest X-ray images were obtained from the RSNA International COVID-19 Open Radiology Database (RICORD). This database consists of approximately 1,000 chest X-rays from 361 patients, at least 18 years of age, who tested positive for COVID-19. Each X-ray image was evaluated by three radiologists for appearance (typical, indeterminate, atypical, or negative for pneumonia) and airspace disease grading (mild, moderate, or severe). 
In the current work, a convolutional neural network (CNN) was trained on four variations of the dataset described above: the diagnoses of radiologist #1, radiologist #2, and radiologist #3 individually, and a triplicated set in which each chest X-ray scan appears three times, once per radiologist's diagnosis. The same CNN model achieved training accuracies of 43.71%, 20.54%, 20.56%, and 27.83%, and testing accuracies of 44.39%, 18.93%, 20.00%, and 27.93%, respectively. As expected, the impact of subjectivity shows up as low model accuracies. Poor-to-moderate performance across all four classification tasks illustrates the problem that non-objective chest X-ray evaluations, specifically variations in the diagnostic analysis of the same scans, pose for medical decisions. Machine learning has to be integrated with the opinions of doctors and radiologists, which vary with their expertise and experience, to achieve the optimal balance between accuracy and efficiency in health assessments of COVID-19 pneumonia. The full set of results and their interpretation will be included in the final presentation.
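The triplicated dataset makes the subjectivity cost concrete: one model output per scan cannot match three conflicting labels, so rater disagreement caps achievable accuracy. The toy simulation below illustrates that ceiling; the agreement rate and class set are invented for illustration and do not come from the RICORD data.

```python
import random
random.seed(0)

# Toy ceiling on accuracy when each scan is entered once per rater.
classes = ["typical", "indeterminate", "atypical", "negative"]

def rater(true_label, agreement=0.6):
    # Each rater reports the true label with probability `agreement` (assumed),
    # otherwise a uniformly random class.
    return true_label if random.random() < agreement else random.choice(classes)

n = 10_000
ceiling_hits = 0
for _ in range(n):
    truth = random.choice(classes)
    labels = [rater(truth) for _ in range(3)]
    # A perfect predictor outputs one class per scan; on the triplicated set
    # it can match at most the most frequent of the three labels.
    best = max(set(labels), key=labels.count)
    ceiling_hits += labels.count(best)

print(f"accuracy ceiling on the triplicated set: {ceiling_hits / (3 * n):.2%}")
```

Any disagreement among raters pushes this ceiling below 100%, which is one reason the reported accuracies on the combined set stay low.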
Speaker
Speaker biography is not available.

Adaptability of IBM Watson Cloud Platform to Develop Machine Learning Models for Predicting Students' Academic Stress

Syed M Kazmi (Rutgers University, USA); Alisha Kazmi (Notre Dame San Jose, USA); Anvikh Arava (John Champe, USA)

In recent years, machine learning (ML) has undergone a significant transformation, largely driven by the challenges inherent in traditional model development methods. These approaches, often dependent on expert knowledge of programming languages, algorithms, and statistical techniques, are time-consuming and demand a high level of skill to effectively manipulate parametric variations and their impact on model accuracy. This study offers a comprehensive analysis of the adaptability of the IBM Watson Cloud Platform for developing ML models, addressing many of these challenges. Machine learning, a prime example of the STEM approach, involves training algorithms to learn and make predictions or decisions from data. Traditionally complex and skill-intensive, this process is simplified by AI platforms like IBM Watson. Our research explores the functionality of the IBM platform, emphasizing its flexibility in providing various split-ratio variations, algorithm choices, and K-fold variations, and how these features influence model performance. To assess the platform's efficacy, we conducted a case study analyzing academic stress among students. Data were collected from two primary sources. The first data set was obtained from a university in Pakistan immediately after the COVID peak by distributing a questionnaire among students. The aim was to gather information on relevant parameters grouped into four sections: "General Information", "Perceived Stress Scale", "Cognitive Assessment", and "Social Dependency". The Watson ML platform was used to develop a model under the "supervised learning" option, incorporating various algorithms, including the Extra Trees Classifier and the Random Forest Classifier. The platform proposed the two best algorithms, among them the Random Forest Classifier, which gave an accuracy of 66.4% after enhancements such as hyperparameter optimization and feature engineering. 
Results indicate that among all impacting parameters, cognitive performance, self-study hours, and the number of class absences played a dominant role in predicting a student's average score. The impact of parametric variations such as split ratios and K-fold distributions was also examined, showing that model accuracies could be optimized by carefully selecting the split ratio along with an associated value of k. This research is now expanding to analyze more data on students' academic performance. The new data under investigation come from Kaggle and consist of passive, automatic sensing data from the phones of a class of 48 Dartmouth students over a 10-week term, used to assess their mental health (depression, loneliness, stress), academic performance (term GPA and cumulative GPA), and behavioral trends (sleep, visits to the gym). These data are currently being analyzed, with new models indicating high accuracy, and results are being compared with published papers on this data set. The final results will be presented at the upcoming conference. In the final presentation, it will be argued that the IBM Watson Cloud Platform is a robust tool that simplifies machine learning model development, making it more accessible and less reliant on deep technical expertise.
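The two knobs examined above, the train/test split ratio and the K-fold count, are settings the Watson platform exposes; the sketch below simply shows what they mean mechanically, using plain-Python index bookkeeping rather than any Watson API.

```python
# Train/test split ratio: the fraction of rows used for training.
def train_test_split_idx(n, train_ratio=0.8):
    cut = int(n * train_ratio)
    return list(range(cut)), list(range(cut, n))

# K-fold partitioning: each of the k folds serves once as the validation set.
def k_folds(n, k=5):
    fold = n // k
    for i in range(k):
        val = list(range(i * fold, (i + 1) * fold if i < k - 1 else n))
        train = [j for j in range(n) if j not in set(val)]
        yield train, val

train, test = train_test_split_idx(100, 0.8)
print(len(train), len(test))          # -> 80 20
folds = list(k_folds(100, k=5))
print(len(folds), len(folds[0][1]))   # -> 5 20
```

Varying `train_ratio` and `k` over a grid, and comparing the resulting model accuracies, is the parametric study the abstract describes.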
Speaker
Speaker biography is not available.

Robot Motion Planning with Complementarity Constraints: When is it easy?

Ishita Banerjee (USA); Nilanjan Chakraborty (Stony Brook University, USA)

This research concerns robot motion planning, where the goal is to find a path for a robot from a start to a goal configuration without hitting obstacles in the environment. An instance of a robot motion planning problem consists of a geometric model of an environment with obstacles, a model of a robot, and the robot's initial and goal configurations. Computationally, robot motion planning is known to be NP-hard (more accurately, PSPACE-hard), which means there are instances of the motion planning problem where it is computationally very expensive to compute a feasible, collision-free path even if one exists. Practically, this means that some motion planning problems are unsolvable in a reasonable time. The purpose of my research project is to understand a related question: can we characterize the set of motion planning instances that are solvable in polynomial time? Understanding this question will help us devise more reliable robotic systems and understand the performance of robotic systems in certain deployed scenarios, such as a home environment. It may also allow the robot to reason about its environment and understand how some of the obstacles might be rearranged, if possible, to obtain a feasible motion plan. The question is quite challenging, since it also depends on the underlying motion planning algorithm being used. Within the context of this overarching problem, my goal is to answer the question above for point holonomic robots moving in a 2D or 3D environment. So far, I have considered circular, non-overlapping obstacles. We can prove that in this environment all motion planning problems are easy, i.e., solvable in polynomial time. The computational model of this problem was created using a discrete-time kinematic motion model of the robot and a position-level complementarity constraint. 
The collision model was created at the kinematic level using a complementarity constraint: for collision avoidance, a corrective velocity is applied to the robot to bring the normal component of its velocity to zero whenever the constraint becomes active. The environment creation and the simulation of the robot's motion using this mathematical model were implemented in Python, where the model was shown to work for any complex environment with non-overlapping circular obstacles. After demonstrating our theory with circular obstacles in the 2D environment, the same implementation was extended to a 3D environment with spherical obstacles. In future work we plan to study the problem of characterizing computationally efficient motion planning instances with polygonal obstacles.
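The velocity-level rule described above can be sketched for one circular obstacle: the gap function g(p) = ||p - c|| - r must stay non-negative, and when the robot is at the boundary with its velocity pointing inward, the normal component is removed so the robot slides tangentially. This is an illustrative sketch of the idea, not the authors' code.

```python
import math

# Complementarity-style velocity filter for a single circular obstacle.
def filter_velocity(p, v, center, radius, eps=1e-6):
    nx, ny = p[0] - center[0], p[1] - center[1]
    dist = math.hypot(nx, ny)
    gap = dist - radius                 # gap function g(p) >= 0
    nx, ny = nx / dist, ny / dist       # outward unit normal
    v_n = v[0] * nx + v[1] * ny         # normal component of the velocity
    if gap <= eps and v_n < 0:          # contact active and moving inward
        # Remove the inward normal component; keep the tangential motion.
        return (v[0] - v_n * nx, v[1] - v_n * ny)
    return v

# Robot touching a unit circle at (1, 0), heading partly into the obstacle:
print(filter_velocity((1.0, 0.0), (-1.0, 0.5), (0.0, 0.0), 1.0))  # -> (0.0, 0.5)
```

The 3D spherical case is the same computation with a three-component normal; the complementarity pairing is between the gap `g` and the corrective impulse that produces the velocity change.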
Speaker
Speaker biography is not available.

Photoredox-Catalyzed SH2 Cross-Coupling of Alkyl Chlorides Via Silyl-Radical Mediated Chlorine Atom Abstraction

Ashlena M Brown (Princeton University Laboratory Learning Program); Andria L Pace (Princeton University, USA); David W.C. MacMillan (Principal Investigator, USA)

C(sp3)–Cl bond activation has incredible potential for the formation of C(sp3)–C(sp3)-rich compounds, which are highly desirable in the pharmaceutical field. However, cross-coupling of alkyl chlorides to produce C(sp3)–C(sp3) bonds has not yet been achieved due to the inherent limitations of the C(sp3)–Cl bond. Despite this, alkyl chloride starting materials are commercially abundant and accessible; thus, the ability to generate radicals from alkyl chlorides that form quaternary products could significantly impact organic reactions and drug synthesis. In this paper, a bimolecular homolytic substitution (SH2) reaction between primary and tertiary alkyl chlorides is proposed, key bond formations are shown, and yields are listed. BTMG, Fe(OEP)Cl, [Ir(F(Me)ppy)2dtbbpy]PF6, and (TMS)3SiNHAdm were used alongside various primary and tertiary chlorides in a photoreactor under blue light. Data were analyzed using UPLC, NMR, and liquid chromatography. The highest yield of the desired cross-coupled product was 67%, with benzyl chloride as the limiting reagent. The reaction was also achieved with other primary chlorides, and the reaction scope and optimization have significant potential for further research.
Speaker
Speaker biography is not available.

Plasma-Water Interaction: Measuring RONS to Investigate the Plasma-Wound Interaction Process

Sharon Mathew (Archbishop Mitty High School & San Jose State University, USA); Sonya Sar (BASIS Independent Silicon Valley, USA); Sohail Zaidi (San Jose State University, USA)

In this study, the plasma-water interaction phenomenon was investigated. Non-equilibrium plasma is a state of plasma in which the electrons are much hotter than the heavier ions and neutral atoms. Despite the high energy of the electrons, the overall temperature of the plasma remains relatively low, near room temperature. This unique characteristic enables the use of non-equilibrium plasma in sensitive medical applications, such as wound healing and sterilization, benefiting millions of patients. However, the interaction of plasma with wounds is complex, involving chemical reactions between plasma radicals and water present in the wound, and necessitates further understanding. When a plasma jet entering atmospheric air interacts with water in a wound, it generates Reactive Oxygen and Nitrogen Species (RONS), which are crucial for wound healing. To optimize this process, it is important to investigate how different RONS vary under different plasma exposure conditions. This study aims to measure the RONS concentrations generated by plasma in water. Experiments were conducted on plasma-water interaction, analyzing water samples with and without plasma exposure using a spectrophotometer (Shimadzu 1900 series). For this purpose, a special experimental rig was designed and an experimental setup was created. A Dielectric Barrier Discharge (DBD) plasma torch, operating at 10-12 kV and 30-40 kHz with helium at 10 SLPM, was employed to generate a plasma jet measuring about 20-30 mm in length. The input power, measured with two 1000:1 voltage probes, ranged from 10 mW to 20 mW, depending on the operating conditions. Special arrangements allowed controlled exposure of DI water to the incident plasma, and the plasma exposure time for all samples was precisely regulated. Initial experiments revealed that a 30-minute exposure reduced the water's pH by 54%, indicating acidification and the formation of RONS in the plasma-activated water (PAW). 
Additionally, a notable 220% increase in the absorption peak was observed as the exposure duration was increased from 5 to 10 minutes, suggesting higher concentrations of RONS. The study is now exploring how varying plasma exposure times affect the absorption curves obtained in spectroscopy. To quantify the concentrations of various molecular species, calibration curves are being established using standard sample sets for individual species, including NO3- and NO2-. Preliminary results have been obtained and are undergoing reconfirmation and analysis. Further findings will be presented at the upcoming conference.
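A calibration curve of the kind being established maps absorbance to concentration via the Beer-Lambert proportionality: a linear fit over standard samples lets an unknown be read off. The standard values below are invented for illustration; only the least-squares procedure is the point.

```python
# Hypothetical standards for one species, e.g. NO2-: (concentration mg/L, absorbance)
standards = [(0.0, 0.00), (10.0, 0.21), (20.0, 0.40), (40.0, 0.82)]

# Ordinary least-squares line A = slope * c + intercept.
n = len(standards)
sx = sum(c for c, _ in standards)
sy = sum(a for _, a in standards)
sxx = sum(c * c for c, _ in standards)
sxy = sum(c * a for c, a in standards)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def concentration(absorbance):
    # Invert the calibration line to read an unknown sample's concentration.
    return (absorbance - intercept) / slope

print(f"slope = {slope:.4f} A per mg/L; A = 0.50 -> {concentration(0.50):.1f} mg/L")
```

In practice the fit quality (R²) over the standard set indicates whether the species obeys the linear Beer-Lambert regime at the measured concentrations.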
Speaker
Speaker biography is not available.

Analyzing DBD Plasma under Varied Operating Conditions: Implications in Accelerated Wound Healing

Srida Aliminati and Aryan Tummala (BASIS Independent Silicon Valley, USA); Sohail Zaidi (San Jose State University, USA)

The wound healing process is hindered by deprivation of oxygen at the wound site. A few non-intrusive therapeutic techniques are available, including hyperbaric oxygen therapy (HBOT) and topical oxygen therapy (TOT); in both, patients are exposed to oxygen to elevate the oxygen level at the wound site. In recent years, Dielectric Barrier Discharge (DBD) plasma techniques have emerged as an effective non-intrusive therapy for accelerated wound healing. Recent studies show that plasma contains reactive oxygen and nitrogen species that may aid the wound healing process by improving microcirculation and hemoglobin oxygenation. While underscoring the pivotal role of oxygen and its associated radicals in accelerating all phases of wound healing, several limitations have become apparent: only an optimal amount of oxygen supports efficient healing, as both hypoxia and hyperoxia impede the healing trajectory. Maintaining this delicate balance requires controlled manipulation of oxygen radicals, necessitating additional studies to provide a quantitative understanding. In this work we investigate how small additions of oxygen impact the species in the plasma. Monitoring these species will help us optimize the oxygen concentration in the plasma exposing the wound surface. This is achieved by examining the plasma emission spectrum, observing the relative changes in various emission lines under different plasma operating conditions and different amounts of oxygen added to the main plasma flow. An Ocean Optics (HR4000CG-UV-NIR) spectrometer was used to capture the emission spectrum. When introducing oxygen into helium plasma at various concentrations and voltages, distinct variations in the emission spectrum became apparent. In the absence of oxygen, prominent atomic helium lines at 706 nm, 655 nm, 667 nm, and 727 nm were observed. 
Additionally, a few nitrogen lines were observed, potentially originating from atmospheric air entrained into the plasma jet. The addition of oxygen introduced two prominent oxygen lines (776 nm and 844 nm) into the spectrum, leading to a notable decrease in the atomic helium lines. The addition of nitrogen, on the other hand, led to the appearance of prominent nitrogen lines, predominantly in the second positive nitrogen system. This study examines changes in the helium emission spectrum at various flow rates of added nitrogen and oxygen. In each case, several plasma input voltages ranging from 7 kV to 13 kV (40-50 kHz) were employed to assess their impact on plasma characteristics. To investigate the influence of added oxygen on bacteria (E. coli), bacterial colonies were exposed to plasma both with and without oxygen, and the colonies were subsequently counted in each case. A notable reduction in bacterial colonies was observed when oxygen was included in the helium plasma. The poster will provide comprehensive details of the experimental hardware and software used in this study, and will summarize the experimental results related to bacteria.
Speaker
Speaker biography is not available.

Setting up an Economical Testing Facility for Genome Sequencing of Chrysaora plocamia and Human Saliva

Deshna Shekar (Evergreen Valley High School, USA); Indeever Madireddy (USA); Prasun Datta (Tulane University, USA); Sohail Zaidi (San Jose State University, USA)

Genome sequencing has become an important way to characterize an organism's biology. Analyzing organisms' genomes provides key insight into genetic information and variation between organisms, as well as the heritability of mental and physical illnesses in animals and humans. Over the last decade, genome sequencing has become significantly more practical to perform, especially with the development of third-generation sequencing technology and new techniques in gene analysis. The advent of Nanopore technology, with long-read sequencing and real-time data analysis, has made sequencing more cost-efficient and feasible. IntelliScience Institute, in collaboration with San Jose State University, has set up a fully furnished and economical laboratory capable of sequencing genomes. Recently, we successfully sequenced the genome of Chrysaora plocamia, the South American sea nettle jellyfish. The objective of this work was to sequence a novel marine organism and establish an affordable research laboratory capable of exploring genomics. Jellyfish are essential in marine ecosystems, and the study of their genomes can reveal new medicinal, evolutionary, and ecological information. Using Nanopore technology and equipment such as a MinION Mk1B sequencer, a thermal cycler, and a spectrophotometer, we assembled a high-quality, highly contiguous genome for Chrysaora plocamia. A total of 2.9 million reads totaling 7.3 GB of sequencing data was collected from a single R10.4.1 flow cell, providing 34x coverage of the jellyfish's haploid genome. Additionally, annotating the genome against online databases of known venom genes helped us identify 112 putative venom genes with diverse toxin functions, which could have potential medicinal use in the future. This research is still in progress and recent results are being analyzed. In our current project, we are investigating human saliva. 
Human saliva contains proteins and enzymes in addition to water, which are essential for the maintenance of oral hygiene. Saliva also contains diverse microbial species that maintain gum and oral health. Poor oral hygiene can alter the oral microbiome, promoting the growth of harmful bacteria that cause cavities and plaque deposition, and poor oral health is directly associated with an increased risk of systemic disease, such as diabetes and obesity. Recent studies revealed that saliva is highly enriched with human DNA, but non-human contaminating DNA can confound whole-genome sequencing results. The current study investigates this limitation and also evaluates saliva collection methods that may improve genome sequencing results. Further details of this research, along with the important experimental steps involved in saliva genome sequencing, will be included in the final presentation. Our poster will also cover the development of the genome lab and the various protocols developed in the two projects described above.
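The coverage figure quoted for the jellyfish assembly follows from a simple ratio, coverage = total sequenced bases / haploid genome size. Assuming the reported 7.3 GB of sequencing data corresponds to roughly 7.3 Gb of bases (an assumption on our part), the implied genome size is:

```python
# Back-of-envelope sequencing-depth arithmetic (assumed unit conversion).
total_bases = 7.3e9    # ~7.3 Gb of read bases from one R10.4.1 flow cell
coverage = 34          # reported depth
genome_size = total_bases / coverage
print(f"implied haploid genome size: {genome_size / 1e6:.0f} Mb")  # -> 215 Mb
```

The same formula, rearranged, tells how many flow cells a target depth requires for a genome of known size.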
Speaker
Speaker biography is not available.

Using Artificial Intelligence (AI) and Machine Learning (ML) for Predicting Credit Card Approvals

Lori D Coombs (NASA & WWCM, USA); Layla M Coombs, Victoria G Coombs and Amanda J Coombs (Home Instruction, USA)

AUTHORS: Lori D. Coombs, Layla Coombs, Victoria Coombs, & Amanda Coombs. Our advisor is Associate Professor Lori D. Coombs, MBA, MSE. The project is sponsored by a Director of WWCM Academy, Don B. Coombs, MBA. Our goal is to build a credit card approval prediction system to help lenders. The team follows steps to design, analyze data, build a predictive model, test, and deploy. From a cybersecurity perspective, the team pays attention to data concerns in AI and ML with respect to training AI. This project aligns with NIST's framework for conducting research to advance trustworthy AI technologies and understand their capabilities and limitations. The results will help the team better understand the predictive analysis process and support future opportunities for similar projects. INTRO: Our team chose to research how AI and ML can be used to predict credit card approvals efficiently. Project start-up involves deciding which programming application to use and obtaining a large data set to analyze. By the end of the project, we aim to synthesize and train open-source code, loan data, and computational output to render credit card approval predictions. BACKGROUND: Our goal is to develop secure code to support lenders in the credit card approval process. Our advisor is tasked with providing guidance on computer programming efforts and developing an effective research methodology. PROCESS: Our team will explore data, clean data, model, and perform analysis to support model deployment. RESULTS: Our team will use the results as a baseline for other predictive analysis projects. FUTURE WORK: To carry this project to the next level, we aim to complete the task of deploying a predictive model. Once deployment occurs, the team will understand where design improvements can be made.
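The explore / clean / model / evaluate loop described in PROCESS can be sketched end to end on synthetic applicant data. Everything here is invented for illustration (the features, the hand-made scoring rule, and the threshold); a real project would fit the model from actual loan data.

```python
import random
random.seed(1)

# Synthetic applicants: (income in k$/yr, debt ratio, true approval decision).
def make_applicant():
    income = random.uniform(20, 150)
    debt_ratio = random.uniform(0.0, 0.9)
    approved = income > 50 and debt_ratio < 0.5   # assumed ground-truth rule
    return (income, debt_ratio, approved)

data = [make_applicant() for _ in range(1000)]
train, test = data[:800], data[200 + 600:]        # 80/20 split

# "Model": a hand-made score with a threshold; a real pipeline would train this.
def predict(income, debt_ratio):
    return income * (1 - debt_ratio) > 40

accuracy = sum(predict(i, d) == a for i, d, a in test) / len(test)
print(f"test accuracy: {accuracy:.2%}")
```

Replacing the hand-made score with a fitted classifier, and the synthetic rows with a real loan data set, turns this skeleton into the system the abstract targets.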
Speaker
Speaker biography is not available.

Advancing Bacterial Mitigation on Hospital Floors: A STEM-Centric Exploration

Keerthana Dandamudi and Rachana Dandamudi (Lynbrook High School, USA); Sohail Zaidi (San Jose State University, USA)

Hospital floors are commonly laden with bacteria, acting as a major source for the spread and transmission of viruses and diseases. The prevalent use of chemical solutions for bacterial mitigation poses risks to both patients and the environment. Our project aims to address this issue through a STEM-based approach, integrating principles of physics, chemistry, technology, and engineering. We propose the use of plasma exposure to inhibit bacterial growth. To operationalize this technique, we designed and developed a special robot with specific parameters: a net weight of approximately 80 lbs, a maximum floor slope of 5 degrees, an operating speed of around 440 ft/min, a stopping accuracy of ~0.5 in, and a safety factor of 1.5. The robot design features a heavy, small-size gas cylinder, a microprocessor, plasma torch stands, gas distribution and flow meters, and a power supply with ballast resistors for operating the plasma torches. We conducted torque calculations to ensure effective robot operation. For control purposes, the robot was equipped with multiple controllers: the TETRIX PRIZM robotics controller, the MAX DC motor expansion controller, a PS4 controller, and a TeleOp control module enabling remote operation. The robot, maneuverable via a joystick, is capable of moving forwards, backwards, and sideways, which is essential for scanning the floor while the plasma torches are active. It systematically carries the plasma torches across the floor, subjecting the bacteria to a potent plasma jet and effectively reducing bacterial presence. We utilized a Dielectric Barrier Discharge (DBD) plasma torch, innovatively mounted on the robot for autonomous scanning. In our experiment, the DBD plasma, generated by applying high voltages (~10 kV, 40-50 kHz) to gases like helium or argon, was expelled as a jet or sheet, depending on the specific application. For experimental validation, standard hospital tiles were inoculated with E. coli bacterial colonies and cultivated for 24 hours. After exposure to the plasma, an online app was used to count the bacterial colonies, and a marked reduction was observed on the treated tiles compared to the control group. Upon contact with the plasma, the reactive nitrogen and oxygen species crucially contributed to the destruction of bacterial colonies by damaging the bacteria's proteins, lipids, and DNA. Our presentation will summarize our exploration of bacterial mitigation and detail how we implemented a STEM-driven solution, employing plasma technology and innovatively designed hardware, to combat this critical health care challenge.
Speaker
Speaker biography is not available.

Protocol Verification to Extract Flavonoid Content from Various Coffee Species

Ashna Zavery (Crystal Springs Uplands School, USA); Sumanth Mahalingam (Evergreen Valley High School, USA); Sohail Zaidi (San Jose State University, USA)

This work extends our ongoing research on flavonoids and their extraction from various coffee species. The extraction of flavonoids is important because flavonoids are useful in sequestering reactive oxygen species, as well as in therapies for cancer, Alzheimer's, and other diseases; they also have neuroprotective and cardio-protective effects. A protocol to extract flavonoids was developed in the first phase of this study. The current work, the second phase, is to revise, verify, and upgrade that extraction protocol. In this study, flavonoid content and antioxidant capacity were explored across three coffee bean species: Coffea arabica, Coffea liberica, and Coffea canephora (Robusta). Filtered extracts of each coffee species were collected using hydroethanolic solvents and water-bath extraction to maximize the bioactive compound yield from each species. Total Flavonoid Content colorimetric assays were then used to characterize flavonoid content for each species, while DPPH (2,2-diphenyl-1-picrylhydrazyl) colorimetric assays were used to characterize antioxidant capacity. The differences between each species' flavonoid content and antioxidant capacity were analyzed using UV-Visible spectroscopy. Absorbance values for the Total Flavonoid Content assay were compared against a calibration curve made from (+)-catechin, while the DPPH values were compared against a control to find inhibition percentages. Analysis of the data revealed that Robusta coffee beans contained significantly higher levels of total flavonoid content, in mg of catechin/mL, than the Arabica and Liberica beans. Moreover, the DPPH assay revealed that Robusta coffee showed higher inhibition of the DPPH radical, indicating a higher antioxidant capacity. The protocol for the second phase was the same as that for the first phase. 
However, the protocol of the 2nd phase called for bigger solutions of catechin for less inaccuracies. Other than that, the protocol verification was completed without any significant changes. The phase 2 results are under progress and will be presented in the upcoming conference.
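The DPPH inhibition calculation mentioned in the abstract follows the standard radical-scavenging formula: percent inhibition is the drop in absorbance relative to the control. A minimal sketch, using purely illustrative absorbance values (the actual readings are not given in the abstract):

```python
# Hypothetical UV-Vis absorbance readings (illustrative values only).
a_control = 0.85   # DPPH solution without extract
a_sample = 0.32    # DPPH solution with coffee extract added

# Standard DPPH radical-scavenging formula:
# inhibition % = (A_control - A_sample) / A_control * 100
inhibition_pct = (a_control - a_sample) / a_control * 100
print(f"DPPH inhibition: {inhibition_pct:.1f}%")  # prints "DPPH inhibition: 62.4%"
```

A higher inhibition percentage indicates a stronger antioxidant capacity, which is how the abstract ranks Robusta above Arabica and Liberica.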
Speaker
Speaker biography is not available.

Preliminary Results from Integrating Chatbots and Low-Code AI in Computer Science Coursework

Yulia Kumar, Anjana Manikandan, Jenny Li and Patricia Morreale (Kean University, USA)

0
This study investigates the application of chatbots and low-code AI tools in advancing Computer Science (CS) education, focusing on the CS AI Explorations course and the AI for ALL extracurricular program. It addresses two main research questions: first, the impact of chatbots on student growth and engagement in undergraduate research, and second, the potential of low-code AI platforms to bridge the gap between theoretical and practical AI skills. Conducted during the 2022-2024 academic years, this research combines case studies and empirical data to evaluate the effectiveness of integrating these technologies into conventional teaching methodologies. The preliminary findings indicate significant transformative potential for chatbots and low-code AI, offering valuable insights for future educational strategies and the creation of more dynamic, interactive learning environments. In particular, student involvement in research increased significantly. Future investigations will clarify the long-term effects of chatbot and low-code AI integration.
Speaker
Speaker biography is not available.

Evaluating Edge and Cloud Computing for Automation in Agriculture

Alberto Najera (University Heights High School, USA); Harkirat Singh (Francis Lewis High School, USA); Chandra Shekhar Pandey, Fatih Berkay Sarpkaya and Fraida Fund (NYU Tandon School of Engineering, USA); Shivendra Panwar (New York University & Tandon School of Engineering, USA)

0
Thanks to advancements in wireless networks, robotics, and artificial intelligence, future manufacturing and agricultural processes may be capable of producing more output at lower cost through automation. With ultra-fast 5G mmWave wireless networks, data can be transferred to and from servers within a few milliseconds for real-time control loops, while robotics and artificial intelligence can allow robots to work alongside humans in factory and agricultural environments. One important consideration for these applications is whether the "intelligence" that processes data from the environment and decides how to react should be located directly on the robotic device that interacts with the environment - a scenario called "edge computing" - or whether it should be located on more powerful centralized servers that communicate with the robotic device over a network - "cloud computing". For applications that require a fast response time, such as a robot moving and reacting to an agricultural environment in real time, there are two important tradeoffs to consider. On the one hand, the processor on the edge device is likely not as powerful as the cloud server, and may take longer to generate the result. On the other hand, cloud computing requires both the input data and the response to traverse a network, which adds delay that may cancel out the faster processing time of the cloud server. Even with ultra-fast 5G mmWave wireless links, the frequent blockages characteristic of this band can still add delay. To explore this issue, we run a series of experiments on the Chameleon testbed emulating both the edge and cloud scenarios under various conditions, including different types of hardware acceleration at the edge and in the cloud, and different network configurations between the edge device and the cloud. These experiments will inform future use of these technologies and serve as a jumping-off point for further research.
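The edge-versus-cloud tradeoff described above can be captured in a simple end-to-end latency comparison: the edge pays only its (slower) processing time, while the cloud pays a faster processing time plus the network round trip. A toy sketch with purely illustrative timing values (the paper's measured numbers are not given in the abstract):

```python
# Illustrative latency budget, all values assumed (in milliseconds).
t_proc_edge = 50.0   # inference on the less powerful edge processor
t_proc_cloud = 10.0  # inference on a more powerful cloud server
t_net_rtt = 15.0     # round-trip network delay, e.g. over a 5G mmWave link

# Edge: no network traversal; cloud: processing plus round trip.
latency_edge = t_proc_edge
latency_cloud = t_net_rtt + t_proc_cloud

best = "edge" if latency_edge < latency_cloud else "cloud"
print(f"edge: {latency_edge} ms, cloud: {latency_cloud} ms -> {best} wins")
```

With these assumed numbers the cloud wins, but a mmWave blockage that inflates `t_net_rtt` past 40 ms would flip the result toward the edge - exactly the sensitivity the experiments on the Chameleon testbed are designed to measure.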
Speaker
Speaker biography is not available.

Understanding Solar Weather

Lillian Wu, Isabella Vitale and Cecilia Merrill (Glen Ridge High School, USA); Corina S Drozdowski (Glen Ridge High School & Montclair State University, USA); Katherine Herbert (Montclair State University, USA); Thomas J Marlowe (Seton Hall University, USA)