High-Level Project Summary
Mind Cosmos is an application that connects your emotions to vast repositories of space data with only an electroencephalogram (EEG) headset and a web browser. Wearing the headset, the user can input text or voice, which searches PubSpace for matching publications. Upon selection of an article, the application studies the publication, examines the mood of the inputted text, and analyzes the user's brainwaves to generate a digital piece of art reflecting both the science of the research and the user's state of mind. Lastly, the application plays a recommended song on Spotify. Through Mind Cosmos' immersive experience, the user journeys through both the cosmos of outer space and the cosmos of their own mind.
Link to Final Project
Link to Project "Demo"
Detailed Project Description
“We are made of star stuff,” said Carl Sagan. The mere idea that the matter of the human brain, which dictates our emotions and how we perceive the world, is not so different from what spins galaxies and powers nuclear fusion in stars is what makes space so exciting. As NASA moves its data to the cloud, ML and AI open new avenues to explore our connection with the world beyond Earth's atmosphere.
Voice and Text to Mood Natural Language Processing:
The application achieves conversion from voice to text through the AWS Amazon Transcribe API: real-time audio is captured using the sounddevice library, which chunks and streams audio snippets to the cloud for transcription. A natural language processing model was created using the Cohere API and trained on five mood labels: happy, sad, chill, angry, and stressed. For example, the training phrase “Amazing James Webb photos investigate potential extraterrestrial life” is happy, and “Terrifying comet spirals towards Earth, beware!” is stressed.
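As a rough illustration, the mood model can be expressed as a few-shot classification call. The sketch below assumes the Cohere Python SDK's classify endpoint (the import path and interface vary by SDK version); apart from the two phrases quoted above, the example phrases and the API key are placeholders.

```python
import cohere
from cohere.responses.classify import Example  # import path differs across SDK versions

co = cohere.Client("YOUR_COHERE_API_KEY")  # placeholder key

# Two illustrative phrases per mood label (the real training set was larger);
# the classify endpoint requires a minimum number of examples per label.
examples = [
    Example("Amazing James Webb photos investigate potential extraterrestrial life", "happy"),
    Example("New rover images thrill scientists around the world", "happy"),
    Example("Mission cancelled after decades of work by the team", "sad"),
    Example("Beloved telescope goes dark after its final transmission", "sad"),
    Example("Relaxing time-lapse of clouds drifting over the Pacific from orbit", "chill"),
    Example("Gentle aurora glows quietly above the station", "chill"),
    Example("Budget cuts gut the entire planetary science program", "angry"),
    Example("Officials ignore repeated warnings from engineers", "angry"),
    Example("Terrifying comet spirals towards Earth, beware!", "stressed"),
    Example("Deadline looms as the launch window rapidly closes", "stressed"),
]

def classify_mood(text: str) -> str:
    """Return the best-matching mood label for a snippet of transcribed user text."""
    response = co.classify(inputs=[text], examples=examples)
    return response.classifications[0].prediction

print(classify_mood("I just read that there may be water plumes on Europa!"))
```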
NASA Publication Analysis:
To analyze the articles and find key terms, term frequency-inverse document frequency (tf-idf) was used. Six papers were found on the NASA website and saved as text files. Python was then used to iterate over each file and count the number of times each word appeared. Each word was assigned a tf-idf value, which is high when the word appears often in one paper and is lowered the more of the other articles it also appears in. The top three words with the highest tf-idf scores were then used in the image generator.
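A minimal sketch of the same keyword extraction is shown below, using scikit-learn's TfidfVectorizer rather than the hand-rolled word counts described above; the papers/ directory of saved text files is a placeholder.

```python
from pathlib import Path
from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder paths for the six publications saved as plain text files.
paper_paths = sorted(Path("papers").glob("*.txt"))
documents = [p.read_text(encoding="utf-8", errors="ignore") for p in paper_paths]

# Fit tf-idf across the corpus: a term scores highly when it is frequent in
# one paper but rare in the other papers.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(documents)
terms = vectorizer.get_feature_names_out()

# Top three terms for each paper, to be fed into the image generator.
for path, row in zip(paper_paths, tfidf.toarray()):
    top_terms = [terms[i] for i in row.argsort()[::-1][:3]]
    print(path.name, top_terms)
```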
Brain Computer Interface for Emotional State Evaluation:
For our demo, we used the 8-channel OpenBCI Cyton Board and the OpenBCI Python packages to gather electroencephalography (EEG) data from key areas of emotion-related brain activity (specifically electrodes over the left and right temporal regions: FT9, FT10, T9, T10).
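A minimal sketch of this data-gathering step, assuming the BrainFlow package (one of the OpenBCI-supported Python libraries) and a placeholder serial port for the Cyton dongle:

```python
import time
from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds

params = BrainFlowInputParams()
params.serial_port = "/dev/ttyUSB0"  # placeholder; depends on the dongle and OS

board = BoardShim(BoardIds.CYTON_BOARD.value, params)
board.prepare_session()
board.start_stream()

time.sleep(2)  # collect roughly two seconds of data

data = board.get_board_data()  # channels x samples array
eeg_channels = BoardShim.get_eeg_channels(BoardIds.CYTON_BOARD.value)
eeg = data[eeg_channels, :]  # the Cyton's 8 EEG channels

board.stop_stream()
board.release_session()
```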
The Mind Cosmos application would use pre-trained Representational Similarity Analysis (RSA) to identify the user's current emotional state. A short sample of EEG data (around 1-2 seconds) would be recorded and passed into an RSA model. By calculating the dissimilarity between the input sample and pre-classified EEG samples, the application would identify which emotion label best fits the user's current state. For the most accurate results based on EEG, the possible emotions would be excited, calm, angry, and sad; these are easier to distinguish because they correspond to more distinct patterns in the frequency content of the recorded EEG signal.
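Although the full RSA model was not completed (noted below), the intended classification step might look like the following sketch: compute a dissimilarity measure (correlation distance here) between a feature vector for the live sample and averaged, pre-classified template vectors, then pick the label with the smallest dissimilarity. The feature extraction and template data are placeholders.

```python
import numpy as np
from scipy.spatial.distance import correlation  # 1 - Pearson correlation

def classify_by_rsa(sample_features: np.ndarray,
                    templates: dict[str, np.ndarray]) -> str:
    """Return the emotion label whose template is least dissimilar to the sample.

    sample_features: feature vector for the live 1-2 s EEG window
    templates: mapping of emotion label -> averaged feature vector built
               from pre-classified EEG recordings
    """
    dissimilarities = {
        label: correlation(sample_features, template)
        for label, template in templates.items()
    }
    return min(dissimilarities, key=dissimilarities.get)

# Placeholder templates and sample (in practice, e.g. band-power features per channel).
rng = np.random.default_rng(0)
templates = {label: rng.random(8) for label in ["excited", "calm", "angry", "sad"]}
sample = rng.random(8)
print(classify_by_rsa(sample, templates))
```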
Based on the identified emotion, pre-determined words that match that emotion are passed into the image generator, along with the relevant key terms identified in the texts. The identified emotion effectively "wraps" the other input text when generating the final image.
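Concretely, this wrapping can be as simple as composing a prompt where emotion-matched descriptors frame the extracted key terms; the word lists below are illustrative placeholders.

```python
# Illustrative emotion -> descriptor mapping; the real lists were pre-determined.
EMOTION_WORDS = {
    "excited": "vibrant, energetic, luminous",
    "calm": "serene, soft, drifting",
    "angry": "turbulent, fiery, jagged",
    "sad": "muted, dim, melancholic",
}

def build_prompt(emotion: str, key_terms: list[str]) -> str:
    """Wrap the publication key terms in emotion-matched descriptors."""
    return f"{EMOTION_WORDS[emotion]} digital artwork of {', '.join(key_terms)} in space"

print(build_prompt("calm", ["astrobee", "trajectory", "iss"]))
```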
Unfortunately, we were unable to complete the desired RSA model. However, we were able to stream the EEG data using Python and create a training file with "Happy" and "Sad" classifications. We also attempted to identify emotions using structured data classification, but struggled to output final results from a live EEG input stream.
Spotify Music Recommendations:
A variety of Spotify API endpoints were used to navigate playlists and select song recommendations.
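A sketch of one way this could work with the spotipy client is shown below; searching for mood-named playlists and taking the first track is an assumption, not necessarily the exact endpoints the project used. Credentials are placeholders.

```python
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

# Placeholder credentials; the client-credentials flow is enough for search and
# public playlist reads.
sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials(
    client_id="YOUR_CLIENT_ID", client_secret="YOUR_CLIENT_SECRET"))

def recommend_track(mood: str) -> str:
    """Search public playlists named after the mood and return one track."""
    results = sp.search(q=f"{mood} space", type="playlist", limit=1)
    playlist_id = results["playlists"]["items"][0]["id"]
    tracks = sp.playlist_items(playlist_id, limit=1)
    track = tracks["items"][0]["track"]
    return f"{track['name']} by {track['artists'][0]['name']}"

print(recommend_track("chill"))
```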
Image Generation:
The Stability AI API was used to generate the piece of art; it takes a phrase of text as input and returns an image.
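A hedged sketch of that call, assuming Stability AI's v1 REST text-to-image endpoint and an SDXL engine id (the exact endpoint, engine name, and key used in the project are assumptions here):

```python
import base64
import requests

API_KEY = "YOUR_STABILITY_KEY"  # placeholder
ENGINE = "stable-diffusion-xl-1024-v1-0"  # assumed engine id
URL = f"https://api.stability.ai/v1/generation/{ENGINE}/text-to-image"

def generate_art(prompt: str, out_path: str = "mind_cosmos.png") -> str:
    """Send a text prompt to the Stability API and save the returned image."""
    response = requests.post(
        URL,
        headers={"Authorization": f"Bearer {API_KEY}", "Accept": "application/json"},
        json={"text_prompts": [{"text": prompt}], "samples": 1},
    )
    response.raise_for_status()
    artifact = response.json()["artifacts"][0]  # base64-encoded PNG
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(artifact["base64"]))
    return out_path

generate_art("serene digital artwork of astrobee, trajectory, iss in space")
```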
Space Agency Data
Part of the project uses PDF versions of research publications found on the NASA website under the “Research Publications” tab. These papers were analyzed using tf-idf to find keywords for image generation. Six papers were used for the proof of concept (Kinodynamic-RRT for Robotic Free-Flyers: Generating Feasible Trajectories for On-orbit Mobile Manipulation; GuSTO: Guaranteed Sequential Trajectory Optimization via Sequential Convex Programming; Robot Spacecraft Hopping: Application and Analysis; HTC Vive: Analysis and Accuracy Improvement; Astrobee: A New Tool for ISS Operations; Joint Visual and Time-of-Flight Camera Calibration for an Automatic Procedure in Space), but more would be used if the program were implemented on a larger scale, which would help increase its accuracy and efficacy.
Hackathon Journey
Our team was very excited to have the opportunity to participate in such a cool challenge, and we all feel like we learned a lot. We were inspired to pick the "art in our worlds" challenge because we all wanted to work on something exciting and interesting! One special aspect of this hackathon was the very interesting and guided challenges. Our team was well supported by the resources provided for our challenge, and we felt like the specific guidelines for "art in our worlds" helped our brainstorming while still giving us creative freedom for our project. Additionally, it was most of our team members' first time working on a project mainly focused on creating something artistic, which was a neat bonus! The challenge we picked also allowed us to work with technologies that we did not have much experience with. We all learned more about text sentiment analysis, EEGs and RSA, and image generation!
The major setback our team faced was coordination, as we were all working virtually and were unable to bounce ideas off each other as much as we would have liked. That being said, we believe our team did a good job delegating tasks and making sure everyone was engaged and communicating. Our team is very proud not only of our final product, but also of what we learned along the way. We are all grateful for the opportunity to work on our project in such a cool hackathon!
References
https://rsatoolbox.readthedocs.io/en/latest/distances.html
https://docs.openbci.com/Cyton/CytonLanding/
http://brian.coltin.org/pub/conceicao2018joint.pdf
http://brian.coltin.org/pub/fluckiger2018astrobee.pdf
https://apps.dtic.mil/dtic/tr/fulltext/u2/1069645.pdf
https://apps.dtic.mil/dtic/tr/fulltext/u2/1069435.pdf
https://arxiv.org/pdf/1903.00155.pdf
https://arc.aiaa.org/doi/pdf/10.2514/6.2018-2517
Tags
#art #EEG #emotion #software

