High-Level Project Summary
stART Exploring is a concept app that uses machine learning to generate images from phrases the user enters. The images are displayed in creative ways, such as in different art styles and even as games. Users can edit their generated artwork and share it on social media, or turn it into music, and if the results are appealing, they can save their work to a library. stART Exploring is a bridge between science and art, turning technical capabilities into accessible features. By combining machine learning and technological innovation with space agency data, this app creates a space for anyone to explore!
Link to Final Project
Link to Project "Demo"
Detailed Project Description
stART Exploring is a concept app that incorporates machine learning and artificial intelligence to generate images from text phrases. The algorithm is trained on images available through the NASA Open APIs and NASA's Image Library. Stable Diffusion techniques are used to generate the images and to render them in different art styles. Additionally, to make the app accessible to everyone, an image can also be generated as a puzzle to appeal to younger children. After an image has been generated, the user can edit how it appears, with options including contrast, brightness, tint, and sharpness. Once any edits have been made, users can save the photo to their phone or share it on social media. If they wish to keep a generated image in the app, they can tap the icon to save it to the library located in the user profile. Beyond still images, stART Exploring can also convert an image into music: the tempo and beat change with the image's color tones and vibrancy, improving accessibility so that sight-impaired users can still perceive the vast possibilities offered by space.
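As a concrete illustration of the text-to-image step, here is a minimal sketch using Hugging Face's diffusers library, one of the tools in our references. The model checkpoint and the style suffix appended to the prompt are assumptions for illustration, not the app's exact configuration.

```python
# Minimal text-to-image sketch with Hugging Face diffusers.
# The checkpoint name and style handling are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed model checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                  # e.g. a Google Colab GPU runtime

user_phrase = "a spiral galaxy rising over a mountain lake"
art_style = "watercolor painting"       # user-selected style
image = pipe(f"{user_phrase}, in the style of a {art_style}").images[0]
image.save("generated.png")
```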
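The editing options map naturally onto Pillow's ImageEnhance module, which the Pillow tutorial in our references covers. The sketch below approximates "tint" with Pillow's Color (saturation) enhancer; the app's exact mapping is an assumption.

```python
# Sketch of the in-app editing options using Pillow's ImageEnhance module.
# "Tint" is approximated here with the Color (saturation) enhancer.
from PIL import Image, ImageEnhance

img = Image.open("generated.png")
img = ImageEnhance.Contrast(img).enhance(1.2)    # +20% contrast
img = ImageEnhance.Brightness(img).enhance(1.1)  # +10% brightness
img = ImageEnhance.Color(img).enhance(0.9)       # slight desaturation as "tint"
img = ImageEnhance.Sharpness(img).enhance(1.5)   # sharpen
img.save("edited.png")
```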
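The image-to-music idea can be sketched with a simple mapping from color statistics to tempo. The function below is purely illustrative of the concept (the conversion itself uses the Image to Music tool in our references); all ranges and formulas here are hypothetical.

```python
# Hypothetical sketch: brighter, more saturated images map to faster tempos.
from PIL import Image, ImageStat

def image_to_tempo(path, min_bpm=60, max_bpm=160):
    img = Image.open(path).convert("RGB").convert("HSV")
    saturation, value = ImageStat.Stat(img).mean[1:3]  # mean S and V, 0-255
    vibrancy = (saturation + value) / (2 * 255)        # normalize to 0-1
    return min_bpm + vibrancy * (max_bpm - min_bpm)

print(f"Suggested tempo: {image_to_tempo('edited.png'):.0f} BPM")
```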
When developing this project, we wanted to focus on user experience and the creativity that can stem from data. To begin, we used the NASA API and NASA's Image Library to train the algorithm to recognize images corresponding to a user's text input. We also wanted features that let users generate an image in a particular style, which led us to research Stable Diffusion. The tools we used for the back end of the app are Google Colaboratory, Python, Hugging Face, and the NASA Image Library. For the front end, we wanted an easy-to-use and imaginative interface; our team explored several design tools and ultimately used Adobe XD to create the appearance of the app. Users are not limited by their language, as the app supports multiple languages including Arabic, Hindi, Korean, and many more. For users who may be illiterate or have limited mobility, the app keeps the window of curiosity open by offering a speech-to-text option for entering a phrase instead of typing, making SPACE for all.
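For the speech-to-text entry path, a minimal sketch using the SpeechRecognition library (the approach described in the speech-to-text reference below) looks like this. The audio filename is a placeholder; on Google Colab, microphone audio is first captured with a JavaScript snippet like the one in the referenced gist.

```python
# Minimal speech-to-text sketch with the SpeechRecognition library.
# "user_prompt.wav" is a placeholder for audio captured from the user.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("user_prompt.wav") as source:
    audio = recognizer.record(source)

# Uses Google's free web recognizer; offline engines are also available.
phrase = recognizer.recognize_google(audio)
print("Transcribed prompt:", phrase)
```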
Overall, we hope stART Exploring creates a space for anyone to explore what science and art can achieve together. By drawing on NASA's new cloud-based data storage, stART Exploring can bring galaxies closer to Earth.
Space Agency Data
NASA's Image and Video Gallery was an integral part of our project, serving as the source for the "entities" identified in the user's search prompt. Through the NASA Image and Video Gallery API, matching images and their descriptions are returned to the user. Using these NASA images as a basis, the app lets the user select a desired image and alter it according to what they are looking for.
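This search step can be sketched against the public NASA Image and Video Library API (https://images-api.nasa.gov). The helper below is a minimal, assumed implementation rather than our app's exact code; it returns each match's title, description, and preview image URL.

```python
# Minimal sketch of querying the NASA Image and Video Library API.
import requests

def search_nasa_images(query, limit=5):
    resp = requests.get(
        "https://images-api.nasa.gov/search",
        params={"q": query, "media_type": "image"},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["collection"]["items"][:limit]
    return [
        {
            "title": item["data"][0]["title"],
            "description": item["data"][0].get("description", ""),
            "preview": item["links"][0]["href"],  # thumbnail URL
        }
        for item in items
    ]

for result in search_nasa_images("spiral galaxy"):
    print(result["title"], "->", result["preview"])
```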
Hackathon Journey
Astro-Unite is a group of high school students who met through a mutual love of computer science and astronomy. The Art in Our Worlds was an enticing challenge because we had a vision of combining education and science into an artistic project. We have loved learning about our universe and combining new technology with astronomy in our own lives, and this challenge was an opportunity to spread that love and create a safe environment for everyone to start exploring. Our team aimed to close the divide between science and art and create space for everyone to enjoy what technology can bring. When we encountered challenges with our code or with finding data, we used the resources offered by Space Apps and searched the Internet for solutions, making sure to never give up. Through detailed wireframes, collaboration, and research, we combined our back-end and front-end ideas to create our final concept.
References
Project repository (implementations of search algorithms, natural language processing, speech-to-text transcription, image editing/manipulation, and more): https://github.com/kzhtonychen2023/Art-in-Our-Worlds
NASA Image and Video Gallery: https://images.nasa.gov/
IBM Watson Natural Language Understanding: https://www.ibm.com/cloud/watson-natural-language-understanding
NASA Open APIs: https://api.nasa.gov/
Hugging Face: https://huggingface.co/
APIs in Python (Dataquest tutorial): https://www.dataquest.io/blog/python-api-tutorial/
Image Processing With the Python Pillow Library: https://realpython.com/image-processing-with-the-python-pillow-library/
Microphone Input from the User on Google Colab: https://gist.github.com/ricardodeazambuja/03ac98c31e87caf284f7b06286ebf7fd
Image to Music (Make a Melody): https://www.unspokensymphony.com/make-a-melody
Speech File to Text in Python: https://www.thepythoncode.com/article/using-speech-recognition-to-convert-speech-to-text-python
Tags
#ML/AI #art #machinelearning #astronomy #computerscience #adobexd #googlecolab #stablediffusion

