High-Level Project Summary
V Legacy seeks to safeguard scientific facts and advances throughout history, together with the context in which they developed and their impact on society, and to use that information to model immersive scenes that allow interaction with the data.
Link to Final Project
Link to Project "Demo"
Detailed Project Description
Objectives
Our main objective is to preserve the scientific legacy without neglecting its historical context, the applicability of technical advances, and their effect on humanity.
We also want the collected material to be educational for anyone who interacts with the model.
Stages
The project consists of two stages: the first is data collection; the second, once these data are processed, is generating the virtual reality model.
Below we detail the steps the AI must perform sequentially when collecting the data.
Using natural language processing, we will take content from different media outlets and categorize the featured headlines among the news groups. These topics are then evaluated according to the interaction they generate on social networks, producing an assessment of each piece of news that filters out false news and determines whether the story reports a genuinely important fact.
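The categorization and credibility filtering described above could be sketched as follows. This is a minimal illustrative sketch: the category keywords, the engagement-based score, and the threshold are all assumptions for demonstration, not the project's actual model.

```python
# Hypothetical sketch of stage one: categorize headlines by keyword and
# score credibility from social-network interaction. All names, keyword
# lists, and formulas here are illustrative assumptions.

CATEGORIES = {
    "space": ["nasa", "rocket", "orbit", "mars", "apollo"],
    "medicine": ["vaccine", "clinical", "virus"],
}

def categorize(headline):
    """Return the list of topic categories whose keywords appear in the headline."""
    words = headline.lower().split()
    return [cat for cat, kws in CATEGORIES.items()
            if any(kw in words for kw in kws)]

def credibility(shares, false_flags):
    """Toy score: engagement raises it, false-news reports lower it (0..1)."""
    return (shares / (shares + 1)) * (1 / (false_flags + 1))

def is_important_fact(headline, shares, false_flags, threshold=0.5):
    """Keep only categorized news whose credibility clears the threshold."""
    return bool(categorize(headline)) and credibility(shares, false_flags) >= threshold
```

In a real pipeline the keyword matcher would be replaced by a trained topic classifier and the score by a learned fake-news detector; the point here is only the filtering structure.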
Then, through the generated tags, we will link the fact with the scientific reports published by NASA (NTRS). At this point an AI that generates text from the previously extracted key tags comes into play; such a model already exists as a project of OpenAI, called GPT-3.
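One way to feed the key tags to GPT-3 is to compose them into a completion prompt. The sketch below only builds the prompt text; the wording of the prompt and the commented-out API call (model name included) are assumptions based on OpenAI's public Completions API, not the project's actual implementation.

```python
# Illustrative sketch: turn NLP-extracted tags and a linked NTRS report
# title into a text-generation prompt. Prompt wording is an assumption.

def build_prompt(tags, report_title):
    """Compose a GPT-3 completion prompt from key tags and an NTRS report title."""
    tag_list = ", ".join(tags)
    return (
        f"Write a short historical summary of the scientific fact "
        f"described by the tags [{tag_list}], drawing on the NASA "
        f"NTRS report '{report_title}'."
    )

# The actual generation step would call the OpenAI API (requires a key),
# roughly:
#   import openai
#   result = openai.Completion.create(model="text-davinci-002",
#                                     prompt=build_prompt(tags, title))
```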
The result would be saved on a server with the fields required to locate the fact, such as its date, tags, and associated topics.
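The stored record could look like the following. The source only names date, tags, and topics as fields; the `summary` field and the dataclass layout are assumptions added for illustration.

```python
# Hypothetical schema for the fact record saved on the server.
# Only date/tags/topics come from the project description; the rest
# is an assumed layout.
from dataclasses import dataclass, field, asdict

@dataclass
class FactRecord:
    date: str                 # ISO date of the event, e.g. "1969-07-20"
    tags: list
    topics: list
    summary: str = ""         # GPT-3 generated text (assumed field)

rec = FactRecord(date="1969-07-20", tags=["apollo"], topics=["space"])
payload = asdict(rec)         # plain dict, ready to serialize as JSON
```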

Then we enter the second stage of the project, where we want people to experience the events that occurred through virtual reality, with immersive scenes generated automatically by AI.
Sequentially, using the information saved in the first stage, a folder is created with illustrative images of the scenes generated from the texts, using a technology such as DALL-E 2. A scene is then modeled from the contents of that folder, producing a three-dimensional field we can interact with; this technology can currently be seen in NeRF.
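The folder-per-fact layout feeding the image-to-3D step could be organized as below. The directory layout, filenames, and the number of views are assumptions; the only fixed idea from the text is that a text-to-image model fills a folder that a NeRF-style reconstruction then consumes.

```python
# Illustrative sketch of the second stage's file layout: one folder per
# fact, with one image per viewpoint, since NeRF-style reconstruction
# needs multiple views of the same scene. Paths and counts are assumed.
from pathlib import Path

def scene_folder(fact_id, base="scenes"):
    """Folder where the generated images for one fact are collected."""
    return Path(base) / fact_id

def image_paths(fact_id, n_views=8):
    """Planned filenames, one per rendered viewpoint of the scene."""
    folder = scene_folder(fact_id)
    return [folder / f"view_{i:02d}.png" for i in range(n_views)]
```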

With this 3D model virtualized, we can later interact with it through virtual reality glasses.
Space Agency Data

Hackathon Journey
This Space Apps Challenge really strengthened our bond. Not only did we learn new things, but the challenge also brought us closer together and let us get to know each other better. There are two things we learned throughout this Space Apps Challenge. First, two of our members point to the coding language: this was their first time learning Dart, which was a real challenge for them, but with the help of the senior members they were able to learn and overcome it together. The second thing we all learned is collaboration. This challenge demands a lot of time, considering that we still have many online classes and extra tuition, so finding a meeting time everyone could attend was nearly impossible. Even when we faced hardship, we tried to understand one another and see things from each other's perspectives. At the end of the day, it has been a delightful journey to work with everyone.
References
Resource
Quillbot (AI that generates keywords from a text): https://quillbot.com/summarize
GPT-3 (AI that automatically generates text): https://beta.openai.com/docs/introduction
DALL-E 2 (AI that generates images from text): https://openai.com/dall-e-2/
NeRF (auto-generator of three-dimensional scenes from images): https://www.matthewtancik.com/nerf
Demo: https://drive.google.com/drive/u/0/folders/1hCGnJFLZiCZUGXcIISbltVpo1PbW7Poa
Tags
AI Machine Learning Data

