Awards & Nominations

Syntax Error has received the following awards and nominations. Way to go!

Global Nominee

CustomDeep

High-Level Project Summary

We built CustomDeep, a friendly interface on top of the Stable Diffusion model. It lets users describe something in text and generates new pictures from that description. We addressed the challenge by pairing a dataset of secure, verified information with new, artistically generated pictures that illustrate it. This matters because wrong data or misinformation can create serious problems in our society; with this solution, information can be both verified and attractive.

Link to Project "Demo"

Detailed Project Description

Report


The amount of space data stored in the cloud grows day by day, as does the difficulty of accessing it.


How can we develop an efficient way to display the user's search results on information portals in a creative and artistic way?


For a long time we have had the misconception that art and technology are incompatible. We at Syntax Error know that the best way to spread knowledge is to do it in a user-friendly way that everyone everywhere can interact with. The goal is not only to obtain a useful but also an attractive illustration, based on novel artificial intelligence models.


We developed CustomDeep, a web solution that makes large and complex records accessible. CustomDeep is designed as a dynamic experience: it lets the user enter any set of words and creates entirely new images as a visual interpretation of the data, making science more fun, easy, and understandable with a simple click.


The algorithm is an integration of different development technologies. Our solution was trained with great emphasis on open access resources from NASA, ESA (European Space Agency) and CSA (Canadian Space Agency). 


Its potential is therefore twofold: on the one hand, to project ideas and share concepts; on the other, to understand recent data and analyze it in depth.

Likewise, CustomDeep aspires to be the basis for more advanced solutions, both in avant-garde art and in quick access to up-to-date information. In other words, its development is scalable and promising, with plenty of opportunities.


CODE: https://colab.research.google.com/drive/1ga5eXVvv8y72LSJTNXIHS6lwHAP7cOms?usp=sharing 


import os

import streamlit as st
import torch
from torch import autocast
from PIL import Image
from diffusers import StableDiffusionPipeline


def image_grid(imgs, rows, cols):
    # Paste the images into a single rows x cols grid image.
    assert len(imgs) == rows * cols
    w, h = imgs[0].size
    grid = Image.new("RGB", size=(cols * w, rows * h))
    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
    return grid


# Read the Hugging Face access token from the environment
# instead of hard-coding it in the source.
tokenSD = os.environ["HF_TOKEN"]

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
    use_auth_token=tokenSD,
)
pipe = pipe.to("cuda")

st.set_page_config(layout="wide")


def app():
    st.title("Search engine")
    st.markdown(
        """
    Insert your prompt; we suggest using [Lexica](https://lexica.art/) for better results.
    """
    )

    with st.expander("See demo"):
        st.image("https://i.imgur.com/0SkUhZh.gif")

    row1_col1, row1_col2 = st.columns([3, 1])
    all_images = []

    with row1_col2:
        st.text("Insert prompt")
        keyword = st.text_input("Enter an image idea:")

        if keyword:
            num_cols = 2
            num_rows = 2
            # One prompt per image in a row; two rows of generations.
            prompt = [keyword] * num_cols

            for _ in range(num_rows):
                with autocast("cuda"):
                    images = pipe(prompt).images
                all_images.extend(images)

    # Only display images once something has been generated.
    with row1_col1:
        for img in all_images:
            st.image(img)


app()

 


We set up everything necessary to use the Stable Diffusion AI method: we install the files that initialize it and import the image-handling libraries used by the Stable Diffusion algorithm, which was trained on a dataset of billions of images.


In this part we can see the interface, which consists of a search bar where you can enter your prompts. (In this example you can see a prompt for spacecraft.)

The combination of words or phrases we want to use is called the "prompt", and the code creates the new pictures from it. The AI then generates four related images, all based on the prompt text; in our case the prompt is "digital art of a Bolivian astronaut", so the algorithm tries to generate four pictures that resemble a Bolivian astronaut in some artistic way. (Examples)
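The generated images can also be combined into a single picture with the `image_grid` helper defined in the code above. A minimal sketch of that helper in isolation, using plain colored placeholders instead of real generations:

```python
from PIL import Image


def image_grid(imgs, rows, cols):
    # Paste the images into a single rows x cols grid (same helper as above).
    assert len(imgs) == rows * cols
    w, h = imgs[0].size
    grid = Image.new("RGB", size=(cols * w, rows * h))
    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
    return grid


# Four placeholder "generations" standing in for pipe(prompt).images.
colors = ["red", "green", "blue", "yellow"]
images = [Image.new("RGB", (64, 64), c) for c in colors]

grid = image_grid(images, rows=2, cols=2)
print(grid.size)  # a 2x2 grid of 64x64 tiles is 128x128 pixels
```

The `i % cols` and `i // cols` arithmetic places image `i` at its column and row offset, filling the grid left to right, top to bottom.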

These are the six pictures the algorithm generated from its datasets and the prompts the code received. The code tried to add references to Bolivia, such as the colors of the Bolivian national flag on the astronauts' suits or in the background. The faces look Bolivian, but this is not guaranteed, since the model only knows what Bolivians look like through the Stable Diffusion training data. This may not be the best example of Stable Diffusion's potential, but it is still remarkable that the algorithm created six entirely new pictures. For better results, our web page suggests a site called "Lexica", where you can optimize your prompts.



Also, thanks to DreamBooth technology, we can fine-tune the Stable Diffusion model to adapt it to the NASA datasets; one example uses the prompt "rocket near the Illimani mountain". For more details, see the following Colab notebook: https://colab.research.google.com/drive/1yYP6ynqX62a8i4VPganfrS7klQPPnisk?usp=sharing
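DreamBooth works by binding the new concept to a rare identifier token that appears in every fine-tuning caption; the same token is then used in prompts at inference time. A minimal sketch of how such prompts are composed (the identifier `sks` and the class name are illustrative conventions, not taken from our notebook):

```python
# DreamBooth-style prompt templates. The rare identifier token ties the
# fine-tuned concept to the prompts; "sks" is a common illustrative choice.
IDENTIFIER = "sks"
CLASS_NAME = "rocket"


def training_caption(identifier: str, class_name: str) -> str:
    # Caption attached to each fine-tuning image of the subject.
    return f"a photo of {identifier} {class_name}"


def inference_prompt(identifier: str, class_name: str, context: str) -> str:
    # Prompt used after fine-tuning to place the concept in a new scene.
    return f"a photo of {identifier} {class_name} {context}"


print(training_caption(IDENTIFIER, CLASS_NAME))
print(inference_prompt(IDENTIFIER, CLASS_NAME, "near the Illimani mountain"))
```

Because the identifier is rare in the base model's vocabulary, the fine-tuned weights associate it strongly with the new subject while leaving the rest of the model's knowledge intact.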



Finally, we built an environment in which non-programmers can easily access the Stable Diffusion AI model, solving the problem of displaying creative images in a simple form and helping users develop their artistic works without much effort.



Space Agency Data

The team used images collected from three major resources. The datasets used for training and fine-tuning our application were:

NASA resources: The Earth Observing Dashboard, containing global environmental changes observed by NASA and international space agencies. https://eodashboard.org/

European Space Agency (ESA): Art & Culture in Space. https://eodashboard.org/

Canadian Space Agency (CSA): CSA's Open Data and Information Portal. https://www.asc-csa.gc.ca/eng/open-data/

Hackathon Journey

It was an amazing, busy, and stressful experience for many reasons: we had really good moments when our project was working, and hard ones when we didn't know how to start or do something. We learned how to train an AI and use it, with plenty of details and examples. We were inspired to choose this challenge by the advance of technology and the chance to combine it with art. Our approach was to collect data and images from the given resources along with the complete dataset of the AI model. We worked in a very organized way: everyone did something relevant to the project, and we tried to be as effective as possible in our work. We would like to thank the university, the providers of the resources, and everyone who supported us.

References

https://github.com/giswqs/streamlit-geospatial

https://github.com/CompVis/stable-diffusion

https://github.com/XavierXiao/Dreambooth-Stable-Diffusion


Tags

#AI #txt2img #MachineLearning #ArtInOurWorlds #UserFriendly