High-Level Project Summary
We developed a deep learning approach based on an autoencoder architecture, which encodes and decodes input data. This solution reduces the noise from tropospheric water vapor to minimal levels and allows accurate, timely capture of Earth's surface.
Link to Final Project
Link to Project "Demo"
Detailed Project Description
Tropospheric water vapor introduces accumulated noise as inSAR observes Earth deformations over successive time stamps, preventing accurate data collection and interfering with analysis.
We will be implementing a deep learning approach using a transformer-based autoencoder architecture to tackle the noise and improve the accuracy of the time series.
Autoencoders improve data representation by compressing and then decompressing the data, thereby recreating the input. The transformer architecture encodes relationships within its input and is more efficient than previously used techniques such as Recurrent Neural Networks (RNNs).
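The compress-then-reconstruct idea can be sketched in PyTorch as below. This is a minimal illustration, not the project's actual model: the layer sizes, the `DenoisingAutoencoder` name, and the bottleneck dimension are all assumptions chosen for clarity.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Compress the input to a small code, then reconstruct it.

    Dimensions here are illustrative, not those of the project's model.
    """
    def __init__(self, in_dim=64, code_dim=8):
        super().__init__()
        # Encoder: squeeze the input through a narrow bottleneck
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 32), nn.ReLU(),
            nn.Linear(32, code_dim),
        )
        # Decoder: expand the code back to the input dimension
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 32), nn.ReLU(),
            nn.Linear(32, in_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

x = torch.randn(4, 64)            # batch of noisy input vectors
model = DenoisingAutoencoder()
recon = model(x)                  # reconstruction has the input's shape
loss = nn.functional.mse_loss(recon, x)   # training would minimize this
```

Because the bottleneck is too small to memorize stochastic noise, minimizing the reconstruction loss pushes the model to keep only the structured part of the signal.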
Some previously used solutions include temporal filtering methods, in which low-pass filters limit stochastic noise in inSAR time series. However, this approach has drawbacks: the required parameter settings can change drastically with seasonal fluctuations.
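A minimal sketch of such temporal low-pass filtering, assuming a 1-D displacement time series; the moving-average window size is exactly the kind of parameter that would need seasonal retuning, which motivates the learned approach.

```python
import numpy as np

def lowpass_filter(series, window=5):
    """Smooth a displacement time series with a moving average.

    A simple example of temporal low-pass filtering; the window size
    is an assumed parameter, not a value from the project.
    """
    kernel = np.ones(window) / window
    # mode="same" keeps the output the same length as the input
    return np.convolve(series, kernel, mode="same")

np.random.seed(0)
t = np.linspace(0.0, 1.0, 100)
# Synthetic series: slow deformation trend plus stochastic noise
noisy = 2.0 * t + 0.3 * np.random.randn(100)
smoothed = lowpass_filter(noisy)
```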
Compared to previously used deep learning approaches (LSTMs, RNNs), the proposed architecture proves more effective. The main difference between the Transformer architecture and Recurrent Neural Networks is the removal of recurrent computations, which allows many operations to be executed in parallel, producing a faster and more robust model.
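The parallelism claim can be seen directly in PyTorch's built-in transformer encoder: every time step is processed at once through self-attention, with no sequential loop over the series. The layer sizes below are illustrative assumptions, not the project's configuration.

```python
import torch
import torch.nn as nn

# A transformer encoder attends over all time steps in one parallel pass,
# unlike an RNN, which must step through the sequence one element at a time.
layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

seq = torch.randn(4, 20, 32)   # (batch, time steps, features per step)
out = encoder(seq)             # same shape; each step attends to all others
```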
Our model provides faster and more accurate processing of raw data and cleaner results for analysts, allowing easier detection of trends and patterns in the data.
We will be using software packages such as PyTorch, Matplotlib, and NumPy. In addition, NASA's available data will be used.
Space Agency Data
We used NASA open-source data on inSAR imaging techniques. In addition, data from DeepCube, a startup company based in Tel Aviv, was used to obtain images to train our model on, as their data distinguishes accurate data from data deformed by noise.
Hackathon Journey
The NASA hackathon was an amazing experience where many teams came together to work on their projects. We learned how to manage a project as a team.
The inSAR challenge was a great fit for our team, as our solution uses deep learning, which is one of our team's strengths.
References
In the demo, an image from the Masked Autoencoder paper was used. To train the model, we used the DeepCube dataset and a pretrained SimCLR model.
Tags
#space, #cairo, #inSAR, #deeplearning, #dataprocessing, #denoising, #Autoencoder

