Using deep learning on high-resolution aerial imagery, we first detect roof edges and then post-process them into individual roof segments.

For 3D reconstruction, we use elevation data to approximate roof height and tilt, which are then used for a geometric 3D reconstruction of the house from the surface of each roof segment.

Using the same ConvNet architecture, we predict roof obstructions, and then place the maximum allowed number of solar panels, taking into account fire setback areas (colored yellow) and the predicted obstructions.

Finally, we perform shading analysis using elevation data and approximated sun positions, and select the panel placements that optimize energy production.
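The last step above can be sketched as a ranking problem: once shading analysis has assigned each candidate panel position an estimated annual yield, keep the most productive positions up to the maximum allowed count. This is a minimal illustrative sketch, not the production pipeline; the panel IDs and kWh figures below are made-up examples.

```python
# Hypothetical sketch: rank candidate panel placements by estimated
# annual energy yield (from shading analysis) and keep the top few.
# All IDs and kWh values below are illustrative, not real data.

def select_panels(candidate_panels, max_count):
    """Return the `max_count` placements with the highest estimated
    annual energy production."""
    ranked = sorted(candidate_panels, key=lambda p: p["annual_kwh"], reverse=True)
    return ranked[:max_count]

candidates = [
    {"id": "P1", "annual_kwh": 410.0},  # mostly unshaded, well oriented
    {"id": "P2", "annual_kwh": 295.5},  # partially shaded by a chimney
    {"id": "P3", "annual_kwh": 120.0},  # heavily shaded roof segment
]
best = select_panels(candidates, max_count=2)
print([p["id"] for p in best])  # → ['P1', 'P2']
```

In practice the per-panel yield would come from summing simulated irradiance over the approximated sun positions, but the final selection reduces to this kind of ranking.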
3D building reconstruction from point clouds is an active research topic in remote sensing, photogrammetry, and computer vision. LiDAR (light detection and ranging) is a remote sensing technology that measures distance by illuminating a target with a laser and analyzing the reflected light. Among other applications, LiDAR is useful for detailed mapping of terrain, elevation, and structures, and for change detection in disaster management at several levels. The field is rapidly maturing in capabilities, applications, and utility, and LiDAR data have rich use cases in city management and damage assessment. One crucial processing task in these applications is building detection. Digital 3D city models nowadays serve a wide range of application fields, such as urban planning, environmental simulation, navigation, location-based services, virtual 3D globes, and 3D landscape visualization. Automated building detection from airborne LiDAR data sets, however, remains a challenging task.

We briefly review recent approaches that reconstruct 3D buildings from multi-view images or photogrammetric point clouds/DSMs (Digital Surface Models). We divide these approaches into data-driven methods and model-driven methods; data-model hybrid methods are classified here as model-driven. Our proposed method is model-driven.

The framework of our 3D building reconstruction method consists of three main parts:

1. Building footprint extraction.
2. Building height estimation. Building height is the distance from a building's base to its rooftop; each LoD1 building (i.e., a prismatic model with a horizontal roof and base) has a single constant height value. The common approach to estimating a building's height is to subtract a produced DTM (digital terrain model) from the DSM within the building footprint. For this part we follow the idea of Gevaert, who proposed a method for DTM extraction from imagery that first applies morphological filters to the DSM to obtain candidate ground and off-ground training samples.
3. Combining the footprint with the height between roof and base yields a water-tight LoD1 building model.
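The height-estimation part above (DSM minus DTM inside the footprint) can be sketched in a few lines. This is a minimal sketch with tiny assumed arrays, not the authors' implementation; taking the median of the normalized DSM inside the footprint is one common way to get the single constant height a LoD1 prism needs.

```python
# Minimal sketch (assumed toy arrays, not the authors' implementation):
# estimate a constant LoD1 building height by subtracting the DTM from
# the DSM inside the footprint mask and taking the median.
import numpy as np

def lod1_height(dsm, dtm, footprint_mask):
    ndsm = dsm - dtm                          # normalized DSM: height above ground
    return float(np.median(ndsm[footprint_mask]))

dsm = np.array([[12.0, 12.1], [12.2, 3.0]])     # surface elevations (m)
dtm = np.array([[3.0, 3.1], [3.2, 3.0]])        # terrain elevations (m)
mask = np.array([[True, True], [True, False]])  # building footprint
print(lod1_height(dsm, dtm, mask))  # 9.0 → prism height for the LoD1 model
```

The median makes the estimate robust to a few noisy pixels; a high percentile could be used instead if the footprint mask bleeds onto the ground.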
LiDAR data collection: by measuring the return time of each laser pulse, LiDAR computes the distance to each point it hits on the Earth's surface.
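The range computation behind that sentence is just the round-trip time-of-flight equation, distance = c · t / 2. A back-of-envelope sketch (the example pulse time is illustrative):

```python
# Back-of-envelope sketch of the LiDAR range equation: distance is half
# the round-trip travel time of the pulse times the speed of light.
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_range(return_time_s):
    """Distance to the target given the pulse's round-trip time."""
    return C * return_time_s / 2.0

# A pulse returning after ~6.67 microseconds corresponds to ~1 km range.
print(round(lidar_range(6.671e-6)))  # → 1000
```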
Learning and comprehension are intertwined in every science: learning does not happen until comprehension takes place, and, conversely, learning is the prelude to comprehension. Although perception draws on all the senses, vision plays an especially important role in human perception. The human mind favors three-dimensional structure when reasoning about a problem, yet the image formed by the human eye is two-dimensional, so anything that gives us a better three-dimensional view of a problem is consistently valuable. Various solutions for creating three-dimensional structure have been introduced, but ease of use and time/cost constraints always matter, so the need for algorithms with such capabilities is keenly felt. Now imagine an algorithm that takes a single image presented by the user and makes it three-dimensional. Isn't that great? Let's take a look at the features of this algorithm.
Key features of the algorithm:
Design or photograph the target
Present it as input
Model the input in 3D
Challenge: 3D image reconstruction from 2D
It is very difficult to reconstruct an object from a single image, without information from any other viewpoint. If the model is trained well and the user's targets are not too scattered, it performs the reconstruction well.
Solution: 3D image reconstruction from 2D
We built a 3D image modeling module to convey these concepts. The module includes two models, one for learning and one for reconstruction. The deep-learning models draw on pre-existing 3D shapes, which they use as training material for reconstruction.

The user does not have to search through long lists for their target; the system automatically makes matches based on image analysis. This works in two ways: the user either already has an image of their target or provides an initial image of it.

We used an autoencoder (AE) for 3D reconstruction, analyzing ShapeNet with more than 120 categories.
Our approach:
1. The user provides a two-dimensional image of the target
2. The input image is checked and its parameters are inferred
3. Light, shape, and pose are separated from the input image
4. A three-dimensional model is built from the two-dimensional input
We have created a neural network for detection and reconstruction. Our inference-based solution reconstructs a model from a user-provided image, based on its pose, shape, and lighting characteristics. This helps various industries, such as game development, create 3D shapes from a single image at reduced cost.
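The steps above can be sketched as an autoencoder-style mapping from a single 2D image to a 3D occupancy grid. This is an illustrative NumPy sketch, not the authors' network: the layer sizes are assumed, and the weights are random here, whereas in practice they would be trained on a 3D shape dataset such as ShapeNet.

```python
# Illustrative numpy sketch (not the authors' network): an autoencoder-
# style mapping from a single 2D image to a voxel occupancy grid.
# Weights are random for demonstration; in practice they would be
# learned from a 3D shape dataset such as ShapeNet.
import numpy as np

rng = np.random.default_rng(0)
IMG, LATENT, VOX = 64 * 64, 128, 32 * 32 * 32   # assumed sizes

W_enc = rng.normal(scale=0.01, size=(IMG, LATENT))   # encoder weights
W_dec = rng.normal(scale=0.01, size=(LATENT, VOX))   # decoder weights

def reconstruct(image):
    """Encode a flattened 64x64 grayscale image into a latent shape
    code, then decode it to per-voxel occupancy probabilities."""
    z = np.tanh(image.reshape(-1) @ W_enc)           # latent code
    occupancy = 1.0 / (1.0 + np.exp(-(z @ W_dec)))   # sigmoid per voxel
    return occupancy.reshape(32, 32, 32)

voxels = reconstruct(rng.random((64, 64)))
print(voxels.shape)  # → (32, 32, 32)
```

Thresholding the occupancy grid (e.g. at 0.5) would give the final binary 3D shape; a trained version of this pipeline would also condition on the separated light and pose parameters from steps 2-3.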