AUGMENTED MACHINE VISION


The objective of my thesis was to train a neural network on 3D building features to generate novel point clouds using unsupervised machine learning. As AI becomes more widespread in architectural design processes, understanding what it is capable of and how to deploy it effectively is essential. Unsupervised machine learning also offers a new way to augment our knowledge as designers: we can let neural networks learn and generate components of a design without our input or control over its parameters.

DATABASE GENERATION

I leveraged generative modeling techniques to construct a database of three-dimensional building forms with which to train the neural network. I modeled 12 existing office buildings located in a climate zone similar to that of New York City, the site of the final design. To augment these models, I used deformation and blending tools to create a subset of variations of each building, each responsive to the manual deformations applied. In a design process augmented by machine learning, creating the database is itself a key set of design decisions, so my process was rooted in a mix of existing and phantom precedents.
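The deformations described above were applied with CAD modeling tools. As an illustrative stand-in only, the sketch below shows one such operation, a twist about the vertical axis, applied directly to a model's vertices in NumPy (the function name and parameters are my own, not from the project):

```python
import numpy as np

def twist(vertices, max_angle=np.pi / 6):
    """Twist a massing model about the z-axis.

    The rotation angle grows linearly with height, so the base stays
    fixed while the top rotates by max_angle. Illustrative stand-in
    for the CAD deformation tools used in the thesis workflow.
    """
    z = vertices[:, 2]
    theta = max_angle * (z - z.min()) / (z.max() - z.min())
    c, s = np.cos(theta), np.sin(theta)
    x, y = vertices[:, 0], vertices[:, 1]
    return np.stack([c * x - s * y, s * x + c * y, z], axis=1)

# Two vertices on a vertical edge: the base point stays put, the top rotates.
edge = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 2.0]])
deformed = twist(edge, max_angle=np.pi / 2)
print(deformed.round(3))
```

Applying such operators with varied parameters to each of the 12 base buildings yields the kind of mixed existing-and-phantom precedent set the database was built from.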

The models were then converted to point clouds, the medium the neural network requires in order to learn. This conversion is both an essential part of the workflow and an interesting moment, because training requires reducing the model's information to bare x, y, z spatial coordinates. The ML-augmented design process therefore has bottlenecks at both ends: on the front end it requires simplification, and on the back end it requires re-augmenting qualities like material and program.
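One common way to perform this reduction, sketched here in plain NumPy as an assumption about the pipeline rather than the exact tool used, is to sample points uniformly over the mesh surface: pick triangles in proportion to their area, then draw uniform barycentric coordinates within each:

```python
import numpy as np

def sample_point_cloud(vertices, faces, n_points=2048, seed=0):
    """Reduce a triangle mesh to an (n_points, 3) array of x, y, z samples.

    Area-weighted sampling: triangles are chosen proportionally to their
    surface area, then a uniform point is drawn inside each via
    barycentric coordinates.
    """
    rng = np.random.default_rng(seed)
    tris = vertices[faces]                                  # (F, 3, 3)
    # Triangle areas from the cross product of two edge vectors.
    cross = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates: fold the unit square onto the triangle.
    u, v = rng.random(n_points), rng.random(n_points)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    t = tris[idx]
    return t[:, 0] + u[:, None] * (t[:, 1] - t[:, 0]) + v[:, None] * (t[:, 2] - t[:, 0])

# A unit cube as a stand-in for a building massing model.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                  [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
faces = np.array([[0, 1, 2], [0, 2, 3], [4, 5, 6], [4, 6, 7],
                  [0, 1, 5], [0, 5, 4], [2, 3, 7], [2, 7, 6],
                  [1, 2, 6], [1, 6, 5], [0, 3, 7], [0, 7, 4]])
cloud = sample_point_cloud(verts, faces, n_points=2048)
print(cloud.shape)  # (2048, 3)
```

Note how everything except position, including surface, material, and program information, is discarded at this step, which is exactly the front-end simplification described above.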

TreeGAN

TreeGAN is a 3D point cloud generative adversarial network based on tree-structured graph convolutions, introduced by Shu, Park, and Kwon (ICCV 2019). We chose this model for its ability to generate 3D point clouds in an unsupervised manner. TreeGAN contains two networks, a generator and a discriminator. The generator (called the tree-GCN) takes a single point sampled from a Gaussian distribution as input; at each layer, graph convolution and branching operations expand the set of points, and the final output is a 3D point cloud. The discriminator differentiates between real and generated point clouds, pushing the generator toward more realistic outcomes. Once the two networks have been adversarially trained together, the generator (the tree-GCN) is used alone to produce novel 3D point clouds by sampling its learned latent space.
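The branching idea can be sketched very roughly: each layer replaces every point with several children, so a single latent sample grows into a full cloud. The sketch below uses random linear maps where TreeGAN learns its weights, and my choice of branching factors is illustrative, not taken from the paper:

```python
import numpy as np

def branching_generator(z, factors=(2, 4, 4, 8, 8), seed=0):
    """Toy sketch of tree-structured branching: one latent point grows
    into a point cloud.

    Each layer replaces every point with `factor` children produced by
    per-branch linear maps (random here; learned in the real tree-GCN),
    so one Gaussian sample expands to prod(factors) = 2048 points.
    Illustrative only; omits graph convolutions and training entirely.
    """
    rng = np.random.default_rng(seed)
    points = z[None, :]                       # start: a single point in R^3
    for factor in factors:
        # One 3x3 map per child branch, applied to every current point.
        maps = rng.standard_normal((factor, 3, 3)) * 0.5
        points = np.einsum('kij,nj->nki', maps, points).reshape(-1, 3)
    return points

z = np.random.default_rng(1).standard_normal(3)   # single Gaussian sample
generated = branching_generator(z)
print(generated.shape)  # (2048, 3)
```

In the actual model, the discriminator's feedback shapes those per-branch transforms during adversarial training, so sampling different `z` vectors yields different plausible building-like clouds.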
