Company: --
Role: Programmer
Tools: Python 3, NodeJS

AI Assisted Design

February 15, 2018

A personal exploration of Airbnb's AI-assisted design work, pioneered by their Design Technology team, led by Benjamin Wilkins

Testing an idea should be quick. Most projects require explorations, iterations, mockups, and prototypes, and since each of these is a time-consuming phase, being able to streamline them matters. Skipping phases of a workflow or development cycle and feasibly translating an idea straight into a finished product may seem daunting, but machine learning and the ability to identify components make it possible.

This was a self-exploration using a neural network image classifier. Hand-drawn shapes are identified against training data; each shape, once matched to its closest counterpart, is passed to the renderer to generate the corresponding web element.
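As a simplified illustration of the "closest match" idea (the actual project used a neural network classifier, not this method), a nearest-neighbor comparison over flattened images can be sketched in NumPy. The function and label names here are illustrative, not the project's real code:

```python
import numpy as np

def closest_match(sample, references):
    """Return the label of the reference image nearest to `sample`.

    `references` is a list of (image, label) pairs; distance is the
    pixel-wise L2 norm between the flattened images.
    """
    best_label, best_dist = None, float('inf')
    for image, label in references:
        dist = np.linalg.norm(sample.ravel() - image.ravel())
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Toy 2x2 "images": a filled top row vs. a filled left column
refs = [
    (np.array([[1, 1], [0, 0]], dtype=float), 'module_header'),
    (np.array([[1, 0], [1, 0]], dtype=float), 'module_item'),
]
drawn = np.array([[1, 1], [0, 1]], dtype=float)  # closer to the top-row shape
print(closest_match(drawn, refs))  # → module_header
```

A real classifier generalizes far better than raw pixel distance, but the control flow is the same: compare the drawn shape against known examples and hand the winning label to the renderer.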

Due to the specifics of this project and the elements that needed to be rendered, the initial data set consisted of around 100 samples. A sample set this small is far too little to train a model against. My solution was to run each input image through an array of transformations, each compounding on the last. This simulates the nearly endless variety of ways the camera could approximate an object's shape depending on how the view was set up. For example, the initial box image gets a perspective transformation, which in turn runs through slight rotation, skew, and global proportion variations. In the end, the compounded transformations yielded a data set of over 50,000 samples to train and test against.
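The compounding effect can be sketched with pure Python: each transformation stage multiplies the data set, so a handful of variants per stage quickly turns ~100 drawings into tens of thousands of samples. The variant counts below are hypothetical, chosen only to show the arithmetic; a real pipeline would apply each warp to the image (e.g. with OpenCV) instead of just tracking the parameters:

```python
from itertools import product

# Hypothetical variant counts per transformation stage
perspectives = [0.9, 0.95, 1.0, 1.05, 1.1]   # perspective warp factors
rotations = [-4, -2, 0, 2, 4]                # degrees of rotation
skews = [-0.1, -0.05, 0.0, 0.05, 0.1]        # shear factors
proportions = [0.9, 1.0, 1.1, 1.2]           # global scale variations

def augment(images):
    """Yield one (image, params) entry per compounded transformation."""
    for img in images:
        for params in product(perspectives, rotations, skews, proportions):
            # A real pipeline would warp `img` here; this sketch
            # only enumerates the parameter combinations.
            yield img, params

base_set = ['drawing_%03d' % i for i in range(100)]  # stand-ins for 100 scans
augmented = list(augment(base_set))
print(len(augmented))  # 100 * 5 * 5 * 5 * 4 = 50,000
```

With five perspective, five rotation, and five skew variants plus four proportion variants, 100 base images already yield 50,000 samples; one extra variant at any stage pushes it well past that.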

Cleaning and balancing the data set prior to training was done by finding which element type had the fewest entries and culling the other types to that length. This guaranteed that each class had equal representation during training, so no single class biased the model over the rest.

              # Python 3
              # Balance Data
              mod_header = []
              mod_item = []
              mod_item_expanded = []
              mod_create_item = []

              header_arr = [1, 0, 0, 0]
              item_arr = [0, 1, 0, 0]
              item_expanded_arr = [0, 0, 1, 0]
              create_item_arr = [0, 0, 0, 1]

              test = []
              num_h = 1
              num_i = 1
              num_i_e = 1
              num_c_i = 1

              duplicates = 0
              test_num = 1

              for data in train_data:
                  img = data[0]
                  style = data[1]

                  if style == 'module_header':
                      mod_header.append([img, header_arr])
                      if num_h <= test_num:
                          test.append([img, header_arr])
                          num_h = num_h + 1
                  elif style == 'module_item':
                      mod_item.append([img, item_arr])
                      if num_i <= test_num:
                          test.append([img, item_arr])
                          num_i = num_i + 1
                  elif style == 'module_content':
                      mod_item_expanded.append([img, item_expanded_arr])
                      if num_i_e <= test_num:
                          test.append([img, item_expanded_arr])
                          num_i_e = num_i_e + 1
                  elif style == 'module_create_item':
                      mod_create_item.append([img, create_item_arr])
                      if num_c_i <= test_num:
                          test.append([img, create_item_arr])
                          num_c_i = num_c_i + 1

              # Cap each type at the size of the smallest class
              max_num = min(len(mod_header), len(mod_item), len(mod_item_expanded), len(mod_create_item))
              print("Max per type: %s" % max_num)

              # cull_length is a kwarg of the full function (not shown)
              if cull_length:
                  final_data = mod_header[:max_num] + mod_item[:max_num] + mod_item_expanded[:max_num] + mod_create_item[:max_num]
              else:
                  final_data = mod_header + mod_item + mod_item_expanded + mod_create_item

The latter end of the project was building a templated web page from the classification results. Once a shape was verified against the trained model, the result selected a hard-coded chunk of HTML source, which was appended to the HTML file with proper formatting. When all images had been identified, the generated HTML file was saved, notifying the renderer that the file had been updated so the browser could refresh with the newly applied changes.

            # Python 3
            # List for storing the page's component classifications
            current_frame = []
            # Loop through each isolated image and predict classification
            for img in img_isolated:
                img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
                img = cv2.threshold(img, 100, 255, cv2.THRESH_BINARY_INV)[1]
                img = cv2.dilate(img, np.ones((3, 3), 'uint8'), iterations=1)
                img = cv2.resize(img, (250, 250))

                # Predict choice against trained model
                prediction = model.predict([img.reshape(WIDTH, HEIGHT, 1)])[0]
                prediction_choice = np.argmax(prediction) + 1

                # Map the predicted class index back to its element name
                if prediction_choice == 1:
                    current_frame.append('module_header')
                elif prediction_choice == 2:
                    current_frame.append('module_item')
                elif prediction_choice == 3:
                    current_frame.append('module_content')
                elif prediction_choice == 4:
                    current_frame.append('module_create_item')

            for classification in current_frame:
                print('%s' % classification)

            web.create_template('WebView', current_frame)

            # Set List to empty after template creation
            current_frame = []
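The templating step behind `web.create_template` can be sketched as a lookup from each classification to its hard-coded HTML chunk, joined into a page and saved. The snippet markup, function body, and output path below are hypothetical stand-ins, not the project's actual templates:

```python
# Hypothetical HTML chunks, one per classified element type
SNIPPETS = {
    'module_header': '<header class="module-header">Header</header>',
    'module_item': '<div class="module-item">Item</div>',
    'module_content': '<div class="module-content">Expanded item</div>',
    'module_create_item': '<button class="create-item">Create</button>',
}

def create_template(title, classifications, path='webview.html'):
    """Write an HTML page containing one snippet per classification."""
    body = '\n'.join('    ' + SNIPPETS[c] for c in classifications)
    page = ('<!DOCTYPE html>\n<html>\n<head><title>%s</title></head>\n'
            '<body>\n%s\n</body>\n</html>\n' % (title, body))
    with open(path, 'w') as f:
        f.write(page)
    return page

page = create_template('WebView', ['module_header', 'module_item'])
print(page)
```

Saving the file is what triggers the browser refresh described above: a file watcher on the output path sees the write and reloads the page.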

The final result of the project was much more solid than I had initially expected. Check out the example below: