Automatic Connector Cutout Recognition (Part 3)

Oct 1, 2019

If you just want to try out the demo, it is here! If you haven't already, I recommend reading part 1 and part 2 of this blog post series. At the end of part 2, we had exported a machine learning model that can classify images of circuit components, ready for use in some kind of production environment. This blog post will explore how I created a demo for the model using Flask and Docker that allows a user to upload a 3D model of a circuit component and then have the face(s) of the component that need cutouts in an enclosure identified and shown to the user. If you want to see the source, check it out here. Keep in mind this is just a demo, and not yet production-grade!

Flask Image Prediction API

The FastAI model export file generated at the end of part 2 can be loaded in any environment, as long as the correct version of FastAI has been installed. This makes it relatively easy to deploy a model with very minimal code. A server running the model behind some kind of API was needed in order to return the model's output. I decided to stick with Flask, the Python web framework that EnclosureGenerator has been built on. This was relatively straightforward; all of the code needed for an initial API can be found below:

from fastai import *
from fastai.vision import *
from flask import Flask, jsonify, request
from PIL import Image
import io


app = Flask(__name__)


@app.route('/api/classify_image/', methods=['GET', 'POST'])
def classify_image():
    # Use the filename provided with the form data, if any
    filename = request.form.get("filename", "random.png")

    # Save the uploaded image to disk
    image = request.files["image"]
    img = Image.open(io.BytesIO(image.read()))
    img.save(filename)

    # Classify image. (For a demo this reloads the learner on every
    # request; a production app would load it once at startup.)
    image = open_image(filename)
    learner = load_learner('.', 'export.pkl')
    prediction = learner.predict(image)[0]
    return jsonify(str(prediction))

This function simply returns a JSON prediction of the class of an image. Given the exported model, the only possible classes are "connector" or "not connector". Pretty simple! For those interested in an extremely quick way to play with deploying an ML web app, FastAI put up a sample web app using Starlette, an exciting new asynchronous web framework.
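Calling the API from the client side just means building a multipart POST with the image file and an optional filename form field. A minimal sketch using the requests library is below; the host and port (5001, the port the classifier container is mapped to later in this post) and the helper name are my own choices for illustration:

```python
import io
import requests

def build_classify_request(image_bytes, filename="view.png",
                           url="http://localhost:5001/api/classify_image/"):
    """Prepare a multipart POST for the classification API sketched above."""
    req = requests.Request(
        "POST", url,
        files={"image": (filename, io.BytesIO(image_bytes), "image/png")},
        data={"filename": filename},
    )
    return req.prepare()

# Sending it (requires the Flask API to be running):
# resp = requests.Session().send(build_classify_request(open("front.png", "rb").read()))
# print(resp.json())  # "connector" or "not connector"
```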

3D Model to Image Generation

Up until this point, the creation of orthographic component images was done in batch mode over many sample images using a few scripts. To generate images for classification, I decided to use the Cadquery Python library, an excellent library for importing and interacting with 3D models. For this demo, the user will be limited to uploading STEP files, a specific kind of 3D model file. Most CAD applications can export STEP files, however, and it is the most common 3D filetype offered on vendor websites like Digikey or Mouser, so this shouldn't be an issue for most users.
One interesting thing to note for any Cadquery users/developers out there: I eventually settled on Cadquery 1.0 with the FreeCAD backend, as the newer (and still in development) 2.0 had abysmal SVG performance. This likely has something to do with Cadquery 2.0 switching to pure OpenCascade as the geometry kernel, but I haven't yet figured out why the performance between the two implementations is so drastically different. These performance considerations are outside the scope of this blog post, however, so I won't focus on them further.
Unfortunately, Cadquery can only export SVG images, which must be converted before they can be classified by the FastAI model. Using the wand library makes this pretty straightforward:

import os

import cadquery as cq
from wand.color import Color
from wand.image import Image


def create_images(connector_file, folder='ortho_views'):
    """Generate PNG images of each orthographic view of a STEP file."""
    if not os.path.exists(folder):
        os.mkdir(folder)
    connector = cq.importers.importStep(connector_file).combine()

    image_filenames = []

    # VIEWS maps each orthographic view name to its view vector, and
    # process_svg() cleans up the raw SVG; both are defined elsewhere
    for view_name in VIEWS:
        v = VIEWS[view_name]
        svg = connector.toSvg(view_vector=v)
        svg = process_svg(svg)
        img_name = os.path.join(folder, connector_file.split(".")[0] + "_" + view_name + '.png')
        image_filenames.append(img_name)
        svg_blob = svg.encode('utf-8')
        with Image(blob=svg_blob, format='svg') as img:
            img.format = "png"
            img.trim()
            img.transform(resize='200x200')
            # Pad the trimmed image back out to a square 200x200 canvas
            width, height = img.size
            height_border = (200 - height) // 2
            width_border = (200 - width) // 2
            img.border(Color('#FFFFFF'), width_border, height_border)
            img.sample(200, 200)

            img.save(filename=img_name)

    # Return the list of filenames
    return image_filenames


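The `VIEWS` dictionary referenced above is defined elsewhere in the script and is not shown in this post. A plausible sketch is simply a mapping of each orthographic view name to a view vector; the exact names and vectors below are my own guesses, not the original values:

```python
# Hypothetical view vectors for the six orthographic views of a model.
# Each vector is the direction the "camera" looks along.
VIEWS = {
    "front":  (0, 0, 1),
    "back":   (0, 0, -1),
    "left":   (-1, 0, 0),
    "right":  (1, 0, 0),
    "top":    (0, 1, 0),
    "bottom": (0, -1, 0),
}
```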
This function takes in a STEP file and generates PNG images of the orthographic views of the model. These views are then classified using the classifier developed in part 2 of this blog series. Once the images have been classified, they are grouped together into a larger image, with the views identified as needing a cutout outlined in red. This image is then shown to the user; an example looks like this:


Classification image of a surface-mount Micro USB component. Note the image that is bordered in red is the side of the component that requires a cutout to allow for a mating plug connector to be inserted.
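The step that assembles the classified views into that composite image isn't shown in this post. A minimal sketch with Pillow, assuming square 200x200 view PNGs and a set of filenames the classifier flagged as "connector" (the function name and layout are my own):

```python
from PIL import Image, ImageOps

def build_summary_image(filenames, flagged, tile=200, border=4):
    """Paste the orthographic views side by side, outlining in red any
    view whose classification flagged it as needing a cutout.
    `flagged` is the set of filenames labeled "connector"."""
    sheet = Image.new("RGB", (tile * len(filenames), tile), "white")
    for i, name in enumerate(filenames):
        view = Image.open(name).convert("RGB").resize((tile, tile))
        if name in flagged:
            # Crop then expand so the red outline sits inside the tile
            # and the tile stays exactly tile x tile pixels
            view = ImageOps.crop(view, border)
            view = ImageOps.expand(view, border=border, fill="red")
        sheet.paste(view, (i * tile, 0))
    return sheet
```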


Docker Implementation

Up until this point, the image classifier Flask API and the image generation Flask app were run and tested separately in different Python environments. Running them on a single server was going to be a problem, however: the image generation required Python 2.7 due to Cadquery, while FastAI required Python 3.6. By turning both Flask apps into Docker images connected via a local network, each application and its dependencies could be managed separately. This was my first real experience with Docker, and it was a true pleasure to get away from dependency worries with such ease. See below for the docker-compose file:

version: '3'

services:
  web:
    build: './flask_test'
    image: 'flask_test'
    command: python app.py
    ports:
      - '8000:8000'
    networks:
      - front-tier
    volumes:
      - './flask_test:/app'
    environment:
      FLASK_ENV: development
  cq:
    build: './image_generator'
    image: 'cq_test'
    ports:
      - '5000:5000'
    volumes:
      - './image_generator:/app'
    networks:
      - front-tier
      - back-tier
    environment:
      FLASK_ENV: development
  ai:
    build: './image_classifier'
    image: 'fastai_test'
    ports:
      - '5001:5000'
    volumes:
      - './image_classifier:/app'
    networks:
      - back-tier
    environment:
      FLASK_ENV: development
networks:
  front-tier:
  back-tier:

In order to test the end-to-end functionality of the image generation and image classification containers, I wrote another Flask app running in a separate container that simply serves a basic file upload page, checks that user-uploaded files are acceptable, and then routes those files through the image generation and image classification servers. This Flask app runs in the flask_test container in the code above.
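The skeleton of that routing app might look something like the sketch below. The `cq` hostname comes from the docker-compose service name, which Docker resolves on the shared front-tier network; the endpoint path and route are hypothetical stand-ins, and since the classifier (`ai`) sits only on the back-tier network, the classification round trip would be driven from the cq container rather than directly from here:

```python
import requests
from flask import Flask, request

app = Flask(__name__)

# "cq" is the image-generation service from docker-compose; the path
# is a hypothetical stand-in for the real image-generation endpoint.
IMAGE_GEN_URL = "http://cq:5000/api/create_images/"

ALLOWED_EXTENSIONS = (".step", ".stp")

@app.route("/upload", methods=["POST"])
def upload():
    # Reject anything that isn't a STEP file before forwarding it on
    step_file = request.files["model"]
    if not step_file.filename.lower().endswith(ALLOWED_EXTENSIONS):
        return "Only STEP files are accepted", 400
    # Forward the STEP file and relay the generated/classified result
    resp = requests.post(IMAGE_GEN_URL,
                         files={"model": (step_file.filename, step_file)})
    return resp.content, resp.status_code
```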

Deployment

It was finally time to deploy the component classification functionality so that anyone could demo it. DigitalOcean is the host for EnclosureGenerator, and I have only had good experiences with it, so I kept using it for this demo. For anyone wanting to do something similar, this walk-through largely covers setting up a Flask server on a DigitalOcean "droplet" (server). One thing that stumped me for a while was the initial creation of the FastAI Docker image on the droplet, where a strange error kept preventing pip from installing all dependencies. After some sleuthing, it turned out that the lowest-tier droplet didn't have enough memory to even install FastAI. The lowest-tier DigitalOcean droplet at the time of this writing has only 1GB of memory, and I had to move up to 3GB in order to install the FastAI dependency. After installing, the memory could be dropped back to 1GB.



It works! Shown here with a classified simplified HDMI connector.


Once the memory issue had been resolved, the servers were deployed. After adding a basic demo page to the EnclosureGenerator site, it was ready for action!

Conclusions

As is often the case, deploying the machine learning model was more challenging than building the model itself. This is a very simple deployment on a single server, and I would expect the difficulty to increase once greater scale is required. Docker was a powerful tool during this project, making dependency management, API testing, and the individual management of different components quite simple. It has quickly moved from a technology I had only played with to something I could not do without.

Future work

If you have any recommendations on new features, please take this survey and let me know what you think!

Some improvements to make in the future: