Working on automatically segmenting images by color for sewing. This is one of the toughest CS problems I’ve jumped into, especially because I get stubborn when results fall below expectations.
Current goal: segmenting a fairly cleanly drawn JPEG/PNG. Small number of colors, fairly continuous and clean areas, but some dirty edges.
I grabbed a Matlab license because hand-rolling all those image processing algorithms was taking too much time and wasn’t efficient enough.
This is our input image, a drawing of me made by our game artist from Pattern (Jeeze, that was 6 years ago?).
I found the LAB color space seems to work best. I’m not sure it will work best for other types of images, but it can capture most color differences using just the a* (green to red) and b* (blue to yellow) layers, the color-focused layers, while L holds the lightness.
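For anyone following along outside Matlab, here’s a rough Python sketch of that color-space step using scikit-image’s `rgb2lab` (a stand-in for Matlab’s function of the same name; the tiny test image is made up):

```python
import numpy as np
from skimage import color

def ab_channels(rgb):
    """Convert an RGB image (H, W, 3, values 0-255) to LAB and
    return only the color-focused a* and b* channels."""
    lab = color.rgb2lab(rgb.astype(np.float64) / 255.0)
    # lab[..., 0] is L (lightness); drop it, keep a* and b*
    return lab[..., 1:]

# Tiny solid-red test image
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[..., 0] = 255
ab = ab_channels(rgb)
print(ab.shape)  # (4, 4, 2)
```

For pure red, a* comes out strongly positive, which is exactly the kind of separation the clustering step leans on.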
Running a k-means clustering algorithm on the a* and b* channels with ~16 clusters, these are the sections it found (each cluster is a different shade of gray):
I’m pretty happy with this as a baseline. The main problem, though, is that ignoring L causes some colors to be conflated, namely white and black (the background and the shoes/pants), which have opposite L values. I haven’t found many tutorials on programmatically deciding whether a cluster needs to be split further (binarized on L), so I’m really forging ahead on my own now. I just hope it works. Wish me luck!
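One heuristic worth trying for that open question (my own sketch, not an established recipe): within each a*b* cluster, Otsu-threshold the L values and only accept the split when the two lightness modes are far apart. The `min_separation` cutoff and the helper name are made up here:

```python
import numpy as np
from skimage.filters import threshold_otsu

def maybe_split_on_L(L_values, min_separation=30.0):
    """L_values: 1-D array of L (0-100) for the pixels in one
    a*b* cluster. Returns a boolean mask (True = bright side)
    if the cluster looks bimodal in lightness, else None."""
    if L_values.size < 2 or np.ptp(L_values) < min_separation:
        return None  # cluster is uniform in lightness; leave it
    t = threshold_otsu(L_values)
    bright = L_values > t
    if not bright.any() or bright.all():
        return None  # threshold didn't actually split anything
    if L_values[bright].mean() - L_values[~bright].mean() < min_separation:
        return None  # modes too close; probably shading, not two colors
    return bright

# A white/black-style cluster: half dark pixels, half bright
L = np.concatenate([np.full(50, 5.0), np.full(50, 95.0)])
mask = maybe_split_on_L(L)
print(None if mask is None else int(mask.sum()))  # 50
```

A uniform cluster (say, all mid-gray) returns `None` and stays whole, while the conflated white/black cluster gets split into two.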