Published: 22nd November, 2019

A website that uses machine learning to identify the flavour of a jelly bean you point your camera at.

The Background

Last weekend some of hackSheffield went to HackNotts! In the first two hours of hacking my team just couldn’t decide what to make for our project until I suddenly had an idea:

"What if we used machine learning to identify the flavours of the Capital One jelly beans?"

Whenever Capital One attend a hackathon, they always give away small containers of jelly beans. The flavours are not identified on the packaging, and some beans are renowned for being unpleasant (cinnamon is so strong it’s sometimes misidentified as chilli). So we decided to use a neural network to identify the flavours via the camera.

The Data Collection

One of my primary tasks was tasting and categorising the beans so that we could train a model. I’ve never eaten so many jelly beans in my life, and I don’t even particularly like jelly beans. Nonetheless, I am now an expert in Capital One jelly bean flavours (seriously, if you want to know a flavour just ask me). Once the beans had been categorised, we took videos of them, moving the camera around each bean at different angles, and also filmed in different lighting conditions. We then split the videos into their individual frames to use as the images to train with.

A photo of the classified beans

The Neural Network

We used Python to build our model for identifying the beans. We wrote it in Google Colaboratory, as it gave us effectively unlimited computing power for free: when our program began to crash as we ran out of RAM, Colab eventually gave us more RAM to work with. We trained our model as a convolutional neural network, and solving the RAM problem meant balancing training efficiency against model accuracy to reach a suitable compromise.

Having no experience with TensorFlow or neural networks, we started from a TensorFlow example of a handwriting classifier. However, it was quite different from our final network; one example being that the handwriting was processed in black and white, which obviously would not have worked for identifying differently coloured beans! Using the photographic data we collected of the beans, we split the frames into training and testing data for the model. In training phases our model scored as highly as 98% accuracy; in practice, however, the accuracy was not as high.
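The kind of small convolutional network described above can be sketched with Keras. This is a minimal illustration, not our exact architecture; the input size, filter counts, and layer sizes are hypothetical (keeping the input resolution small is one way to trade accuracy for lower RAM use, as mentioned above):

```python
import tensorflow as tf

def build_model(num_flavours, input_shape=(64, 64, 3)):
    """A small CNN classifier over colour (RGB) bean images."""
    model = tf.keras.Sequential([
        # Colour input is essential here: a greyscale pipeline (as in the
        # handwriting example) would throw away the flavour information.
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        # One probability per flavour class.
        tf.keras.layers.Dense(num_flavours, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

A model like this would then be trained with `model.fit(...)` on the training frames and evaluated with `model.evaluate(...)` on the held-out testing frames.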

The Website

The website is mostly built in JavaScript and served via Netlify. It loads the model from Google Cloud and shows the camera feed in a window. When a bean is found in the camera image, it shows the cropped bean image in the lower window, with the identified flavour labelled on top of the bean. The identification is made using the model and TensorFlow.js. Once we overcame the RAM issue, we discovered that the model basically wanted everything (bean or no bean) to be mint sorbet flavour. We believed this was either an issue with our training methods, or a problem with the data we collected/not enough data. Despite these issues we still had some correct identifications. The model was very consistent in its classification of any given bean, and when it was wrong it usually picked a close colour, so we believed there must only be a minor fault in the code somewhere. We unfortunately ran out of time to find this fault during the hackathon.
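A model collapsing to a single class, as ours did with mint sorbet, is often a symptom of class imbalance in the training data. A quick sanity check along these lines (the function name and the flavour labels below are purely illustrative) needs nothing beyond the standard library:

```python
from collections import Counter

def label_distribution(labels):
    """Return each label's share of the dataset, most common first."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.most_common()}
```

If one flavour dominates the distribution, the network can score well simply by predicting it for everything, which matches the behaviour we observed.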

The Result

HackNotts 2019 was a fantastic experience: we learnt so much about convolutional neural networks and TensorFlow, neither of which any of us had used before. We also learnt a lot about gathering training data, and how difficult it can be to train a neural network properly. Our hack was a finalist, which meant that we demonstrated beanify to all the attendees at the event. Throughout the entire event we had really positive responses from attendees, sponsors, and organisers. Several people came up to us and asked if I had OCD, or why we had SO MANY beans. When we explained our project, people were quite interested and wanted to try identifying beans themselves. Our team won the prize for “Best use of Google Cloud”!

The Devpost for this project is here, although the write-up you’ve just read is more in-depth than the content there.