Week 10

Pixel Oracle, Web Experiments


Pixel Oracle Website

Over the weekend, I was thinking about my pixel oracle experiment and felt there was more I could do with it. So I set myself a goal of materialising the system into a website.

v1: Wireframe + TeachableMachine Model

Wireframing + Interactive TeachableMachine Model

I started with a brief instructional explanation on the website, followed by the interactive portion on the right, where I uploaded my TeachableMachine model. Then I created a system that allowed users to upload their own images to receive their reading.
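A minimal sketch of how the reading could work under the hood, assuming the standard Teachable Machine image export and its prediction format (`[{ className, probability }, ...]`). The browser wiring with the `@teachablemachine/image` library is shown only as comments; `topPrediction()` is a hypothetical helper name, not code from the actual site.

```javascript
// Hypothetical sketch: turning an uploaded image into an oracle reading.
// In the browser, the exported Teachable Machine model would be used
// roughly like this (assumption: standard @teachablemachine/image export):
//   const model = await tmImage.load("model.json", "metadata.json");
//   const predictions = await model.predict(uploadedImageElement);

// Pick the class with the highest probability from a prediction array
// shaped like [{ className, probability }, ...].
function topPrediction(predictions) {
  return predictions.reduce((best, p) =>
    p.probability > best.probability ? p : best
  );
}

// Example prediction array with the three oracle answers.
const reading = topPrediction([
  { className: "Yes", probability: 0.21 },
  { className: "My Sources Say No...", probability: 0.64 },
  { className: "Maybe", probability: 0.15 },
]);
console.log(reading.className); // "My Sources Say No..."
```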

Brief ASCII Rabbithole

I got a little distracted along the way and thought about how to present the title text "pixel oracle". I wanted to see what it would look like as ASCII art.

ASCII generation



I then tried placing it where the title text would be, but I couldn't get it to render in the right format in VS Code.



At some point, I accidentally pasted the ASCII into the Google search bar and thought it looked pretty cool as a single line. (In fact, what if our searches were visualised as blocks like these instead of text, almost like Morse code?)



So, I ended up using the ASCII art as a divider for the page.
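The "search bar" effect above can be sketched as flattening a grid of block characters into one line. This is a hypothetical illustration, not the code behind the actual divider; it assumes the ASCII art is a binary grid rendered with the block character.

```javascript
// Hypothetical sketch: rendering a binary pixel grid as ASCII blocks,
// both as multi-line art and flattened into a single divider line
// (the accidental "search bar" effect).

const BLOCK = "\u2588"; // █
const SPACE = " ";

// Render a 2D grid of 0s and 1s as multi-line ASCII art.
function toAscii(grid) {
  return grid
    .map((row) => row.map((px) => (px ? BLOCK : SPACE)).join(""))
    .join("\n");
}

// Flatten the same grid into one line, like pasting it into a search bar.
function toDivider(grid) {
  return grid.flat().map((px) => (px ? BLOCK : SPACE)).join("");
}

const grid = [
  [1, 0, 1],
  [0, 1, 0],
];
console.log(toAscii(grid));   // two rows of blocks, one per line
console.log(toDivider(grid)); // the same six cells on a single line
```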



This was a slight digression, but it got me interested in the deconstruction of ASCII text, and how it could symbolically hold the meaning of the original text even when pulled apart.

v2: Interactive Model Testing + Explanation

Website + Explanation

The final website is pretty much the same, but I included an area below that explained the concept of the system. I wondered if I was overexplaining, but I decided to go ahead first and do some user testing afterwards.

Asking a tree for answers

Asking grandma for answers

Feedback from friends

I got Zarer and Alda to try the website, and got a few feedback points from them. The overall feedback was that the website was not intuitive enough and that there was too much text. The explanation text could come at a later part of the website, and the process of inputting the image and receiving the reading could be more visually descriptive of the process of noise divination.

Feedback from Andreas

When I showed Andreas the website, he questioned the accuracy of the model. We tested a few images, and most of them returned "My Sources Say No..." as the answer. I wondered why this happened, since all the images the model was trained on were truly random. I decided to consult Blackbox AI.

Question: My machine learning model is trained on 30 images of 1000x1000px, each pixel is randomly assigned a black or white color, creating a truly random image. There are 3 classes and each class is trained on 10 images. However, I seem to get the result "my sources say no" more than the other two. Why could that be?
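For reference, the training data described in the question could be generated roughly like this. A sketch only: the 1000x1000 size and the even black/white coin flip come from the question above, and writing the grid out as an actual image file is omitted since it would need an image library.

```javascript
// Sketch of the training data described above: a grid where each pixel
// is independently black (1) or white (0) with equal probability.

function randomNoiseImage(width, height) {
  return Array.from({ length: height }, () =>
    Array.from({ length: width }, () => (Math.random() < 0.5 ? 1 : 0))
  );
}

// Fraction of black pixels; should hover around 0.5 for true noise.
function blackRatio(grid) {
  const flat = grid.flat();
  return flat.reduce((sum, px) => sum + px, 0) / flat.length;
}

const img = randomNoiseImage(1000, 1000); // same size as the training images
console.log(blackRatio(img).toFixed(3)); // ~0.500
```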

It suggested some possibilities:

Class Imbalance: Even though you trained on 10 images per class, there might be some subtle imbalance in the data that's causing the model to favor one class over others.

Random Chance: Given that your images are truly random, it's possible that by chance, the "my sources say no" class had patterns that were more distinctive or easier for the model to learn.

Model Overfitting: With only 30 images total, there's a high risk of overfitting. The model might have latched onto specific patterns in the "my sources say no" images that aren't actually meaningful.

Threshold Effects: The model might be more confident in its "my sources say no" predictions, leading to more frequent selection of this class when applying classification thresholds.

Input Processing: If there's any preprocessing of the input images before classification that differs from the training process, it could inadvertently favor one class.

Model Architecture: The specific architecture of your model might be more prone to detecting features present in the "my sources say no" images.

Random Initialization: The initial random weights of the model could have led to a bias towards one class that persisted through training.

Testing Data Distribution: If you're testing on a small number of images, you might be seeing a temporary bias in results that would even out with more tests.
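The last point (testing data distribution) is easy to sanity-check: even a model that picked uniformly among the three answers would often look lopsided over a handful of tests. A hypothetical simulation, not connected to the real model:

```javascript
// Sanity check: if the model chose uniformly among the three answers,
// how often would a small batch of tests still look biased (one answer
// taking half or more of the batch)?

const CLASSES = ["Yes", "My Sources Say No...", "Maybe"];

// Draw one uniform prediction (a stand-in for a perfectly unbiased model).
function fairPrediction() {
  return CLASSES[Math.floor(Math.random() * CLASSES.length)];
}

// In a batch of n fair predictions, does some class win half or more?
function batchLooksBiased(n) {
  const counts = {};
  for (let i = 0; i < n; i++) {
    const c = fairPrediction();
    counts[c] = (counts[c] || 0) + 1;
  }
  return Math.max(...Object.values(counts)) >= n / 2;
}

// Estimate how often a batch of 6 tests looks biased, over many trials.
let biased = 0;
const trials = 10000;
for (let t = 0; t < trials; t++) if (batchLooksBiased(6)) biased++;
console.log((biased / trials).toFixed(2)); // most batches look skewed
```

So seeing one answer dominate a few manual tests is not, by itself, proof the model is biased; a larger, counted test run would be needed to tell.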



Overall, I think this was a great exercise in initial user testing and getting feedback on the website. I was able to identify a few problems that I can work on from here.

A few changes I intend to make are:
1. Rethink the overall user experience and the amount of text it takes to understand the concept. I want to try presenting the system as a physical interactive experience, maybe with a camera.
2. Test the model again to find out why one result is more common than the others.