Week 12

Jake's Consult and 3D Experiments


Pixel Oracle Website

Over the weekend, I was thinking about my pixel oracle experiment and felt there was more I could do with it. So I set myself a goal of materialising the system into a website.

v1: Wireframe + TeachableMachine Model

Wireframing + Interactive TeachableMachine Model

I started with a brief instruction/explanation on the website, followed by the interactive portion on the right, where I uploaded my TeachableMachine model. Then I created a system that allowed users to upload their own images to receive their reading.

Brief ASCII Rabbithole

I got a little distracted along the way, thinking about how to present the title text "pixel oracle". I wanted to see what it would look like as ASCII art.

ASCII generation
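The generator I used was an online tool, but the underlying idea is simple: each pixel's brightness picks a character from a dark-to-light ramp. A minimal Python sketch of that mapping (the ramp and the tiny test pattern here are my own illustration, not what the tool actually uses):

```python
# Map pixel brightness (0 = black, 255 = white) onto a character ramp,
# darkest to lightest -- the basic mechanism behind image-to-ASCII tools.

RAMP = "@#%*+=-:. "  # 10 steps, dark to light

def to_ascii(pixels):
    """pixels: 2D list of brightness values 0-255 -> ASCII art string."""
    lines = []
    for row in pixels:
        # Scale brightness into an index on the ramp.
        lines.append("".join(RAMP[min(p, 255) * (len(RAMP) - 1) // 255] for p in row))
    return "\n".join(lines)

# A tiny test pattern: a dark block on a light background.
pattern = [
    [255, 255, 255, 255],
    [255, 0, 0, 255],
    [255, 0, 0, 255],
    [255, 255, 255, 255],
]
print(to_ascii(pattern))  # prints a small " @@ " block
```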



I then tried placing it where the title text would be, but I couldn't get it into the right format in VSCode.



At some point, I accidentally pasted the ASCII into the Google search bar and thought it looked pretty cool as a single line. (In fact, what if our searches were visualised as blocks like these instead of text, almost like Morse code?)



So, I ended up using the ASCII art as a divider for the page.



This was a slight digression, but it got me interested in the deconstruction of ASCII text, and how it could symbolically hold the meaning of the original text even when pulled apart.

v2: Interactive Model Testing + Explanation

Website + Explanation

The final website is pretty much the same, but I included an area below that explained the concept of the system. I wondered if I was overexplaining, but I decided to just go ahead first and do some user testing afterwards.

Asking a tree for answers

Asking grandma for answers

Feedback from friends

I got Zarer and Alda to try the website and gathered a few feedback points from them. The overall feedback was that the website was not intuitive enough and had too much text: the explanation could come later in the page, and the process of inputting an image and receiving a reading could be more visually descriptive of the process of noise divination.

Feedback from Andreas

When I showed Andreas the website, he questioned the accuracy of the model. We tested a few images, and most of them returned "My Sources Say No..." as the answer. I wondered why this happened, since all the images the model was trained on were truly random. I decided to consult Blackbox AI.

Question: My machine learning model is trained on 30 images of 1000x1000px; each pixel is randomly assigned a black or white color, creating a truly random image. There are 3 classes, and each class is trained on 10 images. However, I seem to get the result "my sources say no" more than the other two. Why could that be?

It suggested some possibilities:

Class Imbalance: Even though you trained on 10 images per class, there might be some subtle imbalance in the data that's causing the model to favor one class over others.

Random Chance: Given that your images are truly random, it's possible that by chance, the "my sources say no" class had patterns that were more distinctive or easier for the model to learn.

Model Overfitting: With only 30 images total, there's a high risk of overfitting. The model might have latched onto specific patterns in the "my sources say no" images that aren't actually meaningful.

Threshold Effects: The model might be more confident in its "my sources say no" predictions, leading to more frequent selection of this class when applying classification thresholds.

Input Processing: If there's any preprocessing of the input images before classification that differs from the training process, it could inadvertently favor one class.

Model Architecture: The specific architecture of your model might be more prone to detecting features present in the "my sources say no" images.

Random Initialization: The initial random weights of the model could have led to a bias towards one class that persisted through training.

Testing Data Distribution: If you're testing on a small number of images, you might be seeing a temporary bias in results that would even out with more tests.
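The last point is easy to sanity-check: with only a handful of test images, even a perfectly unbiased three-class model can produce skewed-looking counts by chance. A quick Python sketch, simulating an unbiased model and measuring the skew with a chi-square statistic (the sample size of 15 is my assumption about a small test run, not the number we actually tried):

```python
# How skewed can results look by pure chance? Simulate classifying n images
# with a truly unbiased 3-class model and compute a chi-square statistic.
# With 2 degrees of freedom, values under ~5.99 are consistent with chance
# at the 5% level.
import random

random.seed(7)  # fixed seed so the demo is repeatable

def chi_square(counts):
    """Chi-square statistic of observed counts vs a uniform expectation."""
    expected = sum(counts) / len(counts)
    return sum((c - expected) ** 2 / expected for c in counts)

n = 15  # a small test run
counts = [0, 0, 0]  # Yes / No / Ask Again Later
for _ in range(n):
    counts[random.randrange(3)] += 1  # unbiased "prediction"

print(counts, round(chi_square(counts), 2))
```

If the real model's counts give a much larger statistic over many test images, the bias is probably in the model rather than in the luck of a small sample.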



Overall, I think this was a great exercise in initial user testing and getting feedback on the website. I was able to identify a few problems that I could work on from here.

A few changes I intend to make are:
1. Rethink the overall user experience and the amount of text it takes for users to understand the concept. I want to try presenting the experiment as a physical interactive experience, maybe with a camera.
2. Test the model again to see what the problem is, and why one result is more common than the others.
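For the second change, one way to retest is with fresh images that match the training setup: 1000x1000px, each pixel independently black or white. A sketch that writes such noise as a plain-text PBM image (the function name and output path are just for illustration, and this isn't my original generation method):

```python
# Generate a "truly random" black-and-white noise image like the training
# set: every pixel is independently 0 or 1. Written as an ASCII PBM (P1)
# file, where 1 = black and 0 = white; most image tools can convert it
# to PNG for uploading to the model.
import random

def random_noise_pbm(path, size=1000, seed=None):
    rng = random.Random(seed)
    rows = []
    for _ in range(size):
        rows.append(" ".join(str(rng.getrandbits(1)) for _ in range(size)))
    with open(path, "w") as f:
        f.write(f"P1\n{size} {size}\n")
        f.write("\n".join(rows) + "\n")

random_noise_pbm("noise_test.pbm", size=64, seed=42)  # small demo image
```

Running this repeatedly with different seeds would give an endless supply of test images that are, by construction, indistinguishable from the training data.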

Extension for Manifestation

I stumbled onto an article about a trend called the Manifinsta.

Click to view article



I thought back to my writings about the ritual back in week 4. I believe this could be viewed as a modern "superstition" in which we get back what we put out into the universe (in this case, on the internet).

Some interesting quotes from the article:

“There’s a culture of self-deprecation online that I’ve started to reject”

"As young people continue to turn away from traditional organised religions, ideas and techniques like manifestation are ultimately getting picked up by Silicon Valley-type apps and companies."

"While posting that your job interview will go well or that your ex-boyfriend still definitely “needs you” to ten people are harmless, the premise that everything you consume and engage with online builds your future reality is terrifying."

This is yet another example of a well-meaning trend with roots in our old divination practices. The article warns of the risks of such trends, as they still allow insidious advertisers to use the data we put up for their own gain. It seems only a matter of time before our personal manifestations are used by algorithms to sell us products, or to manipulate us in other ways.

This raises the question: is there a way we can create a self-contained manifestation portal that doesn't rely on the internet?


I was reminded of Maya Man's project 'Glance Back', which I found a while ago. The Chrome extension prompts the user to take a photo of themselves at random moments of the day and type a short caption of what they were doing at that moment.

The extension served as a series of little interventions throughout the day, taking the user out of the trance we often fall into while browsing the internet. Man stressed that the images are self-contained and saved to local storage, making the experience one shared purely between the user and the computer.


SFPC_Malware_Anthology.zip

A "virus" for lost emails

I also remembered seeing a collection of projects by School for Poetic Computation students, in which they created a series of "viruses" as interventions. The idea of a virus infiltrating a user's experience, but turned on its head for positive interventions, was genius to me, and I wanted to see if I could create my own version of it.

Light a Candle

Lighting a Candle in Temples

I thought of the act of lighting a candle in temples, which I used to do with my family. I don't have a good picture, so the stock photo on the left will do.

I got the idea to create an extension that could serve as a user's personal manifestation platform: one that is self-contained and does not require the user to post to any online platform.


So I tried my hand at making an extension.


