Browser-Based Brains: Spotting Specific Magnets with JavaScript

Have you ever seen a piece of technology and wondered, “How does it do that?” We just built a pretty powerful example: a smart little web page that can look at an image and tell you what kind of magnet it sees. Specifically, it’s been trained to spot Spherical Rare Earth Magnets, or SREMs. It can tell if an object is a SREM, isn’t one, or is just something nearby.

This isn’t just a clever bit of code. It’s a practical peek into the powerful and surprisingly accessible world of in-browser Machine Learning (ML). Let’s pull back the curtain and see how it all works.

The Magic is Machine Learning

At its heart, this demo is powered by ML. In a nutshell, Machine Learning is a way of teaching a computer to find patterns on its own. Instead of writing hard-coded rules like, “If a pixel is silver and perfectly round, it might be a magnet,” we take a different path. We simply show the computer thousands of pictures, telling it each time, “This is a SREM,” “This is not a SREM,” or “This is near a SREM.”

Slowly but surely, the machine learning model begins to recognize the subtle visual features (shapes, textures, surrounding context) that distinguish one class from another. It builds its own complex criteria, creating a kind of computational “intuition.” After sufficient training, it can make surprisingly accurate predictions about new images it has never seen before.

The Toolkit: How TensorFlow.js Brings AI to the Browser

The secret sauce that makes this all possible right here on this web page is a fantastic JavaScript library called TensorFlow.js.

Traditionally, heavy-duty ML has been the job of powerful servers. You would send your data to a server, it would do the calculations, and it would send the result back. TensorFlow.js changes the game by allowing these complex computations to happen directly within your web browser.
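To make that concrete, here is a minimal sketch (not the demo’s actual code) of computation happening entirely in the page. It assumes TensorFlow.js has already been loaded as the global tf object, for example via the official CDN script tag.

```javascript
// A hedged sketch, not the demo's code. Assumes TensorFlow.js has already
// been loaded into the page as the global "tf", e.g. via:
//   <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
async function demoInBrowserMath() {
  // Everything below runs on the visitor's device (WebGL or CPU backend),
  // with no server round-trip.
  const a = tf.tensor1d([1, 2, 3]);
  const doubled = a.mul(2);             // element-wise multiply
  const values = await doubled.data();  // read the result back as a typed array
  a.dispose();                          // free tensor memory explicitly
  doubled.dispose();
  return Array.from(values);            // [2, 4, 6]
}
```

The same machinery that doubles three numbers here is what pushes image pixels through a neural network in the demo.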

Here’s how our demo leverages it:

  1. A Pre-Trained Model: The most important part is the “brain” of the operation, which is a pre-trained model. This consists of a few files, most notably a model.json file. You can think of this file as a blueprint of the model’s architecture; it also points to one or more binary weight files that hold the millions of tiny learned values, or “weights,” that represent the model’s knowledge. Our script’s first job is to fetch these files.
  2. Running the Prediction: When you upload an image or enable your webcam, TensorFlow.js gets to work. It can take the image data directly from an HTML <img> tag or a <canvas> element. It then converts this visual data into a format it can understand, called a tensor.
  3. Instant Inference: This tensor is passed through the loaded model’s layers right on your device. The model performs millions of calculations, a process called “inference,” to arrive at a conclusion. Because this all happens locally, the process is incredibly fast. There is no waiting for a server to respond. It also means your images are completely private; they are never sent anywhere.
  4. Getting the Results: The model’s output is a simple list of probabilities for each class. For instance, it might come back with: Srem: 0.01, Non Srem: 0.02, Near Srem: 0.97. Our JavaScript code simply reads these values and uses them to update the colorful progress bars on the screen.
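The four steps above can be sketched in code. This is a hedged illustration rather than the demo’s exact source: the model URL, the 224×224 input size, the pixel scaling, and the helper names are all assumptions, and tf is the global TensorFlow.js object.

```javascript
// Sketch of the load → preprocess → predict → read-out flow. Model URL,
// input size, scaling, and function names are assumptions, not the demo's code.
const CLASS_LABELS = ['Srem', 'Non Srem', 'Near Srem'];

let model;
async function loadModel() {
  // Step 1: fetch model.json (the architecture) plus the binary weight
  // files it references.
  model = await tf.loadLayersModel('model/model.json');
}

async function classifyImage(imgElement) {
  // Steps 2 and 3: convert the <img> pixels to a tensor and run inference
  // locally, inside tf.tidy() so intermediate tensors are freed.
  const output = tf.tidy(() => {
    const input = tf.image
      .resizeBilinear(tf.browser.fromPixels(imgElement), [224, 224])
      .toFloat()
      .div(255)        // scale pixels to 0–1 (match your model's training preprocessing)
      .expandDims(0);  // add a batch dimension: [1, 224, 224, 3]
    return model.predict(input);
  });
  const scores = await output.data(); // one probability per class
  output.dispose();
  return formatResults(Array.from(scores), CLASS_LABELS);
}

// Step 4: pair each probability with its label so the UI can draw the bars.
function formatResults(scores, labels) {
  return scores
    .map((probability, i) => ({ label: labels[i], probability }))
    .sort((a, b) => b.probability - a.probability);
}
```

formatResults is a plain helper; returning a sorted list makes it easy to highlight the winning class while still drawing a bar for every class.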

And that’s it! From providing an image to seeing the probabilities, the whole process is a smooth, self-contained loop that happens entirely on your machine.
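For the webcam case, that self-contained loop can be sketched like this. The function names and the smoothing step are illustrative additions, not the demo’s actual code; classify stands in for any function that takes a video element and resolves to one probability per class.

```javascript
// Illustrative sketch of the real-time loop; names are assumptions.
function smoothScores(previous, current, alpha = 0.3) {
  // Exponential moving average: damps frame-to-frame flicker in the bars.
  if (!previous) return current;
  return current.map((p, i) => alpha * p + (1 - alpha) * previous[i]);
}

function startLoop(videoElement, classify, onScores) {
  let smoothed = null;
  async function frame() {
    const scores = await classify(videoElement); // local inference on this frame
    smoothed = smoothScores(smoothed, scores);   // stabilize the displayed values
    onScores(smoothed);                          // redraw the progress bars
    requestAnimationFrame(frame);                // schedule the next frame
  }
  frame();
}
```

The moving average is optional; it simply keeps the bars from jittering when consecutive frames disagree slightly.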

Why This Is So Cool

This approach unlocks a whole world of possibilities for creating interactive, intelligent, and private web applications. From real-time gesture recognition to creative art projects that react to your face, running ML in the browser makes the web a more powerful and personal place. It’s a perfect pairing of pattern-finding power and front-end finesse.

Dig Deeper

This project just scratches the surface. If you’re curious and want to learn more about how to build amazing things with this technology, here are a few great places to start: