From Code to Click: The Rise of No-Code Image Classification
Source Publication: Scientific Publication
Primary Authors: Zwilling, Astier, Abbou et al.

The Tailor and the Automated Loom
Imagine you need a bespoke suit. Traditionally, in the world of computer science, you could not simply buy one off the rack. You had to be the tailor. You needed to understand the texture of the cloth (data), how to thread the needle (Python), and the complex geometry of the stitch (algorithms). If you missed a single loop, the entire garment would fall apart at the seams. This requirement has long kept experts in other fields—like doctors or biologists—locked out of the room. They know what the suit should look like, but they cannot sew.
This is where no-code image classification changes the dynamic. It replaces the needle and thread with an automated loom. You simply feed in the fabric, select a pattern on a touchscreen, and the machine handles the weaving. A recent paper introduces VisuelAIclassification, a piece of software designed to act as this automated loom for researchers who lack programming expertise.
How No-Code Image Classification Works
The software, built with a graphical interface called CustomTkinter, breaks down the formidable wall of machine learning into clickable steps. It operates on a simple logic flow. If you have a collection of images, the system first helps you organise them. It functions like a high-speed library sorting office.
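The sorting step might be sketched as follows. The folder layout and the convention that a file's label is encoded in its name (e.g. "neutrophil_001.png") are illustrative assumptions, not details from the paper:

```python
from pathlib import Path
import shutil

def organise(source_dir: str, target_dir: str) -> dict[str, int]:
    """Sort a flat folder of images into one subfolder per class label.

    Assumes the label is the first underscore-separated part of the
    filename, e.g. "neutrophil_001.png" -> class "neutrophil".
    """
    counts: dict[str, int] = {}
    for image in Path(source_dir).glob("*.png"):
        label = image.stem.split("_")[0]
        class_folder = Path(target_dir) / label
        class_folder.mkdir(parents=True, exist_ok=True)
        shutil.copy(image, class_folder / image.name)
        counts[label] = counts.get(label, 0) + 1
    return counts
```

In the actual tool this bookkeeping happens behind the graphical interface; the user only points the software at a folder.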
Consider the mechanism of 'data augmentation'. In a standard coding environment, you would write a script to modify your images and so create a larger dataset. Here, the software automates that step. If you upload an image of a blood cell, the system can create copies that are rotated, flipped, or zoomed in. The computer then learns that a cell is still a cell, even if it is upside down. This strengthens the model's ability to recognise objects in the real world.
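Treating an image as a pixel array, the augmentation idea can be sketched with plain array operations. The helper name and the centre crop standing in for 'zoom' are assumptions for illustration, not the software's actual pipeline:

```python
import numpy as np

def augment(image: np.ndarray) -> list[np.ndarray]:
    """Return simple augmented variants of one image array."""
    h, w = image.shape[:2]
    return [
        np.fliplr(image),   # horizontal flip
        np.flipud(image),   # vertical flip
        np.rot90(image),    # 90-degree rotation
        image[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4],  # centre crop ("zoom")
    ]

# A stand-in 8x8 grayscale "blood cell" image
cell = np.arange(64).reshape(8, 8)
augmented = augment(cell)
print(len(augmented))  # four extra training samples from one original
```

Each variant keeps the same label as the original, which is exactly how the model learns that orientation does not change what the object is.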
The process follows a strict 'If... then...' structure:
- If the user uploads a raw dataset, then the software scrubs the metadata to ensure privacy.
- If the 'Train' button is pressed, then the underlying algorithms begin pattern recognition without the user seeing a line of code.
- If the model is tested against new images, then it outputs a classification accuracy score immediately.
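The three rules above can be sketched as a minimal event-handler flow. The function names and the stubbed majority-class 'model' are illustrative assumptions, not the paper's implementation:

```python
def upload(raw_dataset):
    """If a raw dataset is uploaded, then scrub the metadata for privacy."""
    return [{"pixels": d["pixels"], "label": d["label"]} for d in raw_dataset]

def train(dataset):
    """If 'Train' is pressed, then fit a model without exposing any code."""
    labels = [d["label"] for d in dataset]
    majority = max(set(labels), key=labels.count)
    return lambda pixels: majority  # stub "model": always predicts the majority class

def evaluate(model, new_images):
    """If the model is tested on new images, then output an accuracy score."""
    correct = sum(model(d["pixels"]) == d["label"] for d in new_images)
    return correct / len(new_images)

data = [
    {"pixels": [0, 1], "label": "cell", "metadata": {"patient": "A"}},
    {"pixels": [1, 0], "label": "cell", "metadata": {"patient": "B"}},
    {"pixels": [9, 9], "label": "debris", "metadata": {"patient": "C"}},
]
clean = upload(data)
model = train(clean)
score = evaluate(model, clean)  # two of three labels match the majority class
```

In the real tool, each handler would be wired to a button in the CustomTkinter interface; the user only ever sees the clicks and the final score.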
Bridging the Medical Gap
The stakes are high. In the medical sector, 71% of institutions report that a lack of technical skill prevents them from using AI. They have the data—thousands of X-rays or blood slides—but not the coding ability. The creators of this software demonstrated its utility using a hematology dataset, effectively sorting blood cells with the tool. While the study presents the software's capabilities on a specific dataset, it suggests a broader potential: democratising access to powerful vision tools. It allows a doctor to remain a doctor, rather than forcing them to become a part-time software engineer.