
AI image recognition: exploring limitations and bias

Integrating Digital Technologies
Years 5-6; 7-8

A hands-on activity to practise training and testing an artificial intelligence (AI) model using cartoon faces, with a discussion of potential sources of algorithmic bias and how to respond to them.

Suggested steps

If you already have a teacher account and student logins fully set up on the Machine Learning for Kids website, please skip to Training the AI.

Training the AI

Follow the steps below to train an AI model to recognise cartoon faces wearing glasses or sunglasses.

  1. Log in to the Machine Learning for Kids website. Students use the names and passwords already set up by the teacher.
  2. Go to Projects.
  3. Select Add a new project.
  4. Enter a project name, eg Algorithmic bias.
  5. For Recognising ..., choose images.
  6. Select the Create button.
  7. From the ‘Projects’ page, select the project you just made.
  8. Select Train.
  9. Add three labels: glasses, sunglasses and noGlasses. These are the buckets you'll sort the images into.
  10. In a separate browser window, open this gallery of training faces (Cartoon set © Google LLC, CC BY 4.0).
  11. With the windows side-by-side, drag the images into the appropriate bucket: glasses, sunglasses, or noGlasses. You should end up with 13 or 14 images in each bucket.

    Note: dragging does not work in Microsoft Edge (as of September 2019).

    Placing face images from the training gallery into the correct label buckets.


  12. When done, select < Back to project.
  13. Select Learn & Test.
  14. Select Train new machine learning model. This may take 10 to 15 minutes. The page will update when it's done.

    IMPORTANT NOTE – AUTOMATIC DELETION OF AI MODELS: By default, after 24 hours Machine Learning for Kids automatically deletes AI models trained by students. As a teacher, you can increase this time to as much as 1.5 weeks (but remember that there is a limited number of models that a class can have at any time). Student training data is not deleted, so models can always be retrained.
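
The labelling steps above can be sketched as a simple data structure. This is only an illustration: the file names are hypothetical, and in Machine Learning for Kids the images are dragged into buckets in the browser rather than listed in code.

```python
# A minimal sketch of the three label "buckets" used in the activity.
# File names are hypothetical placeholders for the gallery images.
training_data = {
    "glasses":    [f"glasses_{i}.png" for i in range(13)],
    "sunglasses": [f"sunglasses_{i}.png" for i in range(13)],
    "noGlasses":  [f"noglasses_{i}.png" for i in range(13)],
}

# A roughly balanced dataset (13-14 images per label) helps avoid one
# source of bias: over-representing a single class during training.
counts = {label: len(images) for label, images in training_data.items()}
assert max(counts.values()) - min(counts.values()) <= 1
```
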

Testing the AI

Follow the steps below to test your AI model.

    1. Once training is done, a Test box will appear on the same page, where you can test the model using the web address of an image on the internet. This gallery of test faces (Cartoon set © Google LLC, CC BY 4.0) has more diverse skin colours than the training dataset. Right-click one face and copy the image URL, paste it into the box, then press the button to test it.
    2. Now we’ll try out our model within a Scratch program. Select < Back to project.
    3. Select Make.
    4. Select Scratch 3. (You can also try Python.)
    5. Select Open in Scratch 3.
    6. To test an image, first change the sprite’s current costume to the face you want to test. Download one of the images from the test gallery, then upload it as a costume for the sprite.
    7. Code a program using the extra Machine Learning for Kids blocks and Images blocks. The screenshot below shows a sample program.

      IMPORTANT NOTE – SAVING PROGRAMS: To accommodate each AI model, a custom Scratch 3 environment is launched when you select Make. This is different from the regular Scratch 3 environment on the official Scratch website. Programs cannot be shared or uploaded across the two environments. This means several things:

      • Students cannot easily share their programs with the teacher or other students. Assessment evidence may need to be screenshots, video recordings or teacher observations.
      • Students who wish to keep a program at the end of a lesson must download it as a file. Later, they must re-enter the custom Scratch environment by clicking the Make button in their Machine Learning for Kids project. Then, they can upload the program file again. Also see the note 'Automatic deletion of models' at the end of Training the AI.

      Image: Sample screenshot showing a very simple program within a Scratch 3 custom environment using Machine Learning for Kids blocks and Images blocks.
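
For comparison, the logic of the sample Scratch program can be sketched in Python. The `classify_image` function here is a hypothetical stub standing in for the trained model's prediction; only the label names and the idea of a confidence score come from the activity.

```python
# Hypothetical stand-in for the trained model: in a real Machine Learning
# for Kids project the prediction would come from the trained model itself.
def classify_image(image_url):
    # Pretend the model recognised sunglasses with 87% confidence.
    return {"label": "sunglasses", "confidence": 87}

def describe(image_url):
    """Mirror the sample Scratch program: report the label and confidence."""
    result = classify_image(image_url)
    return f"I think this face is '{result['label']}' ({result['confidence']}% confidence)"

print(describe("https://example.com/test_face.png"))
```
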


Discussion

    • In both image sets, the faces are deliberately varied in size and placed in varied positions. Do you think this reflects a real-life scenario such as passport photo checking? Why / why not?
    • Do you think real faces would be easier or more difficult for the system than these cartoon faces? What sort of variations occur with real photographs?
    • Try testing your AI model with several images from the test gallery, which contains faces with more diverse skin colours than the images in the training gallery.
      • Did you notice any difference in the model's accuracy when faces with more diverse skin colours were tested? Did the system get it right and, if so, with how much confidence?
      • If you did find a discrepancy, what technical reasons could you give for why this occurred? (See Why is this relevant? below for possible reasons.)
    • When a computer system creates unfair outcomes, this is often referred to as algorithmic bias (go to Why is this relevant? for more information). If a digital solution has more difficulty distinguishing faces for particular ethnic groups, can you think of a real-world situation where this might cause unfair outcomes?
      • Hint: facial recognition technology is in some cases already being used to prove identity. Search on ‘facial recognition identity examples’.
    • Can you think of any proven real-world examples of algorithmic bias?
      • One past example is the Nikon camera controversy in 2009–10, when an algorithm designed to detect whether photograph subjects were blinking misinterpreted a number of Asian subjects as having their eyes closed.


Extension activities

    • The sample screenshot of the Scratch program in ‘Testing the AI’ merely displays the output from the machine learning model. Expand the program into an application – for example, a hypothetical passport checking system that rejects faces that do not meet minimum ‘confidence’ requirements.
    • For secondary students, try creating a Python program as an alternative to Scratch.
    • Build a gallery of faces (cartoon or real) and try making your own AI model.

      NOTE: For privacy reasons, it is recommended that photos do not include student faces or other personal identifiers.
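
As a starting point for the hypothetical passport-checking extension, a confidence threshold might look like this in Python (the 80% cutoff, the function name and the messages are all assumptions, not part of the activity):

```python
MIN_CONFIDENCE = 80  # hypothetical cutoff, expressed as a percentage

def passport_check(label, confidence):
    """Accept a photo only if the model is confident there are no glasses.

    Many real passport rules forbid glasses or sunglasses; here we also
    require the model's confidence to clear a minimum threshold.
    """
    if label != "noGlasses":
        return "REJECTED: please remove glasses or sunglasses"
    if confidence < MIN_CONFIDENCE:
        return "REJECTED: photo unclear, please try again"
    return "ACCEPTED"

print(passport_check("sunglasses", 95))  # rejected: wearing sunglasses
print(passport_check("noGlasses", 60))   # rejected: confidence too low
print(passport_check("noGlasses", 92))   # accepted
```
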

Why is this relevant?

    Algorithmic bias creates errors that may lead to unfair or even dangerous outcomes for one or more groups of people, organisations, living things or the environment.

    Algorithmic bias is often unintentional. It can arise in several ways. Some examples:

    1. As a result of design. The software's algorithms (the instructions and procedures used to make decisions) may have been coded based on incorrect assumptions, outdated understandings, prioritised motives or even technical limitations. Or the design may simply be misapplied – used for a purpose for which it was never intended.
      • In the case of our facial recognition example, such systems may have difficulty recognising the outlines of faces with dark skin colour against a dark background because of the algorithm’s dependence on distinguishing sufficient light contrast. The cartoons in our activity frequently use black lines for face outlines as well as for the outlines of glasses.
    2. Due to inadequate or biased data. Programmers have long been familiar with Garbage In, Garbage Out (GIGO), meaning that poor quality data used as input to a computer system will tend to result in poor quality output and decisions. Machine learning systems are trained from data sets of text, images or sounds, and these may be restricted or unrepresentative.
      • In the case of our facial recognition example, a system may be trained on an insufficiently diverse data set – for example, one based predominantly on faces of light skin colour or facial features associated with a limited range of ethnic groups.
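
A deliberately crude toy classifier can demonstrate how an unrepresentative training set produces biased results. Everything below is invented for illustration: the "model" simply learns a brightness threshold from its training faces, so darker test faces without glasses can still fall below it.

```python
# Toy model: decide "glasses" if brightness near the eyes falls below a
# threshold learned from training data. All numbers are invented.
def learn_threshold(samples):
    """'Train' by averaging the brightness of glasses vs no-glasses faces."""
    glasses = [b for b, label in samples if label == "glasses"]
    no_glasses = [b for b, label in samples if label == "noGlasses"]
    return (sum(glasses) / len(glasses) + sum(no_glasses) / len(no_glasses)) / 2

def predict(brightness, threshold):
    return "glasses" if brightness < threshold else "noGlasses"

# Unrepresentative training set: only light-skinned faces, where dark
# glasses contrast strongly with the surrounding skin.
train = [(40, "glasses"), (45, "glasses"), (90, "noGlasses"), (95, "noGlasses")]
threshold = learn_threshold(train)

# A darker-skinned test face *without* glasses still sits below the
# learned threshold, so the toy model wrongly predicts "glasses".
print(predict(50, threshold))
```
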

    Watch this CNN video that further discusses some of the limitations of AI and examples of algorithmic bias.