
# Classifying images of Sign Language Gestures using a neural network

Sign language is a language used for communication within the deaf community, just like any other language. The aim of this project is to build a model that can classify images of sign language gestures. For this project I used the Sign Language Gesture Images Dataset from Kaggle.

The dataset consists of 37 different hand sign gestures: the A-Z alphabet gestures, the 0-9 number gestures, and a gesture for space, which is how deaf people indicate a space between two letters or two words while signing. The dataset has two parts, i.e. two folders: (1) Gesture Image Data, which contains the colored images of the hands for the different gestures, and (2) Gesture Image Pre-Processed Data, which has the same number of folders and the same number of images, but pre-processed.

!pip install jovian --upgrade -q
!pip install opendatasets --upgrade -q
# Standard library and plotting
import os
import numpy as np
import matplotlib
import matplotlib.pyplot as plt

# PyTorch and torchvision
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as tt
from torchvision.datasets import ImageFolder
from torchvision.utils import make_grid
from torch.utils.data import DataLoader, random_split

# For downloading the Kaggle dataset
import opendatasets as od

%matplotlib inline
project_name = '04-sign-language-gesture-cnn'
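
With the libraries in place, the next step is to download and load the dataset described above. The snippet below is only a minimal sketch: the Kaggle URL, the extracted folder name, and the resize dimensions are assumptions based on the dataset description, so verify them against the actual download.

# Download the Kaggle dataset (opendatasets prompts for a Kaggle username and API key).
# The URL is assumed from the dataset name and may need adjusting.
dataset_url = 'https://www.kaggle.com/datasets/ahmedkhanak1995/sign-language-gesture-images-dataset'
od.download(dataset_url)

# Load the pre-processed images; each of the 37 gesture folders becomes one class.
# The path below assumes opendatasets extracts into a folder named after the dataset slug.
data_dir = './sign-language-gesture-images-dataset/Gesture Image Pre-Processed Data'
dataset = ImageFolder(data_dir, transform=tt.Compose([tt.Resize((64, 64)), tt.ToTensor()]))

print(len(dataset.classes))   # expect 37 classes: 0-9, A-Z and the space gesture
print(dataset.classes[:5])    # peek at the first few class labels

From here, random_split and DataLoader (already imported above) can carve out a validation set and batch the images for training.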