LAB TASK-5
• Code:
(i)
import cv2
import mediapipe as mp
from handtrackingmodule import HandDetector
import pyautogui as py

detector = HandDetector()
capture = cv2.VideoCapture(0)

while True:
    success, img = capture.read()
    lmlist, img = detector.lmlist(img)
    if lmlist:
        fingers, img = detector.fingersup(img, lmlist)
        # print(fingers)
        if fingers == [0, 0, 0, 0, 0]:    # closed fist: scroll down
            py.scroll(-50)
        elif fingers == [1, 1, 1, 1, 1]:  # open palm: scroll up
            py.scroll(50)
    cv2.imshow("Video Feed", img)
    key = cv2.waitKey(1)
    if key == 27:  # Esc key quits
        break

capture.release()
cv2.destroyAllWindows()
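The scroll decision above depends only on the finger pattern returned by fingersup, so it can be isolated into a small pure function for clarity and testing. This is a sketch; it assumes fingersup returns a five-element list of 0/1 flags, thumb first, as the comparisons in the loop suggest:

```python
def scroll_amount(fingers):
    """Map a five-finger up/down pattern to a pyautogui scroll step."""
    if fingers == [0, 0, 0, 0, 0]:    # closed fist: scroll down
        return -50
    if fingers == [1, 1, 1, 1, 1]:    # open palm: scroll up
        return 50
    return 0                          # any other gesture: do nothing
```

The main loop would then simply call py.scroll(scroll_amount(fingers)) when the amount is non-zero, keeping the gesture-to-action mapping in one place.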
(ii)
import cv2
import mediapipe as mp
import numpy as np
from ctypes import cast, POINTER
from comtypes import CLSCTX_ALL
from pycaw.pycaw import AudioUtilities, IAudioEndpointVolume
from handtrackingmodule import HandDetector

# pycaw setup (defines the volume, minvol and maxvol names used below)
devices = AudioUtilities.GetSpeakers()
interface = devices.Activate(IAudioEndpointVolume._iid_, CLSCTX_ALL, None)
volume = cast(interface, POINTER(IAudioEndpointVolume))
minvol, maxvol, _ = volume.GetVolumeRange()

detector = HandDetector()
capture = cv2.VideoCapture(0)

while True:
    success, img = capture.read()
    lmlist, img = detector.lmlist(img)
    if lmlist:
        fingers, img = detector.fingersup(img, lmlist, draw=False)
        if fingers == [1, 1, 0, 0, 0]:  # only thumb and index raised
            # distance between thumb tip (landmark 4) and index tip (landmark 8)
            length, img = detector.finddistance(4, 8, img, lmlist)
            vol = np.interp(length, (10, 200), (minvol, maxvol))
            volper = np.interp(length, (10, 200), (0, 100))
            cv2.putText(img, str(int(volper)) + "%", (100, 100),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 3)
            volume.SetMasterVolumeLevel(vol, None)
    cv2.imshow("Video Feed", img)
    key = cv2.waitKey(1)
    if key == 27:  # Esc key quits
        break

capture.release()
cv2.destroyAllWindows()
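The volume control hinges on np.interp, which linearly rescales the thumb-index distance from the assumed working range of 10-200 pixels into the target range, clamping anything outside it. A standalone sketch (the distance values and the -65.25 dB floor are illustrative assumptions; the real range comes from pycaw's GetVolumeRange):

```python
import numpy as np

minvol, maxvol = -65.25, 0.0  # assumed dB range; query pycaw for the real one

for length in (5, 10, 105, 200, 250):  # hypothetical pixel distances
    vol = np.interp(length, (10, 200), (minvol, maxvol))
    volper = np.interp(length, (10, 200), (0, 100))
    print(f"length={length:>3}px -> {vol:7.2f} dB ({volper:5.1f}%)")
```

Because np.interp clamps, distances below 10 px map to 0% and above 200 px to 100%, so no explicit bounds check is needed in the main loop.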
• Screenshots:
• Conclusion:
Hence, using mediapipe and PyAutoGUI we built a hand-gesture detection program that counts the number of raised fingers and scrolls the screen up and down accordingly. It also adjusts the system volume from the distance between the thumb and the index finger. With further improvements and modifications, the same approach could be extended into a virtual mouse that works entirely on gestures, eliminating the physical strain of using an actual mouse.
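The virtual-mouse extension mentioned above largely amounts to one more np.interp mapping: rescale the index fingertip's position in the camera frame to screen coordinates and pass the result to pyautogui.moveTo. A hedged sketch, where the frame size, screen size, and the to_screen helper are illustrative assumptions (in practice the screen size would come from pyautogui.size()):

```python
import numpy as np

CAM_W, CAM_H = 640, 480    # assumed webcam capture resolution
SCR_W, SCR_H = 1920, 1080  # assumed screen resolution

def to_screen(x, y):
    """Map an index-fingertip pixel position in the camera frame to screen coordinates."""
    sx = np.interp(x, (0, CAM_W), (0, SCR_W))
    sy = np.interp(y, (0, CAM_H), (0, SCR_H))
    return int(sx), int(sy)

# Inside the capture loop one would then call, for example:
#   py.moveTo(*to_screen(lmlist[8][1], lmlist[8][2]))  # landmark 8 = index fingertip
```

Clicks could be mapped to a pinch gesture using the same finddistance measurement already used for the volume control.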