Introduction
- In our system, face detection is performed on frames acquired from a real-time video stream.
- A face classification method is then integrated into the system; in this project it is implemented with the LBPH (Local Binary Pattern Histogram) algorithm.
- The system uses the collected face images and the detection algorithm to train itself, extract faces, and recognize them.
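As a minimal sketch of this first step (assuming the Haar cascade file sits under ./data/ as in the listings below), a single frame can be grabbed from the webcam and passed to the detector:

# Minimal sketch: grab one frame from the webcam and run Haar-cascade face detection.
# The cascade path mirrors the one used later in this project; adjust it to your setup.
import cv2

detector = cv2.CascadeClassifier("./data/haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)          # default webcam
ret, frame = cap.read()            # acquire a single frame
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"{len(faces)} face(s) detected")
cap.release()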
Objectives of Face Detection
- To implement a face recognition system that provides a facial detection application.
- To design a system that can detect and recognize faces in real time.
Functional requirements of Face Detection
- Detect multiple faces in a real-time video.
- Acquire frames from the real-time video.
- Train the system module.
- Create classifier files from the data set.
- Provide a user-friendly interface.
- Enter new data or check an existing user.
- Be robust and machine friendly.
- Enter the user Id/Name.
- Collect the data set.
- Train the data model.
- Create classifiers.
- Select a user.
- Recognize the face.
- Show the Id/Name.
Non-Functional Requirements of Face Detection
- Robustness
- Stability
- Accuracy
- Security
Tools and Technologies used in Face Detection
Operating System: Linux (Ubuntu) 64-bit / Windows 64-bit
Hardware: 4 GB RAM, webcam
Programming Language: Python
Computer Vision Library: OpenCV
GUI Library: Tkinter
Algorithm: LBPH (Local Binary Pattern Histogram)
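The LBPH recognizer lives in the cv2.face module, which ships with the opencv-contrib-python package rather than the base OpenCV build. As a small sketch, its default hyper-parameters can be spelled out explicitly when the recognizer is created:

# Sketch: LBPH recognizer with its default hyper-parameters written out.
# Requires opencv-contrib-python (cv2.face is not part of the base OpenCV package).
import cv2

recognizer = cv2.face.LBPHFaceRecognizer_create(
    radius=1,      # radius of the circular local binary pattern
    neighbors=8,   # number of sample points around each pixel
    grid_x=8,      # horizontal cells of the spatial histogram
    grid_y=8,      # vertical cells of the spatial histogram
)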
Modules of Face Detection
Our system has the following modules.
- App_GUI: Runs the user interface of the application (a minimal sketch appears after the code listings below).
- Collect data set: Collects frames from the runtime video.
- Trainer: Trains on the data set and creates the classifier files.
- Face Recognizer: Uses the classifier files to recognize the person in real time and display the Id.
Code
create_classifier.py
import cv2
import numpy as np
from PIL import Image
import os

def train_classifer(name):
    # Folder that holds the captured face images for this user.
    path = os.path.join(os.getcwd(), "data", name)

    faces = []
    ids = []
    pictures = []

    # Collect the file names of all captured images for this user.
    for root, dirs, files in os.walk(path):
        pictures = files

    for pic in pictures:
        imgpath = os.path.join(path, pic)
        # Load the image in grayscale, as required by the LBPH recognizer.
        img = Image.open(imgpath).convert('L')
        imageNp = np.array(img, 'uint8')
        # The numeric label is the part of the file name before the user's name.
        id = int(pic.split(name)[0])
        faces.append(imageNp)
        ids.append(id)

    ids = np.array(ids)

    # Train the LBPH recognizer and save the classifier file for this user.
    clf = cv2.face.LBPHFaceRecognizer_create()
    clf.train(faces, ids)
    os.makedirs("./data/classifiers", exist_ok=True)
    clf.write("./data/classifiers/" + name + "_classifier.xml")
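Note that the trainer derives each numeric label from the part of the file name that precedes the user's name (for example, a file named 12mary.jpg yields label 12), which matches the <number><name>.jpg naming scheme used by create_dataset.py below.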
create_dataset.py
import cv2
import os

def start_capture(name):
    path = "./data/" + name
    num_of_images = 0
    # Haar cascade used to find faces in each frame.
    detector = cv2.CascadeClassifier("./data/haarcascade_frontalface_default.xml")

    # Create the user's data folder if it does not exist yet.
    try:
        os.makedirs(path)
    except OSError:
        print('Directory Already Created')

    vid = cv2.VideoCapture(0)
    while True:
        ret, img = vid.read()
        new_img = None
        grayimg = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        face = detector.detectMultiScale(image=grayimg, scaleFactor=1.1, minNeighbors=5)

        for x, y, w, h in face:
            # Draw the bounding box and progress text on the preview frame.
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 0), 2)
            cv2.putText(img, "Face Detected", (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255))
            cv2.putText(img, str(num_of_images) + " images captured", (x, y + h + 20),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255))
            # Crop the detected face; this is what gets saved to disk.
            new_img = img[y:y + h, x:x + w]

        cv2.imshow("FaceDetection", img)
        key = cv2.waitKey(1) & 0xFF

        # Save the cropped face only when a face was found in this frame.
        if new_img is not None:
            cv2.imwrite(path + "/" + str(num_of_images) + name + ".jpg", new_img)
            num_of_images += 1

        # Stop on 'q', Esc, or once a little over 310 images have been collected.
        if key == ord("q") or key == 27 or num_of_images > 310:
            break

    vid.release()
    cv2.destroyAllWindows()
    return num_of_images
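Calling start_capture("alice"), for example, creates ./data/alice/, keeps saving the cropped face from each frame in which a face is found, and stops when 'q' or Esc is pressed or a little over 310 images have been collected; it returns the number of images captured. (The name "alice" is only an illustrative placeholder.)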
Detector.py
import cv2
from PIL import Image

def main_app(name):
    # Haar cascade for detection and the user's trained LBPH classifier for recognition.
    face_cascade = cv2.CascadeClassifier('./data/haarcascade_frontalface_default.xml')
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.read(f"./data/classifiers/{name}_classifier.xml")

    cap = cv2.VideoCapture(0)
    pred = 0
    while True:
        ret, frame = cap.read()
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)

        for (x, y, w, h) in faces:
            roi_gray = gray[y:y + h, x:x + w]
            # LBPH returns a distance; convert it to a rough confidence percentage.
            id, confidence = recognizer.predict(roi_gray)
            confidence = 100 - int(confidence)
            pred = 0
            if confidence > 50:
                # Recognized: green box labelled with the user's name.
                pred += 1
                text = name.upper()
                font = cv2.FONT_HERSHEY_PLAIN
                frame = cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
                frame = cv2.putText(frame, text, (x, y - 4), font, 1, (0, 255, 0), 1, cv2.LINE_AA)
            else:
                # Not recognized: red box labelled "UnknownFace".
                pred -= 1
                text = "UnknownFace"
                font = cv2.FONT_HERSHEY_PLAIN
                frame = cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
                frame = cv2.putText(frame, text, (x, y - 4), font, 1, (0, 0, 255), 1, cv2.LINE_AA)

        cv2.imshow("image", frame)

        if cv2.waitKey(20) & 0xFF == ord('q'):
            print(pred)
            if pred > 0:
                # Paste the recognized face onto the result template (2.png) and show it.
                dim = (124, 124)
                img = cv2.imread(f"./data/{name}/{pred}{name}.jpg", cv2.IMREAD_UNCHANGED)
                resized = cv2.resize(img, dim, interpolation=cv2.INTER_AREA)
                cv2.imwrite(f"./data/{name}/50{name}.jpg", resized)
                Image1 = Image.open("./2.png")
                Image1copy = Image1.copy()
                Image2 = Image.open(f"./data/{name}/50{name}.jpg")
                Image2copy = Image2.copy()
                Image1copy.paste(Image2copy, (195, 114))
                Image1copy.save("end.png")
                frame = cv2.imread("end.png", 1)
                cv2.imshow("Result", frame)
                cv2.waitKey(5000)
            break

    cap.release()
    cv2.destroyAllWindows()
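The App_GUI module mentioned earlier is not reproduced in the listings above. The following is only a hypothetical minimal sketch, assuming a plain Tkinter window with one entry field and three buttons, of how such a front end could wire the three modules together; the window title, widget layout, and button labels are assumptions based on the module descriptions.

# Hypothetical sketch of an App_GUI module (not part of the original listings).
# It only shows how a Tkinter front end could call the three modules above.
import tkinter as tk
from create_dataset import start_capture
from create_classifier import train_classifer
from Detector import main_app

def collect():
    start_capture(name_var.get())

def train():
    train_classifer(name_var.get())

def recognize():
    main_app(name_var.get())

root = tk.Tk()
root.title("Face Detection")          # assumed window title
name_var = tk.StringVar()

tk.Label(root, text="User Id/Name:").pack()
tk.Entry(root, textvariable=name_var).pack()
tk.Button(root, text="Collect Data Set", command=collect).pack()
tk.Button(root, text="Train Data Model", command=train).pack()
tk.Button(root, text="Recognize Face", command=recognize).pack()

root.mainloop()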