Face recognition is one of the hottest topics right now, and since I have a Raspberry Pi I wanted to try it for myself. After some searching, the practical open-source option turned out to be the face_recognition library. Because I wanted real-time recognition, I combined OpenCV with face_recognition. The results are quite good; the only drawback is that the video stutters, and I'll also share the remedies I looked into.

To use face_recognition you need OpenCV, dlib, and the face_recognition library itself. I covered installing OpenCV on the Raspberry Pi in an earlier post.

The link below is the official installation guide. There are a few caveats, though (if the page won't load, try adjusting the IP in your hosts file): https://gist.github.com/ageitgey/1ac8dbe8572f3f533df6269dab35df65

mkdir -p dlib
git clone -b 'v19.6' --single-branch https://github.com/davisking/dlib.git dlib/
cd ./dlib
sudo python3 setup.py install --compiler-flags "-mfpu=neon"

The problem with the commands above is that dlib v19.6 won't install on the Pi; use v19.7 or later (I used the latest release). If cloning from GitHub is slow, mirrors are available on Gitee.

sudo apt-get install --no-install-recommends xserver-xorg xinit raspberrypi-ui-mods

Do not install the PIXEL GUI the way the document describes, or you will end up with a gray screen and a frozen system and have to reinstall.

Once everything is installed, you can run the official examples:
https://github.com/ageitgey/face_recognition/blob/master/README_Simplified_Chinese.md (the official Simplified Chinese docs)
My Raspberry Pi uses a USB camera, while the official docs assume a CSI camera, so I can use the ordinary Python code unchanged (that is, the same code that runs on Linux, macOS, and Windows):

import face_recognition
import cv2

# This is a demo of running face recognition on live video from your webcam. It's a little more complicated than the
# other example, but it includes some basic performance tweaks to make things run a lot faster:
#   1. Process each video frame at 1/4 resolution (though still display it at full resolution)
#   2. Only detect faces in every other frame of video.

# PLEASE NOTE: This example requires OpenCV (the `cv2` library) to be installed only to read from your webcam.
# OpenCV is *not* required to use the face_recognition library. It's only required if you want to run this
# specific demo. If you have trouble installing it, try any of the other demos that don't require it instead.

# Get a reference to webcam #0 (the default one)
video_capture = cv2.VideoCapture(0)

# Load a sample picture and learn how to recognize it.
Zhanyan_image = face_recognition.load_image_file("E:\\data\\dataset\\images\\test\\ZhangYan.jpg")
Zhanyan_face_encoding = face_recognition.face_encodings(Zhanyan_image)[0]

# Load a second sample picture and learn how to recognize it.
tongliya_image = face_recognition.load_image_file("E:\\data\\dataset\\images\\test\\TongLiYa.jpg")
tongliya_face_encoding = face_recognition.face_encodings(tongliya_image)[0]


uuu_image = face_recognition.load_image_file("E:\\data\\dataset\\images\\test\\WangYu.jpg")
uuu_face_encoding = face_recognition.face_encodings(uuu_image)[0]




# Create arrays of known face encodings and their names
known_face_encodings = [
    Zhanyan_face_encoding,
    tongliya_face_encoding,
    uuu_face_encoding
]
known_face_names = [
    "Zhang Yan",
    "Tong Liya",
    "Wang Yu"
]

# Initialize some variables
face_locations = []
face_encodings = []
face_names = []
process_this_frame = True

while True:
    # Grab a single frame of video
    ret, frame = video_capture.read()
    if not ret:
        continue  # no frame yet (camera warming up or frame dropped)

    # Resize frame of video to 1/4 size for faster face recognition processing
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)

    # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses).
    # cv2.cvtColor also returns a contiguous array, which newer dlib releases require.
    rgb_small_frame = cv2.cvtColor(small_frame, cv2.COLOR_BGR2RGB)

    # Only process every other frame of video to save time
    if process_this_frame:
        # Find all the faces and face encodings in the current frame of video
        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)

        face_names = []
        for face_encoding in face_encodings:
            # See if the face is a match for the known face(s)
            matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
            name = "Unknown"

            # If a match was found in known_face_encodings, just use the first one.
            if True in matches:
                first_match_index = matches.index(True)
                name = known_face_names[first_match_index]

            face_names.append(name)

    process_this_frame = not process_this_frame


    # Display the results
    for (top, right, bottom, left), name in zip(face_locations, face_names):
        # Scale back up face locations since the frame we detected in was scaled to 1/4 size
        top *= 4
        right *= 4
        bottom *= 4
        left *= 4

        # Draw a box around the face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 255), 2)

        # Draw a label with a name below the face
        # cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 255, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 0, 255), 2)

    # Display the resulting image
    cv2.imshow('Video', frame)

    # Hit 'q' on the keyboard to quit!
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()

This is the code I ran on the Raspberry Pi. Each known face is learned from a single photo, yet the accuracy is quite good, noticeably better than what I got from OpenCV alone. But a new problem came with it: the video is choppy. Below are the approaches I looked into; the first was to spread the recognition work across processes with multiprocessing:

import face_recognition
import cv2
import multiprocessing as mp
# This is a demo of running face recognition on live video from your webcam. It's a little more complicated than the
# other example, but it includes some basic performance tweaks to make things run a lot faster:
#   1. Process each video frame at 1/4 resolution (though still display it at full resolution)
#   2. Only detect faces in every other frame of video.

# PLEASE NOTE: This example requires OpenCV (the `cv2` library) to be installed only to read from your webcam.
# OpenCV is *not* required to use the face_recognition library. It's only required if you want to run this
# specific demo. If you have trouble installing it, try any of the other demos that don't require it instead.

# Get a reference to webcam #0 (the default one)


# Load a sample picture and learn how to recognize it.
wxy_image = face_recognition.load_image_file("E:\\data\\dataset\\images\\test\\ZhangYan.jpg")
wxy_face_encoding = face_recognition.face_encodings(wxy_image)[0]

# Load a second sample picture and learn how to recognize it.
xx_image = face_recognition.load_image_file("E:\\data\\dataset\\images\\test\\TongLiYa.jpg")
xx_face_encoding = face_recognition.face_encodings(xx_image)[0]






# Create arrays of known face encodings and their names
known_face_encodings = [
    wxy_face_encoding,
    xx_face_encoding,
]
known_face_names = [
    "wxy",
    "xx",
]

# Initialize some variables
face_locations = []
face_encodings = []
face_names = []


def faceanalyse(frame):
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
    # cvtColor gives a contiguous RGB array, which newer dlib releases require
    rgb_small_frame = cv2.cvtColor(small_frame, cv2.COLOR_BGR2RGB)
    face_locations = face_recognition.face_locations(rgb_small_frame)
    face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)

    face_names = []
    for face_encoding in face_encodings:
        # See if the face is a match for the known face(s)
        matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
        name = "Unknown"

        # If a match was found in known_face_encodings, just use the first one.
        if True in matches:
            first_match_index = matches.index(True)
            name = known_face_names[first_match_index]
            print(name)
        face_names.append(name)
    return face_locations, face_names, frame


def drawmface(face_locations, face_names, frame):
    for (top, right, bottom, left), name in zip(face_locations, face_names):
        # Scale back up face locations since the frame we detected in was scaled to 1/4 size
        top *= 4
        right *= 4
        bottom *= 4
        left *= 4

        # Draw a box around the face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 255), 2)

        # Draw a label with a name below the face
        # cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 255, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 0, 255), 2)
    cv2.imshow('Video', frame)
    cv2.waitKey(1)

if __name__ == '__main__':
    video_capture = cv2.VideoCapture(0)
    # Create the pool and the frame counter once, outside the loop. (Creating
    # them inside the loop resets fcount to 0 on every iteration, so
    # faceanalyse would never run at all.)
    pool = mp.Pool(processes=4)
    fcount = 0
    while True:
        ret, frame = video_capture.read()
        if not ret:
            continue
        fcount += 1

        # Note: apply_async() followed immediately by .get() blocks until the
        # worker finishes, so the four branches still run strictly one after
        # another, which is why this version turned out no faster.
        if fcount == 1:
            r1 = pool.apply_async(faceanalyse, [frame])
            f1, n1, i1 = r1.get()
            drawmface(f1, n1, i1)

        elif fcount == 2:
            r2 = pool.apply_async(faceanalyse, [frame])
            f2, n2, i2 = r2.get()
            drawmface(f2, n2, i2)

        elif fcount == 3:
            r3 = pool.apply_async(faceanalyse, [frame])
            f3, n3, i3 = r3.get()
            drawmface(f3, n3, i3)

        elif fcount == 4:
            r4 = pool.apply_async(faceanalyse, [frame])
            f4, n4, i4 = r4.get()
            drawmface(f4, n4, i4)
            fcount = 0

Other approaches I considered:

  • Overclocking the Raspberry Pi. (I didn't try this one: after overclocking, the temperature shot up to 80°C and I was afraid of frying the board.)

On the 3B and later, overclocking is no longer allowed out of the box; you have to edit /boot/config.txt by hand (sudo vim /boot/config.txt). The relevant settings are easy to find on CSDN, but the temperature gets extremely high, so I advise against it.
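For reference only, on a Pi 3B the hand edit to /boot/config.txt would look roughly like the fragment below (the values are illustrative, and again, I don't recommend this):

```ini
# /boot/config.txt -- illustrative overclock for a Pi 3B (not recommended)
arm_freq=1350      # CPU clock in MHz; the 3B's stock clock is 1200
over_voltage=4     # raise the core voltage so the higher clock stays stable
```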

  • Swapping out the find-faces step. face_recognition locates faces with HOG, which is built in and fairly accurate but relatively slow, so it could be replaced with OpenCV's built-in Haar cascade face detector. (I didn't try this either; maybe my skills weren't up to it, but it looks like the most feasible option.)

In the end the speed never improved, which was disappointing, so I stopped here. At least the face recognition goal is achieved; the remaining stutter comes down to the hardware, so I'm reasonably satisfied. Next I plan to try some sensor-based Raspberry Pi projects.


The wanderer sings three songs, and none of them are sad.