
Face Recognition from Scratch: RGB Face Liveness Detection



 

Preface

 

This tutorial uses ArcSoft as the third-party face recognition platform and covers RGB liveness detection. Many of the free features can be combined as you see fit; I hope you get something out of this chapter.

 

 

The ArcFace offline SDK provides face detection, gender detection, age detection, face recognition, image quality detection, RGB liveness detection, IR liveness detection, and more. It must be activated online on first use; after activation it runs entirely offline, so you can build your application layer on top of the face recognition SDK however your business requires.

 

 

Feature Overview

 

1. Face detection

 

Detects faces in the input image and returns each face's bounding box and orientation, which feed the subsequent recognition, feature extraction, and liveness detection steps.

Supports both IMAGE mode and VIDEO mode detection.
Supports single-face and multi-face detection, up to 50 faces per image.

2. Face tracking

 

Detects faces in frames from a video stream and continuously tracks each detected face. (Since we need real-time tracking in the browser, we will use a third-party front-end library instead of this feature for now.)

 

3. Face feature extraction

 

Extracts a face's feature data for later feature comparison.

 

4. Face attribute detection

 

Detects face attributes: age, gender, and 3D angles.

 

Face 3D angles: pitch, roll, and yaw.
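The three angles are useful for filtering out badly posed faces before feature extraction. The following is a minimal, hypothetical helper (name and the 15-degree tolerance are my own illustrative assumptions, not values from the ArcSoft docs) that decides whether a face is roughly frontal:

```java
// Hypothetical helper: decide whether a detected face is roughly frontal
// based on the pitch/roll/yaw values the SDK reports for each face.
// The 15-degree tolerance is an illustrative assumption.
public class FrontalFaceCheck {
    static final float MAX_ABS_ANGLE = 15.0f; // degrees, assumed tolerance

    static boolean isRoughlyFrontal(float pitch, float roll, float yaw) {
        return Math.abs(pitch) <= MAX_ABS_ANGLE
                && Math.abs(roll) <= MAX_ABS_ANGLE
                && Math.abs(yaw) <= MAX_ABS_ANGLE;
    }

    public static void main(String[] args) {
        System.out.println(isRoughlyFrontal(3.2f, -1.0f, 7.5f)); // true: near-frontal
        System.out.println(isRoughlyFrontal(2.0f, 0.5f, 40.0f)); // false: head turned
    }
}
```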

 

 

5. Liveness detection

 

Offline, silent liveness detection determines during recognition whether the user is a real person, defending against photo, video, and paper-based spoofing attacks. This makes face recognition safer and faster with a better user experience. Both monocular RGB liveness detection and binocular (IR/RGB) liveness detection are supported, covering the liveness needs of most face recognition terminals.
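The liveness result comes back as an integer per face. A small sketch mapping the value to a readable label; the value meanings below follow the ArcSoft documentation for the 3.0 SDK used in this tutorial, but verify them against your SDK version:

```java
// Maps the integer value from LivenessInfo.getLiveness() to a readable label.
// Value meanings assumed from the ArcSoft 3.0 SDK docs; verify for your version.
public class LivenessLabel {
    static String describe(int liveness) {
        switch (liveness) {
            case 1:  return "live";            // real person
            case 0:  return "spoof";           // photo / video / paper attack
            case -1: return "unknown";         // could not decide
            case -2: return "multiple faces";  // more than one face in frame
            default: return "error:" + liveness;
        }
    }

    public static void main(String[] args) {
        System.out.println(describe(1)); // live
        System.out.println(describe(0)); // spoof
    }
}
```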

 

Let's build

 

Visit https://ai.arcsoft.com.cn/technology/faceTracking.html, open the developer center, and register and verify your personal information.

 

1. Click My Applications > New Application

 

 

2. Fill in the details, click Create, then add an SDK

 

 

3. Select the free face recognition edition

 

 

4. Fill in the license information

 

For the platform, choose Windows first, picking 64-bit or 32-bit to match your machine; choose Java as the language.

 

 

 

5. Overview of the SDK files

 

 

I. Create a Spring Boot project: ArcFace

 

1. Maven dependencies

 

<dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-configuration-processor</artifactId>
            <optional>true</optional>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <!-- Thymeleaf, for serving the HTML page -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-thymeleaf</artifactId>
        </dependency>
        <!-- ArcSoft SDK, referenced from the local lib folder -->
        <dependency>
            <groupId>com.arcsoft.face</groupId>
            <artifactId>arcsoft-sdk-face</artifactId>
            <version>3.0.0.0</version>
            <scope>system</scope>
            <systemPath>${basedir}/lib/arcsoft-sdk-face-3.0.0.0.jar</systemPath>
        </dependency>
    </dependencies>
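Because the SDK jar is declared with `system` scope, Spring Boot's repackaged executable jar leaves it out by default. If you later package the project with `mvn package`, a build-section sketch like the following (added alongside the dependencies above) tells the spring-boot-maven-plugin to bundle system-scoped jars:

```xml
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <configuration>
                <!-- bundle the system-scoped arcsoft-sdk-face jar into the boot jar -->
                <includeSystemScope>true</includeSystemScope>
            </configuration>
        </plugin>
    </plugins>
</build>
```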

 

2. Create a lib folder and copy the SDK jar into it

 

Remember to add the jar as a dependency in your IDE; once the expand arrow appears next to it, it is registered.

 

 

3. Copy the sample code into a test class

 

 

 

4. Fill in your appId and sdkKey

 

 

5. Copy the path to the native libraries

 

 

 

6. Run the test

 

I removed some features and only demonstrate feature extraction and liveness detection; feel free to try the others yourself.

 

 

That's the end of the quick demo; play around with it some more.

 

II. Restructure the ArcFace project

 

Result screenshot

 

 

 

1. Create FaceRecognitionUtils

 

package top.yangbuyi.utils;
import com.arcsoft.face.*;
import com.arcsoft.face.enums.*;
import com.arcsoft.face.toolkit.ImageFactory;
import com.arcsoft.face.toolkit.ImageInfo;
import com.arcsoft.face.toolkit.ImageInfoEx;
import com.sun.org.apache.xerces.internal.impl.dv.util.Base64;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.util.StringUtils;
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
public class FaceRecognitionUtils {
    // Note: @Value does not inject into static fields; in a real project, load
    // these via a non-static setter or a @ConfigurationProperties bean.
    @Value("${crm.appId}")
    private static String APP_ID = "";
    @Value("${crm.sdk}")
    private static String SDK_KEY = "";
    @Value("${crm.face}")
    private static String FACE_ENGINE = "WIN64";

    private final static Logger logger = LogManager.getLogger(FaceRecognitionUtils.class.getName());

    private static FaceEngine faceEngine = new FaceEngine(FACE_ENGINE);

    // Full configuration used when initializing the engine
    private static FunctionConfiguration functionConfiguration1 = new FunctionConfiguration();

    // Reduced configuration used for attribute detection in process()
    private static FunctionConfiguration functionConfiguration2 = new FunctionConfiguration();

    static {
        functionConfiguration1.setSupportAge(true);
        functionConfiguration1.setSupportFace3dAngle(true);
        functionConfiguration1.setSupportFaceDetect(true);
        functionConfiguration1.setSupportFaceRecognition(true);
        functionConfiguration1.setSupportGender(true);
        functionConfiguration1.setSupportLiveness(true);
        functionConfiguration1.setSupportIRLiveness(true);

        functionConfiguration2.setSupportAge(true);
        functionConfiguration2.setSupportFace3dAngle(true);
        functionConfiguration2.setSupportGender(true);
        functionConfiguration2.setSupportLiveness(true);
    }

    public static void sdkActivation() {
        int errorCode = faceEngine.activeOnline(APP_ID, SDK_KEY);
        if (errorCode != ErrorInfo.MOK.getValue() && errorCode != ErrorInfo.MERR_ASF_ALREADY_ACTIVATED.getValue()) {
            logger.error("Online SDK activation failed! Error code: " + errorCode);
        } else {
            logger.info("Online SDK activation succeeded!");
        }
    }
    public static void InitializeTheEngine(DetectMode detectMode, DetectOrient detectOrient, int faceMaxNum, int faceScaleVal) {
        EngineConfiguration engineConfiguration = new EngineConfiguration();
        engineConfiguration.setDetectMode(detectMode);
        engineConfiguration.setDetectFaceOrientPriority(detectOrient);
        engineConfiguration.setDetectFaceMaxNum(faceMaxNum);
        engineConfiguration.setDetectFaceScaleVal(faceScaleVal);
        engineConfiguration.setFunctionConfiguration(functionConfiguration1);

        int errorCode = faceEngine.init(engineConfiguration);
        if (errorCode != ErrorInfo.MOK.getValue()) {
            logger.error("Engine initialization failed! Error code: " + errorCode);
        } else {
            logger.info("Engine initialized successfully!");
        }
    }

    public static boolean faceDetection1(ImageInfo imageInfo, List<FaceInfo> faceInfoList) {
        int errorCode = faceEngine.detectFaces(imageInfo.getImageData(),
                imageInfo.getWidth(), imageInfo.getHeight(), imageInfo.getImageFormat(),
                faceInfoList);
        if (errorCode != ErrorInfo.MOK.getValue()) {
            logger.error("Face detection failed! Error code: " + errorCode);
            return false;
        } else {
            logger.info("Face detection succeeded!");
            return true;
        }
    }
    public static boolean faceDetection2(ImageInfo imageInfo, List<FaceInfo> faceInfoList) {
        ImageInfoEx imageInfoEx = new ImageInfoEx();
        imageInfoEx.setHeight(imageInfo.getHeight());
        imageInfoEx.setWidth(imageInfo.getWidth());
        imageInfoEx.setImageFormat(imageInfo.getImageFormat());
        imageInfoEx.setImageDataPlanes(new byte[][]{imageInfo.getImageData()});
        imageInfoEx.setImageStrides(new int[]{imageInfo.getWidth() * 3});
        int errorCode = faceEngine.detectFaces(imageInfoEx,
                DetectModel.ASF_DETECT_MODEL_RGB, faceInfoList);
        if (errorCode != ErrorInfo.MOK.getValue()) {
            logger.error("Face detection failed! Error code: " + errorCode);
            return false;
        } else {
            logger.info("Face detection succeeded!");
            return true;
        }
    }

    public static byte[] extractFaceFeature(ImageInfo imageInfo) {
        try {
            List<FaceInfo> faceInfoList = new ArrayList<FaceInfo>();
            faceEngine.detectFaces(imageInfo.getImageData(), imageInfo.getWidth(), imageInfo.getHeight(), imageInfo.getImageFormat(), faceInfoList);
            if (faceInfoList.size() > 0) {
                FaceFeature faceFeature = new FaceFeature();
                faceEngine.extractFaceFeature(imageInfo.getImageData(), imageInfo.getWidth(), imageInfo.getHeight(), imageInfo.getImageFormat(), faceInfoList.get(0), faceFeature);
                return faceFeature.getFeatureData();
            }
        } catch (Exception e) {
            logger.error("", e);
        }
        return null;
    }
    public static FaceFeature faceFeatureExtraction1(ImageInfo imageInfo, FaceInfo faceInfo) {
        FaceFeature faceFeature = new FaceFeature();
        int errorCode = faceEngine.extractFaceFeature(imageInfo.getImageData(),
                imageInfo.getWidth(), imageInfo.getHeight(), imageInfo.getImageFormat(),
                faceInfo, faceFeature);
        if (errorCode != ErrorInfo.MOK.getValue()) {
            logger.error("Face feature extraction failed! Error code: " + errorCode);
            return null;
        } else {
            logger.info("Face feature extraction succeeded!");
            return faceFeature;
        }
    }

    public static FaceFeature faceFeatureExtraction2(ImageInfo imageInfo, FaceInfo faceInfo) {
        ImageInfoEx imageInfoEx = new ImageInfoEx();
        imageInfoEx.setHeight(imageInfo.getHeight());
        imageInfoEx.setWidth(imageInfo.getWidth());
        imageInfoEx.setImageFormat(imageInfo.getImageFormat());
        imageInfoEx.setImageDataPlanes(new byte[][]{imageInfo.getImageData()});
        imageInfoEx.setImageStrides(new int[]{imageInfo.getWidth() * 3});

        FaceFeature faceFeature = new FaceFeature();
        int errorCode = faceEngine.extractFaceFeature(imageInfoEx, faceInfo, faceFeature);
        if (errorCode != ErrorInfo.MOK.getValue()) {
            logger.error("Face feature extraction failed! Error code: " + errorCode);
            return null;
        } else {
            logger.info("Face feature extraction succeeded!");
            return faceFeature;
        }
    }

    public static Float faceFeatureComparison(FaceFeature targetFaceFeature, FaceFeature sourceFaceFeature, CompareModel compareModel) {
        FaceSimilar faceSimilar = new FaceSimilar();
        int errorCode = faceEngine.compareFaceFeature(targetFaceFeature, sourceFaceFeature, compareModel, faceSimilar);
        if (errorCode != ErrorInfo.MOK.getValue()) {
            logger.error("Face feature comparison failed! Error code: " + errorCode);
            return null;
        } else {
            logger.info("Face feature comparison succeeded!");
            return faceSimilar.getScore();
        }
    }

    public static Float faceFeatureComparison(FaceFeature targetFaceFeature, FaceFeature sourceFaceFeature) {
        FaceSimilar faceSimilar = new FaceSimilar();
        int errorCode = faceEngine.compareFaceFeature(targetFaceFeature, sourceFaceFeature,
                faceSimilar);
        if (errorCode != ErrorInfo.MOK.getValue()) {
            logger.error("Face feature comparison failed! Error code: " + errorCode);
            return null;
        } else {
            logger.info("Face feature comparison succeeded!");
            return faceSimilar.getScore();
        }
    }
    public static boolean faceAttributeDetection1(ImageInfo imageInfo, List<FaceInfo> faceInfoList) {
        int errorCode = faceEngine.process(imageInfo.getImageData(), imageInfo.getWidth(),
                imageInfo.getHeight(), imageInfo.getImageFormat(), faceInfoList, functionConfiguration2);
        if (errorCode != ErrorInfo.MOK.getValue()) {
            logger.error("Face attribute detection failed! Error code: " + errorCode);
            return false;
        } else {
            logger.info("Face attribute detection succeeded!");
            return true;
        }
    }

    public static boolean faceAttributeDetection2(ImageInfo imageInfo, List<FaceInfo> faceInfoList) {
        ImageInfoEx imageInfoEx = new ImageInfoEx();
        imageInfoEx.setHeight(imageInfo.getHeight());
        imageInfoEx.setWidth(imageInfo.getWidth());
        imageInfoEx.setImageFormat(imageInfo.getImageFormat());
        imageInfoEx.setImageDataPlanes(new byte[][]{imageInfo.getImageData()});
        imageInfoEx.setImageStrides(new int[]{imageInfo.getWidth() * 3});
        int errorCode = faceEngine.process(imageInfoEx, faceInfoList,
                functionConfiguration2);
        if (errorCode != ErrorInfo.MOK.getValue()) {
            logger.error("Face attribute detection failed! Error code: " + errorCode);
            return false;
        } else {
            logger.info("Face attribute detection succeeded!");
            return true;
        }
    }
    public static boolean getAgeInfo(List<AgeInfo> ageInfoList) {
        int errorCode = faceEngine.getAge(ageInfoList);
        if (errorCode != ErrorInfo.MOK.getValue()) {
            logger.error("Failed to get age info! Error code: " + errorCode);
            return false;
        } else {
            logger.info("Age info retrieved successfully!");
            return true;
        }
    }

    public static boolean getGender(List<GenderInfo> genderInfoList) {
        int errorCode = faceEngine.getGender(genderInfoList);
        if (errorCode != ErrorInfo.MOK.getValue()) {
            logger.error("Failed to get gender! Error code: " + errorCode);
            return false;
        } else {
            logger.info("Gender retrieved successfully!");
            return true;
        }
    }

    public static boolean getFace3DAngle(List<Face3DAngle> face3DAngleList) {
        int errorCode = faceEngine.getFace3DAngle(face3DAngleList);
        if (errorCode != ErrorInfo.MOK.getValue()) {
            logger.error("Failed to get face 3D angles! Error code: " + errorCode);
            return false;
        } else {
            logger.info("Face 3D angles retrieved successfully!");
            return true;
        }
    }

    public static boolean getLiveness(List<LivenessInfo> livenessInfoList) {
        int errorCode = faceEngine.getLiveness(livenessInfoList);
        if (errorCode != ErrorInfo.MOK.getValue()) {
            logger.error("Failed to get RGB liveness info! Error code: " + errorCode);
            return false;
        } else {
            logger.info("RGB liveness info retrieved successfully!");
            return true;
        }
    }
    private static String base64Process(String base64Str) {
        if (!StringUtils.isEmpty(base64Str)) {
            // Strip the "data:image/...;base64," prefix produced by canvas.toDataURL()
            String photoBase64 = base64Str.substring(0, 30).toLowerCase();
            int indexOf = photoBase64.indexOf("base64,");
            if (indexOf > 0) {
                base64Str = base64Str.substring(indexOf + 7);
            }
            return base64Str;
        } else {
            return "";
        }
    }

    public static boolean detectionLiveness_IR1(String string) throws IOException {
        byte[] decode = Base64.decode(base64Process(string));
        BufferedImage bufImage = ImageIO.read(new ByteArrayInputStream(decode));
        ImageInfo imageInfoGray = ImageFactory.bufferedImage2ImageInfo(bufImage);

        List<FaceInfo> faceInfoListGray = new ArrayList<FaceInfo>();
        int errorCode1 = faceEngine.detectFaces(imageInfoGray.getImageData(),
                imageInfoGray.getWidth(), imageInfoGray.getHeight(),
                imageInfoGray.getImageFormat(), faceInfoListGray);

        FunctionConfiguration configuration = new FunctionConfiguration();
        configuration.setSupportIRLiveness(true);
        int errorCode2 = faceEngine.processIr(imageInfoGray.getImageData(),
                imageInfoGray.getWidth(), imageInfoGray.getHeight(),
                imageInfoGray.getImageFormat(), faceInfoListGray, configuration);
        if (errorCode1 != ErrorInfo.MOK.getValue() || errorCode2 != ErrorInfo.MOK.getValue()) {
            String errorCode = errorCode1 == 0 ? errorCode2 + "" : errorCode1 + "";
            logger.error("IR liveness detection failed! Error code: " + errorCode);
            return false;
        } else {
            logger.info("IR liveness detection succeeded!");
            return true;
        }
    }
    public static boolean detectionLiveness_IR2(ImageInfo imageInfo) {
        ImageInfoEx imageInfoEx = new ImageInfoEx();
        imageInfoEx.setHeight(imageInfo.getHeight());
        imageInfoEx.setWidth(imageInfo.getWidth());
        imageInfoEx.setImageFormat(imageInfo.getImageFormat());
        imageInfoEx.setImageDataPlanes(new byte[][]{imageInfo.getImageData()});
        imageInfoEx.setImageStrides(new int[]{imageInfo.getWidth() * 3});
        List<FaceInfo> faceInfoList1 = new ArrayList<>();
        int errorCode1 = faceEngine.detectFaces(imageInfoEx,
                DetectModel.ASF_DETECT_MODEL_RGB, faceInfoList1);
        FunctionConfiguration fun = new FunctionConfiguration();
        fun.setSupportIRLiveness(true);
        int errorCode2 = faceEngine.processIr(imageInfoEx, faceInfoList1,
                fun);
        if (errorCode1 != ErrorInfo.MOK.getValue() || errorCode2 != ErrorInfo.MOK.getValue()) {
            String errorCode = errorCode1 == 0 ? errorCode2 + "" : errorCode1 + "";
            logger.error("IR liveness detection failed! Error code: " + errorCode);
            return false;
        } else {
            logger.info("IR liveness detection succeeded!");
            return true;
        }
    }

    public static boolean getIrLiveness(List<IrLivenessInfo> irLivenessInfo) {
        int errorCode = faceEngine.getLivenessIr(irLivenessInfo);
        if (errorCode != ErrorInfo.MOK.getValue()) {
            logger.error("Failed to get IR liveness info! Error code: " + errorCode);
            return false;
        } else {
            logger.info("IR liveness info retrieved successfully!");
            return true;
        }
    }

    public static void destroyTheSDKEngine() {
        int errorCode = faceEngine.unInit();
        if (errorCode != ErrorInfo.MOK.getValue()) {
            logger.error("Failed to destroy the SDK engine! Error code: " + errorCode);
        } else {
            logger.info("SDK engine destroyed successfully!");
        }
    }
}
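The base64Process helper above strips the `data:image/png;base64,` prefix that the browser's `canvas.toDataURL()` produces, because the base64 decoder only accepts the raw payload. A standalone sketch of the same idea using only the JDK's `java.util.Base64` (class and method names here are mine, for illustration):

```java
import java.util.Base64;

// Standalone sketch of the prefix-stripping done by base64Process():
// canvas.toDataURL() yields "data:image/png;base64,<data>", and the
// decoder needs only the <data> part after "base64,".
public class DataUrl {
    static String stripPrefix(String dataUrl) {
        int idx = dataUrl.indexOf("base64,");
        return idx >= 0 ? dataUrl.substring(idx + "base64,".length()) : dataUrl;
    }

    public static void main(String[] args) {
        String dataUrl = "data:image/png;base64," +
                Base64.getEncoder().encodeToString("hello".getBytes());
        byte[] decoded = Base64.getDecoder().decode(stripPrefix(dataUrl));
        System.out.println(new String(decoded)); // hello
    }
}
```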

 

2. Create ErrorCodeEnum

 

package top.yangbuyi.constant;
public enum ErrorCodeEnum {
MOK(0, ""),
UNKNOWN(1, "unknown error"),
INVALID_PARAM(2, "invalid parameter"),
UNSUPPORTED(3, "engine does not support this capability"),
NO_MEMORY(4, "out of memory"),
BAD_STATE(5, "invalid state"),
USER_CANCEL(6, "operation cancelled by user"),
EXPIRED(7, "operation timed out"),
USER_PAUSE(8, "operation paused by user"),
BUFFER_OVERFLOW(9, "buffer overflow"),
BUFFER_UNDERFLOW(10, "buffer underflow"),
NO_DISKSPACE(11, "insufficient disk space"),
COMPONENT_NOT_EXIST(12, "component does not exist"),
GLOBAL_DATA_NOT_EXIST(13, "global data does not exist"),
NO_FACE_DETECTED(14, "no face detected"),
FACE_DOES_NOT_MATCH(15, "faces do not match"),
INVALID_APP_ID(28673, "invalid AppId"),
INVALID_SDK_ID(28674, "invalid SdkKey"),
INVALID_ID_PAIR(28675, "AppId and SdkKey do not match"),
MISMATCH_ID_AND_SDK(28676, "SdkKey does not match the SDK in use"),
SYSTEM_VERSION_UNSUPPORTED(28677, "system version not supported by this SDK"),
LICENCE_EXPIRED(28678, "SDK licence expired; download an update"),
APS_ENGINE_HANDLE(69633, "invalid engine handle"),
APS_MEMMGR_HANDLE(69634, "invalid memory handle"),
APS_DEVICEID_INVALID(69635, "invalid Device ID"),
APS_DEVICEID_UNSUPPORTED(69636, "Device ID not supported"),
APS_MODEL_HANDLE(69637, "invalid model data pointer"),
APS_MODEL_SIZE(69638, "invalid model data length"),
APS_IMAGE_HANDLE(69639, "invalid image struct pointer"),
APS_IMAGE_FORMAT_UNSUPPORTED(69640, "image format not supported"),
APS_IMAGE_PARAM(69641, "invalid image parameter"),
APS_IMAGE_SIZE(69642, "image size out of supported range"),
APS_DEVICE_AVX2_UNSUPPORTED(69643, "processor does not support AVX2 instructions"),
FR_INVALID_MEMORY_INFO(73729, "invalid input memory"),
FR_INVALID_IMAGE_INFO(73730, "invalid input image parameter"),
FR_INVALID_FACE_INFO(73731, "invalid face info"),
FR_NO_GPU_AVAILABLE(73732, "no GPU available on this device"),
FR_MISMATCHED_FEATURE_LEVEL(73733, "the two face features being compared have different versions"),
FACEFEATURE_UNKNOWN(81921, "unknown face feature detection error"),
FACEFEATURE_MEMORY(81922, "face feature detection memory error"),
FACEFEATURE_INVALID_FORMAT(81923, "face feature detection format error"),
FACEFEATURE_INVALID_PARAM(81924, "face feature detection parameter error"),
FACEFEATURE_LOW_CONFIDENCE_LEVEL(81925, "low confidence in face feature detection result"),
ASF_EX_BASE_FEATURE_UNSUPPORTED_ON_INIT(86017, "detection attribute not supported by the engine"),
ASF_EX_BASE_FEATURE_UNINITED(86018, "attribute to detect was not initialized"),
ASF_EX_BASE_FEATURE_UNPROCESSED(86019, "attribute to fetch was not handled in process()"),
ASF_EX_BASE_FEATURE_UNSUPPORTED_ON_PROCESS(86020, "attribute not supported by process(); e.g. FR has its own dedicated function"),
ASF_EX_BASE_INVALID_IMAGE_INFO(86021, "invalid input image"),
ASF_EX_BASE_INVALID_FACE_INFO(86022, "invalid face info"),
ASF_BASE_ACTIVATION_FAIL(90113, "face SDK activation failed; check read/write permissions"),
ASF_BASE_ALREADY_ACTIVATED(90114, "face SDK already activated"),
ASF_BASE_NOT_ACTIVATED(90115, "face SDK not activated"),
ASF_BASE_SCALE_NOT_SUPPORT(90116, "detectFaceScaleVal not supported"),
ASF_BASE_VERION_MISMATCH(90117, "SDK version mismatch"),
ASF_BASE_DEVICE_MISMATCH(90118, "device mismatch"),
ASF_BASE_UNIQUE_IDENTIFIER_MISMATCH(90119, "unique identifier mismatch"),
ASF_BASE_PARAM_NULL(90120, "parameter is null"),
ASF_BASE_SDK_EXPIRED(90121, "SDK expired"),
ASF_BASE_VERSION_NOT_SUPPORT(90122, "version not supported"),
ASF_BASE_SIGN_ERROR(90123, "signature error"),
ASF_BASE_DATABASE_ERROR(90124, "database insert error"),
ASF_BASE_UNIQUE_CHECKOUT_FAIL(90125, "unique identifier check failed"),
ASF_BASE_COLOR_SPACE_NOT_SUPPORT(90126, "input color space not supported"),
ASF_BASE_IMAGE_WIDTH_NOT_SUPPORT(90127, "input image byte length incorrect"),
ASF_NETWORK_BASE_COULDNT_RESOLVE_HOST(94209, "could not resolve host"),
ASF_NETWORK_BASE_COULDNT_CONNECT_SERVER(94210, "could not connect to server"),
ASF_NETWORK_BASE_CONNECT_TIMEOUT(94211, "network connection timed out"),
ASF_NETWORK_BASE_UNKNOWN_ERROR(94212, "unknown error");
private Integer code;
private String description;
ErrorCodeEnum(Integer code, String description) {
this.code = code;
this.description = description;
}
public Integer getCode() {
return code;
}
public void setCode(Integer code) {
this.code = code;
}
public String getDescription() {
return description;
}
public void setDescription(String description) {
this.description = description;
}
public static ErrorCodeEnum getDescriptionByCode(Integer code) {
for (ErrorCodeEnum errorCodeEnum : ErrorCodeEnum.values()) {
if (code.equals(errorCodeEnum.getCode())) {
return errorCodeEnum;
}
}
return ErrorCodeEnum.UNKNOWN;
}
}
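Note that getDescriptionByCode scans every enum constant on each call. Since error handling may run on every frame, a map built once from code to constant is a common alternative. A self-contained sketch of the pattern, using a trimmed two-constant enum as a stand-in for the full ErrorCodeEnum:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of an O(1) code-to-enum lookup. The two constants are a trimmed
// stand-in for the full ErrorCodeEnum; the static index map is built once
// instead of scanning values() on every lookup.
public class CodeLookup {
    enum Err {
        MOK(0, "success"),
        UNKNOWN(1, "unknown error");

        final int code;
        final String description;

        Err(int code, String description) {
            this.code = code;
            this.description = description;
        }
    }

    private static final Map<Integer, Err> BY_CODE = new HashMap<>();
    static {
        for (Err e : Err.values()) {
            BY_CODE.put(e.code, e);
        }
    }

    static Err byCode(int code) {
        return BY_CODE.getOrDefault(code, Err.UNKNOWN);
    }

    public static void main(String[] args) {
        System.out.println(byCode(0));   // MOK
        System.out.println(byCode(999)); // UNKNOWN
    }
}
```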

 

3. Create ArcFaceController

 

package top.yangbuyi.controller;
import com.arcsoft.face.*;
import com.arcsoft.face.enums.DetectMode;
import com.arcsoft.face.enums.DetectOrient;
import com.arcsoft.face.toolkit.ImageFactory;
import com.arcsoft.face.toolkit.ImageInfo;
import com.sun.org.apache.xerces.internal.impl.dv.util.Base64;
import lombok.extern.slf4j.Slf4j;
import org.springframework.util.StringUtils;
import org.springframework.web.bind.annotation.*;
import top.yangbuyi.constant.ErrorCodeEnum;
import top.yangbuyi.utils.FaceRecognitionUtils;
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
@RestController
@Slf4j
@RequestMapping("arcFace")
public class ArcFaceController {
    @RequestMapping(value = "arcFaceSearch", method = RequestMethod.POST)
    public Map arcFaceSearch(@RequestParam String url, @RequestParam Integer oId, @RequestParam Integer uid) {
        String urlTemp = url;
        final HashMap<String, Object> stringObjectHashMap = new HashMap<>(14);
        stringObjectHashMap.put("success", false);

        // IMAGE mode, 0-degree orientation only, up to 10 faces
        FaceRecognitionUtils.InitializeTheEngine(DetectMode.ASF_DETECT_MODE_IMAGE, DetectOrient.ASF_OP_0_ONLY, 10, 32);
        if (!StringUtils.isEmpty(url)) {
            // Strip the "data:image/...;base64," prefix from canvas.toDataURL()
            String photoBase64 = url.substring(0, 30).toLowerCase();
            int indexOf = photoBase64.indexOf("base64,");
            if (indexOf > 0) {
                url = url.substring(indexOf + 7);
            }

            byte[] decode = Base64.decode(url);
            BufferedImage bufImage = null;
            try {
                bufImage = ImageIO.read(new ByteArrayInputStream(decode));
            } catch (IOException e) {
                e.printStackTrace();
                return stringObjectHashMap;
            }

            ImageInfo imageInfo = ImageFactory.bufferedImage2ImageInfo(bufImage);
            byte[] bytes = FaceRecognitionUtils.extractFaceFeature(imageInfo);
            if (bytes == null) {
                log.info(ErrorCodeEnum.NO_FACE_DETECTED.getDescription());
                stringObjectHashMap.put("msg", ErrorCodeEnum.NO_FACE_DETECTED.getDescription());
                return stringObjectHashMap;
            }

            List<FaceInfo> faceInfoList1 = new ArrayList<>();
            FaceRecognitionUtils.faceDetection1(imageInfo, faceInfoList1);
            FaceRecognitionUtils.faceAttributeDetection1(imageInfo, faceInfoList1);

            // Age of each face in the image
            List<AgeInfo> ageInfoList1 = new ArrayList<>();
            FaceRecognitionUtils.getAgeInfo(ageInfoList1);
            if (ageInfoList1.size() > 0) {
                stringObjectHashMap.put("age", ageInfoList1.get(0).getAge());
            }

            // Gender of each face in the image
            List<GenderInfo> genderInfoList1 = new ArrayList<>();
            FaceRecognitionUtils.getGender(genderInfoList1);
            if (genderInfoList1.size() > 0) {
                stringObjectHashMap.put("gender", genderInfoList1.get(0).getGender() == 0 ? "male" : "female");
            }

            // 3D angle info of each face in the image
            List<Face3DAngle> face3DAngleList1 = new ArrayList<>();
            FaceRecognitionUtils.getFace3DAngle(face3DAngleList1);
            if (face3DAngleList1.size() > 0) {
                List<Map<String, Object>> td = new ArrayList<>();
                Map<String, Object> map = new HashMap<>();
                map.put("pitch", face3DAngleList1.get(0).getPitch());
                map.put("roll", face3DAngleList1.get(0).getRoll());
                map.put("yaw", face3DAngleList1.get(0).getYaw());
                td.add(map);
                stringObjectHashMap.put("ThreeDimensional", td);
            }

            // RGB liveness info of each face in the image
            List<LivenessInfo> livenessInfoList1 = new ArrayList<>();
            FaceRecognitionUtils.getLiveness(livenessInfoList1);
            if (livenessInfoList1.size() > 0) {
                stringObjectHashMap.put("RgbLiveness", livenessInfoList1.get(0).getLiveness());
            }
            if (livenessInfoList1.size() > 0 && livenessInfoList1.get(0).getLiveness() == 1) {
                stringObjectHashMap.put("success", true);
                stringObjectHashMap.put("baseUrl", urlTemp);
            }
        } else {
            stringObjectHashMap.put("data", "url must not be empty");
        }
        return stringObjectHashMap;
    }
}
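The controller's decode path (base64 string to BufferedImage via ImageIO) can be exercised without the SDK. A JDK-only round-trip sketch (the class name and the in-memory test image are mine; in the real flow the base64 PNG comes from the browser canvas):

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.Base64;

// JDK-only sketch of the controller's decode step: a base64-encoded PNG
// (generated in memory here instead of posted from a browser canvas)
// is decoded back into a BufferedImage via ImageIO.
public class Base64ImageDecode {
    static BufferedImage decode(String base64Png) throws Exception {
        byte[] bytes = Base64.getDecoder().decode(base64Png);
        return ImageIO.read(new ByteArrayInputStream(bytes));
    }

    public static void main(String[] args) throws Exception {
        // Build a tiny image and encode it as a base64 PNG
        BufferedImage src = new BufferedImage(4, 4, BufferedImage.TYPE_INT_RGB);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(src, "png", out);
        String base64 = Base64.getEncoder().encodeToString(out.toByteArray());

        // Decode it back and check the dimensions survived the round trip
        BufferedImage decoded = decode(base64);
        System.out.println(decoded.getWidth() + "x" + decoded.getHeight()); // 4x4
    }
}
```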

 

4. Create RouteController to route to the front-end page

 

package top.yangbuyi.controller;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;

@Controller
public class RouteController {
    @GetMapping("/")
    public String yby() {
        return "index";
    }
}

 

III. Front-end face tracking plugin

 

Visit: https://trackingjs.com/

 

There are demos on the site you can browse; I won't walk through them here.

 

 

1. Create the front-end index.html

 

Please download the demo to get the JS files; the link is at the bottom.

 

<meta charset="UTF-8">
<title>Face Detection</title>
<script src="jquery-3.3.1.min.js"></script>
<script src="tracking.js"></script>
<script src="face-min.js"></script>
<script src="training/Landmarks.js"></script>
<script src="training/Regressor.js"></script>
<script src="stats.min.js"></script>
<!-- dat.GUI is required by the tracker controls at the end of the script; it ships with the tracking.js demo -->
<script src="dat.gui.min.js"></script>
<style>
        #regcoDiv {
            width: 100%;
            height: 530px;
            position: relative;
            background: #eee;
            overflow: hidden;
            border-bottom-right-radius: 10px;
            border-bottom-left-radius: 10px;
        }
        video, canvas {
            margin-left: 230px;
            position: absolute;
        }
        .className {
            -webkit-animation: twinkling 1s infinite ease-in-out
        }
        .animated {
            -webkit-animation-duration: 1s;
            animation-duration: 1s;
            -webkit-animation-fill-mode: both;
            animation-fill-mode: both
        }
        @-webkit-keyframes twinkling {
            0% {
                background: #eee;
            }
            35% {
                background: #08e800;
            }
            56% {
                background: #1f25d4;
            }
            100% {
                background: #eee;
            }
        }
        @keyframes twinkling {
            0% {
                background: #eee;
            }
            35% {
                background: #08e800;
            }
            56% {
                background: #1f25d4;
            }
            100% {
                background: #eee;
            }
        }
</style>
<div>
</div>
<div>
<table frame="void">
<tbody><tr>
<td>
<button title="Face recognition" value="Face recognition" onclick="getMedia2()">
Camera recognition
</button>
</td>
</tr>
<tr>
<td colspan="2">
<button onclick="chooseFileChangeComp()">
Submit
</button>
</td>
</tr>
</tbody></table>
</div>
<div>
<img src="">
</div>
<script>
    getMedia2()
    $("#imageDivComp").click(function () {
        $("#chooseFileComp").click();
    });
    var t1;

    function getMedia2() {
        $("#regcoDiv").empty();
        let videoComp = "<video id='video2' width='500px' height='500px' autoplay='autoplay' playsinline webkit-playsinline='true'></video><canvas id='canvas2' width='500px' height='500px'></canvas>";
        $("#regcoDiv").append(videoComp);
        let constraints = {
            video: {width: 500, height: 500},
            audio: false  // audio is not needed for face capture
        };

        let video = document.getElementById("video2");
        let promise = navigator.mediaDevices.getUserMedia(constraints);
        promise.then(function (MediaStream) {
            video.srcObject = MediaStream;
            video.play();
        });

        // Capture and submit a frame every 3 seconds
        t1 = window.setInterval(function () {
            chooseFileChangeComp()
        }, 3000)
    }
    function chooseFileChangeComp() {
        let regcoDivComp = $("#regcoDiv");
        if (regcoDivComp.has('video').length) {
            let video = document.getElementById("video2");
            let canvas = document.getElementById("canvas2");
            let ctx = canvas.getContext('2d');
            ctx.drawImage(video, 0, 0, 500, 500);
            var base64File = canvas.toDataURL();
            var formData = new FormData();
            formData.append("url", base64File);
            formData.append("oId", 1);
            formData.append("uid", 1);
            $.ajax({
                type: "post",
                url: "/arcFace/arcFaceSearch",
                data: formData,
                contentType: false,
                processData: false,
                async: false,
                success: function (text) {
                    if (text.success == true && text.RgbLiveness == 1) {
                        console.log(text);
                        clearInterval(t1);  // stop polling once a live face passes
                        console.log(text.baseUrl);
                    } else {
                        console.log(text);
                    }
                },
                error: function (error) {
                    alert(JSON.stringify(error))
                }
            });
        }
    }
        window.onload = function () {
        let video = document.getElementById("video2");
        let canvas = document.getElementById("canvas2");
        let context = canvas.getContext('2d');
        var tracker = new tracking.LandmarksTracker();
        tracker.setInitialScale(4);
        tracker.setStepSize(2);
        tracker.setEdgesDensity(0.1);
        tracking.track(video, tracker);
        tracker.on('track', function (event) {
            context.clearRect(0, 0, canvas.width, canvas.height);
            if (!event.data) return;

            event.data.faces.forEach(function (rect) {
                context.strokeStyle = '#eb4c4c';
                context.strokeRect(rect.x, rect.y, rect.width, rect.height);
                context.font = '16px Helvetica';
                context.fillStyle = "#000";
                context.lineWidth = '5';
                context.fillText('face x: ' + rect.x + 'px', rect.x + rect.width + 5, rect.y + 11);
                context.fillText('face y: ' + rect.y + 'px', rect.x + rect.width + 5, rect.y + 50);
            });
                        event.data.landmarks.forEach(function (landmarks) {
                for (var l in landmarks) {
                    context.beginPath();
                    context.fillStyle = "#fff";
                    context.arc(landmarks[l][0], landmarks[l][1], 1, 0, 2 * Math.PI);
                    context.fill();
                }
            });
        });

        var gui = new dat.GUI();
        gui.add(tracker, 'edgesDensity', 0.1, 0.5).step(0.01).listen();
        gui.add(tracker, 'initialScale', 1.0, 10.0).step(0.1).listen();
        gui.add(tracker, 'stepSize', 1, 5).step(0.1).listen();
    };
</script>

 

6. Start the project and visit http://localhost:7000/

 

IV. That's it for face recognition and tracking. The full code has been pushed to Gitee; go there to get the Java project ArcFace.

 

Click here to get the demo
