#### A. **Frontend: Capture Selfie**
The frontend will handle capturing the selfie from the user's webcam and sending it to the backend for analysis.

## Step 1: **Set Up Webcam Access**
- Use the `getUserMedia()` API to access the webcam. This will allow users to capture their selfies in real-time.

```javascript
// Request the user's webcam and stream it into the <video> element
async function startWebcam() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.querySelector('#videoElement');
  video.srcObject = stream;
}

startWebcam();
```
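
Note that `getUserMedia()` returns a promise that rejects if the user blocks camera access or no camera is available, so in practice you may want to wrap the call in a `try/catch`. A minimal sketch of such a variant (the error handling shown here is an addition, not part of the flow above):

```javascript
async function startWebcamSafely() {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    document.querySelector('#videoElement').srcObject = stream;
  } catch (err) {
    // NotAllowedError: user denied permission; NotFoundError: no camera available
    console.error('Could not start webcam:', err);
  }
}
```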

Within the Flutter app, add the video and canvas elements to the page's HTML:

```html
<video id="videoElement" autoplay></video>
<canvas id="canvasElement" style="display:none;"></canvas>
```

## Step 2: **Capture the Selfie**

```javascript
// Draw the current video frame onto the hidden canvas and export it as an image
function captureSelfie() {
  const video = document.querySelector('#videoElement');
  const canvas = document.querySelector('#canvasElement');
  const context = canvas.getContext('2d');
  const width = video.videoWidth;
  const height = video.videoHeight;

  canvas.width = width;
  canvas.height = height;
  context.drawImage(video, 0, 0, width, height);

  return canvas.toDataURL('image/png'); // Returns the selfie as a base64-encoded image
}
```
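
As mentioned in the introduction, the captured selfie is then sent to the backend for analysis. A minimal sketch of that upload using `fetch`, assuming a hypothetical `/api/selfie` endpoint that accepts a JSON payload (the endpoint name and payload shape are assumptions, not defined anywhere above):

```javascript
async function sendSelfieToBackend() {
  const selfie = captureSelfie(); // base64-encoded data URL from Step 2

  const response = await fetch('/api/selfie', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ image: selfie }),
  });

  return response.json(); // the backend's analysis result
}
```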

## Step 3: **Incorporate the ML Models for the Selfie**

1. Add the face-api.js script to the HTML first:

```html
<script src="https://unpkg.com/face-api.js"></script>
```

2. Then load and use the models in your JavaScript code.

**This is the part to research and improve on top of the existing process.**

Function for loading the models:

```javascript
async function loadFaceAPI() {
  await faceapi.nets.ssdMobilenetv1.loadFromUri('/models');
  await faceapi.nets.faceLandmark68Net.loadFromUri('/models');
  await faceapi.nets.faceRecognitionNet.loadFromUri('/models');
}
```

Function for detecting a face in the captured selfie and computing its descriptor (the descriptor is what gets compared later):

```javascript
async function detectFace(image) {
  const detections = await faceapi.detectSingleFace(image)
    .withFaceLandmarks()
    .withFaceDescriptor();
  return detections;
}
```
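
`detectFace()` only detects a face and computes its descriptor; the actual comparison comes down to measuring the distance between two descriptors. A minimal sketch using face-api.js's `euclideanDistance` helper, where the `referenceImage` parameter and the 0.6 threshold are assumptions rather than part of the existing process:

```javascript
async function compareFaces(selfieImage, referenceImage) {
  const selfie = await detectFace(selfieImage);
  const reference = await detectFace(referenceImage);
  if (!selfie || !reference) return false; // no face detected in one of the images

  // Lower distance means more similar faces; ~0.6 is a commonly used cutoff
  const distance = faceapi.euclideanDistance(selfie.descriptor, reference.descriptor);
  return distance < 0.6;
}
```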

Remember to keep the face-api.js model files in a separate `/models` folder.

You will need to serve the face recognition model weights (for `ssdMobilenetv1`, `faceLandmark68Net`, and `faceRecognitionNet`) from a server or the local file system for `face-api.js` to work properly.
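
One possible way to wire these pieces together is to load the models once, start the webcam, and then run detection on each captured selfie. This is only a sketch of the flow, reusing the functions defined above:

```javascript
async function init() {
  await loadFaceAPI();  // the models must finish loading before any detection call
  await startWebcam();
}

async function verifySelfie() {
  captureSelfie(); // draws the current video frame onto the hidden canvas

  // face-api.js accepts a canvas element directly as detection input
  const canvas = document.querySelector('#canvasElement');
  return detectFace(canvas); // undefined if no face was found
}

init();
```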