Mentor: Alexandru Eugen Ichim
Hello World! I have just finished my bachelor studies in Computer Science at Jacobs University Bremen.
In this phase, the focus was to integrate the pcl::KinfuTracker class and the OpenCV libraries in order to provide better input clouds for the Rigid/Non-Rigid Registration methods and thus to obtain better results.
By using the pcl::KinfuTracker class, it was possible to obtain an almost complete scan of the subjects' heads, like the ones presented below:
Using pcl::KinfuTracker had the disadvantage that unwanted objects were also scanned during the procedure. However, by using cv::CascadeClassifier, the program was able to pinpoint the position of the head and move the statistical model to a favorable position so that the Rigid Registration method could fully align the model. (The sphere represents the center of the face.)
Obtaining better input point clouds made it possible to better analyze the efficiency of these types of registration. As stated in the previous post, the accuracy of the result depends on the regularizing weight of the Non-Rigid Registration; however, there is one more parameter to take into account. The training set of face meshes was registered at a scale ten times smaller than the one at which the PCL point clouds are stored. Intuitively, this means that when the program reads the database it should scale the values down by a factor of ten; however, the subjects used for testing this program did not all have heads of exactly the same size.
Below is an example of what happens when the scale is not set correctly:
This result was obtained because, as shown in the following picture, the chin of the model was matched with the neck of the target, even though the rest of the face appears to be in position:
Once the scale was properly established, the correct result was obtained:
- A method should be implemented so that it is not necessary to manually set the right scale, no matter what kind of person is being scanned (child/adult)
- The pcl::KinfuTracker class has not been officially released, so further maintenance of the program is required
- The registration methods presented so far have to be compared to other methods, such as the ones already implemented in PCL
In the previous phase, we presented how to obtain a statistical model from a set of face meshes. The next step in our project is to “match” the mean face of the database with the face of a random person, like the one in the picture below:
The matching is done by alternately applying the following methods.
This method is very similar to the Iterative Closest Point (ICP) algorithm: the goal is to estimate a rotation matrix and a translation vector that move the average face to an optimal position, near the face in the Kinect scan. Essentially, we need to minimize the alignment error between corresponding points, and this is done by computing the solution of the system in the least-squares sense. To calculate this solution, the system is first linearized using the Jacobian matrix.
Of course, this process is applied iteratively; a few stages of positioning the model over the scan are presented below:
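The post uses a Jacobian-based linearization; as an illustrative alternative, a single correspondence-based alignment step can also be solved in closed form with the Kabsch algorithm. Below is a minimal NumPy sketch, not the project's PCL/C++ implementation; the helper name `rigid_align` and the assumption of already-known point correspondences are hypothetical:

```python
import numpy as np

def rigid_align(source, target):
    """Estimate the rotation R and translation t minimizing
    sum_i ||R @ source[i] + t - target[i]||^2 (Kabsch algorithm)."""
    mu_s = source.mean(axis=0)
    mu_t = target.mean(axis=0)
    # 3x3 cross-covariance of the centered point sets
    H = (source - mu_s).T @ (target - mu_t)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_s
    return R, t
```

In practice the correspondences come from a nearest-neighbor search, so this step is repeated until the error stops decreasing, matching the iterative process described above.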
Once the model is roughly aligned, we need to modify the shape of the model to match the face from the scan. For this we make use of the eigenvectors computed in the previous phase and we calculate the optimal solution of the system E·a = S, where E is the matrix of eigenvectors, S is the current form of the model, and a is the vector of basis coefficients that need to be determined.
However, there is one more constraint to be applied, and that is to minimize the sum Σᵢ aᵢ²/λᵢ, where λᵢ is the eigenvalue of the corresponding eigenvector. Therefore, to the Jacobian matrix of this system we need to add a diagonal matrix with 1/λᵢ on the diagonal, multiplied by a certain weight.
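Assuming the regularizer is the eigenvalue-weighted sum of squared coefficients described above, the regularized fit can be written in normal-equation form and sketched in NumPy; the symbols mirror the description (E is the eigenvector matrix, λᵢ the eigenvalues), but the helper name and the exact formulation are illustrative assumptions, not the project's code:

```python
import numpy as np

def fit_coefficients(E, eigvals, mean_face, target, weight):
    """Regularized least squares:
    minimize ||E @ a - (target - mean_face)||^2 + weight * sum(a_i^2 / lambda_i),
    i.e. solve (E^T E + weight * diag(1/lambda_i)) a = E^T (target - mean_face)."""
    A = E.T @ E + weight * np.diag(1.0 / eigvals)
    b = E.T @ (target - mean_face)
    return np.linalg.solve(A, b)
```

With weight = 0 this reduces to the plain least-squares fit; larger weights shrink the coefficients toward zero, i.e. toward the mean face.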
The purpose of this regularization is to determine to what degree the face should be deformed. The eigenvectors are stored in the matrix E in decreasing order of their eigenvalues, and their position in this ordering determines whether they have a greater or smaller influence on the shaping of the model. When the model mostly overlaps with the face in the scan, more information can be drawn about the final figure, hence the weight specified above should be smaller. On the other hand, if the model is not yet aligned with the scan, the deformation should be smaller and thus the weight should be bigger. Below you can see how the model looks for several values of the weight:
Notice that the shaping process tends to produce the same result once the weight of the regularizing constraint exceeds a certain value.
As mentioned above, these functions are applied alternately for a number of iterations, and the following results were obtained:
The above picture was obtained after one iteration and the following one after 10:
Also, below you can observe the precision of this method: the black figure represents the final version of the model and the green one the point cloud of the face:
The goal of this project is to implement a program that will modify the expressions of several scanned faces according to the facial expressions captured by a RGBD camera.
The first step is to create a statistical model based on a training database of faces. The training set used so far was the one provided by the FaceWarehouse project and it consisted of 3D meshes stored in .obj files. For further information, please consult the following link: http://gaps-zju.org/facewarehouse/
For each face in the training set, a column vector Sᵢ was created, containing the coordinates of every vertex of the mesh. Afterwards, the average vector S̄ and the covariance matrix were calculated. Normally, the covariance matrix would be calculated as C = T·Tᵀ, where T is the matrix formed by joining the centered vectors Sᵢ − S̄; however, one should note that this matrix is 34530 by 34530, and in order to compute the statistical model only the most significant eigenvectors are required. To speed up the calculations, the eigenvectors of the much smaller matrix Tᵀ·T were calculated instead. It is important to note that the size of Tᵀ·T is determined by the number of faces, and that the eigenvectors of the covariance matrix can be obtained by left-multiplying T to the eigenvectors of Tᵀ·T. Once the eigenvectors eᵢ are calculated, the statistical model is obtained according to the formula S = S̄ + Σᵢ aᵢ·eᵢ, where aᵢ is the weight of eigenvector eᵢ, determined by multiplying a random number in the range [-2,2] with the corresponding eigenvalue. The final results of this phase are presented below. The average face is:
And the model is:
As you can see, the model obtained is a bit flattened compared to the mean face. That is because the majority of the faces in the training set are a bit rounded; however, this project needs a model that takes several types of faces into consideration, which is why we need to consider the covariance of the samples in the database.
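The eigenvector trick described above (decomposing the small Tᵀ·T instead of the full covariance, then left-multiplying by T) can be sketched in NumPy. This is an illustration under the stated assumptions, not the project's implementation, and the function names are hypothetical:

```python
import numpy as np

def build_statistical_model(S):
    """S: (3n, m) matrix whose m columns are the stacked vertex
    coordinates of the training faces."""
    mean = S.mean(axis=1, keepdims=True)
    T = S - mean                          # centered training data
    # Eigen-decompose the small m x m matrix T^T T instead of the
    # huge (3n x 3n) covariance matrix T T^T.
    evals, V = np.linalg.eigh(T.T @ T)
    order = np.argsort(evals)[::-1]       # sort by decreasing eigenvalue
    evals, V = evals[order], V[:, order]
    E = T @ V                             # eigenvectors of the covariance
    # Drop the near-zero mode lost to mean-centering, then normalize.
    evals, E = evals[:-1], E[:, :-1]
    E /= np.linalg.norm(E, axis=0)
    return mean, E, evals

def sample_face(mean, E, evals, rng):
    """Random face: mean plus eigenvectors weighted by a random
    number in [-2, 2] times the corresponding eigenvalue (as in the post)."""
    a = rng.uniform(-2.0, 2.0, size=evals.shape) * evals
    return mean[:, 0] + E @ a
```

The nonzero eigenvalues of Tᵀ·T equal those of T·Tᵀ, which is what makes the small decomposition sufficient.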
- For this model, only the vertices of the faces were used; however, the texture coordinates also need to be taken into consideration. Unfortunately, the database does not provide any information about the colors as of yet. Once the data is available, the model needs to be adapted for this feature
- Once the statistical model is fully configured, a 3D registration algorithm must be applied to project the facial expression of a testing sample to the model.
V. Blanz and T. Vetter, “A Morphable Model for the Synthesis of 3D Faces,” Max-Planck-Institut für biologische Kybernetik, Tübingen, Germany.