This short paper presents a novel approach to digitizing scalp shape by combining a scalp probing rig with 3D head scanning.
Acquiring the true scalp shape under hair, especially for females and other individuals with substantial hair, has long been a challenge for anthropologists, digital human modelers, and product designers. Recruiting bald-headed subjects or requiring subjects to shave their heads is not always a viable option. The most feasible way to capture scalp shape under hair is through physical probing or digitizing. The use of mechanical probes to obtain scalp shape under hair can be traced back to the US Army’s Personal Armor System for Ground Troops (PASGT) helmet project, which used a physical probing device to read the distance from the device’s spherical surface to the scalp surface. More recently, the US Air Force collected female scalp shapes using a FARO Arm digitizer.
A probing process that uses physical probes or a digitizer typically requires a subject to sit still for a considerable time, which is difficult and uncomfortable and became an even greater challenge under Covid safety restrictions. To improve efficiency and subject acceptance, we developed a scalp probing rig with 54 adjustable probes that a subject can wear while the probes are fitted. After all probes were adjusted to lightly touch the subject’s scalp, a 3D head scanner was used to capture an image of the scalp probing rig in place. The final merged 3D image was imported into an in-house program that detects the probes and calculates the coordinates of the probe tips. The resulting scatter point set of probe tips was then fed to a scalp shape reconstruction program (Morpheus-InfoSciTex) to recover the true scalp shape. This paper describes the design of the scalp probing rig and the 3D scan processing methods used to detect the probes’ coordinates.
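The paper does not detail how the in-house program locates probe tips, since the tips themselves are hidden under the hair in the scan. As an illustration only, one plausible approach is to fit each detected probe's axis to its visible scanned surface points and extrapolate inward by the known probe length; the function names and the PCA-based axis fitting below are assumptions, not the authors' documented method:

```python
import numpy as np

def fit_probe_axis(points):
    """Estimate a probe's axis direction via PCA (SVD) on its scanned
    surface points (an N x 3 array). Returns (centroid, unit_axis).
    Assumes the probe's visible portion has been segmented already."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]  # first right singular vector = dominant direction

def probe_tip(outer_end, axis_unit, probe_length):
    """Extrapolate the hidden tip: step from the probe's visible outer
    end along its axis toward the scalp by the known probe length."""
    return np.asarray(outer_end) - probe_length * np.asarray(axis_unit)

# Synthetic check: points sampled along the z-axis recover a z-aligned axis.
pts = np.column_stack([np.zeros(10), np.zeros(10), np.linspace(0.0, 30.0, 10)])
_, axis = fit_probe_axis(pts)
# Extrapolating 30 mm inward from an outer end at z=100 along +z gives z=70.
tip = probe_tip([0.0, 0.0, 100.0], [0.0, 0.0, 1.0], 30.0)
```

Repeating this for all 54 probes would yield the scatter point set passed to the scalp shape reconstruction program; the sign of the fitted axis would in practice need to be disambiguated (e.g. so it points away from the head).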
Keywords: scalp shape, head shape modeling, 3D head scanner, head under hair
How to Cite:
Li, P. & Tashjian, A. & Hurley, M., (2022) “Digitizing human scalp shape through 3D scanning”, Proceedings of the 7th International Digital Human Modeling Symposium 7(1): 14, 3 pages. doi: https://doi.org/10.17077/dhm.31760
Rights: Copyright © 2022 the author(s)