FAQ

Q1: Does the software support remote control, for example, to start or stop capture remotely? How is it implemented?

A1: Yes, it supports remote control. Through the remote control API in the SDK panel, you can operate the software remotely in real time, including connecting devices, starting/stopping recording, switching post-processing modes, obtaining the total number of data frames, etc. Alternatively, a sync box can be used to control capture via synchronization signals.
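As a rough illustration of what such a remote-control workflow looks like in code, here is a runnable mock. The class and method names below are hypothetical placeholders, not the actual NOKOV SDK API; consult the documentation shipped with the SDK for the real interface.

```python
class MockCaptureClient:
    """Stand-in for an SDK remote-control client (hypothetical API)."""

    def __init__(self, host: str):
        self.host = host
        self.recording = False
        self.frames = 0

    def connect_devices(self) -> bool:
        # Real SDK: establish a connection to the capture host.
        return True

    def start_recording(self) -> None:
        self.recording = True

    def stop_recording(self) -> None:
        self.recording = False
        self.frames += 100  # pretend 100 frames were captured

    def total_frames(self) -> int:
        return self.frames


client = MockCaptureClient("192.168.1.10")  # hypothetical server address
if client.connect_devices():
    client.start_recording()
    # ... capture runs ...
    client.stop_recording()
print(client.total_frames())  # → 100
```

The point is the call pattern: connect, start/stop, then query frame counts; the real SDK exposes equivalent operations under its own names.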

Q2: Are there fixed requirements for the number of cameras in a motion capture system?

A2: The number of cameras required depends on the specific experimental scene and the subjects being captured. Generally, larger capture volumes or a greater number of simultaneous subjects require a corresponding increase in the number of cameras.

Q3: How can I obtain the SDKs for Matlab and Python?

A3: You can contact our engineers via the after-sales technical support chat group to request them. Each SDK comes with corresponding documentation. You can also download the SDKs from the Downloads page on our official website.

Q4: What is the accuracy of the markerless motion capture technology mentioned on the website, and what are its current main applications?

A4: The accuracy of the markerless technology can reach the centimeter level. Currently, it is mainly applied to human motion capture.

Q5: What is the relationship between the system's motion capture frame rate and the camera frame rate? For example, what is the highest achievable system frame rate when using eight 2H model cameras?

A5: Please refer to the product specifications for details. The "FPS" value in the product parameters is the maximum frame rate at the camera's full resolution; the frame rate can be increased further by reducing the capture resolution. For example, the 26H model camera has been tested to support over 10,000 FPS, while the 2H model camera reaches 380 FPS at full resolution.

Q6: Does the system support capturing multiple rigid bodies simultaneously? Is there a limit?

A6: Yes, it supports simultaneous capture of multiple rigid bodies or humans. The currently released software version supports a maximum of 100 rigid bodies; custom versions can be provided for special needs.

Q7: Can joint angle data be exported directly from the XINGYING 29-point human model?

A7: Yes, export is supported. This 29-point model data is primarily used for gait analysis. In practice, the data is typically exported in the standard C3D file format and then imported into professional gait analysis software such as Visual3D or OpenSim for direct calculation and analysis of metrics like joint angles.

Q8: What is the effective testing range of the underwater motion capture system?

A8: The specific testing range depends on the actual application scenario. We recommend contacting our business or technical support colleagues for a detailed assessment of your specific site. For underwater scenarios requiring large-area coverage, a solution using underwater active optical markers can be employed.

Q9: Does the optical motion capture system support simultaneous capture of active and passive optical markers?

A9: Yes, simultaneous capture is supported.

Q10: When creating a human model in the software, can the layout of the marker points be customized? How is it done?

A10: Yes, custom creation is supported. For detailed steps, refer to the chapter on creating custom templates in the official manual: https://xingying-docs.nokov.com/xingying/XINGYING4.4-CN/jiu-chuang-jian-markerset/san-zi-ding-yi-mu-ban/. If you need assistance during the operation, you can also contact our technical support engineers via the after-sales group.

Q11: When setting up a motion capture system on tripods for humanoid robot training in an outdoor scenario, is the calibration process convenient? How long does the entire process typically take?

A11: Calibrating outdoors with tripods is feasible. The total time is directly related to the number of cameras and involves two stages. First, the direction and aperture of each camera must be adjusted individually; the time this takes varies with the number of cameras. After adjustment, the formal calibration itself takes approximately 3-5 minutes. For outdoor environments with strong light, AS-Type cameras are recommended.

Q12: How is the software's IP address set?

A12: The software's IP address should be set according to the actual IP address of the computer running the software on the current network.
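Finding that address can be scripted. The snippet below is a common, best-effort way to discover the LAN IP of the current machine; it is a general technique, not part of the motion capture software itself.

```python
import socket

def local_ip() -> str:
    """Best-effort discovery of this machine's LAN IP address.

    Opening a UDP socket toward a routable address makes the OS pick
    the outgoing network interface; no packets are actually sent.
    Falls back to the loopback address when no route is available.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # any routable address works; nothing is transmitted
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"
    finally:
        s.close()

print(local_ip())
```

The printed address is the one to select in the software's network settings (on multi-homed machines, verify it belongs to the interface connected to the camera network).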

Q13: What might be the reason for poor recognition when creating a human body model?

A13: You usually need to adjust the camera height to ensure the full body is captured. Capturing a human typically requires 8-12 cameras; if there are too few, data at the limb extremities may be lost during movement.

Q14: Must the computer use an Ethernet cable? Can Wi-Fi be used?

A14: To ensure system stability, it is recommended to connect the motion capture system to the switch via Ethernet cables. If you only need to receive already-captured data, you can use a gigabit router to connect via Wi-Fi.

Q15: Does the software support calibration and modeling of hands?

A15: Yes. Click "Create Human Body" in the software and select a hand marker template (e.g., the "Both Hands" template includes 24 markers per hand). After placing markers according to the template and positioning the hands in the center of the camera view, you can generate the hand skeleton models with one click.

Q16: How is high-quality motion capture data defined?

A16: Optical motion capture can achieve sub-millimeter accuracy. High-quality data is typically characterized by smooth data curves with minimal jitter. Such data often requires no additional processing and can be applied directly in fields like robot training.

Q17: Can the captured data be corrected in post-processing?

A17: Yes. The post-processing module can be used to locate and fix abnormal data. For example, Cubic-Join can fill in a small number of missing points, or smoothing algorithms can be applied to the data.
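The general idea behind cubic gap-filling can be sketched in a few lines of NumPy. This illustrates the technique only; it is not XINGYING's actual Cubic-Join implementation, and the gap size and context width are arbitrary choices for the demo.

```python
import numpy as np

def fill_gap_cubic(t, x, gap_idx, context=4):
    """Fill a short run of missing samples with a cubic fit.

    Fits a degree-3 polynomial to `context` valid samples on each side
    of the gap and evaluates it at the missing timestamps -- the same
    basic idea as a cubic repair of a short marker dropout.
    """
    lo, hi = gap_idx[0], gap_idx[-1]
    left = np.arange(max(0, lo - context), lo)
    right = np.arange(hi + 1, min(len(t), hi + 1 + context))
    support = np.concatenate([left, right])
    coeffs = np.polyfit(t[support], x[support], deg=3)
    out = x.copy()
    out[gap_idx] = np.polyval(coeffs, t[gap_idx])
    return out

# Demo: drop 3 samples from a smooth trajectory and repair them.
t = np.linspace(0.0, 1.0, 50)
x = np.sin(2 * np.pi * t)            # "true" marker coordinate
x_missing = x.copy()
gap = np.arange(20, 23)
x_missing[gap] = np.nan              # simulated dropout
repaired = fill_gap_cubic(t, x_missing, gap)
print(np.max(np.abs(repaired[gap] - x[gap])))  # small residual
```

This only works well for short gaps surrounded by clean data, which is why the answer above limits Cubic-Join to "a small number of missing points".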

Q18: How many cameras are needed to capture high-speed (approx. 10 m/s) rigid body motion in a 7 m × 7 m area?

A18: The number of cameras must be determined comprehensively from factors such as the area size and the number of subjects. For high-speed motion, it is recommended to appropriately increase the "Exposure" parameter within the software.

Q19: After importing human motion capture data files into a robot, how can fine adjustments be made?

A19: Motion capture data is typically transmitted to the robot via the SDK or the VRPN protocol and used as ground-truth data, which generally requires no adjustment.

Q20: How many cameras are recommended for capturing a single humanoid robot?

A20: This depends on factors such as the scene size and the number of robots. Typically, for a single humanoid robot, we recommend using 8 to 12 cameras.

Q21: For humanoid robot teleoperation, is the 53-point V2 human model mandatory?

A21: Currently, yes: the current mapping algorithm for humanoid robots is developed based on this human model. In the future, we will release versions supporting other human marker models.

Q22: How can data from the motion capture software be imported into Matlab?

A22: 1. Complete calibration in the software and confirm that the markers or rigid bodies under test are visible in the 3D view. 2. In the software's "Data Broadcast" view, select the server-side IP address and enable the SDK. 3. Use our Matlab SDK to receive the data in Matlab in real time.

Q23: Why is rigid body data not being received in ROS?

A23: Check the following: 1. A markerset has been created for the object under test in XINGYING, and the rigid body on the object is visible in the 3D view; 2. VRPN is selected in the settings interface and the correct server-side address is chosen; 3. From the ROS machine, ping the server-side address selected in XINGYING to confirm it is reachable.
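The connectivity check in step 3 can be automated. The sketch below tests TCP reachability of the server; 3883 is the conventional VRPN port, used here as an assumption about the setup, so confirm the port your configuration actually uses.

```python
import socket

VRPN_PORT = 3883  # conventional VRPN port; confirm against your setup

def reachable(host: str, port: int = VRPN_PORT, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the XINGYING server address selected in the software
# (replace with your server's actual IP address):
# reachable("192.168.1.10")
```

If this returns False, rule out firewalls and subnet mismatches before debugging the ROS side.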

Q24: How can I obtain the software for a trial before purchase?

A24: You can contact our technical support engineers via the after-sales group, or reach out to our sales manager, to apply for a trial version of the software.

Q25: Have any academic papers used NOKOV motion capture system equipment?

A25: A large number of academic papers have already used our equipment. You can search for the relevant papers on our official website.

Q26: How do I choose a motion capture system based on my specific needs?

A26: You can contact us by phone (010-64922321), email (info@nokov.cn), or by leaving a message. We will reach out to you as soon as we receive your message and create a corresponding solution based on your needs.

Q27: How can I become a distributor or agent?

A27: You can contact us by phone (010-64922321), email (info@nokov.cn), or by leaving a message. We will reach out to you as soon as we receive your message and create a corresponding solution based on your needs.

Q28: How can data from the motion capture software be imported into software like CATIA and DELMIA?

A28: The data can be transmitted via the SDK.

Q29: In VR-related applications, why is there noticeable shaking of the controllers/glasses in the virtual scene?

A29: In the system settings, enabling "3D Smoothing," "Jitter Reduction," "Rigid Body Smoothing," and "IK Compensation" will improve this.
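To show why smoothing reduces visible shaking, the snippet below applies a plain moving average to a jittery trajectory. This is only an illustration of the principle; the smoothing options named above use their own (more sophisticated) filters whose internals are not public.

```python
import numpy as np

def moving_average(x: np.ndarray, window: int = 5) -> np.ndarray:
    """Simple moving-average filter -- one basic way to suppress jitter.

    Averaging each sample with its neighbours attenuates high-frequency
    noise at the cost of a little responsiveness near sharp motions.
    """
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

rng = np.random.default_rng(0)
true_path = np.linspace(0.0, 1.0, 200)            # steady 1 m translation
noisy = true_path + rng.normal(0.0, 0.002, 200)   # ~2 mm tracking jitter
smoothed = moving_average(noisy)

# Compare residual jitter away from the window edges.
inner = slice(5, -5)
print(np.std(noisy[inner] - true_path[inner]),
      np.std(smoothed[inner] - true_path[inner]))
```

The second number comes out smaller than the first: the averaged trajectory sits closer to the true path, which is what the user perceives as the headset or controller "settling down".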

Q30: Can the system be used for underwater robot capture?

A30: Yes. We provide specialized underwater cameras that can be used for robot motion capture in underwater scenarios.

Last modified: 2026-04-01