Maor Grinberg, M.Sc. seminar lecture
Sunday, 8.12.2013, 13:00
Three-dimensional user interfaces (3DUI) have great potential in computer-aided geometric design (CAGD), where users work in a virtual 3D space and perform 3D operations. The mouse, commonly used for such tasks, provides only 2D input. To be used within the virtual space, this input must be transformed into the 3D space by reverse projection. This transformation, however, is ambiguous: a 2D point corresponds to an entire ray in 3D, and it cannot be used unless it is projected onto an object or constrained in some other way.
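The ambiguity of reverse projection can be illustrated with a minimal sketch, assuming a standard pinhole camera model with given (invertible) view and projection matrices; the function names and the plane-intersection constraint below are hypothetical, chosen only to show how an extra constraint resolves the unprojected ray to a single point.

```python
import numpy as np

def unproject(px, py, view_inv, proj_inv):
    """Map a 2D screen point (in normalized device coordinates) back to
    a ray in world space. Every depth along this ray projects to the
    same 2D point, which is why reverse projection alone is ambiguous."""
    near = view_inv @ proj_inv @ np.array([px, py, -1.0, 1.0])
    far = view_inv @ proj_inv @ np.array([px, py, 1.0, 1.0])
    near, far = near[:3] / near[3], far[:3] / far[3]
    direction = far - near
    return near, direction / np.linalg.norm(direction)

def resolve_on_plane(origin, direction, plane_point, plane_normal):
    """Resolve the ambiguity by intersecting the ray with a plane
    (one possible constraint; projecting onto an object is another)."""
    t = np.dot(plane_point - origin, plane_normal) / np.dot(direction, plane_normal)
    return origin + t * direction
```

With direct 3D input from a sensor such as the Kinect, this extra constraint step becomes unnecessary, which is part of the motivation for a 3DUI.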
This work demonstrates a comprehensive system that combines a dual-handed 3DUI, using input from the Kinect sensor, with the functionality of a CAGD system for general modeling. The system must therefore handle multiple objects within the modeling space, and support object selection, object transformation, and navigation within the virtual 3D space. Additionally, it must provide various geometric modeling functions, including the creation of different types of curves, surfaces, and solids, and their manipulation, such as Boolean operations and deformations. Furthermore, it should enable precise operations.
The proposed 3DUI is controlled by a small set of postures and gestures that behave consistently across all modeling functions, which receive direct 3D geometric input: the geometric parameters of each modeling function are determined by the positions of the hands while a certain posture is held.
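As a hedged sketch of this mapping, consider how a sphere-creation function might derive its parameters from the two tracked hand positions once its trigger posture is detected; the specific posture and the midpoint/half-distance rule here are illustrative assumptions, not the system's actual mapping.

```python
import numpy as np

def sphere_params(left_hand, right_hand):
    """Hypothetical parameter mapping for a sphere-creation function:
    with both hands held in the trigger posture, the midpoint between
    the hands gives the center and half their distance gives the radius."""
    left = np.asarray(left_hand, dtype=float)
    right = np.asarray(right_hand, dtype=float)
    center = (left + right) / 2.0
    radius = np.linalg.norm(right - left) / 2.0
    return center, radius
```

The same pattern (hand positions at a posture determine the function's geometric parameters) extends to curves, surfaces, and transformations, which is what keeps the interface consistent across modeling functions.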
To allow precise input, the user can apply snapping constraints at any time. These constraints control the precision of operations and transformations, and may restrict the input to fewer degrees of freedom.
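A minimal sketch of such a snapping constraint, assuming a uniform grid and an optional axis lock (both the grid spacing and the lock mechanism are illustrative assumptions, not the system's actual constraint set):

```python
import numpy as np

def snap(point, grid=0.5, axis_lock=None):
    """Snap a 3D input position to a uniform grid of the given spacing.
    Optionally lock one axis to a fixed value, restricting the input
    from three degrees of freedom to two (e.g. motion in a plane)."""
    p = np.round(np.asarray(point, dtype=float) / grid) * grid
    if axis_lock is not None:
        axis, value = axis_lock  # e.g. (2, 0.0) pins z = 0
        p[axis] = value
    return p
```

Because tracked hand positions are noisy, constraints of this kind are what make precise modeling operations feasible with sensor input.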
The modeling functions are accessed via a graphical menu. To avoid clutter, only the modeling functions relevant in the context of the currently selected object are visible. The functionality of the system can easily be extended with further modeling functions by assigning or modifying their parameters according to the input positions, user actions, and applied constraints. The system can also be adapted to other input devices or posture-recognition algorithms.