Sketch To-Do List

Comments and Suggestions

  1. Bugs
  2. Better/more complete geometry creation
  3. Geometric editing
  4. Surface property editing
  5. Inferencing
  6. Manipulation
  7. Motion
  8. Gestures
  9. Non-photorealistic rendering
  10. Migration to conventional modeling paradigms

Bugs

Many bugs of many species. Mostly my problem to fix though.


Adding More Complex Geometry

This project involves designing interfaces for a variety of complex surfaces that can be created very easily from simple curves or lines. In addition, this involves improving and extending the existing interfaces for creating geometry. Objects of revolution ("revolves") are defined by a 2d planar curve which is swept around an axis in 3d. The resulting shape is symmetric around the rotation axis. However, there is a common variation of the revolve object in which more than one 2d planar curve is specified at different positions around the 3d axis. This causes the revolve object to blend between each 2d curve as it sweeps the surface around the axis of revolution.
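To make the blended sweep concrete, here is a rough, hypothetical sketch (illustrative names and structures, not SKETCH's actual code) that sweeps one or more 2d profile curves around the y axis and linearly blends between profiles as the angle advances; it assumes every profile is sampled with the same number of points.

    // Hypothetical sketch (illustrative names, not SKETCH's code): sweep one or
    // more 2d profile curves around the y axis.  Each profile is a list of
    // (radius, height) samples; all profiles must have the same sample count.
    #include <cmath>
    #include <cstdio>
    #include <utility>
    #include <vector>

    struct Point3  { double x, y, z; };
    struct Profile { std::vector<std::pair<double, double>> pts; };   // (radius, height)

    // Returns a grid of surface points: one ring of profile samples per angular slice.
    std::vector<std::vector<Point3>> revolve(const std::vector<Profile>& profiles, int slices)
    {
        const double kTwoPi = 6.283185307179586;
        std::vector<std::vector<Point3>> grid;
        for (int j = 0; j < slices; ++j) {
            double t = double(j) / slices;                   // fraction of the way around
            double angle = kTwoPi * t;
            // Linearly blend between the two profiles that bracket this angle.
            double f = t * profiles.size();
            int a = int(f) % int(profiles.size());
            int b = (a + 1) % int(profiles.size());
            double w = f - std::floor(f);
            std::vector<Point3> ring;
            for (size_t i = 0; i < profiles[a].pts.size(); ++i) {
                double r = (1 - w) * profiles[a].pts[i].first  + w * profiles[b].pts[i].first;
                double h = (1 - w) * profiles[a].pts[i].second + w * profiles[b].pts[i].second;
                ring.push_back({ r * std::cos(angle), h, r * std::sin(angle) });
            }
            grid.push_back(ring);
        }
        return grid;
    }

    int main()
    {
        Profile round    = { { {0.0, 0.0}, {1.0, 0.5}, {0.8, 1.0}, {0.0, 1.5} } };
        Profile squashed = { { {0.0, 0.0}, {0.6, 0.5}, {0.5, 1.0}, {0.0, 1.5} } };
        auto grid = revolve({ round, squashed }, 32);        // blended revolve
        std::printf("%zu rings of %zu points each\n", grid.size(), grid[0].size());
    }

With a single profile this reduces to an ordinary revolve; with two or more, the surface blends from one profile to the next as it goes around the axis.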

Another type of surface is defined by specifying two or more curves in 3d; a surface is then "skinned" across these curves. Other possibilities are implicit surfaces and subdivision surfaces.
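As a rough illustration of the skinning idea (hypothetical names; it assumes each input curve has already been resampled to the same number of points), corresponding samples on consecutive curves can simply be bridged with quads:

    // Hypothetical sketch: "skin" a quad mesh across an ordered list of 3d curves.
    #include <cstdio>
    #include <vector>

    struct Point3 { double x, y, z; };
    struct Quad   { int a, b, c, d; };                       // indices into the vertex list

    void skin(const std::vector<std::vector<Point3>>& curves,
              std::vector<Point3>& verts, std::vector<Quad>& quads)
    {
        int n = int(curves[0].size());                       // samples per curve (assumed equal)
        for (const std::vector<Point3>& c : curves)          // flatten samples into one vertex list
            verts.insert(verts.end(), c.begin(), c.end());
        for (int i = 0; i + 1 < int(curves.size()); ++i)     // bridge consecutive curves...
            for (int j = 0; j + 1 < n; ++j)                  // ...between consecutive samples
                quads.push_back({ i*n + j, i*n + j + 1, (i+1)*n + j + 1, (i+1)*n + j });
    }

    int main()
    {
        std::vector<std::vector<Point3>> curves = {
            { {0, 0, 0}, {1, 0, 0}, {2, 0, 0} },             // first rail curve
            { {0, 1, 1}, {1, 1, 2}, {2, 1, 1} },             // second rail curve
        };
        std::vector<Point3> verts;
        std::vector<Quad> quads;
        skin(curves, verts, quads);
        std::printf("%zu vertices, %zu quads\n", verts.size(), quads.size());
    }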

In addition to designing interfaces for the above surfaces, we also need to implement the code that actually creates the surface polygons. The two alternatives available are to write the code ourselves, or to use existing geometric modelers (such as Softimage/Alias/Alpha_1) as "subroutine" libraries. The work involved here is to first get their code and documentation installed at Brown, and then to either code software interfaces between their systems and our system, or to write a complete interface on top of their systems.

Finally, there are a variety of ways to improve the current techniques for creating geometry.


Geometric Editing

Interactive scaling and reshaping

I've currently implemented a couple of techniques for resizing and scaling objects. However, there are a number of possible extensions, including generalizing the editing operations to work from any camera view and on any feature of an object. It would also be nice to extend these operations to allow for more interactive manipulation of surfaces instead of the "draw edit lines, see curve change, draw edit lines ..." approach I've taken so far.
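One minimal sketch of what "more interactive" could mean, with all names hypothetical: instead of waiting for a completed pair of edit lines, the object is rescaled about a pivot every time a new cursor sample arrives.

    // Hypothetical sketch of an interactive rescale: the object is updated on
    // every mouse sample instead of after a completed pair of edit lines.
    #include <cstdio>
    #include <vector>

    struct Point3 { double x, y, z; };

    // Rescale every vertex about a pivot point by the factor s.
    void scaleAbout(std::vector<Point3>& verts, const Point3& pivot, double s)
    {
        for (Point3& v : verts) {
            v.x = pivot.x + s * (v.x - pivot.x);
            v.y = pivot.y + s * (v.y - pivot.y);
            v.z = pivot.z + s * (v.z - pivot.z);
        }
    }

    int main()
    {
        std::vector<Point3> box = { {-1, -1, -1}, {1, 1, 1} };   // stand-in geometry
        Point3 pivot = { 0, 0, 0 };
        double prev = 2.0;                            // cursor distance from pivot at grab time
        double drags[] = { 2.2, 2.6, 3.0 };           // simulated mouse samples while dragging
        for (double drag : drags) {
            scaleAbout(box, pivot, drag / prev);      // incremental rescale per sample
            prev = drag;
            std::printf("corner now at (%.2f, %.2f, %.2f)\n", box[1].x, box[1].y, box[1].z);
        }
    }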

Extrusions

I currently support the extrusion of a profile only along a straight axis-aligned line. It would be interesting to try to support extrusions along arbitrary curved paths. Tackling general 3d curves would also be interesting -- either multiple drawings or a bunch of assumptions/gestures would be required to support this. For example, you might try to decompose a curve into segments and then figure out which axis each segment most closely corresponds to.
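A hypothetical sketch of that segment-classification step (illustrative only, not an actual SKETCH routine): compare each segment's extent along x, y, and z and snap it to the dominant axis.

    // Hypothetical sketch of the axis-snapping idea: break a drawn path into
    // segments and label each with the principal axis it most closely follows.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Point3 { double x, y, z; };

    char closestAxis(const Point3& a, const Point3& b)
    {
        double dx = std::fabs(b.x - a.x);
        double dy = std::fabs(b.y - a.y);
        double dz = std::fabs(b.z - a.z);
        if (dx >= dy && dx >= dz) return 'x';
        return (dy >= dz) ? 'y' : 'z';
    }

    int main()
    {
        std::vector<Point3> path = { {0, 0, 0}, {3, 0.2, 0}, {3.1, 2, 0.1}, {3, 2.1, 4} };
        for (size_t i = 0; i + 1 < path.size(); ++i)
            std::printf("segment %zu snaps to the %c axis\n", i, closestAxis(path[i], path[i + 1]));
    }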

Using Branco's stuff

I have a paper from a guy in Portugal who has done some neat things with sketching. One of his techniques is to draw directly on the surface of objects -- this isn't entirely trivial. But after lines have been "snapped" to an object's surface, the object can be re-tessellated so that the drawn region on the surface can be manipulated -- extruded, flattened, twisted, etc.
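One plausible way to do the "snapping" step, sketched here with a sphere standing in for the object (a real system would intersect the tessellated surface instead): cast the view ray through each stroke sample and keep the nearest hit.

    // Hypothetical sketch of snapping a drawn point to an object's surface: cast
    // the view ray through the cursor and take the nearest hit.  A sphere stands
    // in for the object; real code would intersect the tessellated surface.
    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };
    static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Returns true and fills `hit` if the ray (origin o, unit direction d) hits the sphere.
    bool raySphere(Vec3 o, Vec3 d, Vec3 center, double radius, Vec3& hit)
    {
        Vec3 oc = sub(o, center);
        double b = dot(oc, d);
        double c = dot(oc, oc) - radius * radius;
        double disc = b * b - c;
        if (disc < 0) return false;                   // ray misses the sphere
        double t = -b - std::sqrt(disc);              // nearest intersection
        if (t < 0) return false;
        hit = { o.x + t*d.x, o.y + t*d.y, o.z + t*d.z };
        return true;
    }

    int main()
    {
        Vec3 eye = { 0, 0, 5 }, dir = { 0, 0, -1 };   // ray through one stroke sample
        Vec3 hit;
        if (raySphere(eye, dir, { 0, 0, 0 }, 1.0, hit))
            std::printf("stroke point snaps to (%.2f, %.2f, %.2f)\n", hit.x, hit.y, hit.z);
    }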

Surface Property Editing

Texture map editing

Current interfaces for texture mapping seem tedious. I'm not aware of any that are really simple -- however, there are a lot of interfaces out there that I know very little about as well. In any case, I think it would be really nice to have an interface that lets me quickly determine a texture to map, either using:

Then the interesting part would be to come up with a nice gestural approach to texturing a surface. This might include:


Inferencing

There are a variety of aspects to conventional line drawings which might be exploited in order to better reconstruct 3d objects from 2d drawings. For example, in the previous version of SKETCH, I try to identify T intersections -- places where an edge of a gesture ends along an outline edge of another surface. When such a situation is found, I interpret it to mean that the gesture edge is being occluded by the outline edge's surface, so I extend the gesture edge until it meets an occluded surface. This seems to correspond well with the way people perceive line drawings and makes some operations much simpler.
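A hypothetical 2d screen-space version of that T-intersection test (illustrative names and tolerance): if a gesture stroke's endpoint lies within a small tolerance of another surface's outline edge, treat the stroke as occluded and extend it behind that surface.

    // Hypothetical 2d screen-space sketch of the T-intersection test: a gesture
    // stroke whose endpoint lies on another surface's outline edge is assumed to
    // continue behind that surface.
    #include <cmath>
    #include <cstdio>

    struct Vec2 { double x, y; };

    // Distance from point p to the segment ab.
    double distToSegment(Vec2 p, Vec2 a, Vec2 b)
    {
        double vx = b.x - a.x, vy = b.y - a.y;
        double t = ((p.x - a.x) * vx + (p.y - a.y) * vy) / (vx * vx + vy * vy);
        t = std::fmax(0.0, std::fmin(1.0, t));
        double dx = p.x - (a.x + t * vx), dy = p.y - (a.y + t * vy);
        return std::sqrt(dx * dx + dy * dy);
    }

    int main()
    {
        Vec2 strokeEnd = { 1.0, 0.98 };                          // last sample of the gesture
        Vec2 outlineA  = { -2.0, 1.0 }, outlineB = { 2.0, 1.0 }; // silhouette edge of another surface
        const double kSnapTolerance = 0.05;                      // tolerance, in screen units
        if (distToSegment(strokeEnd, outlineA, outlineB) < kSnapTolerance)
            std::printf("T-intersection: extend the stroke behind the occluding surface\n");
    }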

Other places to look to improve inferencing are where two surfaces meet. Often, but not always, these surfaces should be "joined". It would be nice to find an automatic inferencing mechanism that gets this situation right most of the time, but a mechanism that allows users to do it manually would be good as well.

Perhaps a more important area for trying to extend inferencing mechanisms is in the context of object manipulations. Currently I generate groupings automatically when an object is created on "top" of another object. However, there are probably better techniques for determining grouping relationships that would handle more dynamic scenes -- see Bukowski & Sequin's paper in the 3D Symposium from last year.
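A minimal sketch of the "created on top of" heuristic, assuming axis-aligned bounding boxes (the structures and tolerance are illustrative, not SKETCH's actual representation):

    // Hypothetical sketch of the "created on top of" grouping test: if the new
    // object's bottom face rests on (or just above) another object's top face and
    // their footprints overlap in x/z, group the two objects.
    #include <cmath>
    #include <cstdio>

    struct Box { double minx, miny, minz, maxx, maxy, maxz; };

    bool restsOn(const Box& upper, const Box& lower, double tol)
    {
        bool touching = std::fabs(upper.miny - lower.maxy) < tol;       // bottom meets top
        bool overlapX = upper.minx < lower.maxx && lower.minx < upper.maxx;
        bool overlapZ = upper.minz < lower.maxz && lower.minz < upper.maxz;
        return touching && overlapX && overlapZ;
    }

    int main()
    {
        Box table = { 0, 0, 0, 4, 1, 4 };
        Box lamp  = { 1, 1.01, 1, 2, 3, 2 };                            // created on the table
        if (restsOn(lamp, table, 0.05))
            std::printf("group the new object with the object beneath it\n");
    }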

Another approach to inferencing is to try to generate the most likely inferences and then let the user sort among them for the one that they want. This sorting could either be done by "tabbing" from one interpretation to the next, or, if there are a lot of possibilities, by providing some additional input.


Manipulation

Currently, SKETCH environments allow for minimal high-level manipulation. Simple translation and rotation constraints can be defined to allow for kinematic manipulation of simple joint structures. However, it would be nice to find good ways of both specifying and utilizing higher-level manipulation techniques, including:


Motion

SKETCH currently only describes static scenes that can be edited. There is enormous potential for using SKETCH-like interfaces to describe animations and dynamic scenes. These scenes would be useful for storyboarding animations or generating dynamic and reactive illustrations. Some possible starting points would be:


Gestures

Gestures in SKETCH are particularly easy to implement and recognize because of their discrete nature. Nearly every gesture in SKETCH is composed of strokes that are delimited by a button press/release combination. Therefore, gesture recognition is as simple as a YACC grammar. However, more fluid gestures would be of great benefit in many situations -- especially since they should allow the user to gesture faster. Thus it would be nice to have someone investigate the problem of using less discrete gestures that, for example, might be more appropriate for a tablet input device. Some examples of gestures that I'd like to see are:
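To illustrate the grammar-like flavor of the current discrete recognition (the tokens and rules below are simplified stand-ins, not SKETCH's real grammar): each completed stroke is reduced to a token, and a trivial match over the token sequence selects the operation.

    // Hypothetical sketch of the "grammar over discrete strokes" idea: each stroke
    // is reduced to a token, and a simple match over the token sequence picks the
    // operation.  The tokens and rules are illustrative, not SKETCH's real grammar.
    #include <cstdio>
    #include <string>
    #include <vector>

    enum Token { AXIS_LINE, FREEHAND_CURVE };

    std::string recognize(const std::vector<Token>& strokes)
    {
        // Three axis-aligned strokes -> create a box (a real rule would also
        // check geometric conditions, e.g. that the strokes meet at a corner).
        if (strokes.size() == 3 && strokes[0] == AXIS_LINE &&
            strokes[1] == AXIS_LINE && strokes[2] == AXIS_LINE)
            return "box";
        // A freehand profile curve followed by an axis line -> extrude the profile.
        if (strokes.size() == 2 && strokes[0] == FREEHAND_CURVE && strokes[1] == AXIS_LINE)
            return "extrusion";
        return "unrecognized";
    }

    int main()
    {
        std::printf("%s\n", recognize({ AXIS_LINE, AXIS_LINE, AXIS_LINE }).c_str());
        std::printf("%s\n", recognize({ FREEHAND_CURVE, AXIS_LINE }).c_str());
    }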


Non-photorealistic Rendering

Currently SKETCH uses only a single technique for non-photorealistic rendering in real time. There are a number of interesting techniques that run off-line. Lee (lem) and I have a bunch of ideas for how to generate better non-photorealistic rendering. These ideas can be broken down into two components:

The first of those problems can be thought of as a bottom-up approach in which we want to create the technology for making marks on a screen look less "computer-drawn" and more "hand-drawn". Possible implementation schemes include:

The latter problem of placing line strokes can be used to simply outline objects, or to additionally shade them. Again, possible projects are:
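For the first component, here is a rough sketch of one way a single mark could be made to look less "computer-drawn" (purely illustrative): resample an ideal segment and add small perpendicular jitter so the line wobbles the way a hand-drawn stroke does.

    // Hypothetical sketch of a "hand-drawn" mark: resample an ideal 2d segment
    // and add small perpendicular jitter to each sample.
    #include <cmath>
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    struct Vec2 { double x, y; };

    std::vector<Vec2> handDrawn(Vec2 a, Vec2 b, int samples, double wobble)
    {
        double dx = b.x - a.x, dy = b.y - a.y;
        double len = std::sqrt(dx * dx + dy * dy);
        double nx = -dy / len, ny = dx / len;                // unit normal to the segment
        std::vector<Vec2> pts;
        for (int i = 0; i <= samples; ++i) {
            double t = double(i) / samples;
            double j = wobble * (2.0 * std::rand() / RAND_MAX - 1.0);   // +/- wobble
            pts.push_back({ a.x + t * dx + j * nx, a.y + t * dy + j * ny });
        }
        return pts;
    }

    int main()
    {
        for (const Vec2& p : handDrawn({ 0, 0 }, { 10, 0 }, 8, 0.15))
            std::printf("(%.2f, %.2f)\n", p.x, p.y);
    }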


Migration to Conventional Modeling Paradigms

There are a number of techniques in conventional modeling systems which probably should be kept. There are others which should be changed. Determining how the new and the old can coexist is an essential problem for making SKETCH usable in the large. For example, SKETCH uses only a single orthographic view. Nearly all modeling systems, however, rely on three orthographic views and one perspective view. Perhaps SKETCH should be extended to see how it can be used in a three-view system. In addition, many people feel that it's essential for SKETCH models to be able to be written out in a format that conventional modelers can read. That way, after making an initial "sketch" of a model, the model can be refined in a more conventional system.
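As an illustration of that hand-off, here is a minimal, hypothetical exporter that writes triangles in the well-known Wavefront OBJ text format, which most conventional modelers can read; a real exporter would of course walk SKETCH's own scene structures.

    // Hypothetical sketch of writing a model out in Wavefront OBJ text format so
    // a conventional modeler can pick it up for further refinement.
    #include <cstdio>
    #include <vector>

    struct Point3 { double x, y, z; };
    struct Tri    { int a, b, c; };                       // 0-based vertex indices

    void writeObj(const char* path, const std::vector<Point3>& verts, const std::vector<Tri>& tris)
    {
        FILE* f = std::fopen(path, "w");
        if (!f) return;
        for (const Point3& v : verts)
            std::fprintf(f, "v %f %f %f\n", v.x, v.y, v.z);
        for (const Tri& t : tris)                         // OBJ face indices are 1-based
            std::fprintf(f, "f %d %d %d\n", t.a + 1, t.b + 1, t.c + 1);
        std::fclose(f);
    }

    int main()
    {
        std::vector<Point3> verts = { {0, 0, 0}, {1, 0, 0}, {0, 1, 0} };
        std::vector<Tri>    tris  = { {0, 1, 2} };
        writeObj("sketch_model.obj", verts, tris);
    }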


Last modified: May 31, 1996
Bob Zeleznik
bcz@cs.brown.edu