Sketch To-Do List
- Bugs
- Duct normals aren't always right for ray intersection
(they get inverted sometimes)
- reshaping of extrusions, walls, and revolves doesn't quite work in
the general case
- Better/more complete geometry creation
- Drawing last sketch line should interactively "rubber-band" new object
- Put revolves back in
- hook into Softimage/Alias/Alpha1
- Geometric editing
- add interactive scaling/reshaping?
- extrude cylinders and extrusions along curved paths
- implement Branco's stuff where you can draw on an object's surface.
- then pull the surface in or out
- allow all surface points to be flattened to a plane
- Surface property editing
- Texture map editing (texture coordinates -- texture itself?)
- bump map drawing
- dragging and dropping or speaking color/textures
- Inferencing
- extend straight line edges to meet hidden surfaces
- join surfaces which are nearly coplanar (snapping, etc.)
- Sequin's object manipulations -- dynamically determine groups
- bring back tab key to cycle through different options
- Manipulation
- add collision detection to make it easier to move/push objects around
- put back in contact constraints (see collision detection)
- draw other kinds of constraints? -- bounded translation constraints
- manipulating linkages -- see inverse kinematics & bounded transformations
- Motion
- get inverse kinematics algorithm & determine gestural interface
- physics
- sketch motion paths and synchronization points
- demonstrate motion for objects to mimic
- Gestures
- strokes
- improve so that you don't have to punctuate each stroke with a click
- back & forth should average into a line.
- start a line segment, then go back and add to it before inferencing
- allow for more complex gestures --> line drawings (use chl's stuff)
- Non-photorealistic rendering
- curvy lines
- thickness of lines to denote boundary or self-occlusion
- shading info -- bunch of lines which move together as camera changes
- inter-frame coherence
- amortize cost of generating segments
- tessellation & transparency to render partial objects
- Migration to conventional modeling paradigms
- Need to be able to draw from standard orthographic views.
i.e., an extrusion's profile is most accurately drawn head-on.
But the extrusion axis is best drawn perpendicular to head-on.
Perhaps we want a way to draw a profile and have it be instantiated
at some default thickness. Then from another viewpoint we can
sketch its extrusion. Or do it concurrently from a two-view editor?
- need to be able to read/write out VRML & other standard formats
Bugs

Many bugs of many species; mostly my problem to fix, though.
Better/more complete geometry creation

This project involves designing interfaces for a variety of complex
surfaces that can be created very easily from simple curves or lines.
In addition, this involves improving and extending the existing interfaces
for creating geometry.
Objects of revolution ("revolves") are defined by a 2d planar curve which
is swept around an axis in 3d.
The resulting shape is symmetric around the rotation axis. However,
there is a common variation of a revolve object in which more than one
2d planar curve is specified at different positions around the 3d
axis.
This causes the revolve object to blend between each 2d curve
as it sweeps the surface around the axis of revolution.
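As a rough illustration, here is a minimal sketch of tessellating such a
blended revolve. The profile representation (polylines sampled in the
xz-plane, revolved about the z-axis) and the cosine blend weight are
assumptions for the example, not how SKETCH actually does it:

    #include <cmath>
    #include <vector>

    struct Pt3 { float x, y, z; };

    // Sweep a profile around the z-axis, blending toward a second profile
    // as the sweep angle advances. With identical profiles this reduces to
    // an ordinary surface of revolution.
    std::vector<std::vector<Pt3>> revolve(
        const std::vector<Pt3>& profA,   // profile in the xz-plane (y == 0)
        const std::vector<Pt3>& profB,   // second profile, same sample count
        int slices)                      // angular resolution
    {
        std::vector<std::vector<Pt3>> rings;
        for (int s = 0; s <= slices; ++s) {
            float theta = 2.0f * 3.14159265f * s / slices;
            // Blend weight: 0 at theta = 0, 1 at theta = pi, back to 0 at
            // 2*pi, so the surface returns smoothly to the first profile.
            float t = 0.5f * (1.0f - std::cos(theta));
            std::vector<Pt3> ring;
            for (size_t i = 0; i < profA.size(); ++i) {
                float r = (1 - t) * profA[i].x + t * profB[i].x;  // blended radius
                float z = (1 - t) * profA[i].z + t * profB[i].z;  // blended height
                ring.push_back({ r * std::cos(theta), r * std::sin(theta), z });
            }
            rings.push_back(ring);
        }
        return rings;  // adjacent rings can be stitched into quads/triangles
    }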
Another type of surface is defined by specifying two or more curves
in 3d. Then a surface is "skinned" across these curves. Other candidates
are implicit surfaces and subdivision surfaces.
In addition to designing interfaces for the above surfaces, we also need
to implement the code that actually creates the surface polygons. The two
alternatives available are to write the code ourselves, or to use
existing geometric modelers (such as Softimage/Alias/Alpha_1) as
"subroutine" libraries. The work involved here is to first get their
code and documentation installed at Brown, and then to either code software
interfaces between their systems and our system, or to write a complete
interface on top of their systems.
Finally, there are a variety of ways to improve the current techniques
for creating geometry.
- The last line of most gestures is an extrusion axis. Thus as the
user draws this line, the surface could be interactively extruded.
- support for freezing/unfreezing CSG's needs to be added
- support for (un)grouping objects
- a "tabstop" metaphor could be setup for cycling through various
alternative interpretations of a gesture, especially interpretations
about where (depth-wise) a piece of geometry should be placed in
the scene.
Interactive scaling and reshaping
I've currently implemented a couple of techniques for resizing
and scaling objects. However, there are a number of extensions, including
generalizing the editing operations to work from any camera view
and on any features of an object. It would also be nice to extend
these operations to allow for more interactive manipulation of surfaces
instead of the "draw edit lines, see curve change, draw edit lines ..."
approach I've taken so far.
Extrusions
I currently support the extrusion of a profile only along a straight
axis-aligned line. It would be interesting to try to support extrusions
along arbitrary curved paths. Tackling general 3d curves would also be
interesting -- either multiple drawings or a bunch of assumptions/gestures
would be required to support this. For example, you might try to decompose
a curve into segments and then figure out which axis each segment
most closely corresponded to.
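A minimal sketch of that decomposition step, assuming the stroke has
already been unprojected to a 3d polyline (the representation here is
hypothetical):

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // For each segment of an input stroke, report which principal axis it
    // most closely follows: 0 = x, 1 = y, 2 = z. A run of segments with
    // the same label could then become one axis-aligned extrusion step.
    std::vector<int> classifySegments(const std::vector<Vec3>& stroke)
    {
        std::vector<int> axes;
        for (size_t i = 1; i < stroke.size(); ++i) {
            float dx = std::fabs(stroke[i].x - stroke[i - 1].x);
            float dy = std::fabs(stroke[i].y - stroke[i - 1].y);
            float dz = std::fabs(stroke[i].z - stroke[i - 1].z);
            int axis = (dx >= dy && dx >= dz) ? 0 : (dy >= dz ? 1 : 2);
            axes.push_back(axis);
        }
        return axes;
    }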
Using Branco's stuff
I have a paper from a guy in Portugal who has done some neat things with
sketching. One of his techniques is to draw directly on the surface of
objects -- this isn't entirely trivial. But after lines have been "snapped"
to an object's surface, the object can be re-tessellated so that the drawn
region on the surface can be manipulated -- extruded, flattened, twisted, etc.
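The snapping step could be built on a standard ray/triangle intersection
test; the sketch below uses the Moller-Trumbore formulation, with one ray
cast per stroke sample through the camera. The types and epsilon are
assumptions for the example:

    #include <vector>

    struct V3 { float x, y, z; };
    static V3 sub(V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static V3 cross(V3 a, V3 b) {
        return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
    }
    static float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Standard ray/triangle test. Returns true and the ray parameter t on
    // a hit; the hit point is where the stroke sample "snaps" to the surface.
    bool rayTriangle(V3 orig, V3 dir, V3 v0, V3 v1, V3 v2, float& t)
    {
        const float EPS = 1e-7f;
        V3 e1 = sub(v1, v0), e2 = sub(v2, v0);
        V3 p = cross(dir, e2);
        float det = dot(e1, p);
        if (det > -EPS && det < EPS) return false;   // ray parallel to triangle
        float inv = 1.0f / det;
        V3 s = sub(orig, v0);
        float u = dot(s, p) * inv;
        if (u < 0 || u > 1) return false;
        V3 q = cross(s, e1);
        float v = dot(dir, q) * inv;
        if (v < 0 || u + v > 1) return false;
        t = dot(e2, q) * inv;
        return t > EPS;                              // hit in front of ray origin
    }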
Texture map editing
Current interfaces for texture mapping seem tedious. I'm not aware
of any that are really simple -- however, there are a lot of interfaces
out there that I know very little about as well. In any case, I think
it would be really nice to have an interface that let me quickly determine
a texture to map, either using:
- voice input to say "teak", "shag carpet", etc.,
- or to write the texture name and have it recognize my handwriting
- or to allow me to sketch approximately what the texture looks like
and have it find a sampling of textures that match my drawing
- or coming up with a nice drag and drop metaphor for grabbing surface
properties
- or, last and least, to just have a bunch of textures for me to browse through.
Then the interesting part would be to come up with a nice gestural approach
to texturing a surface. This might include:
- drawing a region on the surface of an object to restrict the texture.
- drawing manipulation widgets to allow the texture to be
rotated, translated and scaled on the object's surface (a sketch of
the underlying transform appears after this list)
- warping the image by fixing certain points and then stretching the
surface
- in addition, there might be a clever way to interpret input (shading strokes?) from the user as bump/displacement maps.
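For the widget item above, the underlying math is just an affine
transform applied to the surface's texture coordinates. A minimal sketch,
with a hypothetical pivot-based layout:

    #include <cmath>
    #include <vector>

    struct UV { float u, v; };

    // Apply the widget's rotate/translate/scale to every texture
    // coordinate on the selected surface. Rotation is about a pivot
    // (e.g. the widget's center in texture space) so the texture spins
    // in place rather than around the origin.
    void transformUVs(std::vector<UV>& uvs, UV pivot,
                      float angle, float scale, UV offset)
    {
        float c = std::cos(angle), s = std::sin(angle);
        for (UV& t : uvs) {
            float du = (t.u - pivot.u) * scale;
            float dv = (t.v - pivot.v) * scale;
            t.u = pivot.u + c * du - s * dv + offset.u;
            t.v = pivot.v + s * du + c * dv + offset.v;
        }
    }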
Inferencing

There are a variety of aspects to conventional line drawings which might
be exploited in order to better reconstruct 3d objects from 2d drawings.
For example, in the previous version of sketch, I try to identify T
intersections -- where an edge of a gesture ends along an outline edge
of another surface. When such a situation is found, I interpret that
to mean that the gesture edge is being occluded by the outline edge surface,
so I extend the gesture edge until it meets an occluded surface. This
seems to correspond well with the way people perceive line drawings and
makes some operations much simpler.
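A minimal sketch of the screen-space T-intersection test, assuming
outlines are available as 2d segments and using a hypothetical tolerance:

    #include <cmath>

    struct P2 { float x, y; };

    // Does the endpoint of a gesture edge lie (within tolerance) on
    // another surface's outline segment? If so, the gesture edge is taken
    // to be occluded there and can be extended along its own direction
    // until it meets a hidden surface.
    bool onOutline(P2 end, P2 a, P2 b, float tol)
    {
        float abx = b.x - a.x, aby = b.y - a.y;
        float len2 = abx * abx + aby * aby;
        if (len2 == 0) return false;
        // Parameter of the closest point on segment ab, clamped to [0,1].
        float t = ((end.x - a.x) * abx + (end.y - a.y) * aby) / len2;
        if (t < 0) t = 0; else if (t > 1) t = 1;
        float dx = end.x - (a.x + t * abx);
        float dy = end.y - (a.y + t * aby);
        return dx * dx + dy * dy <= tol * tol;
    }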
Another place to look to improve inferencing is when two surfaces meet.
Often, but not always, these surfaces should be "joined". It would be nice
to find an automatic inferencing mechanism that can get this situation right
most of the time, but a mechanism that allows users to do it manually would
be good as well.
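A simple automatic version of that inference could be a coplanarity test
with tunable tolerances; a sketch, assuming faces carry unit normals and
plane offsets:

    #include <cmath>

    struct N3 { float x, y, z; };  // assumed unit-length normals

    // Two faces are candidates for joining when their normals nearly
    // agree and their planes lie at nearly the same offset from the
    // origin. The tolerances are the tunable part of the inference.
    bool nearlyCoplanar(N3 n1, float d1, N3 n2, float d2,
                        float angTol, float distTol)
    {
        float c = n1.x * n2.x + n1.y * n2.y + n1.z * n2.z;  // cos of angle
        return c >= std::cos(angTol) && std::fabs(d1 - d2) <= distTol;
    }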
Perhaps a more important area for trying to extend inferencing mechanisms
is in the context of object manipulations. Currently I generate groupings
automatically when an object is created on "top" of another object. However,
there are probably better techniques for determining grouping relationships
that would handle more dynamic scenes -- see Bukowski & Sequin's paper in
the 3D Symposium from last year.
Another approach to inferencing is to try to generate the most likely
inferences and then let the user sort among them for the one that they
want. This sorting could either be done by "tabbing" from one interpretation
to the next, or if there are a lot of possibilities, by providing some
additional input.
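The "tabbing" mechanism itself is tiny; a hypothetical holder for the
ranked interpretations of one gesture might look like this:

    #include <vector>

    // Tab advances to the next candidate; the scene preview is rebuilt
    // from whichever candidate is current.
    template <class Interp>
    struct InterpCycler {
        std::vector<Interp> candidates;  // sorted most- to least-likely
        size_t current = 0;

        const Interp& pick() const { return candidates[current]; }
        void tab() { current = (current + 1) % candidates.size(); }
    };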
Manipulation

Currently, SKETCH environments allow for minimal high-level manipulation.
Simple translation and rotation constraints can be defined to allow for
kinematic manipulation of simple joint structures. However, it would be
nice to find good ways of both specifying and utilizing higher-level
manipulation techniques including:
- collision detection -- makes it easier to move/push objects around?
- surface constraints -- makes it easier to orient objects on top of each other.
- bounded constraints -- sliders with limits, elbows that don't rotate a full 360 degrees.
- inverse kinematics -- pull on an end-effector and see all in-between joints move. Also provide for weightings and limits on different joints
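For the inverse-kinematics item, one candidate algorithm is cyclic
coordinate descent (CCD), which accommodates joint limits naturally. A
minimal 2d sketch; the chain representation is an assumption for the
example, and angle wrapping is omitted for brevity:

    #include <cmath>
    #include <vector>

    struct Joint { float angle, minA, maxA, len; };

    // Forward kinematics: accumulate relative angles along the chain,
    // filling px/py with joint positions (px[0] = py[0] = 0).
    static void fk(const std::vector<Joint>& j,
                   std::vector<float>& px, std::vector<float>& py)
    {
        float a = 0, x = 0, y = 0;
        px.assign(1, 0); py.assign(1, 0);
        for (const Joint& jt : j) {
            a += jt.angle;
            x += jt.len * std::cos(a); y += jt.len * std::sin(a);
            px.push_back(x); py.push_back(y);
        }
    }

    // One CCD pass: rotate each joint, from the end inward, so the end
    // effector swings toward the goal, clamping to joint limits (the
    // "elbow" case above).
    void ccdStep(std::vector<Joint>& j, float gx, float gy)
    {
        std::vector<float> px, py;
        for (int i = (int)j.size() - 1; i >= 0; --i) {
            fk(j, px, py);
            float ex = px.back() - px[i], ey = py.back() - py[i];  // to effector
            float tx = gx - px[i],       ty = gy - py[i];          // to goal
            float da = std::atan2(ty, tx) - std::atan2(ey, ex);
            float a = j[i].angle + da;
            if (a < j[i].minA) a = j[i].minA;   // respect joint limits
            if (a > j[i].maxA) a = j[i].maxA;
            j[i].angle = a;
        }
    }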
Motion

SKETCH currently describes only static scenes that can be edited.
There is enormous potential for using SKETCH-like interfaces to describe
animations and dynamic scenes. These scenes would be useful for
storyboarding animations or generating dynamic and reactive illustrations.
Some possible starting points would be:
- get inverse kinematics algorithm & determine gestural interface
- physics
- sketch motion paths and synchronization points
- demonstrate motion for objects to mimic
- describe "situations" or active components which trigger objects to start animating -- good for buttons that turn on lights, or start processes
Gestures

Gestures in SKETCH are particularly easy to implement and recognize because
of their discrete nature. Nearly every gesture in SKETCH is composed of
strokes that are delimited by a button press/release combination. Therefore,
gesture recognition is as simple as a YACC grammar. However, more fluid
gestures would be of great benefit in many situations -- especially since
they should allow the user to gesture faster. Thus it would be nice to
have someone investigate the problem of using less discrete gestures that,
for example, might be more appropriate for tablet input devices.
Some examples of gestures that I'd like to see are:
- drawing back and forth without releasing the mouse button to make it
easier to specify a free-hand line since all of the back and forth
motion can be averaged together to help overcome the "noise" in the
sketching process (see the line-fitting sketch after this list).
- decreasing the "ordered-ness" of sketch gestures would probably make
it easier for users. Currently for example, once the first edge of
a cube gesture is drawn, it can't be changed at all -- the resulting
cube can be edited, but the initial gesture strokes can't. So you
should be able to draw one cube edge, then draw a second cube edge,
then go back and adjust the first stroke before the gesture is recognized.
- There are a number of techniques which attempt to recognize entire
line drawings with minimal user input. I've had a master's student
implement such an algorithm for me, and I'd like to see both how well
it works within the SKETCH environment and how it can be extended to
handle more complex surfaces.
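For the back-and-forth averaging item above, a total-least-squares fit
(the principal axis of the sample cloud) gives a line regardless of the
direction of travel, so the jitter averages out. A minimal sketch:

    #include <cmath>
    #include <vector>

    struct Pt { float x, y; };

    // Fit a single line to all samples of a back-and-forth stroke: the
    // line passes through the centroid along the principal direction of
    // the 2d covariance of the points.
    void fitLine(const std::vector<Pt>& pts, Pt& center, float& angle)
    {
        if (pts.empty()) return;
        float mx = 0, my = 0;
        for (const Pt& p : pts) { mx += p.x; my += p.y; }
        mx /= pts.size(); my /= pts.size();
        float sxx = 0, syy = 0, sxy = 0;
        for (const Pt& p : pts) {
            float dx = p.x - mx, dy = p.y - my;
            sxx += dx * dx; syy += dy * dy; sxy += dx * dy;
        }
        center = { mx, my };
        angle = 0.5f * std::atan2(2 * sxy, sxx - syy);  // principal direction
    }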
Non-photorealistic rendering

Currently SKETCH uses only a single technique for non-photorealistic
rendering in real time. There are a number of interesting techniques that
run off-line. Lee (lem) and I have a bunch of ideas for how to generate
better non-photorealistic rendering. These ideas can be broken down into
two components:
- Making interesting looking "sketch" marks
- Placing interesting "sketch" marks to depict an image
The first of those problems can be thought of as a bottom up approach
in which we want to create the technology for making marks on a screen
look less "computer-drawn" and more "hand-drawn". Possible implementation
schemes include:
- making curved lines that mimic the dynamics of hand-drawn lines.
This could be done by jittering spline control points or drawing
from a library of existing hand-drawn curved lines.
- giving lines more of the character of a magic marker or pencil by
varying the thickness and intensity of the line. Thickness variation
can be used to model the types of marks people generally make with
a pencil, or it can be used the way illustrators use it to
differentiate, for example, between a boundary edge of an object and
an internal self-occlusion edge of an object.
- one way to do this would be to construct a triangular mesh
which fits the shape of the stroke. the mesh could be
associated with a texture map to give it a desired look, such
as black ink or charcoal. the connectivity of the mesh could
be standard (strokes of approximately the same length would
have the same connectivity), with the position of the vertices
determined by the curve's shape (see the strip-building sketch
after this list). -- lem
- generating "washes" in which a surface is painted very innacurately
so that it looks like quick brush strokes that may go outside the
polygonal boundary of the object, or may not quite reach the
boundary
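For lem's mesh idea above, the geometric part is straightforward: offset
each stroke sample along the local normal by a per-sample half-width
(texture assignment omitted; the representation is an assumption):

    #include <cmath>
    #include <vector>

    struct P { float x, y; };

    // Build the vertices of a triangle strip that follows a stroke: each
    // sample contributes two vertices offset along the local normal, so
    // width (and hence the mark's character) can vary per sample.
    // halfWidth is assumed to have one entry per stroke sample.
    std::vector<P> strokeStrip(const std::vector<P>& pts,
                               const std::vector<float>& halfWidth)
    {
        std::vector<P> strip;
        for (size_t i = 0; i < pts.size(); ++i) {
            size_t a = i ? i - 1 : 0, b = (i + 1 < pts.size()) ? i + 1 : i;
            float dx = pts[b].x - pts[a].x, dy = pts[b].y - pts[a].y;
            float len = std::sqrt(dx * dx + dy * dy);
            if (len == 0) len = 1;
            float nx = -dy / len, ny = dx / len;   // unit normal to the stroke
            float w = halfWidth[i];
            strip.push_back({ pts[i].x + nx * w, pts[i].y + ny * w });
            strip.push_back({ pts[i].x - nx * w, pts[i].y - ny * w });
        }
        return strip;  // render as a triangle strip, u along length, v across
    }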
The latter problem, placing the strokes, covers both simply outlining
objects and additionally shading them. Again, possible projects are:
- generate bunches of lines which act like "hash-mark" shading in
pen and ink illustration.
- determine where to put shade marks by exploiting inter-frame
coherence to both amortize the cost of computing shading and to
decrease the visual jitter as shade lines are "moved".
- the above strategy could work for placing silhouette lines on
curved surfaces as well. once a silhouette line is found, it
would typically only need minor adjustment (using a local
calculation), to reposition it for the next frame.
occasionally (not every frame) the renderer could check
whether any new silhouette lines need to be created or have
been affected by occlusion (see the silhouette test sketched
after this list). -- lem
- it might be interesting to have strokes attached to certain
features of a free-form surface (the project lem is working
on). E.g., a crease occurring on a human figure could have a
stroke "attached" to it. the stroke would modify its
appearance (or be invisible) according to the current view.
the point is that we don't need to find the stroke from
scratch each frame, since we can generally expect to need a
stroke corresponding to that part of the surface. this
technique could be used in a sparse rendering style which
focuses on the most important features -- which is what a lot
of drawings do. -- lem
- tessellate objects and use transparency in order to fade objects
away -- this is an important part of pencil and paper sketches since
it allows only the important parts of objects to be drawn. The rest
of the object fades away and doesn't distract from the important
features.
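For the silhouette items above, the basic classification is local: a mesh
edge is on the silhouette when one adjacent face points toward the eye
and the other points away. Under an orthographic view a single view
direction is exact; the mesh representation below is an assumption:

    #include <vector>

    struct Vec { float x, y, z; };
    static float dot3(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    struct Edge { int f0, f1; };  // indices of the two faces sharing the edge

    // Return the indices of silhouette edges. Frame-to-frame, most edges
    // keep their classification, which is what the coherence schemes
    // above exploit: only edges near the silhouette need re-testing.
    std::vector<int> silhouetteEdges(const std::vector<Edge>& edges,
                                     const std::vector<Vec>& faceNormals,
                                     Vec toEye)  // direction toward the eye
    {
        std::vector<int> sil;
        for (size_t i = 0; i < edges.size(); ++i) {
            bool front0 = dot3(faceNormals[edges[i].f0], toEye) > 0;
            bool front1 = dot3(faceNormals[edges[i].f1], toEye) > 0;
            if (front0 != front1) sil.push_back((int)i);
        }
        return sil;
    }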
Migration to conventional modeling paradigms

There are a number of techniques in conventional modeling systems which
probably should be kept. There are others which should be changed.
Determining how the new and the old can coexist is an essential problem
for making SKETCH usable in the large. For example, SKETCH uses only
a single orthographic view. Nearly all modeling systems, however, rely
on three orthographic views and one perspective view. Perhaps SKETCH
should be extended to see how it can be used in a three-view system.
In addition,
many people feel that it's essential for SKETCH models to be able to be
written out in a format that conventional modelers can read. That way,
after making an initial "sketch" of a model, the model can be refined
in a more conventional system.
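For the write-out side, a minimal sketch of dumping an indexed face set
in VRML 1.0 ascii, the common interchange format of the day (the mesh
representation is an assumption for the example):

    #include <cstdio>
    #include <vector>

    struct Vtx { float x, y, z; };

    // Write vertices and polygonal faces as a VRML 1.0 IndexedFaceSet so
    // a SKETCH model can be opened in a conventional modeler for refinement.
    void writeVRML(const char* path, const std::vector<Vtx>& verts,
                   const std::vector<std::vector<int>>& faces)
    {
        FILE* f = std::fopen(path, "w");
        if (!f) return;
        std::fprintf(f, "#VRML V1.0 ascii\n\nSeparator {\n");
        std::fprintf(f, "  Coordinate3 { point [\n");
        for (const Vtx& v : verts)
            std::fprintf(f, "    %g %g %g,\n", v.x, v.y, v.z);
        std::fprintf(f, "  ] }\n  IndexedFaceSet { coordIndex [\n");
        for (const std::vector<int>& face : faces) {
            std::fprintf(f, "   ");
            for (int idx : face) std::fprintf(f, " %d,", idx);
            std::fprintf(f, " -1,\n");             // -1 terminates each face
        }
        std::fprintf(f, "  ] }\n}\n");
        std::fclose(f);
    }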
Last modified: May 31, 1996
Bob Zeleznik
bcz@cs.brown.edu