Although perhaps begrudgingly, many artists have grown accustomed to pointing and dragging when using desktop design programs. But what if it were possible to use the same movements they’d use when drawing by hand to create art in three dimensions? That’s the premise behind the C Design Lab, where Karthik Ramani and his team have created zPots, which allows users to create polished, colorful and 3D print-ready pottery in minutes.
“The recent success of tablets and depth cameras is a direct example of the importance of using natural interactions to create simple and more interesting virtual experiences,” says Ramani, the Donald W. Feddersen Professor of Mechanical Engineering and a professor of electrical and computer engineering (by courtesy). Leveraging such intuitive technology, his team also has developed skWiki, in which web users can collaborate on hand-drawn sketches; Juxtapoze, which allows users to combine pieces of clip art; and ChiRobot, in which children can build toys and quickly animate them using their hands.
Such technology can open up art to people who don’t think of themselves as creative, Ramani says: “Everybody is a maker. By using more of these natural interactions, we can create more modalities of collaboration, and whatever comes out of it is totally new.” | A.R.
Graduate students use hand gestures and motions on a simple tabletop to input various commands into the computer at Purdue’s C Design Lab, directed by Professor Ramani. Infrared depth sensors (in the white bar on the table’s far side) measure the position of hands and fingers in 3D space. An image-processing algorithm tracks and interprets hand gestures and motions over time.
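The caption describes the setup in two stages: depth sensors report the 3D positions of hands and fingers frame by frame, and an algorithm interprets those positions over time. As a rough illustration of the second stage only, here is a minimal Python sketch. The names (HandSample, GestureTracker) and the thresholds are hypothetical, and this is not the C Design Lab’s actual algorithm; it shows just one simple way a time-windowed tracker might turn noisy per-frame hand positions into gesture labels.

```python
# Illustrative sketch only -- not the C Design Lab's actual code.
# Assumes some sensor driver (hypothetical) yields one estimated
# 3D hand position per frame, in meters, with a timestamp.

from __future__ import annotations
from collections import deque
from dataclasses import dataclass

@dataclass
class HandSample:
    x: float  # left/right across the tabletop
    y: float  # toward/away from the sensor bar
    z: float  # height above the table
    t: float  # timestamp in seconds

class GestureTracker:
    """Interprets a hand's motion over a short time window."""

    def __init__(self, window: int = 8, min_speed: float = 0.25):
        self.samples: deque[HandSample] = deque(maxlen=window)
        self.min_speed = min_speed  # m/s needed to count as a deliberate motion

    def update(self, sample: HandSample) -> str | None:
        """Add the newest frame; return a gesture label once one is clear."""
        self.samples.append(sample)
        if len(self.samples) < self.samples.maxlen:
            return None  # not enough history yet
        first, last = self.samples[0], self.samples[-1]
        dt = last.t - first.t
        if dt <= 0:
            return None
        # Averaging velocity over the window smooths per-frame sensor noise.
        vx = (last.x - first.x) / dt
        vz = (last.z - first.z) / dt
        if abs(vz) > abs(vx) and abs(vz) > self.min_speed:
            return "raise" if vz > 0 else "lower"  # e.g., pull a pot wall up or down
        if abs(vx) > self.min_speed:
            return "swipe_right" if vx > 0 else "swipe_left"
        return None

# Usage, inside whatever loop reads the sensor:
#   tracker = GestureTracker()
#   gesture = tracker.update(HandSample(x, y, z, t))
```

A real pipeline would sit downstream of hand segmentation and fingertip tracking on the raw depth frames; the sketch covers only the final, time-windowed interpretation step the caption mentions.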
Pain of Touch

Paralysis can be difficult enough, but coupling that with chronic neuropathic pain can be agonizing.

“If you look at the quality of life of people with spinal cord injury, most people say that killing the pain is the No. 1 priority,” says Riyi Shi, a professor of neuroscience and biomedical engineering in Purdue’s Department of Basic Medical Sciences, College of Veterinary Medicine, and Weldon School of Biomedical Engineering. “In a chronic pain situation, a stimulus that ordinarily might feel pleasant, even just a little touch, could cause pain.”