CLC number: TP391
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Wei Bao-gang, Zhu Wen-hao, Yu Jin-hui. Reuse of clips in cartoon animation based on language instructions[J]. Journal of Zhejiang University Science A, 2006, 7(2): 123-129.
@article{title="Reuse of clips in cartoon animation based on language instructions",
author="Wei Bao-gang, Zhu Wen-hao, Yu Jin-hui",
journal="Journal of Zhejiang University Science A",
volume="7",
number="2",
pages="123-129",
year="2006",
publisher="Zhejiang University Press & Springer",
doi="10.1631/jzus.2006.A0123"
}
%0 Journal Article
%T Reuse of clips in cartoon animation based on language instructions
%A Wei Bao-gang
%A Zhu Wen-hao
%A Yu Jin-hui
%J Journal of Zhejiang University Science A
%V 7
%N 2
%P 123-129
%@ 1673-565X
%D 2006
%I Zhejiang University Press & Springer
%DOI 10.1631/jzus.2006.A0123
TY - JOUR
T1 - Reuse of clips in cartoon animation based on language instructions
A1 - Wei Bao-gang
A1 - Zhu Wen-hao
A1 - Yu Jin-hui
JO - Journal of Zhejiang University Science A
VL - 7
IS - 2
SP - 123
EP - 129
SN - 1673-565X
Y1 - 2006
PB - Zhejiang University Press & Springer
DO - 10.1631/jzus.2006.A0123
ER -
Abstract: This paper describes a new framework for reusing hand-drawn cartoon clips based on a language-understanding approach. Our framework involves two stages: a preprocessing phase, in which a hand-drawn clip library with a mixed architecture is constructed, and an on-line phase, in which domain-dependent language instructions are parsed and clips in the library are matched using matching scores computed from the information derived from instruction parsing. An important feature of our approach is its ability to preserve the artistic quality of the clips in the produced cartoon animations.
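The on-line stage described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the names (`Clip`, `parse_instruction`, `match_score`, the attribute vocabulary, and the weights) are all assumptions, and the keyword lookup stands in for the paper's domain-dependent instruction parser.

```python
# Illustrative sketch of the on-line matching stage: a parsed instruction
# is reduced to attribute-value pairs, and each clip in the library is
# scored by a weighted overlap of matching attributes. All names and
# weights are hypothetical, not taken from the paper.
from dataclasses import dataclass, field

# Assumed attribute weights: the character matters most, then the action.
WEIGHTS = {"character": 3.0, "action": 2.0, "direction": 1.0}


@dataclass
class Clip:
    clip_id: str
    attrs: dict = field(default_factory=dict)  # e.g. {"character": "cat"}


def parse_instruction(text):
    """Toy stand-in for the domain-dependent parser: extract
    attribute values by keyword lookup over a small vocabulary."""
    vocab = {
        "character": {"cat", "dog"},
        "action": {"run", "jump", "walk"},
        "direction": {"left", "right"},
    }
    parsed = {}
    for word in text.lower().split():
        for attr, values in vocab.items():
            if word in values:
                parsed[attr] = word
    return parsed


def match_score(parsed, clip):
    """Sum the weights of the attributes the clip satisfies."""
    return sum(WEIGHTS[a] for a, v in parsed.items() if clip.attrs.get(a) == v)


def best_clip(parsed, library):
    """Pick the library clip with the highest matching score."""
    return max(library, key=lambda c: match_score(parsed, c))


library = [
    Clip("c1", {"character": "cat", "action": "run", "direction": "left"}),
    Clip("c2", {"character": "dog", "action": "jump"}),
]
parsed = parse_instruction("cat run left")
chosen = best_clip(parsed, library)  # c1 scores 6.0, c2 scores 0.0
```

A real system would parse full sentences (the paper cites Earley's context-free parsing algorithm) and draw on a much richer clip annotation scheme; the weighted-overlap scoring here only illustrates the shape of the matching step.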
[1] Bindiganavale, R., Schuler, W., Allbeck, J.M., Badler, N., Joshi, A.K., Palmer, M., 2000. Dynamically Altering Agent Behaviors Using Natural Language Instructions. Proc. the Fourth International Conference on Autonomous Agents, p.293-300.
[2] Benitez, A.B., Smith, J.R., Chang, S.F., 2000. MediaNet: A Multimedia Information Network for Knowledge Representation. Proc. Conference on IS&T/SPIE.
[3] Cassell, J., 2000. A Framework for Gesture Generation and Interpretation. In: Cipolla, R., Pentland, A. (Eds.), Computer Vision in Human-Machine Interaction. Cambridge University Press.
[4] Cassell, J., Vilhjalmsson, H., Bickmore, T., 2001. BEAT: the Behavior Expression Animation Toolkit. Proc. SIGGRAPH ’01, p.477-486.
[5] Earley, J., 1970. An efficient context-free parsing algorithm. Communications of the Association for Computing Machinery, 13(2):94-102.
[6] Fan, J.P., Elmagarmid, A.K., Zhu, X.Q., Aref, W.G., Wu, L., 2004. ClassView: hierarchical video shot classification, indexing, and accessing. IEEE Trans. on Multimedia, 6(1):70-86.
[7] Girard, M., 1987. Interactive design of 3-D computer animated legged animal motion. IEEE Computer Graphics and Applications, 7(6):39-51.
[8] Gotsman, C., Surazhsky, V., 2001. Guaranteed intersection-free polygon morphing. Computers and Graphics, 25(1):67-75.
[9] Noma, T., Kai, K., Nakamura, J., Okada, N., 1992. Translating from Natural Language Story to Computer Animation. Proc. SPICIS’92, p.475-480.
[10] Patterson, J.W., Willis, P.J., 1994. Computer assisted animation: 2D or not 2D? The Computer Journal, 37(10):829-839.
[11] Vilhjalmsson, H., Cassell, J., 1998. BodyChat: Autonomous Communicative Behaviors in Avatars. Proc. the 2nd Annual ACM International Conference on Autonomous Agents, p.269-276.
[12] Vosinakis, S., Panayiotopoulos, T., 2001. SimHuman: A Platform for Real-time Virtual Agents with Planning Capabilities. Proc. IVA ’01, p.210-223.
[13] Webber, B., 1998. Instructing Animated Agents: Viewing Language in Behavioral Terms. LNCS 1374, Springer-Verlag, Berlin.
[14] Yu, J.H., Patterson, J.W., 1997. Assessment Criteria for 2D Shape Transformations in Animation. Proc. Computer Animation'97, p.103-112.