I'm interested in human-computer interaction in general, and more specifically in how novel technologies can best be leveraged to help people use technology in a way that is not only efficient, but also natural. In the past, I have been involved in the design and evaluation of multimodal interfaces for PCs, and in particular interfaces incorporating the use of voice and/or a stylus into the more traditional mouse and keyboard environment. I recently started to move into the area of design for mobile devices and have become increasingly interested in pervasive computing.
Effective software UI design comes from striking the right balance between understanding user needs and applying the appropriate technologies to meet those needs.
The roles of requirements gathering/analysis, UI design, experience design, development, and evaluation are all interconnected and non-linear, constantly feeding into one another and informing application design throughout the development lifecycle. Clear and timely communication between all those involved in these roles is essential to creating a successful product.
In recent years I've had the opportunity to work on several different stages of the application design lifecycle, during which I developed experience in the following areas:
     Requirements gathering: questionnaires, observation, interviews, data analysis/synthesis
     Design specification: sketching, storyboards, low-fidelity prototypes
     Implementation: Java
     Evaluation: evaluation planning, scenario and protocol development, running experiments, questionnaires, observation, interviews, data analysis and interpretation
Please contact me directly if you are interested in seeing my design portfolio.
Some of the projects that I've worked on:
Interactive Multimodal Information Management (IM2)
Parmenides
MedSLT
Regulus
Minimizing Modality Bias When Exploring Input Preference for Multimodal Systems in New Domains: the Archivus Case Study. Lisowska, A., Betrancourt, M., Armstrong, S. and M. Rajman. In the proceedings of CHI'07. San José, California, April 28th-May 3rd, 2007.
Multimodal Input for Meeting Browsing and Retrieval Interfaces: Preliminary Findings. Lisowska, A. and S. Armstrong. In the 3rd International Workshop on Machine Learning for Multimodal Interaction (MLMI'06). Bethesda, MD, USA, May 1st-4th, 2006. Springer-Verlag Lecture Notes in Computer Science vol. 4299, 2006. Renals, S., Bengio, S. and J. Fiscus (eds.)
ARCHIVUS: A System for Accessing the Content of Recorded Multimodal Meetings. Lisowska, A., Rajman, M. and T.H. Bui. In the First International Workshop on Machine Learning for Multimodal Interaction (MLMI'04). Martigny, Switzerland, June 21st-23rd, 2004. Springer-Verlag Lecture Notes in Computer Science vol. 3361, 2005. Bengio, S. and H. Bourlard (eds.)
User Query Analysis for the Specification and Evaluation of a Dialogue Processing and Retrieval System. Lisowska, A., Armstrong, S. and A. Popescu-Belis. In the proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004). Lisbon, Portugal, May 26th-28th, 2004.
A complete list of publications can be found here.
Academic Background
Teaching
I also work as a teaching assistant at the School of Translation and Interpretation (ETI) at the University of Geneva, where I have been involved in several courses.
In addition to working in academia, I've also worked in industry - see my Abbreviated CV.
In my spare time, I'm usually taking photographs, traveling, writing, cooking or, weather permitting, alpine skiing, rollerblading or hiking, among other things.