Sunday, May 25, 2014

Prototype

Benchmark Task 1:

1) Make the robot learn your body information, including your height, weight, medical history, lifestyle, etc.
(less than 20 sec)

2) Send the robot the calorie and nutrient information from your meal.
(less than 30 sec)

3) Find the graph that represents your total intake.
(less than 20 sec)

Benchmark Task 2:

1) Look at the food recommendations and choose one of the dishes.
(less than 20 sec)

2) Locate where to go to eat the chosen dish.
(less than 20 sec)
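The pass/fail criteria above can be checked automatically from logged completion times. A minimal sketch: the task names and time limits mirror the benchmarks above, but the recorded times are hypothetical sample data, not real measurements:

```python
# Time limits (seconds) per benchmark task, taken from the tasks above.
TIME_LIMITS = {
    "enter body information": 20,
    "send meal calorie/nutrient data": 30,
    "find total-intake graph": 20,
    "choose a recommended dish": 20,
    "locate where to eat it": 20,
}

def evaluate(recorded):
    """Return {task: True/False} -- True if the recorded time is within its limit."""
    return {task: recorded[task] <= limit for task, limit in TIME_LIMITS.items()}

# Hypothetical completion times (seconds) for one participant.
recorded = {
    "enter body information": 18.2,
    "send meal calorie/nutrient data": 34.0,
    "find total-intake graph": 12.5,
    "choose a recommended dish": 16.0,
    "locate where to eat it": 19.3,
}

for task, passed in evaluate(recorded).items():
    print(f"{task}: {'pass' if passed else 'fail'}")
```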


Questionnaire:
Q1. Do you like the shape of the accessory?
(poor) 1 <-> 5 (very good)

Q2. Is the main screen simple?
(complex) 1 <-> 5 (simple)

Q3. How representative are the icons of their functions?
(not representative) 1 <-> 5 (very representative)

Q4. Would you recommend it to your friends?
(no) 1 <-> 5 (yes)

Q5. Is it intuitive to recognize your intake and body condition?
(no) 1 <-> 5 (yes)

Q6. How friendly were the instructions?
(unfriendly) 1 <-> 5 (friendly)
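Responses on the 1-5 scales above can be summarized with a per-question mean. A minimal sketch; the response values below are made up for illustration, not collected data:

```python
# Hypothetical 1-5 Likert responses from five participants, keyed by question.
responses = {
    "Q1 accessory shape": [4, 5, 3, 4, 4],
    "Q2 main screen simplicity": [5, 4, 4, 5, 3],
    "Q3 icon representativeness": [3, 3, 4, 2, 4],
    "Q4 willingness to recommend": [4, 4, 5, 3, 4],
    "Q5 intake intuitiveness": [5, 5, 4, 4, 3],
    "Q6 instruction friendliness": [4, 3, 4, 5, 4],
}

def mean_score(scores):
    """Average of a list of Likert responses, validated to the 1-5 range."""
    assert all(1 <= s <= 5 for s in scores), "Likert responses must be 1-5"
    return sum(scores) / len(scores)

for question, scores in responses.items():
    print(f"{question}: {mean_score(scores):.2f}")
```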


Paper Prototype: https://popapp.in/w#!/projects/5381806fa0a0a35056aeab32/preview

Sunday, April 27, 2014

HappyMeal - "No stress diet, be healthy with a happy meal"



HappyMeal concept movie from 김병준 on Vimeo.

Funding goal: $1 million

This is a diet-planning system. It targets people who are too busy to plan balanced menus. The robot collects the user's intake and other body conditions, then recommends menus case by case. The collected data include weather, season, disease conditions, and information on nearby restaurants and markets where the user can find the recommended dishes. Users can then keep in shape and concentrate better on their work.




Saturday, April 5, 2014

New System Concept Statement, Models, and Sketches

 Concept statement for the interactive diet management system:

 The interactive diet management system targets people who worry about their weight, age, disease, or other special conditions, and who live alone or are too busy to manage their diet by themselves. There are many diet guidelines, of course, but it is not easy to keep following them, and users still have difficulty choosing menus to their taste and finding a place to eat them. The friendly diet-planning robot solves this problem. It checks the user's diet whenever they eat, and collects the user's conditions, such as body state, disease, and temporary sickness like a hangover, as well as temporal and spatial data: season, weather, and the user's location. It then gives the user personal, seasonal advice for the next meal in an emotional way, with emoticons, voice, and actions. It also locates markets and restaurants serving the recommended dishes. Users can then keep in shape and concentrate better on their work.
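The recommendation step described above, filtering candidate dishes by season and the user's current condition, can be sketched very simply. Everything below (the dish list, the season and condition tags) is hypothetical illustration, not the system's real data:

```python
# Hypothetical dish database: seasons each dish suits, conditions it helps with.
DISHES = [
    {"name": "bean sprout soup", "seasons": {"winter", "spring"}, "good_for": {"hangover"}},
    {"name": "cold noodles", "seasons": {"summer"}, "good_for": set()},
    {"name": "samgyetang", "seasons": {"summer"}, "good_for": {"fatigue"}},
]

def recommend(season, conditions):
    """Return dishes matching the season, preferring ones that help a current condition."""
    candidates = [d for d in DISHES if season in d["seasons"]]
    helpful = [d for d in candidates if d["good_for"] & conditions]
    return helpful or candidates

# A tired user in summer gets the dish tagged for fatigue first.
for dish in recommend("summer", {"fatigue"}):
    print(dish["name"])
```

A real system would also weight weather, disease history, and nearby availability, but the filter-then-prefer structure stays the same.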


Flow Model Diagram

Social Model

Sketch for a 'smartphone'

Summary & Critiques for the Papers #3

Critical Design and Critical Theory: The Challenge of Designing for Provocation

Summary:

Sunday, March 30, 2014

Very simple diagram


Benchmark cases: Dalseo-gu subway station unmanned book rental service, Yonsei University book rental service, Amazon, iTunes, Spotify

Saturday, March 29, 2014

Thesis summary 2

<Wayfinding without Visual Cues: Evaluation of an Interactive Audio Map System - Esther Loeliger and Tony Stockman>

Summary:

 This thesis tried to demonstrate that all users, whether blind or sighted, can improve their wayfinding performance with positional audio cues.
 The study relates to spatial knowledge, wayfinding, and auditory displays focused on ambient 3D auditory cues. Previous work reveals desirable properties: condensed instructions, topological relationships, being immersive, forcing users to actively engage in the wayfinding task, and being framed as a game. It also highlights that spatial auditory icons and callouts can aid spatial understanding. Accordingly, the researchers built an audio map system with several interactive functions and techniques to reduce sound confusion. They then evaluated its efficacy with a game of collecting coins along the way, played at four levels. Using multiple one-way ANOVAs, both the wayfinding index (experience of navigating) and the sonification index (impact of the auditory cues) showed the system's efficacy, and the visual traces of participants showed the same result.
 I often like to think about converting something visual into something auditory, or vice versa. I know the potential of this method, so I am optimistic about this attempt to replace vision with sound.
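The multiple one-way ANOVAs mentioned in the summary each compare the mean of one index across participant groups. A minimal sketch of the F statistic behind such a test; the group data in the example are made up, not taken from the paper:

```python
def one_way_anova_f(groups):
    """F = MS_between / MS_within for a list of sample groups."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares, k - 1 degrees of freedom.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares, n - k degrees of freedom.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical wayfinding-index scores for three condition groups.
f_stat = one_way_anova_f([[3.1, 2.8, 3.4], [4.0, 4.2, 3.9], [2.5, 2.9, 2.7]])
print(f"F = {f_stat:.2f}")
```

A large F means the between-group variance dominates the within-group variance; in practice one would compare it against the F distribution (e.g. via `scipy.stats.f_oneway`) to get a p-value.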