In the afternoon, ditched by my Hongkonger friend, I wandered alone around Central and Lan Kwai Fong. In the traditional market on Gage Street (結志街), the fish flopping about, the housewives picking over the vegetable and meat stalls, the handcarts coming and going - all of it reminded me of Taipei's Dongmen Market: noisy and bustling in exactly the same way. What suddenly caught my eye, tucked into this clamorous market, was a quiet, understated little white door. On the glass window beside it hung an A4-sized sheet of paper describing how a French chef carefully prepares foie gras. Curious, I peered inside and saw only an old dark-brown wooden table, on which sat a simply shaped glass bottle and two water glasses. I pushed the door open, and the shop's elegant yet warm atmosphere made me lower my voice without thinking - so low I wasn't sure I could even be heard. I had only wanted a cup of coffee, but on asking I learned they serve only lunch and dinner, no afternoon tea. To my surprise, though, the owner said warmly, and just as quietly, in his Cantonese accent, "It's all right - come in, let me treat you to a coffee." Really? I asked. Perhaps he felt we were fated to meet; he cheerfully gave me a seat by the window. Or perhaps seats really were scarce - others came in later asking the same question, and I never saw him invite them in.

Every table and chair in the shop had an indescribable refinement. The low cooler held all kinds of foie gras and, like the little shops in the European markets of my memory, gave off an air of freshness and simplicity; the antique camera nearby reminded me of connoisseurs trading in old English markets, their taste sharp and distinctive. As I sat down I asked the owner why he had opened such a refined shop here, of all places. Without hesitating he smiled and answered: "In a market - that's the French tradition."

Plenty of people open Western restaurants, but a shop that truly recreates the taste and cultural essence of Europe is a rare find. Courteous as he was, when he heard I wanted a cappuccino he smiled modestly and said, "The machine isn't great - I'll give it a try," which rather embarrassed me. I couldn't help chatting with him, and heard about his ten years in France studying photography and running a gallery; only then did I learn he had come back to Hong Kong because his child was born and he wanted him to grow up in his own hometown. As I listened, I watched this one door divide two worlds, and thought about how, in this fast-paced harbor city of finance, I had stumbled upon such a person and the little paradise he had pieced together brick by brick. I realized how lucky I was.

My one small regret: this French restaurant, four years in business, has no website, and the owner's smile seemed to hint that he would rather not make a fuss. When I said I wanted to share the place on my blog - "I suppose I can only leave the address, then" - he told me that next year the Hong Kong government will redevelop this whole old neighborhood around Gage Street, so they will probably have to move; if I come back, it may be down to fate whether I find them. Either way, I am leaving the shop's info below, in the hope that those meant to find it get to meet the warm-hearted owner, a shop overflowing with culture, and a cappuccino so rich it sends you off smiling.

Name: Le Monde d'Ulysse
Address: G/F (ground floor), 9 Gage Street, Central, Hong Kong
Phone: (+852) 2526 2621
Yesterday I heard the most brilliant lecture in my year so far at the Media Lab. The speaker was Toshio Iwai (岩井俊雄) from Japan. My first impression on entering the hall was sheer surprise: "Isn't this way too many people?" His piece was the most-watched highlight of the entire SIGGRAPH show this year: a plastic toy house that, under special lighting treatment, twists and warps in all sorts of arbitrary ways, giving the impression of computer-generated 3D animation being rendered in the physical world.

The lecture opened with a sentence that sounded laughable but proved far-reaching: "One day my mother told me that from then on she would no longer buy me toys - I would have to make my own." From that simple sentence, Toshio traced a life inseparable from creation. Exercise books, drawers, scraps of paper, batteries, light bulbs... every material within reach became a tool for making toys - or really, the materials themselves were his toys, and the making of so-called toys became the content of his play. Page by page through his growing up, the materials grew more complex and the works matured, some genuinely eye-opening. Before he knew it, he had become an outstanding media artist.

He spent a long, long time telling his own life story, along with the history of the related design fields. The endless stream of works he then showed were all leading pieces in the field of interactive technology. I wonder how many of the Media Lab professors sitting in the audience felt ashamed - because to my eyes, many of the things we have here now could already be seen, in similar conceptual form, in Toshio's work years ago.

Still, up to two-thirds of the way through, nothing truly startled me - until he gave a live improvised performance on Tenori-on, the newest digital instrument he built in collaboration with Yamaha. I won't go as far as to say the Media Lab has nothing that can stand next to Tenori-on, but I believe every Media Lab member in the room had to admit that we should look at what others are doing and think hard about ourselves.

That ended the first three parts of the lecture. Toshio had prepared a fourth, and asked whether, having listened this long, we were willing to keep going. After the audience enthusiastically said yes, what appeared on the slides contained no trace of technology at all: photo after photo of him and his little daughter making toys out of paper, making animations, decorating their home. It all felt so homey, so warm. His point, of course, was that what really matters still lies at the origin: the process of feeling, of imagining, of exercising creativity, of making things by hand - claims we all already know.

But after he finished, my strong feeling was this: now that Japan has pulled ahead in interactive media, America will never catch up, only fall further behind. The reason is that America is, to some degree, limited at the cultural level. America used to lead because back then they had computers and others didn't, or they had the Internet and others didn't, so they could make something of it and become the leaders. Today, by contrast, computers, mobile phones, RFID - all of these technologies come out of Japan, China, Taiwan, or India - and lacking the East's deep cultural foundations, America in this field seems able only to watch itself be left far behind. I doubt it will ever stage a comeback; after all, creation is inseparable from culture. Of course, biotech, organic materials, and other even newer technologies will become the next weapons in American hands - their pioneering spirit leads the world, after all - but venues like the SIGGRAPH Emerging Technologies floor will only be taken over by more and more Eastern faces.

What does Taiwan have of its own that no one else has? We talk about it all the time yet never find the answer. Maybe you're right: only by doing what the architect Huang Sheng-Yuan did - spending over a decade without leaving Yilan, persevering - can we truly come to feel, and to articulate, what is ours. Sitting here speculating will never get us anywhere. Maybe that really is how it is.
Storytelling is about making connections. That is, a narration process is in fact an endless series of decisions, each of which concerns the question, "Which two story segments should be bridged?" The question has to be answered against a host of criteria: the bridge has to be smooth in appearance, it has to make sense in terms of causality, it has to be consistent with what the audience already knows and what it doesn't know yet, and so on. Nevertheless, whatever media format is used (text, audio, video, and so on), the nature of the activity the storyteller is engaged in is the same: it is all about deciding on connections.

The catch is that the granularity of both the story segments and the connections between them varies tremendously across media types. In textual storytelling, the narration stays within the text domain, where abstraction or abbreviation comes easily (since text expresses *semantics* but no *senses*), so the segments' granularity can be large. In other words, the storyteller can leave the details blank, and the audience can fill them in with their own imagination. For example, the sentence "The man in a black suit and a hat slowly walked in, and stepped on the old, wooden floor" can be shortened to "The man in a black suit and a hat walked in," or even simply "The man came in." Even though information is lost when we abstract or abbreviate, we often do it anyway, because it lets us focus on the flow or evolution of the story. After all, it would be too tedious to detail everything in a story.

At the other extreme, using video as the medium for storytelling requires heavily detailed information, because video delivers both visual and audio senses. Video makers need to handle - at every single moment of the video artifact - how it looks and how it sounds. As a result, the granularity of the story's building blocks, and of the connections between them, becomes much finer, and the criteria involved in each decision multiply as well (e.g., the correlation between the spoken words and the image shown, whether the video or the audio is carrying the plot, how to juxtapose back and forth between two related scenes over the same background audio, etc.). A building block in this problem domain might therefore be a 0.5-second video clip or a 2-second piece of audio.

So what I'm trying to say here is this: if we really want to tackle video-based storytelling, we need to look at this problem of making decisions that connect fine-grained segments. Otherwise, the task will be no different from simply dealing with textual stories. But how are we gonna use commonsense computing, or any AI technique, to do it? Since the most advanced techniques right now all operate in the text domain, one experiment we might try is to chop the materials into very fine segments, split the video and audio tracks apart, and attach detailed annotations to all these granular building blocks. I understand it might look pretty stupid, since no one would ever do this kind of work in the real world for practical purposes.
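Just to make that experiment concrete, here is a minimal sketch in Python of what such finely chopped, separately annotated building blocks might look like. Everything in it - the `Segment` structure, the annotation vocabulary, the `connection_score` heuristic - is invented for illustration; a real system would judge relatedness with commonsense inference rather than literal set overlap.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One fine-grained building block: a short clip in a single modality."""
    modality: str    # "video" or "audio", kept separate as proposed above
    start: float     # offset into the source footage, in seconds
    duration: float  # e.g., 0.5 for a video granule, 2.0 for an audio one
    annotations: set = field(default_factory=set)  # hand-written descriptors

def connection_score(a: Segment, b: Segment) -> float:
    """Toy stand-in for the real decision criteria: the more annotation
    overlap two granules share, the smoother the bridge between them."""
    if not a.annotations or not b.annotations:
        return 0.0
    shared = a.annotations & b.annotations
    return len(shared) / min(len(a.annotations), len(b.annotations))

# Two hypothetical granules cut from the same footage, annotated by hand.
clip = Segment("video", 12.0, 0.5, {"man", "black suit", "walks in", "old floor"})
sound = Segment("audio", 12.0, 2.0, {"footsteps", "creak", "old floor"})

print(connection_score(clip, sound))  # 0.33...: they share only "old floor"
```

Even a toy like this makes the cost visible: every half-second of footage demands its own annotations before the computer can reason about any bridge at all.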
But ironically, that is exactly what makes it worth investigating: nobody has ever done this kind of thing before, and there is no way to tell yet how a computer could make use of such materials to benefit the process of video-based storytelling.

The other direction I might take is to leverage my experience with video and work on something relatively easier: weblogs. A blog is a kind of textual story. It is organized as a succession of posts, so it is time-aware - just like the story *progression* in video footage. A recent post may share related mindsets with earlier posts, so referring back to them is analogous to juxtaposing semantically related footage. One major defect of today's blogging software is, in my personal opinion, that the viewing activity can follow only one axis: the chronology of the posts. There is no sense of story progression along other story elements such as emotion, topics, questions, characters, and so forth. Using commonsense computing technology, we may be able to come up with a novel *storied navigation* theme in the world of blogs - a rough sketch of the idea follows.
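Here is that sketch, again in Python and again with every name invented for illustration. The hypothetical `next_along_axis` function follows a chosen story element instead of the calendar; a real system would score relatedness with commonsense reasoning (ConceptNet-style) rather than the naive element overlap used here.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Post:
    date: str
    title: str
    elements: set  # story elements: topics, emotions, characters, ...

def next_along_axis(current: Post, archive: List[Post], axis: str) -> Optional[Post]:
    """Follow the story along one element instead of chronology: return the
    most related other post that also touches the chosen axis."""
    candidates = [p for p in archive if p is not current and axis in p.elements]
    if not candidates:
        return None
    return max(candidates, key=lambda p: len(p.elements & current.elements))

archive = [
    Post("2007-08-07", "Toshio Iwai at the Media Lab", {"creativity", "culture", "Taiwan"}),
    Post("2007-08-20", "Le Monde d'Ulysse", {"culture", "serendipity", "Hong Kong"}),
    Post("2007-09-02", "Storied navigation", {"storytelling", "creativity"}),
]

# Instead of "next post in time," ask: where does the *culture* thread go next?
print(next_along_axis(archive[0], archive, "culture").title)  # Le Monde d'Ulysse
```

The toy scoring function is beside the point; the interface idea is what matters: the reader picks an axis - an emotion, a question, a character - and the blog rearranges itself into a story along it.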