  • An interview with Kathy McKeown: Automatically describing disasters | Dept. of Computer Science, Columbia University
    …a specific disaster and the sub-events it spawns. Each type of disaster is associated with a distinct vocabulary, and we'll build language models to capture this information. Obviously we'll look at established news sites for factual information. To include first-person stories, it's not yet entirely clear where to look, since there aren't well-defined sites for this type of content; we will be searching blogs, discussion boards, and wherever else we can discover personal accounts. For users (and the intent is for anyone to be able to use the system), we currently envision some type of browser interface. It will probably be visual and may be laid out by location, by where things happened. Clicking on one location will present descriptions and give a timeline of what happened at that location at different times, and each sub-event will be accompanied by a personal account.

    Newsblaster is already finding articles that cover the same event. Will you be building on top of Newsblaster?

    KM: Yes. After all, Newsblaster represents 11 years of experience in auto-generating summaries, and it contains years of data, though we will modernize it to include social media, which Newsblaster doesn't currently do. We also need to expand the scope of how Newsblaster uses natural language processing. Currently it relies on common language between articles both to find articles on the same event and then to produce summaries. I'm simplifying here, but Newsblaster works by extracting nouns and other important words from articles and then measuring the statistical similarity of the vocabulary in these articles to determine which articles cover the same topic. In a disaster covering multiple days with multiple sub-events, there is going to be a lot less common language and vocabulary among the articles we want to capture. A news item about flooding might not refer to Sandy directly by name; it may describe the flooding only as storm-related, but we have to tie this back to the hurricane itself even when two articles don't share a common language. There's also going to be more paraphrasing, as journalists and writers, to avoid being repetitive after days of writing about the same topic, change up their sentences. That makes it harder for language tools that are looking for the same phrases and words. Determining semantic relatedness is obviously the key, but we're going to need to build new language tools and approaches that don't rely on the explicit presence of shared terms.

    How will Describing Disasters find personal stories?

    KM: That's one question, but the more interesting question is: how do you recognize a good, compelling story people would want to hear? Not a lot of people have looked at this. While there is work on what makes scientific writing good, recognizing what makes a story compelling is new research. We're starting by investigating a number of theories, drawn from linguistics and from literature, on what type…
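    A rough sketch of the shared-vocabulary clustering step McKeown describes (extract important words, then measure statistical similarity of the vocabulary) is given below. This is not Newsblaster's actual code; it is a minimal illustration, assuming word counts compared by cosine similarity, a toy stopword list, and an arbitrary threshold for deciding that two articles cover the same topic.

        # Minimal, hypothetical sketch of vocabulary-overlap clustering (not Newsblaster itself).
        # Articles are compared by the cosine similarity of their word-count vectors;
        # pairs above a threshold are treated as covering the same event.
        import math
        import re
        from collections import Counter

        def vocabulary(text: str) -> Counter:
            """Crude stand-in for extracting nouns and other important words."""
            words = re.findall(r"[a-z']+", text.lower())
            stopwords = {"the", "a", "an", "of", "to", "and", "in", "is", "was", "by"}
            return Counter(w for w in words if w not in stopwords)

        def cosine_similarity(a: Counter, b: Counter) -> float:
            shared = set(a) & set(b)
            dot = sum(a[w] * b[w] for w in shared)
            norm = (math.sqrt(sum(v * v for v in a.values()))
                    * math.sqrt(sum(v * v for v in b.values())))
            return dot / norm if norm else 0.0

        def same_event(article1: str, article2: str, threshold: float = 0.3) -> bool:
            """Shared-vocabulary test; the threshold is an arbitrary placeholder."""
            return cosine_similarity(vocabulary(article1), vocabulary(article2)) >= threshold

    As the interview notes, a check like this depends entirely on explicitly shared terms, which is exactly what breaks down when one article names the hurricane and another speaks only of storm-related flooding.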

    Original URL path: http://www.cs.columbia.edu/2014/describing-disasters/ (2016-02-17)


  • Want the attention of the audience? Introduce a new gesture. | Dept. of Computer Science, Columbia University
    …data, since it is a direct, non-intrusive measure of brain activity already known to capture information related to attention. Electrodes were attached to the scalps of the 20 study participants to record brain activity while they watched debate clips. The data capture was carried out in the lab of Paul Sajda, a professor in the Biomedical Engineering department, who also helped interpret the data. EEG data is not easy to work with: the signals are weak and noisy, and the capture process itself requires an electrostatically shielded room. The data was also voluminous (46 electrodes for 20 participants, tracked for 47 minutes at 2,000 data points per second) and was reduced using an algorithm written for the purpose.

    While the EEG data showed many patterns of activity, there was a lot of underlying signal, from which researchers identified three main components. Two corresponded to specific areas of the brain: the first to electrodes near the visual cortex, with more activity suggesting that audience members were actively watching a candidate; the second was generated near the prefrontal cortex, the site of executive function and decision-making, indicating that audience members were thinking and making decisions. It was not possible to specify a source for the third component. Once time-stamped, the EEG data, averaged across subjects, was aligned with gestures in the video so researchers could locate statistically significant correlations between a gesture feature (direction change, velocity, and extremal pose) and a strong EEG component. Moments of engagement were defined as a common neural response across all subjects; responses not shared with other subjects might indicate lack of engagement, i.e., boredom, since each participant would typically focus on different stimuli.

    Extremal gestures turned out to be the strongest indication of listeners' engagement. No matter how researchers subdivided the participants (Democrats, Republicans, females, males, all), what really triggered people's attention was something new and different in the way a candidate gestured. Points far from home base correlated with heightened levels of listener attention (here, Romney's left hand in the first debate). This finding, that extremal poses correlate with audience engagement, should help speakers stress important points. It may also provide an automatic way to index video, an increasingly necessary task as the amount of video continues to explode. Video is hard to chunk meaningfully, and video indexing today often relies on a few extracted images and standard fast-forwarding and reversing. An algorithm trained to find extremal speaker gestures might quickly and automatically locate video highlights; students of online courses, for example, could then easily skip to the parts of a lecture needing the most review.

    More work is planned. The researchers looked only at correlation, leaving for future work the task of prediction, where some EEG data is set aside to see whether it's possible to use gestures to predict where there is engagement. Whether certain words are more likely to get audience reaction is another area to explore. In a small step in this…
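    As a rough illustration of the alignment step (not the study's actual pipeline), the sketch below computes a per-window gesture-extremity score, the distance of a tracked hand from its "home base" resting position, and correlates it with a subject-averaged EEG component. The function names, window size, and data shapes are assumptions made for illustration only.

        # Hypothetical sketch: correlate gesture "extremity" with an averaged EEG component.
        import numpy as np

        def extremity_score(hand_xy: np.ndarray, home_base: np.ndarray) -> np.ndarray:
            """Per-frame distance of the tracked hand from its resting (home base) position."""
            return np.linalg.norm(hand_xy - home_base, axis=1)

        def correlate_gesture_with_eeg(hand_xy, eeg_component, frames_per_window=30):
            """Average both signals over fixed windows, then report their Pearson correlation.
            eeg_component is assumed to be already averaged across subjects and resampled
            to one value per video frame."""
            home_base = np.median(hand_xy, axis=0)   # crude estimate of the resting pose
            extremity = extremity_score(hand_xy, home_base)
            n = (len(extremity) // frames_per_window) * frames_per_window
            ext_w = extremity[:n].reshape(-1, frames_per_window).mean(axis=1)
            eeg_w = np.asarray(eeg_component)[:n].reshape(-1, frames_per_window).mean(axis=1)
            return np.corrcoef(ext_w, eeg_w)[0, 1]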

    Original URL path: http://www.cs.columbia.edu/2014/gestures/ (2016-02-17)

  • Vishal Misra
    …MC 0401, New York, NY 10027-7003. Email: misra@cs.columbia.edu. Phone: 212-939-7061. Fax: 212-666-0140.

    Education: Ph.D., University of Massachusetts, May 2000; M.S. in Electrical Engineering, Department of Electrical and Computer Engineering, University of Massachusetts, May 1996; B.Tech. in Electrical Engineering, Department of Electrical Engineering, Indian Institute of Technology Bombay, May 1992.

    Recent Professional Activities: Associate Editor, Journal of the ACM; Program…

    Original URL path: http://www.cs.columbia.edu/~misra/ (2016-02-17)

  • Vishal Misra's Publications
    Vishal Misra: Home | Students | Publications | Teaching | Beyond Papers

    Original URL path: http://www.cs.columbia.edu/~misra/publish.html (2016-02-17)

  • Jonathan L. Gross
    …for Bell Laboratories and for IBM. These include mathematical methods for performance evaluation at the advanced level and for developing reusable software at a basic level. He has received several awards for outstanding teaching at Columbia University, including the career Great Teacher Award from the Society of Columbia Graduates. His most recent books are Topics in Topological Graph Theory, co-edited with Tom Tucker and series editors Lowell Beineke and Robin Wilson, and Combinatorial Methods with Computer Applications. Other books include Topological Graph Theory, co-authored with Thomas W. Tucker; Graph Theory and Its Applications, co-authored with Jay Yellen; and the Handbook of Graph Theory, co-edited with Jay Yellen. Another previous book, Measuring Culture, co-authored with Steve Rayner, constructs network-theoretic tools for measuring sociological phenomena. Prior to Columbia University, Professor Gross was in the Mathematics Department at Princeton University, where he worked with Ralph Fox. His undergraduate work was at the Massachusetts Institute of Technology. His Ph.D. thesis on 3-dimensional topology at Dartmouth College solved a published problem of Fields Medalist John Milnor.

    Research Interests: topological graph theory, especially genus distribution; knot theory, especially Celtic knots; computer graphics, especially woven shapes; covering-space methods, especially voltage graphs; mathematical models for social anthropology, especially the grid-group theory of Mary Douglas; topology of 3-manifolds.

    Research Publications: Journal Publications; Supplementary Research Material. Current Graduate Students: Imran F. Khan, Mehvish I. Poshni. Miscellaneous Resources: graph theory; Genus Distribution Calculator.

    Monographs and Edited Volumes: Topological Graph Theory (with T. W. Tucker), Wiley Interscience, 1987; paperback edition, Dover Publications, 2001. Graph Theory and Its Applications (with J. Yellen), CRC Press, 1999; second edition, 2006. Handbook of Discrete and Combinatorial Mathematics, Associate Editor (with K. H. Rosen and D. Shier), CRC Press, 2000. Handbook of Graph Theory, co-…

    Original URL path: http://www.cs.columbia.edu/~gross/ (2016-02-17)


  • …the subway, because they all do the same thing. If you ever encounter a textbook on discrete math that doesn't count the exterior face of a graph when counting faces, burn the book. Then capture the author and burn him too. When I say "a baby-level proof," that's just how mathematicians talk; I don't actually know any babies that can do algebraic topology. Negativebplusorminusthesquarerootofbsquaredminusfouracovertwoa. You have to say it very quickly or you'll get it wrong. When you don't know what I'm doing in lecture, you can be pretty sure it's self-parody. I'm not quite sure when this happened, but it was so long ago that I can't turn it off. If I ever fail to overstate the case, please call an ambulance. I often get confused when I try to do several things simultaneously. In fact, I sometimes get confused when I try to do one thing simultaneously. I wish I had a surefire way to avoid writing errors in my course notes. I once had a fantasy about inventing a computer language that branches on intent. I have no idea what liquid soap will make your dishes sparkle.

    Original URL path: http://www.cs.columbia.edu/~gross/things_I_said.html (2016-02-17)

  • Right time, right place: A collaborative approach for more accurate context-awareness in mobile apps and ads | Dept. of Computer Science, Columbia University
    …calling that work. It's an intuitive approach, but it lacks flexibility (not everyone has the same schedule), and it ignores the commute, which can be a significant amount of time for some people and a missed opportunity for businesses located along the commute. Rather than imposing a static temporal framework, collaborative place models learn the quantitative relationship between week-hours by inferring similarities across all users, relying on Bayesian estimation techniques to do so. With a global temporal framework thus set, the relevance of the sparser latitude-longitude GPS coordinates from individual users can then be determined from how they fit into the global temporal pattern. In this way the model reconstructs a particular user's home-work commuting schedule, even though that user might have been observed only at Thursday 3pm and Monday 1pm.

    To prove the concept, the researchers tested the model using two real-world data sets: a sparse one collected from a mobile ad exchange and a dense one from a cellular carrier. In both cases the only inputs were user IDs, latitudes, longitudes, and time stamps; the data was anonymized by removing all personal information. With data aggregated across all users, a strong global temporal pattern emerged fairly quickly, one that contained within it several temporal clusters correlated with work, morning and evening commutes, leisure time after work, and sleeping at night. With the global pattern thus established, the individual spatiotemporal patterns of individual users became apparent, even with few data points associated with each user.

    The spatial extent of the place types associated with temporal clusters was determined by replacing multiple observations logged during the same hour with their geometric median, computed using Weiszfeld's algorithm, and by clustering nearby points using a Gaussian mixture model that is a subcomponent of the collaborative place model. This contrasts with the use of averaging in other place models to handle redundant observations and the noise that arises from GPS errors and from having multiple cell towers cover the same location; by not averaging, the collaborative place model avoids the strange results sometimes caused by deviations from the regular routine, such as a late work evening or a night or weekend away from home. Flexibility was built into the model by allowing users to have varying numbers of places or week-hours. This flexibility turned out to be key: an early, simpler prototype that constrained users to have the same week-hour distribution performed worse than a baseline model.

    The strong, well-defined pattern on the left results from combining global weekly patterns with the spatiotemporal data of an individual user arbitrarily chosen from the dense dataset. The right distribution, for the same user, represents a previous baseline model that did not infer global patterns and so was not able to correctly identify important places.

    In the end, the data by itself was enough to reliably assess a user's spatiotemporal schedule. Without the need to label or average location places, the collaborative approach of combining global…
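    The geometric-median step is standard enough to sketch. The snippet below is an illustrative, pure-Python Weiszfeld's algorithm (not the paper's implementation): it replaces the points logged during one week-hour with the single point minimizing the total distance to them, which, unlike a plain average, is not dragged off by a one-off outlier such as a night away from home.

        # Illustrative Weiszfeld's algorithm for the geometric median of 2-D points.
        # A sketch, not the collaborative place model's actual code.
        import math

        def geometric_median(points, iterations=100, eps=1e-9):
            """Iteratively re-weighted averaging that converges to the point
            minimizing the sum of Euclidean distances to the inputs."""
            x = sum(p[0] for p in points) / len(points)   # start from the centroid
            y = sum(p[1] for p in points) / len(points)
            for _ in range(iterations):
                num_x = num_y = denom = 0.0
                for px, py in points:
                    d = math.hypot(px - x, py - y)
                    if d < eps:
                        continue        # skip a coinciding point (simplified handling)
                    num_x += px / d
                    num_y += py / d
                    denom += 1.0 / d
                if denom == 0.0:        # all points coincide with the current estimate
                    break
                x, y = num_x / denom, num_y / denom
            return (x, y)

        # Example: three readings near home plus one GPS glitch; the median stays near
        # home, whereas a plain average would be pulled toward the outlier.
        observations = [(40.807, -73.962), (40.808, -73.961), (40.806, -73.963), (40.90, -73.80)]
        print(geometric_median(observations))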

    Original URL path: http://www.cs.columbia.edu/2015/collaborative-place-models/ (2016-02-17)

  • Exploring the acoustic nature of objects: What a zoolophone reveals | Dept. of Computer Science, Columbia University
    Finding the right shape for the right sound. While previous algorithms attempted to optimize either amplitude (loudness) or frequency, the zoolophone required optimizing both simultaneously. Creating realistic and pleasing musical sounds also required adding overtones: secondary frequencies, higher than the main one, that contribute to the timbre associated with notes played by a professionally produced instrument. A note by itself, a single frequency, would otherwise sound dull and plain.

    The end-to-end method works like this: the user specifies a shape, in this case a zoo animal, and the desired sound, as well as a contact spot where the shape is to be struck. From this input, the researchers construct variations of the zoo animal in mesh form; if the desired animal is a lion, every conceivable shape and configuration of a lion is considered: tall lions, short lions, lions with a very long tail, others with a normal-length tail, and others with every tail length in between. It's an immense search space, and searching it for the optimal shape, the one that when struck produces the wanted sound, proved to be the core computational difficulty. The zoolophone starts with an object and the desired sounds and then, taking the input material into account, finds the optimal shape to achieve the desired sound.

    While there are a number of optimization algorithms for intelligently searching and sampling within a large space, existing ones proved too slow and too apt to fall into a local minimum; that is, the algorithm converges on the best value within a subarea of the search space without having explored the entire space for an even better one. To increase the chances of finding the optimal shape, and of doing so quickly, Zheng and his colleagues proposed a new and fast stochastic optimization method, Latin Complement Sampling. This method ensures that samples are taken evenly from across a particular subarea of the search space, while using the information learned from the previous search to ensure that the unexplored subareas most likely to contain optimal values are not neglected. Latin Complement Sampling outperformed all the alternative optimizations and could be used on a variety of other problems.

    A second task was to accurately predict the sound each shape will produce when struck. Much of this work had already been done: in Toward High-Quality Modal Contact Sound, Zheng and Doug James describe a method for simulating a sound given a shape and material. The current zoolophone project adapts this work by inverting it: given a sound, what shape can produce it? The optimal shape, once found, is then perforated, again automatically, to suppress unwanted overtones, keeping only those that enhance the sound.

    New, original musical instruments become possible. Creating animal or other specially shaped keys would be prohibitively difficult and time-consuming using traditional manual methods. Connecting sound characteristics with an object's shape is not intuitive, and manual methods work by starting…
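    The role of overtones in timbre can be illustrated with a generic modal-synthesis sketch (not the paper's simulator): a struck key is modeled as a sum of exponentially decaying sinusoids at a fundamental and a few overtone frequencies. All frequencies, dampings, and amplitudes below are made-up placeholder values; dropping the overtones leaves the dull, single-frequency tone described above.

        # Illustrative modal synthesis: a struck note as a sum of damped sinusoids.
        import math

        SAMPLE_RATE = 44_100

        def struck_note(modes, duration=1.0):
            """modes: list of (frequency_hz, amplitude, damping_per_second) triples."""
            samples = []
            for n in range(int(duration * SAMPLE_RATE)):
                t = n / SAMPLE_RATE
                s = sum(a * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
                        for f, a, d in modes)
                samples.append(s)
            return samples

        # A fundamental at 440 Hz plus two overtones gives the note its timbre;
        # synthesizing only the first mode yields the plain, dull tone.
        rich  = struck_note([(440.0, 1.0, 3.0), (1180.0, 0.4, 6.0), (2210.0, 0.2, 9.0)])
        plain = struck_note([(440.0, 1.0, 3.0)])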

    Original URL path: http://www.cs.columbia.edu/2015/zoolophone-shows-how-to-control-vibrations/ (2016-02-17)