  • Side-channel attacks in web browsers: practical, low-cost, and highly scalable | Dept. of Computer Science, Columbia University
    …privilege rings, hypervisors, and sandboxing. The attack is possible because memory-location information leaks out through the timing of cache events: if a needed element is not in the cache (a "cache miss" event, for instance), it takes longer to retrieve, which allows the researchers to know what data is currently being used by the computer. To add a new data element to the cache, the CPU must evict existing elements to make room; an evicted element is removed not only from the L3 cache but from lower-level caches as well. To check whether data residing at a certain physical address is present in the L3 cache, the CPU calculates which part of the cache (the cache set) is responsible for that address, then checks only the lines within the cache that correspond to this set, allowing the researchers to associate cache lines with physical memory. By timing these events, the researchers were able to infer which instruction sets are active and which are not, and what areas in memory are active when data is being fetched.

    "It's remarkable that such a wealth of information about the system is available to an unprivileged webpage," says Oren. While previous studies have been able to observe some of the same behavior, they relied on specially written software that had to be installed on the victim's machine. "What's remarkable here is that we see some of the same information using only a browser," says Vasileios Kemerlis, a PhD student who worked on the project and is now an Assistant Professor in the Computer Science Department at Brown University.

    By selecting a group of cache sets and repeatedly measuring their access latencies over time, the researchers were able to construct a very detailed picture, or "memorygram," of the real-time activity of the cache (a minimal sketch of this probe loop follows this excerpt).

    [Figure: A memorygram of L3 cache activity. Vertical line segments indicate multiple adjacent cache sets active during the same time period; since consecutive cache sets within the same page frame correspond to consecutive addresses in physical memory, this may indicate the execution of a function call spanning more than 64 bytes of assembler instructions. The white horizontal line indicates a variable constantly accessed during measurements, probably belonging to the measurement code or to the underlying JavaScript runtime.]

    Such a detailed picture is possible only because many web browsers recently upgraded the precision of their timers, making it possible to time events with microsecond precision. If memorygrams were fuzzier and less detailed, it would not be possible to capture events as brief as a cache miss. Different browsers implement this new feature with different precisions. High-resolution timers were recently added to browsers to give developers, especially game developers, sufficiently fine-grained detail to know what processes might be slowing performance. Of course, the more information developers have, the more information an attacker can access as well. Different processes have different memorygrams, and the same is true for different websites; their…
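    The probe-and-time primitive described above can be sketched in a few lines of browser-side code. This is a minimal illustration under stated assumptions, not the researchers' code: the buffer size, the eviction-set construction, and the function names are invented for the example, and browsers have since coarsened performance.now() precisely to blunt this class of attack.

```typescript
// Minimal sketch of a cache-probing loop (illustrative only).
// Assumes the offsets in each "set" map to the same L3 cache set,
// which a real attack must first discover.

const buf = new Uint8Array(32 * 1024 * 1024); // large buffer spanning many cache sets

// Time one pass over a group of addresses. A slow pass suggests our
// lines were evicted (cache misses); a fast pass suggests cache hits.
function probe(offsets: number[]): number {
  const t0 = performance.now();
  let sink = 0;
  for (const off of offsets) sink += buf[off]; // the actual memory accesses
  const t1 = performance.now();
  (globalThis as any).__sink = sink; // keep `sink` alive so the JIT can't drop the loop
  return t1 - t0;
}

// Build a "memorygram": one latency sample per cache set per time step.
function memorygram(sets: number[][], steps: number): number[][] {
  const rows: number[][] = [];
  for (let t = 0; t < steps; t++) rows.push(sets.map(s => probe(s)));
  return rows;
}
```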

    Original URL path: http://www.cs.columbia.edu/2015/spy-in-the-sandbox/ (2016-02-17)


  • Columbia CS class among first students to experiment with mobile DNA sequencing device | Dept. of Computer Science, Columbia University
    …to identify suspects before they can flee, destroy evidence, or mount another attack. Forensics is just the start. Health-related fields will also benefit when it becomes possible to quickly, without waiting for lab results, determine whether water towers harbor legionella, detect the accumulation of pathogenic microbes in meat, and perform diagnoses in the field during humanitarian crises. Mobile DNA sequencing will have applications in the home as well: Erlich, in his paper "A Vision for Ubiquitous Sequencing," to be published in Genome Research this fall, even anticipates toilet systems equipped with DNA-sequencing technology to trace the gut microbiome.

    Mobile DNA sequencing is being made possible by a new generation of small, hand-held, low-cost devices that plug into the USB port of a PC and read DNA information in real time from samples taken on the spot. The devices are still under development and are being selectively distributed to researchers for testing and evaluation. Columbia students are among the first to have access to these devices as part of their training: seven MinION™ devices are being provided for use in Ubiquitous Genomics at no cost by the British manufacturer Oxford Nanopore, after a request from Erlich.

    [Photo: The MinION DNA sequencing device from Oxford Nanopore Technologies Ltd.]

    The idea is to get the devices into the hands of the students to see what they can do with them, both to imagine new applications and to come up with new methods or algorithms to analyze the data. Any new software developed will be open source, licensed under the GNU General Public License v2: anyone wanting to use or modify the code must post it under the same license. What happens when you give young, smart students a new device with tremendous potential? What are the new applications they…

    Original URL path: http://www.cs.columbia.edu/2015/erlich-ubiquitous-genomics/ (2016-02-17)

  • Columbia computer scientists are presenting four papers at FOCS | Dept. of Computer Science, Columbia University
    …that makes it possible to prove the lower bound. The depth hierarchy theorem, which is the paper's main result, answers several long-standing open questions in structural complexity theory and Boolean function analysis. "An average-case depth hierarchy theorem for Boolean circuits" received the Best Paper Award at FOCS 2015.

    On the Complexity of Optimal Lottery Pricing and Randomized Mechanisms
    Xi Chen (Columbia University), Ilias Diakonikolas (School of Informatics, University of Edinburgh), Anthi Orfanou (Columbia University), Dimitris Paparas (Columbia University), Xiaorui Sun (Columbia University), Mihalis Yannakakis (Columbia University)

    The authors of "On the Complexity of Optimal Lottery Pricing and Randomized Mechanisms" show that even a relatively simple lottery-pricing scenario presents a computational problem that one cannot hope to solve efficiently in polynomial time. In economics, lottery pricing is a strategy for maximizing a seller's expected revenue. Unlike item pricing, the seller does not fix the price of each item but offers the buyer lotteries at different prices, each assigning items according to a probability distribution. For example, the seller may offer the buyer, at a certain price, a 50% chance of getting item 1 and a 50% chance of getting item 2. Lotteries give the seller a bigger space in which to price things, and lottery pricing has been shown in some cases to achieve strictly higher revenues than the seller can get from item pricing (a toy illustration follows at the end of this item).

    The computational problem for the seller is to figure out an optimal menu, or set of lotteries, that maximizes the seller's revenue; the authors consider the case of a single unit-demand buyer. This setup is similar to the optimal item-pricing problem that the authors examined last year in another paper. Moving from item pricing to lottery pricing, however, completely changes the computational problem: while item pricing is in some ways a discrete problem, lottery pricing is essentially continuous, given its linear-program characterization involving exponentially many variables and constraints. Though it had previously been conjectured that the optimal lottery-pricing problem is hard, it took a delicate construction and a 30-page proof from the authors to finally obtain the hardness evidence. That evidence is summarized in the paper, where the authors show that even for a simple lottery scenario (one seller and one unit-demand buyer), it is not possible to solve the pricing problem efficiently in polynomial time without adding conditions or special cases. Via a folklore connection, the result also provides the same hardness evidence for the problem of optimal mechanism design under the same simple setup.

    Indistinguishability Obfuscation from the Multilinear Subgroup Elimination Assumption
    Craig Gentry (IBM T.J. Watson Research Center), Allison Lewko Bishop (Columbia University), Amit Sahai (University of California at Los Angeles), Brent Waters (University of Texas at Austin)

    Abstract: We revisit the question of constructing secure general-purpose indistinguishability obfuscation with a…
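    To make the lottery-pricing advantage referenced above concrete, here is a toy example of our own devising (not from the paper). Take two items and three equally likely unit-demand buyer types with valuations

    $$v_{T_1}=(4,0),\qquad v_{T_2}=(0,4),\qquad v_{T_3}=(3,3),$$

    and assume ties are broken in the seller's favor. The best item pricing is $(p_1,p_2)=(4,3)$ (or its mirror image), earning expected revenue $\tfrac{1}{3}(4+3+3)=\tfrac{10}{3}$. The lottery menu {item 1 at price 4, item 2 at price 4, a 50/50 lottery over both items at price 3} does strictly better: $T_1$ and $T_2$ still buy their preferred item at 4 (the lottery is worth only 2 to them), while $T_3$, whose expected value for the lottery is 3, buys the lottery, for expected revenue $\tfrac{1}{3}(4+4+3)=\tfrac{11}{3}$.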

    Original URL path: http://www.cs.columbia.edu/2015/focs-2015/ (2016-02-17)

  • Steven Nowick awarded NSF grant to develop asynchronous on-chip interconnection networks | Dept. of Computer Science, Columbia University
    …also a challenge. Exacerbating everything is the traditional use of a fixed-rate clock that centrally controls all components on a chip. One promising alternative is to build "plug-and-play" computer systems that dispense entirely with a global clock and rely instead on asynchronous communication, where individual components communicate with one another as needed. Structured digital on-chip interconnection networks, called networks-on-chip (NoCs), are already an organizing backbone of many recent commercial parallel computers and embedded systems using synchronous approaches, as well as of recent forays into asynchronous approaches (IBM's TrueNorth neuromorphic chip, STMicroelectronics' STHORM embedded processor). While initial NoC solutions with asynchronous design have been promising and demonstrate ease of assembly and scalability, they lack fundamental features needed to make them viable for industry. To explore and significantly advance plug-and-play systems for industrial applications, the National Science Foundation established the grant "An Asynchronous Network-on-Chip Methodology for Cost-Effective and Fault-Tolerant Heterogeneous SoC (System-on-Chip) Architectures." The grant, for $420,000, will fund several significant new research directions in the area of asynchronous on-chip networks and systems. The grant's principal investigator is Steven Nowick, a computer…

    Original URL path: http://www.cs.columbia.edu/2015/Nowick-NSF-grant-async-chips/ (2016-02-17)

  • Vishal Misra goes before Indian Parliament Committee to present views on net neutrality | Dept. of Computer Science, Columbia University
    …Facebook, for instance, might pay Airtel, India's largest telecom operator, an agreed-upon amount to cover data charges for Facebook customers on Airtel. For Facebook customers on networks other than Airtel, Facebook would separately negotiate with those other providers. On the surface, zero rating seems like a good deal for consumers, and Facebook has been especially active in promoting zero rating, pitching it as an accessibility effort to make Internet access more affordable for the large percentage of Indian consumers who otherwise couldn't afford to pay for that access. It is estimated that only around 19% of India's population has Internet access.

    But as critics point out, zero rating creates an uneven playing field that benefits large, well-funded companies that can afford to subsidize their customers' data charges. Their packets are free, while packets from companies unable to afford zero-rating payments are not. It's a pricing structure that discriminates mainly against small companies and startups, even if they offer superior services or innovative features. While the exact definition of net neutrality is often debated (Misra himself addresses this issue in "What Constitutes Net Neutrality?"), net neutrality is universally understood to prohibit service providers from discriminating among packets based on who the content is for. Enacting the DoT's rule as it is written now gives Indian service providers an opening to do just that.

    Facebook, Google, and other large, well-funded companies, not surprisingly, tend to support zero rating. From public statements it can be assumed Facebook voiced its support for the policy before the Standing Committee on Information Technology; Google declined its invitation to depose. Testimony before the committee is confidential, but Misra's support for net-neutrality practices is well known, not only from articles and statements he has made in the past but through his extensive research examining network neutrality from engineering, networking, and economic perspectives. (See "On Cooperative Settlement Between Content, Transit, and Eyeball Internet Service Providers" and "The Public Option: A Non-regulatory Alternative to Network Neutrality.") This research in particular highlights the danger of anti-competitive, non-neutral practices. Whether it's ISPs favoring certain packets over others, as in zero rating, or a lack of competition in the last-mile connection, the effect is the same: innovation suffers, and service providers lose the incentive to improve service and keep prices low.

    The view that zero rating should be banned seems to be shared by a growing segment of the Indian population. The deadline for the public to comment on the government report, originally set for August 15, had to be extended by five days to accommodate the huge number of comments, almost all of which favor banning zero rating. The government's response to the DoT report and public comments is expected in two to three months.

    Linda Crane
    Posted 8/21/2015

    About Vishal Misra: Vishal Misra is an Associate Professor in the Department of Computer Science at Columbia University. He is credited with…

    Original URL path: http://www.cs.columbia.edu/2015/Misra-Indian-parliament-net-neutrality/ (2016-02-17)

  • In US Senate testimony, Henning Schulzrinne offers technology solutions to unwanted calls | Dept. of Computer Science, Columbia University
    …technology solutions, which fall into roughly three categories: filtering, caller ID and name authentication, and gateway blocking. Each, summarized below, has its strong points and limitations. For the full transcript of Schulzrinne's testimony, go here.

    Filtering. Filtering, either through a third-party service or a downloaded app, works by checking each incoming call against a white list of trustworthy phone numbers or a black list of nonacceptable ones, compiled in one of several ways: from FTC and FCC customer complaints, crowd-sourced by consumers, or collected through honeypots. (Honeypots are stealth servers programmed to act like normal phones, with numbers not assigned to any individual or company, for the express purpose of capturing the phone numbers of robocallers.) Built-in safeguards can ensure emergency alert calls get through, as do calls placed from medical facilities; unknown phone numbers can be verified by making callers prove that they are human rather than robotic (a minimal code sketch of such a check follows this excerpt).

    Filtering today has several drawbacks. It puts the onus on individuals, and it protects only those who know about filtering and are willing to do the setup, generally the most sophisticated people, who are unlikely to fall for a scam in any case. By protecting the people who least need it, filtering today leaves the most vulnerable even more exposed. Extending filtering to others is not currently easy: filtering works on many landlines, and it is usually available only through large cable companies like Time Warner or Comcast that support external filtering services such as Nomorobo. And filters are easily avoided by robocallers' use of spoofing.

    Caller ID and name authentication. Spoofing is perhaps the most nefarious aspect of the scamming schemes: almost anyone is likely to pick up when seeing the phone number of the local police department or the IRS. Spoofing has other bad uses as well, since a caller ID is often used to verify one's identity when gaining access to voicemail or when calling a bank, utility, or airline. Preventing spoofing is necessary both to make filtering effective and to stop robocallers from impersonating others, and Schulzrinne offered possible ways to do it. One is to authenticate the originating number to ensure the caller is authorized to use the caller ID contained in the call setup message. Authentication would require phone carriers to insert links to new cryptographic certificates so any carrier along the way could validate the signature and detect spoofed caller IDs. These calls could then be labeled in some way or, if the customer prefers, rejected.

    However, it's not clear how much the phone carriers will do voluntarily. For years, carriers have resisted appeals to block robocalls, claiming that federal law prohibits them, as common carriers, from doing so. The FCC pulled the rug out from under this excuse in a June 18 vote that explicitly states that phone companies are legally allowed to provide filtering to those customers who request it. The FCC does not currently, however, obligate phone companies to provide filtering. Using his deep knowledge of…
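    As a minimal sketch of the list-based screening described above: check an incoming caller ID against a white list, then a black list, and otherwise challenge the caller to prove they are human. The list contents, the always-allow carve-out, and every name below are invented for illustration, not any real service's API.

```typescript
// Illustrative call-screening check: whitelist, then blacklist, then
// challenge unknown callers before the phone rings.

type Verdict = "allow" | "block" | "challenge";

const whitelist = new Set(["+12125550123"]); // trusted numbers (placeholder)
const blacklist = new Set(["+18005550199"]); // known robocallers (placeholder)
const alwaysAllowPrefixes = ["+1999"];       // e.g., emergency-alert sources (placeholder)

function screenCall(callerId: string): Verdict {
  if (alwaysAllowPrefixes.some(p => callerId.startsWith(p))) return "allow";
  if (whitelist.has(callerId)) return "allow";
  if (blacklist.has(callerId)) return "block";
  // Unknown number: verify the caller is human (e.g., "press 7 to connect")
  // before the call is put through.
  return "challenge";
}

console.log(screenCall("+18005550199")); // "block"
console.log(screenCall("+16465550111")); // "challenge"
```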

    Original URL path: http://www.cs.columbia.edu/2015/Schulzrinne-senate-testimony-robocalls/ (2016-02-17)

  • Steven Nowick invited to present work on asynchronous on-chip networks at two national study groups | Dept. of Computer Science, Columbia University
    …major benefits besides: it saves power and energy, since components, when not busy, are not activated at every clock cycle to remain synchronized. Because of his research into asynchronous communication, Nowick was invited to participate in two government-sponsored workshops, one on wireless networks and one on high-performance parallel computing. Each was a two-day event where approximately 35 leading experts from industry and academia were invited to meet and help define the challenges and opportunities within the specific area. The ultimate goal was to help guide future funding and research initiatives for US government agencies.

    Asynchronous communication for networks. The wireless workshop, held this past March, was sponsored by the National Science Foundation (NSF) and was entitled "Workshop on Ultra-Low Latency Wireless Networks." While the NSF has previously hosted workshops on wireless networks, this was the first time such a workshop explored both macro-level networks, such as Ethernet, and micro-level on-chip networks (i.e., networks-on-chip). The idea was that those working in the relatively new area of on-chip networks could learn from those working on macro-level networks, and vice versa. One potential borrowing from macro-level networks is the use of antennas to enable wireless networks on chips. Putting micro-antennas on a chip may seem a wild idea now, but it has several obvious benefits: it reduces the amount of wiring, freeing up valuable real estate on the chip while removing a source of heat dissipation; at the same time, it avoids the problem of overloading wires during periods of heavy processing. Wireless communication via antennas will also work in 3D chips, enabling communication between layers. Nowick was at the workshop to discuss his research on asynchronous communication and how it contributes to the ease of assembling large networks-on-chip. His presentation, one of many directions explored by a cross-section of industry and academic people invited to the workshop, grew out of his work designing on-chip networks, but the concept of allowing components to operate independently has application to almost any complex system. In fact, he had made a similar presentation for an entirely different audience at a previous workshop not long before.

    Asynchronous processing for high-performance computing. In August 2014, Nowick attended the "System-on-Chip Design for High-Performance Computing" workshop, sponsored jointly by the NSF, DARPA, DOE, NASA, and Sandia and Lawrence Berkeley National Laboratories, all of which are faced with analyzing massive amounts of scientific data, from astronomy data to nuclear, weather, social-network, and oceanography data. Processing the amount of data seen by these agencies is possible only through continued advances in high-performance computing and the ability to more efficiently parallelize tasks among many processing clusters. Asynchronous communication has obvious application in handling the complexity of how data moves among multiple clusters. But while people know how to build massively parallel computers today, it takes expensive custom design and special tools. Because very few companies have data…

    Original URL path: http://www.cs.columbia.edu/2015/Nowick-invited-to-two-national-workshops/ (2016-02-17)

  • Machine learning applied to cancer: A PhD student doubles the number of breast cancer drivers | Dept. of Computer Science, Columbia University
    …background alterations that have no ill effect. For identifying significant SCNAs there are existing algorithms, but most lack the resolution necessary to identify driver regions small enough that it becomes possible to pinpoint the single gene responsible for a cancer. Nor do SCNA-detection algorithms take into account that the alteration rate can differ greatly among different genomic regions: most, including GISTIC2, compute a null distribution across the entire genome to estimate the significance of alterations, meaning each region is independently evaluated against a global absolute. Sanchez-Garcia suspected some regions might not score highly against the global absolute score but would nevertheless be significantly more altered than adjacent chromosomal regions. With help from Dylan Kotliar, Bo Juen Chen, and Uri David Akavia, all in the Pe'er Lab, he created a new algorithm called ISAR (Identification of Significantly Altered Regions) specifically to compare an alteration only with nearby, surrounding alterations (a minimal sketch of this local comparison follows this excerpt). This measure of local alteration became an additional data point for identifying possible regions harboring driver genes. Applying this methodology to 785 breast cancer tumor samples taken from The Cancer Genome Atlas (TCGA), ISAR identified 83 significant regions, more than double the 30 regions previously reported. The regions found by ISAR included previously known regions as well as additional ones, including several containing known oncogenes (genes having the potential to cause cancer), thus providing strong evidence that the algorithm was making accurate predictions.

    Looking to the data. With ISAR supplying a list of regions likely to harbor driver genes, Sanchez-Garcia and Pe'er then moved to the next step: identifying driver genes within those regions. Standard classification approaches rely on an initial list of sample drivers and passengers to train the model. The problem here is that the list of known drivers is relatively small, and strongly biased, moreover, toward kinases and extreme phenotypes. Sanchez-Garcia opted for a different approach, one that would look to the data itself to determine significant features. This more data-centric approach of course works best when data is plentiful, which is not the case for any single cancer data source. The solution was to incorporate different data sources measuring different aspects of each cancer to obtain a range of different data types. One data source was primary tumor data, measured directly from patient tumors; from this source Sanchez-Garcia obtained copy number, point mutations, and gene expression levels. A second data source was cell lines, data originally derived from tumors but modified for lab study; for cell lines he derived copy number, gene expression levels, and functional RNAi screening data, which provides functional information about the genome by knocking out each individual gene in a cell line. Together this diverse set of data produced a single candidate-driver score. While previous methods had also integrated different data sources, particularly patient data and functional lab cell-line data, they tended to look for the intersection between data sources, in effect narrowing the data by omitting non-…
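    The local-comparison idea behind ISAR can be illustrated with a short sketch: score each genomic bin's alteration rate against its neighboring bins rather than against one genome-wide null. This is a simplification under our own assumptions (the window size, the z-score, and the threshold are invented for the example), not ISAR's actual statistical machinery.

```typescript
// Illustrative local scoring: how much does each bin's alteration rate
// stand out from its chromosomal neighborhood?

function localZScores(rates: number[], window = 50): number[] {
  return rates.map((rate, i) => {
    const lo = Math.max(0, i - window);
    const hi = Math.min(rates.length, i + window + 1);
    const neighbors = rates.slice(lo, hi).filter((_, j) => lo + j !== i);
    const mean = neighbors.reduce((a, b) => a + b, 0) / neighbors.length;
    const sd = Math.sqrt(
      neighbors.reduce((a, b) => a + (b - mean) ** 2, 0) / neighbors.length
    );
    return sd > 0 ? (rate - mean) / sd : 0; // 0 when the neighborhood is flat
  });
}

// Bins far above their local background are candidate driver regions,
// even if they would not stand out against a global, genome-wide null.
function candidateBins(rates: number[], threshold = 3): number[] {
  return localZScores(rates)
    .map((z, bin) => ({ bin, z }))
    .filter(({ z }) => z > threshold)
    .map(({ bin }) => bin);
}
```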

    Original URL path: http://www.cs.columbia.edu/2015/machine-learning-applied-to-cancer/ (2016-02-17)