2013 International Summer School on
Trends in Computing

Tarragona, Spain, July 22-26, 2013

Course Description


Shun-ichi Amari, Riken, [introductory] Information Geometry and Its Applications

Amari

Summary. Information geometry emerged from the study of the geometrical structure of a manifold of probability distributions. Using modern differential geometry, it defines a distance between two probability distributions, as well as the flatness of a subfamily of probability distributions. It is therefore useful not only for statistical inference but also for information sciences of a stochastic nature, such as machine learning, pattern recognition and optimization. The present course does not require any knowledge of differential geometry, because the manifolds treated here are dually flat, so that linear algebra and Legendre duality suffice for understanding information geometry. The invariant structure of a manifold of probability distributions is introduced at the beginning. It consists of a metric derived from the Fisher information and a dual pair of affine structures. We briefly explain how useful they are for understanding statistical inference. The most typical manifolds are dually flat. A dually flat manifold carries a dual pair of convex functions connected by the Legendre transformation, so information geometry is also useful for convex analysis. We explain many applications in machine learning, computer vision, optimization, pattern recognition and neural networks.
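
For orientation, two of the central objects mentioned above can be written down compactly (standard textbook formulas, not taken from the course materials): the Fisher information metric of a parametric family p(x; θ), and the Legendre duality between the dual convex potentials ψ and φ of a dually flat manifold:

    g_{ij}(\theta) = \mathbb{E}_{\theta}\!\left[ \frac{\partial \log p(x;\theta)}{\partial \theta_i}\,\frac{\partial \log p(x;\theta)}{\partial \theta_j} \right]   % Fisher information metric

    \psi(\theta) + \varphi(\eta) = \theta \cdot \eta, \qquad \eta = \nabla \psi(\theta), \qquad \theta = \nabla \varphi(\eta)   % Legendre duality of the dual potentials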

References

Bio. Shun-ichi Amari received the Dr. Eng. degree from the University of Tokyo in 1963. He worked at the University of Tokyo and is now Professor Emeritus there. He served as Director of the RIKEN Brain Science Institute for five years and is now its senior advisor. He has been engaged in research in wide areas of mathematical engineering, in particular the mathematical foundations of neural networks, including statistical neurodynamics, the dynamical theory of neural fields, associative memory, self-organization, and general learning theory. Another main subject of his research is information geometry, which he initiated and which provides a powerful new methodology for the information sciences. Dr. Amari served as President of the International Neural Network Society and President of the Institute of Electronics, Information and Communication Engineers, Japan. He received the Emanuel R. Piore Award and the Neural Networks Pioneer Award from IEEE, the Japan Academy Award, the Order of Cultural Merit of Japan, the Gabor Award, the Caianiello Award, the Bosom Friend Award from the Chinese Neural Networks Council, and the C&C Award, among many others.


James Anderson, Chapel Hill, [intermediate] Scheduling and Synchronization in Real-Time Multicore Systems

Anderson

Summary. The recent advent of multicore technologies has led to a surge of new research on real-time scheduling algorithms and synchronization protocols for multiprocessor systems. The main goal of this course is to survey this research, with a particular emphasis on synchronization. The first real-time synchronization protocols for multiprocessor systems were proposed 20+ years ago. In recent research on this topic, new protocols have been proposed that are the first to support fine-grained lock nesting, and the first to be asymptotically optimal from the perspective of priority-inversion blocking. This course will enable students to understand these newer protocols and how they build upon earlier work. The correctness of a real-time synchronization protocol hinges on the scheduling algorithm that is used. Scheduler alternatives for real-time multiprocessor systems will be covered at a level sufficient to understand the synchronization-related material that is presented. Real-time multiprocessor synchronization is an active area of research for which many opportunities exist for publishing new work. Thus, Ph.D. students who are seeking a dissertation topic might find this course beneficial.

References

Bio. James H. Anderson is a professor in the Department of Computer Science at the University of North Carolina at Chapel Hill. He received a B.S. degree in Computer Science from Michigan State University in 1982, an M.S. degree in Computer Science from Purdue University in 1983, and a Ph.D. degree in Computer Sciences from the University of Texas at Austin in 1990. Prof. Anderson received the U.S. Army Research Office Young Investigator Award in 1995, was named an Alfred P. Sloan Research Fellow in 1996, and was selected as an IEEE Fellow in 2012. He won the UNC Computer Science Department's teaching award in 1995, 2002, 2005, and 2012. Prof. Anderson's main research interests are within the areas of concurrent and distributed computing and real-time systems. He has authored over 190 papers in these areas.


Pierre Baldi, Irvine, [intermediate] Big Data Informatics Challenges and Opportunities in the Life Sciences

Baldi

Summary. This short course will explore the rich, two-way interplay between the life and computational sciences, and the informatics challenges and opportunities created by big data generated by a variety of high-throughput technologies, such as DNA sequencing, and by the Web. Data examples will be given at multiple scales, from small molecules in chemoinformatics and drug discovery, to proteins, to genomes and beyond. The Bayesian statistical framework and statistical machine learning methods will be described as key tools to address some of these challenges, together with integrative systems biology approaches that can combine information from multiple sources. To close the loop, we will show how biology inspires machine learning methods, such as deep architectures and deep learning. The methods will be applied to difficult problems such as predicting the structure of proteins or understanding gene regulation on a genome-wide scale. If time permits, we will briefly survey related societal issues, from genome privacy to personalized medicine.

References

Bio. Pierre Baldi is Chancellor's Professor in the Department of Computer Science, Director of the Institute for Genomics and Bioinformatics, and Associate Director of the Center for Machine Learning and Intelligent Systems at the University of California, Irvine. He received his Ph.D. from the California Institute of Technology. His research work is at the interface of the computational and life sciences, in particular the application of artificial intelligence and statistical machine learning methods to problems in chemoinformatics, proteomics, genomics, systems biology, and computational neuroscience. He is credited with pioneering the use of Hidden Markov Models (HMMs), graphical models, and recursive neural networks in bioinformatics. Dr. Baldi has published over 260 peer-reviewed research articles and four books and has a Google Scholar h-index of 64. He is the recipient of the 1993 Lew Allen Award, the 1999 Laurel Wilkening Faculty Innovation Award, a 2006 Microsoft Research Award, and the 2010 E. R. Caianiello Prize for research in machine learning. He is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), the Association for Computing Machinery (ACM), and the Institute of Electrical and Electronics Engineers (IEEE).


Yoshua Bengio, Montréal, [introductory/intermediate] Deep Learning of Representations

Bengio

Summary. Machine learning has become one of the basic tools of computer science and AI research because it allows computers to exploit data in order to build up the knowledge required for creating useful products, making good decisions and understanding the world around us. The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide, to varying degrees, the different explanatory factors of variation behind the data. Although domain knowledge can be used to help design representations, learning can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms. We view the ultimate goal of these algorithms as disentangling the unknown underlying factors of variation that explain the observed data. This presentation reviews the basics and motivations of feature learning and deep learning, as well as recent work relating these subjects to probabilistic modeling and manifold learning. An objective is to raise questions and issues about the appropriate objectives for learning good representations, about how to compute representations (i.e., inference), and about the geometrical connections between representation learning, density estimation and manifold learning. Although these questions may seem abstract, the underlying methods have already been applied industrially with impressive success and breakthroughs in speech recognition, computer vision, language modeling, machine translation, computational advertising, computational music, recommendation systems, search engines, and more. But there is yet more to come by continuing to focus on the remaining long-term challenges, with which this presentation will conclude.

Bio. Yoshua Bengio received his Ph.D. in Computer Science from McGill University in 1991. After two post-doctoral years, one at M.I.T. and one at AT&T Bell Laboratories, he became a Professor in the Department of Computer Science and Operations Research at Université de Montréal. He is the author of two books and of more than 150 publications, the most cited being in the areas of deep learning, recurrent neural networks, probabilistic learning algorithms, and pattern recognition. Since 2000, Dr. Bengio has held a Canada Research Chair in Statistical Learning Algorithms, and he also holds an NSERC Industrial Chair. Dr. Bengio is a recipient of the Urgel-Archambault 2009 prize, a Fellow of the Canadian Institute for Advanced Research, a Fellow of the Centre Inter-universitaire de Recherche en Analyse des Organisations (CIRANO), Action Editor of the Journal of Machine Learning Research, Associate Editor of Foundations and Trends in Machine Learning and of Computational Intelligence, and former Associate Editor of the Machine Learning journal and of the IEEE Transactions on Neural Networks. He is the founder and head of the Laboratoire d’Informatique des Systèmes Adaptatifs, which counts four faculty members dedicated to deep learning as well as around 30 researchers. Dr. Bengio's current interests include fundamental questions on learning deep architectures, the geometry of generalization in high-dimensional spaces, biologically inspired learning algorithms, and challenging applications of statistical machine learning in artificial intelligence tasks. At the beginning of 2013, Google Scholar found almost 12,000 citations to his work, yielding an h-index of 47.


Stephen Brewster, Glasgow, [advanced] Multimodal Human-Computer Interaction

Brewster

Summary. The aim of this course is to provide a background into the area of multimodal human-computer interaction, or how we can use human sensory and control capabilities to improve the interaction with technology. Humans have a wide range of different capabilities but these are not fully used when interacting with computers. During the course, we will cover the main technical aspects of the different technologies available, how to design using them and practical applications of their use.

The course will provide a background into the use of speech for interaction and the technical and usability issues that need to be considered in order to design an effective speech user interface. We will also cover the use of non-speech sounds, such as Earcons, Auditory Icons, sonification and 3D sound, and how they can be used to augment a user interface. Examples will be given of how they can be used to best effect.

The next part of the course will cover haptics, or using the sense of touch for interaction. We will focus on three aspects. The first is the use of kinaesthetic, or force-feedback, devices for interaction. These can be used to simulate the feeling of real physical objects. We will discuss some of the technologies available and how they can be used for interaction, including applications in medicine and for users with disabilities. The second topic is cutaneous haptics, or using the skin for interaction. In this case, we will look at how we can use thermal feedback, pressure input and vibrotactile displays. The third aspect will focus on gesture interaction and why gestures can be very beneficial for interaction. We will cover gestures done on devices, gestures done with devices and gestures done with other parts of the body.

The final section of the course will focus on mobile interaction, bringing together some of the different input and output techniques described in the earlier sections. Mobile device user interfaces are heavily based on small screens and keyboards. These can be hard to operate when on the move, which limits the applications and services we can use. This section will look at the possibility of moving away from these kinds of interactions to ones more suited to mobile devices and their dynamic contexts of use, where users need to be able to look where they are going, carry shopping bags and hold on to children at the same time as using their phones. We will also cover some of the issues of social acceptability of these new interfaces; we have to be careful that the new ways we want people to use devices are socially appropriate and don’t make us feel embarrassed or awkward.

Bio. Stephen Brewster is Professor of Human-Computer Interaction (HCI) in  the School of Computing Science at the University of Glasgow. He leads the Multimodal Interaction Group, which has a world leading reputation in designing novel user interfaces, particularly for mobile and touchscreen devices. His focus is on multimodal interaction, using multiple sensory modalities (particularly audio, haptics, smell and gesture) to create richer interactions between human and computer. He has published over 250 papers in top international conferences and journals.

His work has a strong experimental focus, applying perceptual research to practical situations. A current focus is on mobile interaction and how we can design better user interfaces for users who are on the move. This has involved designing novel interactions for touchscreen devices, body-based gestures and gait analysis. He has studied the use of audio and haptic feedback to enable mobile users to get feedback from their devices while their attention is on the environment around them.


Bruno Buchberger, Linz, [introductory] Groebner Bases: An Algorithmic Method for Multivariate Polynomial Systems. Foundations and Applications

Buchberger

Summary. Do you know how to solve a univariate polynomial equation like 2x^2 - 3x + 5 = 0? And how about solving several linear equations like x1 - 2x2 - 7 = 0, x1 - 3x2 + 17 = 0? Of course you know. Do you know how to solve arbitrary systems of multivariate polynomial equations? I.e. systems in any number of unknowns x1, x2, …, xn, with any number of equations and – most importantly – with arbitrary exponents occurring with the variables, e.g. x1^3 x2 x3^2? Maybe you know how to solve such systems in special cases or you know how to solve them “approximately” (by numerical methods like Newton’s method). In this course, I will teach you a method – called Gröbner bases – which is completely general, provides exact solutions, decides exactly whether or not a solution exists at all, whether finitely or infinitely many solutions exist, computes all solutions if only finitely many exist, and gives you parametric representations in case infinitely many solutions exist.

The Gröbner Bases method is now available in all mathematical software systems like Mathematica, Maple, etc. Since multivariate non-linear equations are everywhere in the natural sciences and engineering, a whole new range of problems will be open to you if you know the Gröbner bases method.

In addition, besides solving systems, Gröbner bases can be used for numerous, seemingly unrelated problems in artificial intelligence, software engineering, cryptography, puzzle solving, computations with graphs, gaming, computer graphics, theorem proving, etc. In this crash course on Gröbner bases, all this will be presented both in theory and through practical work using Mathematica. (If possible, install Mathematica on your laptop. Most universities provide cheap student licenses!)
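
For readers who would like to experiment before the course, the same kind of computation is also available in the open-source SymPy library. The following minimal Python sketch (an illustration only; the course itself uses Mathematica) computes a lexicographic Gröbner basis of a small system and then reads off its finitely many exact solutions:

    # Minimal Groebner-basis example in SymPy (an open-source alternative to
    # Mathematica's GroebnerBasis).  Illustrative sketch only.
    from sympy import symbols, groebner, solve

    x, y = symbols('x y')

    # The circle x^2 + y^2 = 1 intersected with the line x = y.
    system = [x**2 + y**2 - 1, x - y]

    # A lexicographic Groebner basis "triangularizes" the system,
    # much as Gaussian elimination does for linear systems.
    G = groebner(system, x, y, order='lex')
    print(list(G))                 # e.g. [x - y, 2*y**2 - 1]

    # The finitely many exact solutions can then be read off / solved for.
    print(solve(system, [x, y]))   # [(-sqrt(2)/2, -sqrt(2)/2), (sqrt(2)/2, sqrt(2)/2)]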

References

Bio. Buchberger is a full professor of Computer Mathematics at the Research Institute for Symbolic Computation (RISC) of the Johannes Kepler University in Linz, Austria.

Buchberger is best known for the invention of the theory of Gröbner bases. For this theory he received the prestigious ACM Kanellakis Award in 2007 (see http://awards.acm.org/kanellakis/), was elected a member of the Academia Europaea (London) in 1991, and received five honorary doctorates.

His current main research topic is automated mathematical theory exploration (the "Theorema Project"). This project aims at the (semi–)automation of the mathematical invention and verification process, see http://www.risc.jku.at/research/theorema/description/.

Buchberger is founder of

  • RISC, the Research Institute for Symbolic Computation (1987), see http://www.risc.jku.at
  • The Journal of Symbolic Computation (1985), see http://www.journals.elsevier.com/journal-of-symbolic-computation/
  • The Softwarepark Hagenberg (1990), a technology center with over 1000 R&D coworkers, see http://www.softwarepark-hagenberg.com
  • Buchberger's general homepage: http://www.brunobuchberger.com



    Rajkumar Buyya, Melbourne, [intermediate] Cloud Computing

    Buyya

    Summary. Computing is being transformed to a model consisting of services that are commoditised and delivered in a manner similar to utilities such as water, electricity, gas, and telephony. In such a model, users access services based on their requirements without regard to where the services are hosted. Several computing paradigms have promised to deliver this utility computing vision. Cloud computing is the most recent emerging paradigm promising to turn the vision of "computing utilities" into a reality, and it has emerged as one of the buzzwords in the IT industry. Several IT vendors are promising to offer storage, computation and application hosting services, and to provide coverage on several continents, offering Service-Level Agreement (SLA)-backed performance and uptime promises for their services. Cloud computing delivers infrastructure, platform, and software (applications) as services, which are made available to consumers as subscription-based services in a pay-as-you-go model. The price that Cloud Service Providers charge can vary with time and with the quality of service (QoS) expectations of consumers.

    This tutorial will cover (a) the 21st century vision of computing and the various IT paradigms promising to deliver the vision of computing utilities; (b) the architecture for creating market-oriented Clouds by leveraging technologies such as VMs; (c) market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (d) Aneka, a software system for rapid development of Cloud applications and their deployment on private/public Clouds with resource provisioning driven by SLAs and user QoS requirements; (e) experimental results on deploying Cloud applications in engineering, gaming, and health care domains (integrating sensor networks and mobile devices) and on ISRO satellite image processing on elastic Clouds; and (f) the need for convergence of competing IT paradigms for delivering our 21st century vision, along with pathways for future research.

    Bio. Dr. Rajkumar Buyya is Professor of Computer Science and Software Engineering, Future Fellow of the Australian Research Council, and Director of the Cloud Computing and Distributed Systems (CLOUDS) Laboratory at the University of Melbourne, Australia. He is also serving as the founding CEO of Manjrasoft, a spin-off company of the University, commercializing its innovations in Cloud Computing. He has authored over 425 publications and four textbooks, including "Mastering Cloud Computing", published by McGraw Hill and Elsevier/Morgan Kaufmann in 2013 for the Indian and international markets respectively. He also edited several books, including "Cloud Computing: Principles and Paradigms" (Wiley Press, USA, Feb 2011). He is one of the most highly cited authors in computer science and software engineering worldwide (h-index=68, g-index=141, 22000+ citations). Microsoft Academic Search ranked Dr. Buyya as the world's top author in distributed and parallel computing between 2007 and 2012. Recently, ISI has identified him as a “Highly Cited Researcher” based on citations to his journal papers.

    Software technologies for Grid and Cloud computing developed under Dr. Buyya's leadership have gained rapid acceptance and are in use at several academic institutions and commercial enterprises in 40 countries around the world. Dr. Buyya has led the establishment and development of key community activities, including serving as foundation Chair of the IEEE Technical Committee on Scalable Computing and five IEEE/ACM conferences. These contributions and international research leadership of Dr. Buyya are recognized through the award of "2009 IEEE Medal for Excellence in Scalable Computing" from the IEEE Computer Society, USA. Manjrasoft's Aneka Cloud technology developed under his leadership has received "2010 Asia Pacific Frost & Sullivan New Product Innovation Award" and "2011 Telstra Innovation Challenge, People's Choice Award". He is currently serving as the foundation Editor-in-Chief (EiC) of IEEE Transactions on Cloud Computing. For further information on Dr. Buyya, please visit his cyberhome.


    Jan Camenisch, IBM Zurich, [intermediate] Cryptography for Privacy

    Camenisch

    Summary. Using the Internet and other electronic media for our daily tasks has become common. In the process, a lot of sensitive information is exchanged, processed, and stored at many different places. Once released, controlling the dispersal of this information is virtually impossible. Worse, the press reports daily on incidents where sensitive information has been lost, stolen, or misused - often involving large and reputable organizations. Privacy-enhancing technologies can help to minimize the amount of information that needs to be revealed in transactions, on the one hand, and to limit the dispersal of that information, on the other hand. Many of these technologies build on common cryptographic primitives that allow data to be authenticated and encrypted in such a way that it is possible to efficiently prove possession and/or properties of the data without revealing the data or side-information about it. Proving such statements is of course possible for any signature and encryption scheme. However, if the result is to be practical, special cryptographic primitives and proof protocols are needed.

    In this course we will first consider a few example scenarios and motivate the need for such cryptographic building blocks before we then present and discuss them. We start with efficient discrete-logarithm-based proof protocols, often referred to as generalized Schnorr proofs. They allow one to prove knowledge of different discrete logarithms (exponents) and of relations among them. Now, to be able to prove possession of a (valid) signature on a message with generalized Schnorr proofs, it is necessary that the signature and the signed message are exponents and that no hash function is used in the signature verification. Similarly, for encryption schemes, the plaintext needs to be an exponent. We will present and discuss a number of such signature and encryption schemes.

    To show the power of these building blocks, we will consider a couple of example protocols such as anonymous access control and anonymous polling. We then conclude with a discussion of security definitions and proofs. We hope that the presented building blocks will enable many new privacy-preserving protocols and applications in the future.
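
    To give a flavour of the generalized Schnorr proofs mentioned above, the Python sketch below implements the simplest case: a non-interactive proof of knowledge of a single discrete logarithm (Fiat-Shamir variant). The toy parameters are far too small to be secure, and the whole sketch is an illustration of the idea, not code from the course.

        # Toy Schnorr proof of knowledge of x such that y = g^x mod p.
        # INSECURE parameters, for illustration only.
        import hashlib, random

        p, q, g = 23, 11, 4          # p = 2q + 1; g generates the subgroup of order q

        x = 7                        # prover's secret exponent
        y = pow(g, x, p)             # public value

        # Prover: commit, derive the challenge (Fiat-Shamir), respond.
        r = random.randrange(q)
        t = pow(g, r, p)                                            # commitment
        c = int(hashlib.sha256(f"{g},{y},{t}".encode()).hexdigest(), 16) % q
        s = (r + c * x) % q                                         # response

        # Verifier: recompute the challenge and check g^s == t * y^c (mod p).
        c2 = int(hashlib.sha256(f"{g},{y},{t}".encode()).hexdigest(), 16) % q
        assert pow(g, s, p) == (t * pow(y, c2, p)) % p
        print("proof of knowledge of log_g(y) verified")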

    Bio. Jan Camenisch received a Diploma in Electrical Engineering in 1993 and a Ph.D. in Computer Science in 1998, both from ETH Zurich. From 1998 until 1999 he was a Research Assistant Professor in Computer Science at the University of Aarhus, Denmark. Since 1999 he has been a Research Staff Member and project leader at IBM Research -- Zurich. He was also the technical leader of the EU-funded projects PRIME (prime-project.eu) and PrimeLife (primelife.eu), which both contributed towards making on-line privacy a reality. His research interests include public-key cryptography; cryptographic protocols, in particular those supporting privacy and anonymity; practical secure distributed computation; and privacy-enhancing technologies.



    Larry S. Davis, College Park, [intermediate] Video Analysis of Human Activities

    Davis

    Summary. This course will cover topics related to detection, tracking and analysis of human movements and activities in surveillance video – video taken from a stationary although possibly active (pan/tilt/zoom) camera or a network of such cameras. Some of the algorithms presented are also applicable to videos taken from moving cameras. The course will be divided into six one-hour modules, one each covering the following topics:

    1. General introduction to video analysis and visual surveillance
    2. Detecting people in video – background modeling and foreground detection, modeling human appearance and human detection
    3. Tracking people and their body parts – specifically in this module, tracking a single person
    4. Multiple person tracking
    5. Human movement modeling and recognition of individual human actions
    6. Multi-agent activity modeling and recognition

    Bio. Larry S. Davis received his B.A. from Colgate University in 1970 and his M. S. and Ph. D. in Computer Science from the University of Maryland in 1974 and 1976 respectively. From 1977-1981 he was an Assistant Professor in the Department of Computer Science at the University of Texas, Austin. He returned to the University of Maryland as an Associate Professor in 1981. From 1985-1994 he was the Director of the University of Maryland Institute for Advanced Computer Studies. He was Chair of the Department of Computer Science from 1999-2012. He is currently a Professor in the Institute and the Computer Science Department, as well as Director of the Center for Automation Research. He was named a Fellow of the IEEE in 1997 and of the ACM in 2013.

    Prof. Davis is known for his research in computer vision and high performance computing. He has published over 100 papers in journals and 200 conference papers and has supervised over 35 Ph.D. students. During the past ten years his research has focused on visual surveillance and general video analysis. He has served as program or general chair for most of the field's major conferences, including the 5th International Conference on Computer Vision, the 2004 Computer Vision and Pattern Recognition Conference, the 11th International Conference on Computer Vision held in 2006, and the 2010 Computer Vision and Pattern Recognition Conference. He is the co-General chair of the 2013 International Conference on Computer Vision to be held in Sydney in December 2013.


    Paul De Bra, Eindhoven, [intermediate] Adaptive Systems

    De Bra

    Summary. In this course we study the principles of and technology for creating adaptive systems for “information delivery” (not for controlling devices such as the transmission of a car or the temperature control in a house or office building). We focus on user-adaptive systems, used for automatic personalization of websites, e.g. for on-line courses, museums, entertainment or news. The course covers topics of user modeling and adaptation. Both will also be practiced using GALE (the Generic Adaptation Language and Engine) developed in the European project GRAPPLE.

    References

    Bio. Paul De Bra obtained a PhD in Mathematics and Computer Science from the University of Antwerp in 1987. He was a post-doctoral researcher at Bell Laboratories in Murray Hill, New Jersey in 1988 and 1989. Since then he has been working at the Eindhoven University of Technology, first as associate professor, later as full professor. (From 1987 to 2007 he was also a part-time professor at the University of Antwerp.) Paul De Bra performs research on different aspects of information systems, lately focusing on adaptation in web-based information sources and systems. He coordinated the European (FP7) project GRAPPLE on adaptation and personalization in learning environments.


    Paul Dourish, Irvine, [introductory] Ubiquitous Computing in a Social Context

    Dourish

    Summary. Ubiquitous computing -- the vision of a world of computation seamlessly interwoven with the objects and activities of everyday life -- is a program of research now more than two decades old, and yet seemingly still tantalisingly beyond reach. One of the challenges of ubiquitous computing is that it is both a technological phenomenon and a socio-cultural one; that is, as ubicomp researchers imagine potential technological interventions, they also imagine social arrangements that make those interventions make sense. In this course, we will examine the relationship between the technological and the social aspects of ubiquitous computing to understand how they are entwined. We'll consider not just the HCI aspects of ubiquitous computing, but also the political, economic, social, and cultural components that frame what ubiquitous computing has been, is now, and might be, and provide an understanding of currently active research topics in ubicomp that take an interdisciplinary view.

    References

    Bio. Paul Dourish is a Professor of Informatics in the Donald Bren School of Information and Computer Sciences at UC Irvine, with courtesy appointments in Computer Science and Anthropology, and co-directs the Intel Science and Technology Center for Social Computing. His research focuses primarily on understanding information technology as a site of social and cultural production; his work combines topics in human-computer interaction, ubiquitous computing, and science and technology studies. He has published over 100 scholarly articles, and was elected to the CHI Academy in 2008 in recognition of his contributions to Human-Computer Interaction. Before coming to UCI, he was a Senior Member of Research Staff in the Computer Science Laboratory of Xerox PARC; he has also held research positions at Apple Computer and at Rank Xerox EuroPARC. He holds a Ph.D. in Computer Science from University College, London, and a B.Sc. (Hons) in Artificial Intelligence and Computer Science from the University of Edinburgh.



    Richard M. Fujimoto, Georgia Tech, [introductory] Parallel and Distributed Simulation

    Fujimoto

    Summary. This course will give an overview of the parallel and distributed simulation field, dating back to its roots in the late 1970’s and 1980’s up through applications and current research topics in the field today. The emphasis of the course is on discrete event simulations that are used in a variety of applications such as modeling computer communication networks, manufacturing systems, supply chains, transportation and other urban infrastructures, to mention a few. The course will also cover topics related to distributed virtual environments used for entertainment and training. The course focuses on issues arising in the execution of discrete event simulation programs on parallel computers ranging from multicore desktop machines to supercomputers, as well as workstations connected through local area and wide area networks. This series of lectures will cover the fundamental principles and underlying algorithms used in parallel and distributed simulation systems as well as applications and recent advances. Distributed simulation standards such as the High Level Architecture (HLA) will be discussed that enable separately developed simulations to interoperate. The technical underpinnings and algorithms required to implement distributed simulation systems using the HLA will be presented. A general computing background is assumed, but no prior knowledge of simulation is required.
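
    As background, the sketch below shows the core event-list mechanism of a purely sequential discrete event simulator in Python; parallel and distributed simulation is, in essence, about executing this loop correctly and efficiently across many processors. The queueing model and its event names are invented for illustration and are not taken from the course.

        # Minimal sequential discrete event simulator: a priority queue of
        # (timestamp, event) pairs processed in nondecreasing time order.
        # Parallel and distributed simulators must preserve this ordering.
        import heapq, random

        future_events = []                       # the future event list

        def schedule(time, name):
            heapq.heappush(future_events, (time, name))

        # A toy single-server queue: customers arrive and depart after service.
        random.seed(1)
        schedule(0.0, "arrival")
        clock, in_system = 0.0, 0

        while future_events and clock < 10.0:
            clock, event = heapq.heappop(future_events)
            if event == "arrival":
                in_system += 1
                schedule(clock + random.expovariate(1.0), "arrival")
                if in_system == 1:
                    schedule(clock + random.expovariate(1.5), "departure")
            else:                                # departure
                in_system -= 1
                if in_system > 0:
                    schedule(clock + random.expovariate(1.5), "departure")
            print(f"t={clock:5.2f}  {event:9s}  in_system={in_system}")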

    References

    Bio. Dr. Richard Fujimoto has been Regents’ Professor and founding Chair of the School of Computational Science and Engineering at the Georgia Institute of Technology since 2005. He is currently the Interim Director of the Institute for Data and High Performance Computing at Georgia Tech. He received the M.S. and Ph.D. degrees from the University of California (Berkeley) in 1980 and 1983 in Computer Science and Electrical Engineering, and B.S. degrees from the University of Illinois (Urbana) in 1977 and 1978 in Computer Science and Computer Engineering. He has been an active researcher in the parallel and distributed simulation community since 1985. He led the definition of the time management services for the High Level Architecture (IEEE Standard 1516). Fujimoto has served as Co-Editor-in-Chief of the journal Simulation: Transactions of the Society for Modeling and Simulation International as well as a founding area editor for the ACM Transactions on Modeling and Computer Simulation journal. He has served on the organizing committees for several leading conferences in the parallel and distributed simulation field.


    David Garlan, Carnegie Mellon, [advanced] Software Architecture: Past, Present and Future

    Garlan

    Summary. Over the past fifteen years there has been increasing understanding about the role that software architecture can and should play in mastering the complexity of software system design, providing a basis for early analysis and prediction, ensuring that systems retain their structural integrity over time, and enabling reuse and dramatic cost reductions. In this talk I outline some of the key insights that drive the field and consider some of the salient features of software architecture as they relate to improving the dependability of software-based systems, focusing on  techniques to  (a) express architectural descriptions precisely and unambiguously; (b) provide soundness criteria and tools to check consistency of architectural designs; (c) analyse those designs to determine implied system properties; (d) exploit patterns and styles, and check whether a given architecture conforms to a given pattern; (e) guarantee that the implementation of a system is consistent with its architectural design; and (f) support self-healing capabilities.

    References

    Bio. David Garlan is a Professor of Computer Science in the School of Computer Science at Carnegie Mellon University, where he has been on the faculty since 1990.  He received his Ph.D. from Carnegie Mellon in 1987 and worked as a software architect in industry between 1987 and 1990.  His interests include software architecture, self-adaptive systems, formal methods, and cyber-physical systems.  He is a co-author of two books on software architecture: "Software Architecture: Perspectives on an Emerging Discipline", and "Documenting Software Architecture: Views and Beyond." In 2005 he received a Stevens Award Citation for “fundamental contributions to the development and understanding of software architecture as a discipline in software engineering.” In 2011 he received the Outstanding Research award from ACM SIGSOFT for “significant and lasting software engineering research contributions through the development and promotion of software architecture.” In 2012 he was elected a Fellow of the IEEE.


    Mario Gerla, Los Angeles, [intermediate] Vehicle Cloud Computing

    Gerla

    Summary. Mobile Cloud Computing is a new field of research that aims to study mobile agents (people, vehicles, robots) as they interact and collaborate to sense the environment, process the data, propagate the results and more generally share resources. Mobile agents collectively operate as Mobile Clouds that enable environment modeling, content discovery, data collection and dissemination and other mobile applications in a way not possible, or not efficient, with conventional Internet Cloud models and mobile computing approaches. In this short course we address the vehicular cloud scenario and discuss design principles and research issues for the Vehicular Cloud. We propose a Vehicular Cloud architecture based on an Open Platform implementing the services required by most Vehicular applications. We discuss the interaction of the Vehicular Cloud with the Internet Cloud. We then demonstrate the role of the Vehicular Cloud in the deployment of popular vehicular applications ranging from urban sensing to content distribution and intelligent transportation.

    Lecture plan

    1. Internet Cloud and Mobile Cloud computing; the emergence of the Mobile Vehicular Cloud. The Open Cloud Mobile Platform
    2. Review of Vehicle-to-Vehicle and Vehicle-to-Infrastructure communications. Propagation models; spectrum sharing; the IEEE 802.11p and DSRC standards
    3. The Vehicle Cloud Network layer service: IP networking vs Information Centric Networking. Review of ICN implementations for Vehicular Networks
    4. Content-based and user-centric mobile cloud security. Privacy preservation. Incentives.
    5. Collective sensing and crowd sourcing
    6. Proximity aware social networking
    7. Applications that leverage the Cloud: Intelligent Transport; Collective monitoring and surveillance; Content Distribution

    References

    Bio. Mario Gerla is a Professor in the Computer Science Department at UCLA. He holds an Engineering degree from the Politecnico di Milano, Italy, and a Ph.D. degree from UCLA. He became an IEEE Fellow in 2002. At UCLA, he was part of the team that developed the early ARPANET protocols under the guidance of Prof. Leonard Kleinrock. He joined the UCLA faculty in 1976. At UCLA he has designed network protocols for ad hoc wireless networks (ODMRP and CODECast) and Internet transport (TCP Westwood). He has led the ONR MINUTEMAN project, designing the next-generation scalable airborne Internet for tactical and homeland defense scenarios. His team is developing a Vehicular Testbed for safe navigation, content distribution, urban sensing and intelligent transport. He serves on the IEEE TON Scientific Advisory Board. He was recently recognized with the annual MILCOM Technical Contribution Award (2011) and the IEEE Ad Hoc and Sensor Network Society Achievement Award (2011).


    Ralph Grishman, New York, [intermediate] Information Extraction from Natural Language

    Grishman

    Summary. Information extraction is the process of creating semantically structured information from unstructured natural language text. We will present methods for identifying and classifying names and other textual references to entities; for capturing semantic relations; and for recognizing events and their arguments. We will consider hand-coded rules and various machine learning approaches, including fully-supervised learning, semi-supervised learning, and distant supervision. (Basic machine learning concepts will be reviewed, but some prior acquaintance with machine learning methods or corpus-trained language models will be helpful.) Several application domains will be briefly described. Detailed notes (with extensive references) for an earlier version of this course, presented in Tarragona in early 2011, can be found here; updated notes will be made available to course attendees.
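
    To make the "hand-coded rules" end of the spectrum concrete, here is a deliberately simplistic Python sketch (the text and pattern are invented for this description and are not from the course notes) that extracts person-organization employment relations with a single regular expression; the course shows how trained entity and relation models replace and generalize such rules.

        # Toy rule-based extractor: finds "<Person>, <title> of <Organization>".
        # Real extraction systems use trained entity and relation models instead.
        import re

        TEXT = ("Jane Smith, chief executive of Acme Robotics, met Bob Jones, "
                "a director of Globex Corporation, in Madrid.")

        # Capitalized token sequences serve as a crude stand-in for entity mentions.
        NAME = r"[A-Z][a-z]+(?: [A-Z][a-z]+)+"
        PATTERN = re.compile(rf"({NAME}), (?:a |the )?([a-z ]+?) of ({NAME})")

        for person, role, org in PATTERN.findall(TEXT):
            print(f"EMPLOYMENT(person={person!r}, role={role!r}, org={org!r})")
        # EMPLOYMENT(person='Jane Smith', role='chief executive', org='Acme Robotics')
        # EMPLOYMENT(person='Bob Jones', role='director', org='Globex Corporation')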

    Bio. Ralph Grishman is Professor of Computer Science at New York University, and served as chair of the department from 1986 to 1988. He has been involved in research in natural language processing since 1969. He served for a year (1982-83) as project leader in natural language processing at the Navy Center for Applied Research in Artificial Intelligence. Since 1985 he has directed the Proteus Project under funding from DARPA, NSF, IARPA, and other U.S. Government agencies, focusing on research in information extraction. In 2010 and 2011 he served as co-coordinator of the information extraction evaluation ("Knowledge Base Population") conducted by the U.S. National Institute of Standards and Technology. He is a past president of the Association for Computational Linguistics and author of the text Computational Linguistics: An Introduction (Cambridge Univ. Press).


    Francisco Herrera, Granada, [intermediate] Imbalanced Classification: Current Approaches and Open Problems

    Herrera

    Summary. Classifier learning from data sets which suffer from imbalanced class distributions is an important problem in data mining. This issue occurs when the number of examples representing one class is much lower than that of the other classes. Its presence in many real-world applications has attracted growing attention from researchers.

    The aim of this course is to briefly review the two common approaches to dealing with imbalance, sampling and cost-sensitive learning, and the use of ensemble techniques based on these approaches.
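
    As a minimal illustration of the sampling approach, the Python sketch below balances a toy data set by randomly oversampling the minority class before training a standard classifier; the data, classifier and library choices are illustrative assumptions, not material from the course.

        # Random oversampling: replicate minority-class examples until the
        # training set is balanced, then train any standard classifier.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 2))
        y = ((X[:, 0] + 0.5 * rng.normal(size=1000)) > 1.8).astype(int)  # few positives

        minority = np.where(y == 1)[0]
        majority = np.where(y == 0)[0]
        extra = rng.choice(minority, size=len(majority) - len(minority), replace=True)
        idx = np.concatenate([majority, minority, extra])

        clf = LogisticRegression().fit(X[idx], y[idx])   # trained on a balanced set
        print("class counts after oversampling:", np.bincount(y[idx]))

        # Cost-sensitive alternative: keep the data as is and reweight errors, e.g.
        # LogisticRegression(class_weight='balanced').fit(X, y)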

    We will pay special attention to some open problems; in particular, we will discuss multi-class imbalance problems, imbalanced big data, overlapping between the classes, and the mismatch between training and test distributions (dataset shift) that degrades the behavior of classifiers.

    References

    Bio. Francisco Herrera received his M.Sc. in Mathematics in 1988 and Ph.D. in Mathematics in 1991, both from the University of Granada, Spain.

    He is currently a Professor in the Department of Computer Science and Artificial Intelligence at the University of Granada. He has been the doctoral advisor of 28 Ph.D. students. He has published more than 240 papers in international journals, and he is coauthor of the book Genetic Fuzzy Systems: Evolutionary Tuning and Learning of Fuzzy Knowledge Bases (World Scientific, 2001).

    He currently acts as Editor-in-Chief of the international journal Progress in Artificial Intelligence (Springer). He acts as area editor of the International Journal of Computational Intelligence Systems and associate editor of the journals IEEE Transactions on Fuzzy Systems, Information Sciences, Knowledge and Information Systems, Advances in Fuzzy Systems, and International Journal of Applied Metaheuristics Computing; and he serves as a member of several journal editorial boards, among others: Fuzzy Sets and Systems, Applied Intelligence, Information Fusion, Evolutionary Intelligence, International Journal of Hybrid Intelligent Systems, Memetic Computation, and Swarm and Evolutionary Computation.

    He received the following honors and awards: ECCAI Fellow 2009, the 2010 Spanish National Award on Computer Science ARITMEL to the Spanish Engineer on Computer Science, the International Cajastur "Mamdani" Prize for Soft Computing (Fourth Edition, 2010), the IEEE Transactions on Fuzzy Systems Outstanding 2008 Paper Award (bestowed in 2011), and the 2011 Lotfi A. Zadeh Prize Best Paper Award of the International Fuzzy Systems Association.

    His current research interests include computing with words and decision making, bibliometrics, data mining, data preparation, instance selection, fuzzy rule based systems, genetic fuzzy systems, knowledge extraction based on evolutionary algorithms, memetic algorithms and genetic algorithms.


    Paul Hudak, Yale, [introductory] Euterpea: From Signals to Symphonies Using Haskell

    Hudak

    Summary. This course teaches functional programming in Haskell, but in the context of computer music instead of the usual numbers, characters, strings, etc. No background in music theory is required, although basic programming skills are assumed. Many advanced programming techniques, including higher-order functions, algebraic data types, pattern matching, type classes, lazy evaluation, infinite data types, monads, arrows, and functional reactive programming, will be covered. From a music perspective, basic ideas in algorithmic composition as well as audio processing and sound synthesis will be discussed. Euterpea is a Haskell library for computer music applications, which students will be able to download so that they can run programs during the course of the lectures.

    References

    Bio. Paul Hudak is a Professor of Computer Science at Yale University.  He received a BE from Vanderbilt in 1973, MS from MIT in 1974, and PhD from the University of Utah in 1982.  He has been on the Yale faculty since 1982, and was Chairman from 1999-2005.  Professor Hudak helped to organize and chair the Haskell Committee, was co-Editor of the first Haskell Report in 1990, and has written a popular Haskell textbook.  He has been a leader in the design of domain specific languages, with a focus most recently on computer music and audio processing.  With two of his colleagues, he established the new Computing and the Arts major at Yale in 2009.  Professor Hudak is also an ACM Fellow.  In 2009 he was appointed Master of Saybrook College at Yale University.


    Niraj K. Jha, Princeton, [intermediate] FinFET Circuit Design

    Jha

    Summary. The constant march of miniaturization of transistors with each new generation of bulk CMOS technology has resulted in significant improvements in digital circuit performance. Further scaling of bulk CMOS, however, faces significant challenges due to fundamental material and process technology limits, including short-channel effects, sub-threshold leakage, and device-to-device variations. Thus, the semiconductor industry is transitioning to FinFETs to overcome these obstacles to scaling.

    We will explore the novel circuit design opportunities made possible by the double-gate nature of FinFETs. FinFETs can be implemented in various styles, e.g., shorted-gate, independent-gate, and asymmetric-workfunction. They give rise to optimization opportunities in a much richer area-delay-power-yield design space than hitherto possible in bulk CMOS. For example, more than a dozen implementations of a NAND gate become possible, including one that has only two transistors, which offer various tradeoffs in the above space. The richer design library then makes novel circuit design possible. Just reimplementing CMOS circuits in the FinFET technology may be quite sub-optimal. We will present numerous examples to buttress this point. Although the impact of process-voltage-temperature variations is reduced in FinFETs, it does not go away. Thus, we will look at how FinFET logic circuits can be analyzed under such variations. New circuit designs also change the tradeoffs at the architecture level. We will also discuss how circuit-architecture co-optimization can lead to innovative architectures.

    Bio. Niraj K. Jha received his B.Tech. degree in Electronics and Electrical Communication Engineering from the Indian Institute of Technology, Kharagpur, India in 1981 and his Ph.D. degree in Electrical Engineering from the University of Illinois at Urbana-Champaign in 1985. He is a Professor of Electrical Engineering at Princeton University. He is a Fellow of IEEE and ACM. He has served as the Director of the Center for Embedded System-on-a-chip Design funded by the New Jersey Commission on Science and Technology. Two textbooks he co-authored, "Testing of Digital Systems" and "Switching and Finite Automata Theory, 3rd ed.", are widely used around the world. He has served as the editor-in-chief of IEEE Transactions on VLSI Systems and on the editorial boards of IEEE Transactions on Computer-Aided Design, IEEE Transactions on Computers, IEEE Transactions on Circuits and Systems I and II, and various other journals. He is the author or co-author of close to 400 papers. He has co-authored 14 papers that have won various awards. His research interests include FinFETs, embedded systems, power/thermal-aware hardware/software design, computer-aided design of ICs and systems, computer security, quantum computing and digital system testing.



    George Karypis, Minnesota, [introductory] Introduction to Parallel Computing: Architectures, Algorithms, and Programming

    Karypis

    Summary. This is an introductory course on parallel computing. It will provide an overview of the different types of parallel architectures; it will present various principles associated with decomposing computational problems towards the goal of deriving efficient parallel algorithms; it will present and analyze efficient parallel algorithms for a wide range of problems (e.g., sorting, dense/sparse linear algebra, graph algorithms, discrete optimization problems); and it will discuss various popular parallel programming paradigms and APIs (e.g., the Message Passing Interface (MPI), OpenMP, POSIX threads, and CUDA). The course combines theory (complexity of parallel algorithms) with practical issues such as parallel architectures and parallel programming.
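
    As a small taste of the message-passing paradigm listed above, the sketch below sums a range of numbers in parallel with MPI. It assumes the mpi4py binding and an MPI runtime are installed; it is an illustrative example, not course material.

        # Parallel sum with MPI: each rank sums a slice, the root reduces the partials.
        # Run with, e.g.:  mpiexec -n 4 python parallel_sum.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        N = 1_000_000
        lo, hi = rank * N // size, (rank + 1) * N // size     # this rank's slice
        partial = np.arange(lo, hi, dtype=np.float64).sum()

        total = comm.reduce(partial, op=MPI.SUM, root=0)      # combine partial sums
        if rank == 0:
            print(total, "expected", N * (N - 1) / 2)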

    References

    Bio. George Karypis is a Professor of Computer Science in the Department of Computer Science & Engineering at the University of Minnesota. His research interests span the areas of data mining, bioinformatics, cheminformatics, high performance computing, information retrieval, collaborative filtering, and scientific computing. His research has resulted in the development of software libraries for serial and parallel graph partitioning (METIS and ParMETIS), hypergraph partitioning (hMETIS), for parallel Cholesky factorization (PSPASES), for collaborative filtering-based recommendation algorithms (SUGGEST), clustering high dimensional datasets (CLUTO), finding frequent patterns in diverse datasets (PAFI), and for protein secondary structure prediction (YASSPP). He has coauthored over 210 papers on these topics and two books. In addition, he is serving on the program committees of many conferences and workshops on these topics, and on the editorial boards of the IEEE Transactions on Knowledge and Data Engineering, Social Network Analysis and Data Mining Journal, International Journal of Data Mining and Bioinformatics, the journal on Current Proteomics, Advances in Bioinformatics, and Biomedicine and Biotechnology.



    Arie E. Kaufman, Stony Brook, [advanced] Advances in Visualization

    Kaufman

    Summary. Visualization is the discipline that creates imagery from typically big and complex data for the purpose of exploring the data. It is the method of choice to solve complex problems in practically every discipline, from science and engineering to business and medicine. Three applications will be presented in detail along with the corresponding visualization techniques: (1) medical visualization, (2) visual simulation of flow, and (3) immersive visualization of big data. In medical applications the focus is on 3D virtual colonoscopy for colon cancer screening. A 3D model of the colon is automatically reconstructed from a CT scan of the patient’s abdomen. The physician then interactively navigates through the volume-rendered virtual colon to locate colonic polyps and consults a system for computer-aided detection (CAD) of polyps. In visual simulation, the focus is on the Lattice Boltzmann Method (LBM) for real-time simulation and visualization of flow. LBM has been accelerated on commodity graphics processing units (GPUs), achieving real-time performance on a single GPU or on a GPU cluster. In immersive visualization, the data effectively engulfs the user. A custom-built 5-wall Cave environment, the Immersive Cabin, is presented, along with a conformal deformation rendering method for visualizing data on partially immersive platforms. To ameliorate the challenge of big data, the Reality Deck is presented: a unique 1.5-billion-pixel immersive display consisting of a tiling of 416 high-resolution panels driven by an 80-GPU cluster.

    Bio. Arie E. Kaufman is a Distinguished Professor and Chairman of the Computer Science Department, Director of the Center of Visual Computing (CVC), and Chief Scientist of the Center of Excellence in Wireless and Information Technology (CEWIT) at Stony Brook University (aka the State University of New York at Stony Brook). He has conducted research for over 40 years in computer graphics and visualization and their applications, has published more than 300 refereed papers, books, and chapters, has delivered more than 20 invited keynote/plenary talks, has been awarded/filed more than 40 patents, and has been a principal/co-principal investigator on more than 100 research grants. He is a Fellow of IEEE, a Fellow of ACM, and the recipient of the IEEE Visualization Career Award (2005) as well as numerous other awards. He was the founding Editor-in-Chief of the IEEE Transactions on Visualization and Computer Graphics (TVCG), 1995-1998. He has been the co-founder/papers co-chair of IEEE Visualization Conferences, Volume Graphics Workshops, Eurographics/SIGGRAPH Graphics Hardware Workshops, and ACM Volume Visualization Symposia. He previously chaired and is currently a director of the IEEE CS Technical Committee on Visualization and Graphics. He received a BS in Math/Physics from the Hebrew University, an MS in Computer Science from the Weizmann Institute, and a PhD in Computer Science from the Ben-Gurion University, Israel.



    Hugo Krawczyk, IBM Research, [intermediate] An Introduction to the Design and Analysis of Authenticated Key Exchange Protocols

    Krawczyk

    Summary. Authenticated key-exchange (AKE) protocols are cryptographic mechanisms by which two parties that communicate over an adversarially-controlled network can generate a shared secret key and be assured that no one other than the intended partner to the communication learns that key. AKE protocols are an essential component of secure communications as they enable the use of efficient symmetric-key techniques (encryption and authentication) to protect bulk communication over insecure channels (e.g., the Internet). In particular, AKE protocols are the most important class of cryptographic protocols in use today with TLS, IPsec and SSH being examples of prime applications whose security fully depends on the underlying AKE protocol.

    While the functionality and security requirements of AKE protocols are intuitive and simple, the design of such protocols has proven highly non-trivial, with numerous examples of broken protocols. The very formalization of security for these protocols has been challenging and is still an active area of research. In this short course we will cover basic principles of the design and analysis of AKE protocols, with examples from real-world applications, including IPsec's Internet Key Exchange (IKE) and the TLS handshake, as well as more advanced protocols. The intention is to use AKE protocols as a window into the challenging world of crypto-protocol design and analysis.
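
    As a baseline for what an AKE protocol must improve upon, the Python sketch below shows plain, unauthenticated Diffie-Hellman key agreement: both parties derive the same key, but nothing ties the exchanged values to their identities, which is exactly the gap that the AKE protocols discussed in the course (e.g., those underlying IKE and the TLS handshake) close. The group parameters are tiny and the sketch is purely illustrative.

        # Unauthenticated Diffie-Hellman: both parties end up with the same key,
        # but an active attacker in the middle can impersonate either of them.
        # AKE protocols add authentication (signatures, MACs, ...) on top of this.
        import random, hashlib

        p = 0xFFFFFFFFFFFFFFC5          # a 64-bit prime -- far too small to be secure
        g = 5

        a = random.randrange(2, p - 1)  # Alice's ephemeral secret
        b = random.randrange(2, p - 1)  # Bob's ephemeral secret

        A = pow(g, a, p)                # Alice -> Bob
        B = pow(g, b, p)                # Bob -> Alice

        k_alice = hashlib.sha256(str(pow(B, a, p)).encode()).digest()
        k_bob   = hashlib.sha256(str(pow(A, b, p)).encode()).digest()
        assert k_alice == k_bob         # shared session-key material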

    Bio. Hugo Krawczyk is a member of the Cryptography Group at the IBM T.J. Watson Research Center, where he has been a research staff member between 1991 and 2000 and from 2002 to this date. Between 1997 and 2004 he was also an Associate Professor at the Department of Electrical Engineering at the Technion, Israel. Dr. Krawczyk holds a Ph.D. in Computer Science from the Technion. His areas of interest span theoretical and applied aspects of cryptography with particular emphasis on applications to network security. Best known are his contributions to the cryptographic design of numerous Internet standards, including IPsec, IKE and TLS. He is the inventor or co-inventor of many cryptographic protocols and algorithms, including the HMAC, UMAC, shrinking generator, HKDF and RMX algorithms, and the SIGMA, SKEME, HMQV and iKP protocols. On the theoretical side, Dr. Krawczyk has made contributions to the theory of cryptography in areas such as pseudorandomness, zero-knowledge, key exchange, and threshold and proactive cryptosystems. He is an Associate Editor of the Journal of Cryptology and the recipient of numerous IBM awards for his contributions to industry.


    Pierre L'Ecuyer, Montréal, [intermediate] Quasi-Monte Carlo Methods in Simulation: Theory and Practice

    L'Ecuyer

    Summary. Monte Carlo (MC) simulation methods are widely used in practically all areas of science, engineering, and management. An elementary version of MC estimates a mathematical expectation, viewed as an integral over the unit hypercube [0,1)^s in s dimensions, by averaging the values of the integrand at n independent random points in [0,1)^s. Randomized quasi-Monte Carlo (RQMC) replaces these n independent points by a highly-uniform point set that covers the space more evenly, while each point remains uniformly distributed in [0,1)^s. This provides an unbiased estimator which, under appropriate conditions, is much more accurate (less noisy) than MC. Variants of RQMC have been designed for the simulation of Markov chains, for function approximation and optimization, and for solving partial differential equations. Highly successful applications are found in computational finance, statistics, physics, and computer graphics, for example. Huge variance reductions occur mostly when the integrand can be well approximated by a sum of low-dimensional functions, often after a clever (problem-dependent) multivariate change of variable.

    In this short course, we will summarize the main ideas and results on RQMC methods, discuss their theoretical and practical aspects, and give several examples. Key issues include: How should we measure the uniformity of the point sets and how should we weight the measures for the lower-dimensional projections? How should we randomize these points? Are there relevant and easily computable measures that do not depend on the randomization? How do we construct good RQMC point sets with respect to those measures? Can we obtain explicit variance expressions under such randomizations? How does this variance, or the worst-case error, or their average over certain spaces of functions, converge as the number of points increases to infinity? How can we compute confidence intervals for RQMC estimators? How can we extend point sets on the fly by adding new points or new coordinates? What software is currently available for RQMC?

    References

    Bio. Pierre L'Ecuyer holds the Canada Research Chair in Stochastic Simulation and Optimization, in the Département d'Informatique et de Recherche Opérationnelle at the Université de Montréal. He is a member of the CIRRELT and GERAD research centers. He has published over 230 scientific articles and book chapters in various areas, including random number generation, quasi-Monte Carlo methods, efficiency improvement in simulation, sensitivity analysis and optimization for discrete-event simulation models, simulation software, stochastic dynamic programming, and applications in finance, manufacturing, communications, reliability, and service center management. He also developed software libraries and systems for the analysis of random number generators and quasi-Monte Carlo point sets, and for general discrete-event simulation. He is currently Editor-in-Chief for the ACM Transactions on Modeling and Computer Simulation, and Associate Editor for ACM Transactions on Mathematical Software, Statistics and Computing, Cryptography and Communications, and International Transactions in Operational Research. He has been a referee for over 120 different scientific journals.


    Wenke Lee, Georgia Tech, [introductory] DNS-based Monitoring of Malware Activities

    Lee

    Summary. Malicious overlay networks such as botnets often use the domain name system (DNS) to support command-and-control (C&C) and their malicious and fraudulent activities. By monitoring DNS look-up traffic and detecting the domains used by botnets, we can identify the infected machines (i.e., the bots) and take appropriate actions such as blocking traffic to these domains or even "taking down" the domains. However, botnet operators have also developed a number of techniques to make their C&C infrastructures very stealthy and agile to resist detection and take-down. For example, botnets now use many backup/new domains on a daily basis, and these domains can even appear to be "randomly generated". We will describe several new approaches to address these challenges. In particular, we can use dynamic reputation of DNS domains to identify a new domain as suspicious, i.e., likely to be used by botnets even before it is activated, because its network providers or hosting servers have historically been associated with botnet activities. We also use a clustering algorithm to identify look-up traffic to randomly generated botnet domains based on the observation that a large portion of such traffic results in NXDOMAIN responses that look similar across bots of the same botnet. We will conclude the short course by discussing guidelines and procedures for botnet take-down.
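
    As a simplified illustration of the clustering idea only (this is not the detection system described above), the Python sketch below groups failed look-ups by the similarity of their character-bigram profiles; algorithmically generated names from the same family tend to share such profiles, while unrelated failed look-ups do not.

        def bigrams(domain):
            """Character bigrams of the second-level label, e.g. 'kqzvxw.com' -> {'kq', 'qz', ...}."""
            label = domain.split(".")[0].lower()
            return {label[i:i + 2] for i in range(len(label) - 1)}

        def jaccard(a, b):
            return len(a & b) / len(a | b) if a | b else 0.0

        def cluster_nxdomains(domains, threshold=0.2):
            """Greedy single-link grouping of NXDOMAIN names with similar bigram profiles."""
            clusters = []
            for d in domains:
                for c in clusters:
                    if any(jaccard(bigrams(d), bigrams(other)) >= threshold for other in c):
                        c.append(d)
                        break
                else:
                    clusters.append([d])
            return clusters

        # Hypothetical NXDOMAIN responses observed from several hosts.
        nx = ["kqzvxw.com", "kqzvab.com", "zvxwqk.net", "faceboook.com", "gooogle.com"]
        for c in cluster_nxdomains(nx):
            print(c)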

    Bio. Wenke Lee received his Ph.D. in Computer Science from Columbia University in 1999. He is currently a Professor of Computer Science at the Georgia Institute of Technology. He also serves as the Director of the Georgia Tech Information Security Center (GTISC). His research interests are mainly systems and network security, and his recent research projects are in botnet detection and response, secure software execution environments, mobile security, and information manipulation. In 2006, he co-founded Damballa, Inc., a network security company that focuses on anti-botnet technologies. He is the program committee chair for the 2013 IEEE Symposium on Security and Privacy.


    Maurizio Lenzerini, Roma La Sapienza, [intermediate] Ontology-based Data Integration

    Lenzerini

    Summary. The need to effectively manage the data sources of an organization, which are often autonomous, distributed, and heterogeneous, and to devise tools for deriving useful information and knowledge from them, is widely recognized as one of the challenging issues in modern information systems. Ontology-based data management aims at accessing, using, and maintaining data by means of an ontology, i.e., a conceptual representation of the domain of interest in the underlying information system. This new paradigm provides several interesting features, many of which have already proved effective in managing complex information systems. In the course, we first provide an introduction to ontology-based data management, then we illustrate the main techniques for using an ontology to access the data layer of an information system, and finally we discuss several important issues that are still the subject of extensive investigation, including updates, inconsistency tolerance, and query optimization.
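
    As a small, hedged illustration of accessing data through an ontology vocabulary (using the rdflib library as one possible toolkit; the class and property names are invented for the example), the sketch below loads a few RDF triples and answers a SPARQL query phrased purely in terms of the conceptual layer.

        import rdflib

        # A tiny knowledge base: an ontology axiom plus two described instances.
        data = """
        @prefix ex:   <http://example.org/onto#> .
        @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
        ex:Manager rdfs:subClassOf ex:Employee .            # ontology (conceptual) layer
        ex:alice a ex:Employee ; ex:worksFor ex:acme .      # data layer
        ex:bob   a ex:Manager  ; ex:worksFor ex:acme .
        """

        g = rdflib.Graph()
        g.parse(data=data, format="turtle")

        # The query is phrased over the ontology vocabulary, not the physical layout.
        # A full OBDA system would also exploit axioms such as subClassOf when
        # rewriting queries (e.g., "all Employees" should include Managers).
        q = """
        PREFIX ex: <http://example.org/onto#>
        SELECT ?person WHERE { ?person ex:worksFor ex:acme . }
        """
        for row in g.query(q):
            print(row.person)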

    References

    Bio. Maurizio Lenzerini is a professor in Computer Science and Engineering at the Università di Roma La Sapienza, Italy, where he is currently leading a research group on Artificial Intelligence and Databases. His main research interests are in Knowledge Representation and Reasoning, Ontology languages, Semantic Data Integration, and Service Modeling. His recent work is mainly oriented towards the use of Knowledge Representation and Automated Reasoning principles and techniques in Information System management, and in particular in information integration and service composition. He has authored over 250 papers published in leading international journals and conferences. He has served on the editorial boards of several international journals, and on the program committees of the most prestigious conferences in the areas of interest. He is currently the Chair of the Executive Committee of the ACM Symposium on Principles of Database Systems, a Fellow of the European Coordinating Committee for Artificial Intelligence (ECCAI), a Fellow of the Association for Computing Machinery (ACM), and a member of The Academia Europaea - The Academy of Europe.


    Ming C. Lin, Chapel Hill, [introductory/intermediate] Physically-based Modeling and Simulation

    Lin

    Summary. Physically-based modeling and simulation attempts to map natural phenomena to a computer simulation program. There are two basic processes in this mapping: mathematical modeling and numerical solution. The goal of this introductory course is to understand both of them. The mathematical modeling concerns the description of natural phenomena by mathematical equations. Differential equations that govern dynamics and geometric representation of objects are typical ingredients of the mathematical model. The numerical solution involves computing an efficient and accurate solution of the mathematical equations. Finite precision of numbers and limited computational power and memory force us to approximate the mathematical model with simple procedures.
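
    To make the two steps concrete, here is a minimal sketch (my own illustration, not course code) in which the mathematical model is a damped mass-spring ODE and the numerical solution is a semi-implicit Euler integrator with a fixed time step.

        # Mathematical model: m * x'' = -k * x - c * x'   (damped spring)
        # Numerical solution: semi-implicit (symplectic) Euler with fixed step dt.
        m, k, c = 1.0, 10.0, 0.2      # mass, stiffness, damping (illustrative values)
        x, v = 1.0, 0.0               # initial displacement and velocity
        dt = 0.01

        for step in range(300):
            a = (-k * x - c * v) / m  # acceleration from the model equation
            v += a * dt               # update velocity first ...
            x += v * dt               # ... then position with the *new* velocity
            if step % 50 == 0:
                print(f"t={step * dt:5.2f}  x={x:+.4f}  v={v:+.4f}")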

    Syllabus. In this course, we will study various techniques to simulate the physical and mechanical behavior of objects in a graphical simulation or a virtual environment. Students will learn about the implementation of basic simulation programs that produce interesting results and how to verify their correctness. The course will cover three basic components in physically-based modeling and simulation:

    The goal of this class is to give students an appreciation of computational methods for modeling motions in the physical and virtual world. We will discuss various considerations and tradeoffs used in designing simulation methodologies (e.g. time, space, robustness, and generality). This will include data structures, algorithms, computational methods and simulation techniques, their complexity and implementation. The lectures will also cover some applications of physically-based modeling and simulation to the following areas:

    Depending on the interests of the students, we may also cover geometric-based simulation techniques, such as constraint-based systems, inverse dynamics, kinematics of motions, motion planning, synthesis and generation of autonomous agents.

    Pre-requisites. Basic knowledge in graphics programming and numerical methods.

    Bio. Ming C. Lin is currently John R. & Louise S. Parker Distinguished Professor of Computer Science at the University of North Carolina (UNC), Chapel Hill. She obtained her B.S., M.S., and Ph.D. in Electrical Engineering and Computer Science from the University of California, Berkeley. She received several honors and awards, including the NSF Young Faculty Career Award in 1995, Honda Research Initiation Award in 1997, UNC/IBM Junior Faculty Development Award in 1999, UNC Hettleman Award for Scholarly Achievements in 2003, Beverly W. Long Distinguished Professorship 2007-2010, Carolina Women’s Center Faculty Scholar in 2008, UNC WOWS Scholar 2009-2011, IEEE VGTC Virtual Reality Technical Achievement Award in 2010, and eight best paper awards at international conferences. She is a Fellow of ACM and IEEE.

    Her research interests include physically-based modeling, virtual environments, sound rendering, haptics, robotics, and geometric computing. She has (co-)authored more than 240 refereed publications in these areas and co-edited/authored four books. She has served on over 120 program committees of leading conferences and co-chaired dozens of international conferences and workshops. She is currently the Editor-in-Chief (EIC) of IEEE Transactions on Visualization and Computer Graphics, a member of 6 editorial boards, and a guest editor for over a dozen scientific journals and technical magazines. She also has served on several steering committees and advisory boards of international conferences, as well as government and industrial technical advisory committees.


    Jane W.S. Liu, Academia Sinica, [intermediate] Critical Information and Communication Technologies for Disaster Preparedness and Response

    Liu

    Summary. Recent years have brought tremendous advances in the information and communication technologies (ICT) needed to support disaster preparedness and response decisions and operations. The first part of this lecture will present an overview of the state of the art and remaining technology gaps. Topics to be covered include standards and tools for interoperability of diverse sensor systems and web services, open platforms for delivering standard-based warning messages to emergency alert systems, tools and middleware for exploiting linked open data and semantic web technologies to facilitate discovery and use of data in diverse information sources, break-the-glass access control and information brokerage for responsive information flow during emergencies, and methods for synergistically fusing data from physical sensors and social sensors for crowdsourcing-enhanced surveillance and early warning systems.

    The second and third parts of this lecture will focus on ubiquitous smart devices and applications for disaster preparedness and early response and disaster resilient networks and transport services, respectively. The lecture will present approaches and methodologies for leveraging ICT to enhance disaster preparedness of our living environments and to help us cope with major disasters when they strike. For motivation, opportunities and challenges, see the references.

    References

    Bio. Jane W.S. Liu is a Distinguished Visiting Fellow of Institute of Information Science and Research Center for Information Technology Innovations of Academia Sinica, Taiwan. Before joining Academia Sinica in 2004, she was a software architect in Microsoft Corporation from 2000 to 2004 and a faculty member of Computer Science Department at the University of Illinois at Urbana-Champaign from 1972 to 2000. Her research interests are in the areas of real-time and embedded systems. In addition to journal and conference publications, she has also published two text books, one on real-time systems and the other on signals and systems. Her recent research focuses on technologies for building user-centric automation and assistive devices and services and for enhancing disaster preparedness and early response capabilities of disaster management ICT infrastructures. She received the Achievement and Leadership Award of IEEE Computer Society, Technical Committee on Real-Time Systems in 2005; Information Science Honorary Medal of Taiwan Institute of Information and Computing Machinery in 2008 and Linux Golden Penguin Award for special contributions of Taiwan Linux Consortium in 2009. She is a fellow of IEEE.


    Satoru Miyano, Tokyo, [intermediate] How to Hack Cancer Systems with Computational Methods

    Miyano

    Summary. Cancer is a very complex disease that arises from the accumulation of multiple genetic and epigenetic changes in individuals who carry different genetic backgrounds and have suffered distinct carcinogen exposures. These changes affect various pathways that are necessary for normal biological activities, and at the center of these pathways are gene networks that drive them into disorder. Therefore, developing a systematic methodology for unraveling gene networks and their diversity across genetic variations, mutations, environments and diseases remains a major challenge. In this talk, we present our efforts to uncover cancer systems with supercomputing. SiGN-L1 (NetworkProfiler), based on L1-regularization, is a method that exhibits how gene networks vary from patient to patient according to a modulator, which is any score representing characteristics of cells. We defined an EMT (epithelial-mesenchymal transition) modulator and analyzed gene expression profiles of 762 cancer cell lines. The computation took 3 weeks on 1024 CPU cores. Network analysis unraveled global changes of networks with 13,508 genes at different EMT levels. By focusing on E-cadherin, 24 genes were predicted as its regulators, of which 12 have been reported in the literature. A novel EMT regulator, KLF5, was also discovered in this study. We also analyzed Erlotinib-resistant networks using 160 NSCLCs with GI50 as a modulator. Hubness analysis showed that NKX2-1/TTF-1 is the key gene for Erlotinib resistance in NSCLCs. Our microRNA/mRNA gene network analysis with a Bayesian network method called SiGN also revealed subnetworks with hub genes (including NKX2-1/TTF-1) that may switch cancer survival. The supercomputer was also applied to modeling dynamics in cancer cells from time-course gene expression profiles and revealed dynamic network changes in response to anti-cancer drugs as well as network differences between drug-sensitive and drug-resistant cancer cells. For dynamic system modeling, we devised a state space model (SSM) with a dimension-reduction method for reverse-engineering gene networks from time-course data, with which we can view their dynamic changes over time by simulation. We succeeded in computing a gene network with predictive ability focused on 1500 genes from data of about 20 time points. We applied this SSM to normal human lung cells treated with (case) or without (control) Gefitinib, and we identified genes under differential regulation between case and control. This gene signature was used to predict prognosis for lung cancer patients and showed good performance for survival prediction.
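
    As a generic illustration of the underlying idea of L1-regularized network estimation (a simplified stand-in, not the SiGN-L1/NetworkProfiler method itself), the sketch below uses scikit-learn's Lasso on synthetic data to predict one target gene's expression from all other genes; the genes that keep non-zero coefficients are reported as candidate regulators.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n_samples, n_genes = 100, 50
        X = rng.normal(size=(n_samples, n_genes))     # synthetic expression of 50 genes

        # Synthetic target gene driven by genes 3 and 17 plus noise.
        target = 2.0 * X[:, 3] - 1.5 * X[:, 17] + 0.1 * rng.normal(size=n_samples)

        # The L1 penalty drives most coefficients to exactly zero, so the surviving
        # predictors are the candidate regulators of the target gene.
        model = Lasso(alpha=0.1)
        model.fit(X, target)
        print("candidate regulators (gene indices):", np.flatnonzero(model.coef_))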

    References

    Bio. Satoru Miyano, Ph.D., is a Professor at the Human Genome Center, The Institute of Medical Science, The University of Tokyo. He received the B.S., M.S. and Ph.D., all in Mathematics, from Kyushu University, Japan, in 1977, 1979 and 1984, respectively. He joined the Human Genome Center in 1996. His research mission is to create computational strategy for systems biology and translational bioinformatics for personalized genomic medicine. He served as President of the Japanese Society for Bioinformatics (April 2002-March 2004), on the Board of Directors of the International Society for Computational Biology (2004-2006), and as President of the Association of Asian Societies for Bioinformatics (2003-2004, 2009). He is serving on the Editorial Boards of PLoS Computational Biology (Associate Editor), IEEE/ACM Trans. Computational Biology & Bioinformatics (Associate Editor), and Journal of Bioinformatics and Computational Biology (Editor), among others. He also served as an Associate Editor of Bioinformatics (2002-2006).


    Aloysius K. Mok, Austin, [introductory/advanced] From Real-time Systems to Cyber-physical Systems

    Mok

    Summary. The first part of this class presents a summary of research issues and results in real-time systems research over the last 30 years, covering the major topics of specification techniques, validation and verification, real-time scheduling and system design, and application issues such as real-time databases and rule-based systems. We shall discuss some of the major theoretical ideas and techniques for designing hard real-time systems and the problems of applying these techniques to current and future applications, in particular the divergence between real-time application needs and commercial hardware development that favors average-time performance improvement over worst-case performance guarantees. The result is a “predictability crisis” that requires new ways to approach certifiable real-time systems, such as mixed criticality systems and real-time virtual resources. The second part of this class discusses the emerging research area of CPS (Cyber-Physical Systems) and its lineage from real-time systems. We shall discuss some new challenges posed by CPS including failure semantics, open system abstraction and system security.
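
    As one concrete example of the kind of schedulability analysis developed in this line of work (the classical Liu and Layland utilization bound for rate-monotonic scheduling, a standard textbook test rather than anything specific to this course), the sketch below checks a hypothetical periodic task set.

        # Each task is (worst-case execution time C, period T); deadlines equal periods.
        tasks = [(1.0, 4.0), (1.0, 5.0), (2.0, 10.0)]     # hypothetical task set

        n = len(tasks)
        utilization = sum(c / t for c, t in tasks)
        bound = n * (2 ** (1.0 / n) - 1)                  # Liu & Layland: n(2^(1/n) - 1)

        print(f"U = {utilization:.3f}, bound = {bound:.3f}")
        if utilization <= bound:
            print("Schedulable under rate-monotonic priorities (sufficient test passed).")
        else:
            print("Bound test inconclusive; an exact response-time analysis is needed.")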

    Bio. Dr. Aloysius K. Mok is Quincy Lee Centennial Professor in Computer Science at the University of Texas at Austin. Professor Mok is best known for his work in real-time systems and is a past Chairman of the Technical Committee on Real-Time Systems of the Institute of Electrical and Electronics Engineers. Dr. Mok has served on numerous national research and advisory panels and received in 2002 the IEEE TC on Real-Time Systems Award for his outstanding technical contributions and leadership achievements in real-time systems. His current interests include real-time, cyber-physical systems and wireless networking for industrial automation. He is the Chairman of Awiatech Corporation, an Austin-based company that specializes in wireless industrial automation technology. Dr. Mok received all his academic degrees in electrical engineering and computer science from the Massachusetts Institute of Technology.



    Hermann Ney, Aachen, [intermediate/advanced] Probabilistic Modelling for Natural Language Processing - with Applications to Speech Recognition, Handwriting Recognition and Machine Translation

    Ney

    Summary. The last 25 years have seen a dramatic progress in statistical methods for recognizing speech signals and for translating spoken and written language. This lecture gives an overview of the underlying statistical methods. In particular, the lecture will focus on the remarkable fact that, for these tasks and similar tasks like handwriting recognition, the statistical approach makes use of the same four principles:

    1. Bayes decision rule for minimum error rate;
    2. probabilistic models, e.g. Hidden Markov models or conditional random fields for handling strings of observations (like acoustic vectors for speech recognition and written words for language translation);
    3. training criteria and algorithms for estimating the free model parameters from large amounts of data;
    4. the generation or search process that generates the recognition or translation result.

    Most of these methods were originally designed for speech recognition. However, it has turned out that, with suitable modifications, the same concepts carry over to language translation and other tasks in natural language processing. This lecture will summarize the achievements and the open problems in this field.
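
    As a toy illustration of principle 1 (my own, not from the lecture), the Bayes decision rule selects the hypothesis w that maximizes p(w) * p(x|w); in translation, p(w) comes from a language model and p(x|w) from a translation model, and with log-probabilities the rule is just an argmax over summed scores.

        # Hypothetical candidate translations with made-up (log p(w), log p(x|w)) scores.
        candidates = {
            "the house is small": (-2.1, -1.3),
            "the home is small": (-2.8, -1.1),
            "small is the house": (-4.0, -0.9),
        }

        # Bayes decision rule: w* = argmax_w p(w) * p(x|w) = argmax_w [log p(w) + log p(x|w)].
        best = max(candidates, key=lambda w: sum(candidates[w]))
        print("decision:", best)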

    Bio. Hermann Ney is a full professor of computer science at RWTH Aachen University in Aachen, Germany. His research interests lie in the area of statistical methods for pattern recognition and human language technology and their specific applications to speech recognition, machine translation and image object recognition. In particular, he has worked on dynamic programming and discriminative training for speech recognition, on language modelling and on phrase-based approaches to machine translation. His work has resulted in more than 600 conference and journal papers (H-index 72, estimated using Google Scholar). He is a fellow of both the IEEE and the International Speech Communication Association. In 2005, he was the recipient of the Technical Achievement Award of the IEEE Signal Processing Society. For the years 2010-2013, he was awarded a senior DIGITEO chair at LIMSI/CNRS in Paris, France.



    Jeff Offutt, George Mason, [intermediate] Cutting Edge Research in Engineering of Web Applications

    Offutt

    Summary. This course will prepare students to do cutting edge software engineering research related to web applications. When I first taught a class in engineering web applications in 1999, the goal was to prepare students for the exploding job market in developing web software. I quickly found out that in software engineering research, as in other areas, the web changes everything. It took several years to understand precisely why web apps are so different, at a deep analytical level, in terms of novel aspects of web software. This course will start with four key differences.

    1. The web offers a different deployment platform. Instead of being pre-installed on computers (bundled), sold in a form that users can install (shrink-wrap), embedded in hardware devices, or custom-built (contract), web applications reside primarily on servers away from users and are accessed through URLs. They are by nature distributed across multiple servers (and the user's computers), heterogeneous in terms of the software languages being used, component-based, and make heavy use of parallelism and concurrency.
    2. Web applications feature extremely low coupling—software components often do not share a memory space or run under the same process.
    3. Web application programming languages and platforms introduce brand new language control structures, such as redirect, forward, and URL rewriting.
    4. New state management techniques—web software components do not share memory space, so traditional variable scoping techniques such as public, protected, and parameter passing are insufficient. Thus, web app programming languages introduce new scopes such as the request scope, the session scope, and the application scope.
    The course will start by explaining these novel aspects of engineering web software in detail, with small but illustrative examples.
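
    As a small illustration of point 4 above (using Flask as one arbitrary example platform; the route and variable names are invented), the sketch below keeps per-request data in the request scope and a counter in the session scope, which survives across requests from the same user but is not shared between users; application scope would typically be state shared by all users of the deployed app.

        from flask import Flask, request, session

        app = Flask(__name__)
        app.secret_key = "demo-only-secret"   # required for session support

        @app.route("/greet")
        def greet():
            # Request scope: lives only for this single HTTP request.
            name = request.args.get("name", "stranger")

            # Session scope: survives across requests from the same user,
            # but is not shared with other users.
            session["visits"] = session.get("visits", 0) + 1

            return f"Hello {name}, visit number {session['visits']}"

        if __name__ == "__main__":
            app.run(debug=True)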

    The course will then describe how these unique aspects affect how we engineer software for the web, and more importantly, how they affect software engineering research. It is possible to update a web application daily, hourly, or even continuously, thus maintenance and evolution are drastically different. Likewise, the pre-existing software development processes simply do not work. Many web companies currently try to use agile processes, but most are a clumsy fit for web software. Standard modeling techniques such as the UML diagrams do not capture many of the new control connections and state management techniques, making design modeling quite different for this kind of software. Likewise, most software analysis techniques no longer apply. For example, the fundamental control flow graph cannot represent all possible control flows in a web app. Without adequate models and analysis techniques, many traditional testing techniques no longer apply. This is exacerbated because web applications exhibit low controllability and observability.

    Students will leave this short course with a deep understanding of the research issues in engineering of web applications, and a long list of open problems in the area.

    References

    Pre-requisites. Solid programming skills. Experience with programming at least one web application will be very helpful, although not required.

    Bio. Dr. Jeff Offutt is Professor of Software Engineering at George Mason University. He received his PhD degree from the Georgia Institute of Technology. He has part-time visiting faculty positions at the University of Skövde, Sweden, and at Linköping University, Linköping, Sweden. He has published over 145 refereed research papers (h-index of 50), and is co-author of Introduction to Software Testing. He is editor-in-chief of Wiley's journal of Software Testing, Verification and Reliability; co-founded the IEEE International Conference on Software Testing, Verification, and Validation; and was its founding steering committee chair. He has done pioneering work with mutation testing, including being the primary developer for the Mothra testing environment and leading the team that created muJava, a widely used research tool for mutation testing. He is also credited with seminal work in model-based testing, testing object-oriented software, input space partitioning, and testing web applications. He has consulted with numerous companies on software testing, usability, and software intellectual property issues.


    Bijan Parsia, Manchester, [introductory] The Semantic Web: Conceptual and Technical Foundations

    Parsia

    Summary. The Web is arguably the most successful information system in human history. From its humble origin as a simple hypertext system for sharing documents, it has evolved to a major computing platform with a disruptive effect on many industries and, indeed, human behavior. At any point, there are several typically competing, often complementary visions of "the Next Web," including the Web of Services, the Web of Data (or of Linked Data), and the Web of Applications. One distinctive vision of the "full potential" of the Web is the Semantic Web, usually conceived as a Web of Knowledge, that is, of rich representations of information so that programs can process it in ways comparable to those of human beings.

    While appealing and seemingly simple, this idea is hard to pin down. Some critics argue that the Semantic Web is "just another failed AI project" while some proponents claim that the Semantic Web is already here and growing. In this seminar, we will examine the conceptual foundations of the Semantic Web and the technologies that have been developed to support it. We will explore criteria for a distinctive notion of Semantic Web knowledge representation and ask whether such a notion is necessary to achieve the goals of the Semantic Web.

    Bio. Dr. Bijan Parsia is a Senior Lecturer at the School of Computer Science of the University of Manchester, UK. His primary area of research is logic-based knowledge representation with an emphasis on ontology engineering. In addition to publishing on language features, explanation, modularity, difference, automated reasoning, and related topics, he was involved in the standardization at the W3C of the second version of the Web Ontology Language (OWL 2) and of SPARQL, the query language for the Semantic Web. He was a designer and developer of the Swoop ontology editor and the Pellet OWL reasoner.



    Charles E. Perkins, FutureWei, [intermediate/advanced] Beyond 4G

    Perkins

    Summary. Beyond LTE: the evolution of 4G networks and the need for higher-performance handover system designs.

    To date, wireless network operators have been able to satisfy their customers with handover schemes that evolved many years ago:

    But these restrictions are giving way to multi-radio, small-cell networks.

    Higher-layer solutions have been available for a while, including many variations of Mobile IP, as well as IEEE 802.21 Media Independent Handovers. These solutions have not been adopted by 3GPP, and thus are not currently contributing to the goals of higher-performance handovers. There are many reasons for this, which are being discussed in a new IETF group known as [dmm], and a new proposal for IEEE 802-based technologies, known as OmniRAN. The whole problem is being exacerbated by the continuing demand for bandwidth to the wireless end stations, threatening to overload the existing mobile network infrastructure. Some predictions indicate that within two years the business model for today's wireless providers will begin to fail. This has motivated unusual efforts to shed capacity away from the mobile networks onto other existing media such as 802.11.

    There is no doubt that the wireless Internet will continue to grow; it seems improbable that today's mobility management solutions will suffice when handovers between dissimilar media start to occur much more rapidly, likely by an order of magnitude or more. Not surprisingly, a stumbling block for high performance involves the network operators' need for security and charging. Inserting a remote authentication step in a handover procedure effectively nullifies many other design choices, so it is important to consider strategies for preregistration and security context transfer.

    The opportunities for innovation in coming wireless networks are both many and varied, and the economic benefits will reward those who find the right combinations of new technology and practicality. It is not yet known where the dividing lines will be drawn between link-layer solutions and network-layer solutions, but both design philosophies will be competing for attention, and ease of operation may make the crucial difference. For those who believe that the wireless Internet is still in its infancy, these are very exciting times.

    Bio. Charles E. Perkins is a senior principal engineer at Futurewei, investigating mobile wireless networking and dynamic configuration protocols, in particular LTE and various IEEE and IETF efforts. He is serving as document editor for the 802.21 group of the IEEE, and is author or co-author of IETF standards-track documents in the dmm, mip4, mext, manet, dhc, and autoconf working groups. He is an editor for several journals in areas related to wireless networking. He has continued strong involvement with performance issues related to Internet access for billions of portable wireless devices as well as activities for ad hoc networking and scalability.

    Charles has authored and edited books on Mobile IP and Ad Hoc Networking, and has published a number of papers and award winning articles in the areas of mobile networking, ad-hoc networking, route optimization for mobile networking, resource discovery, and automatic configuration for mobile computers. Charles was also one of the creators of MobiHoc, the premier conference series that has provided the forum for many of the most important publications in the field of ad hoc networking; he remains on the steering committee for that conference. He has served as general chair and Program Committee chair for MobiHoc, MASS 2006, ICWUS 2010, and other conferences and workshops. Charles has served on the Internet Architecture Board (IAB) of the IETF and, at last count, has authored or co-authored at least 25 RFCs. He has made numerous inventions and been awarded dozens of patents; he was recently nominated for Inventor of the Year by the European Patent Office. He has served on various committees for the National Research Council, as well as numerous technical assessment boards for Army Research Lab and the Swiss MICS program. He has also served as associate editor for Mobile Communications and Computing Review, the official publication of ACM SIGMOBILE, and has served on the editorial staff for IEEE Internet Computing magazine.


    Prabhakar Raghavan, Google, [introductory/intermediate] Web Search and Advertising

    Raghavan

    Summary. In this course we will cover the basics of web search and advertising, highlighting the principal techniques, algorithms and systems. The student will be expected to have a good undergraduate computer science background, including familiarity with basic algorithms and data structures, as well as elementary probability and statistics, and linear algebra. Prior background in information retrieval is not required. There will be an optional programming exercise where we will build a search engine in Python; familiarity with Python will be useful but not required.
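
    For readers who want a head start on that optional exercise, here is a minimal sketch (my own, not the course's code) of the core data structure of a search engine: an inverted index supporting simple conjunctive (AND) queries.

        from collections import defaultdict

        def tokenize(text):
            return [t for t in text.lower().split() if t]

        def build_index(docs):
            """Map each term to the set of document ids that contain it."""
            index = defaultdict(set)
            for doc_id, text in docs.items():
                for term in tokenize(text):
                    index[term].add(doc_id)
            return index

        def search(index, query):
            """Conjunctive query: documents containing every query term."""
            postings = [index.get(t, set()) for t in tokenize(query)]
            return set.intersection(*postings) if postings else set()

        docs = {
            1: "web search and advertising basics",
            2: "sponsored search auctions",
            3: "crawling and indexing the web",
        }
        index = build_index(docs)
        print(search(index, "web search"))    # -> {1}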

    Bio. Prabhakar Raghavan is a Vice President of Engineering at Google. Raghavan is the co-author of the textbooks Randomized Algorithms and Introduction to Information Retrieval; his research spans algorithms, web search and databases. He is a member of the National Academy of Engineering; a Fellow of the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers (IEEE); and Consulting Professor of Computer Science at Stanford University. In 2009, he was awarded a Laurea honoris causa from the University of Bologna. From 2003 to 2009, Raghavan was the editor-in-chief of Journal of the ACM. He holds a Ph.D. from U.C. Berkeley in Electrical Engineering and Computer Science and a Bachelor of Technology from the Indian Institute of Technology, Madras. Prior to joining Google, he worked at Yahoo! Labs. Before that, Raghavan worked at IBM Research and later became senior vice president and chief technology officer at enterprise search vendor Verity.


    Phillip Rogaway, Davis, [introductory/intermediate] Provably Secure Symmetric Encryption

    Rogaway

    Summary. Once a tool only for making quite theoretical claims in cryptography, reductions have now become the central tool for the design and analysis of practical cryptographic schemes. This tutorial will explore the reduction-based approach for designing and analyzing blockcipher “modes of operation”—especially methods for achieving symmetric (that is, shared-key) encryption. We will explore a variety of formalizations for symmetric encryption, as well as means for provably achieving them, always starting from the assumed existence of a secure blockcipher. I will start with “classical” encryption (the goal known as “semantic security”) and move on to authenticated encryption and format-preserving encryption. While much of what I describe will be fairly classical, I will also include newer topics, like the FFX scheme for format-preserving encryption and the authenticated-encryption mode OCB3. While modern cryptography can be fairly technical, every attempt will be made to make the material self-contained and accessible.
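
    To give a flavor of what a blockcipher mode of operation looks like in code, here is a toy counter-mode (CTR) construction built from a raw AES call via the pyca/cryptography package (my illustration of the mechanism, not an excerpt from the course); it provides no authentication, so it is for study only.

        import os
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        def aes_block(key, block):
            """One raw AES permutation call (single-block ECB)."""
            enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
            return enc.update(block) + enc.finalize()

        def ctr_crypt(key, nonce, message):
            """Toy CTR mode: keystream block i = AES_K(nonce || i), XORed with the message.
            Decryption is the same operation; a real design would use an
            authenticated mode such as OCB or GCM."""
            assert len(nonce) == 8
            out = bytearray()
            for i in range((len(message) + 15) // 16):
                keystream = aes_block(key, nonce + i.to_bytes(8, "big"))
                chunk = message[16 * i:16 * (i + 1)]
                out.extend(c ^ k for c, k in zip(chunk, keystream))
            return bytes(out)

        key, nonce = os.urandom(16), os.urandom(8)
        msg = b"attack at dawn -- provable security demo"
        ct = ctr_crypt(key, nonce, msg)
        assert ctr_crypt(key, nonce, ct) == msg   # CTR decryption = encryption
        print(ct.hex())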

    Bio. Phil is a cryptographer at the University of California, Davis. He did his undergraduate studies at UC Berkeley and his Ph.D. at MIT. Rogaway next worked at IBM as a Security Architect, where he became interested in the problem of developing foundations for cryptography that would be useful, and used, for cryptographic practice. In a body of work done in large part with Mihir Bellare, Rogaway co-developed what has been called practice-oriented provable security. Standardized cryptographic schemes that Rogaway co-invented include CMAC, DHIES, EME, FFX, OAEP, OCB, PSS, UMAC, and XTS. Rogaway is also interested in social and ethical issues surrounding technology, and regularly teaches a class on the subject.


    Gustavo Rossi, La Plata, [intermediate] Topics in Model Driven Web Engineering

    Rossi

    Summary. The main objective of this course is to review the fundamental concepts underlying Model-Driven Web Engineering and to present some new trends ongoing in this area. The most relevant topics in the course are the following:

    Bio. Gustavo Rossi is full professor at Facultad de Informática, Universidad Nacional de La Plata, Argentina and head of LIFIA research Laboratory. He holds a PhD degree from PUC-Rio, Brazil.

    He is one of the developers of the OOHDM approach for Web applications development and has published more than 100 papers in relevant conferences and journals on Web Engineering. He is one of the editors of Springer’s book: Web Engineering. Modeling and Implementing Web Applications.

    His current research interests include Agile approaches for Web applications development, Requirement Engineering for Web applications and Web applications adaptation.


    Kaushik Roy, Purdue, [introductory/intermediate] Low-energy Computing

    Roy

    Summary. Scaling of technology and higher integration density (and functionality) have led to a large increase in power consumption in CMOS circuits. Today, dynamic and leakage power consumption have become equally important. While for mobile applications the leakage (static) energy consumption can severely affect battery life, the active (high-temperature) leakage power consumption in high-performance processors can account for 30-40% of the total power consumption. In this short course, I will first describe the different components of power consumption (leakage, dynamic, short-circuit) in CMOS circuits. Following that discussion, I will consider different techniques at the circuit and the architecture level to reduce leakage power consumption in both mobile and high-performance processors, as well as reduction of the dynamic component of power using Vdd scaling, dynamic Vdd, reduction of switched capacitance in large circuits, and low-complexity computing for power reduction. I will also describe the recent trend towards low-energy designs with near-threshold computing. Note that parameter variations have become very important in scaled technologies. Hence, I will consider both low-power logic and memory design under parameter variations, leading to much better than worst-case design.
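
    For reference, a standard first-order model of the dynamic component is P_dyn = alpha * C * Vdd^2 * f (switching activity alpha, switched capacitance C, supply voltage Vdd, clock frequency f); the tiny sketch below, with purely illustrative numbers, shows why Vdd scaling is so effective: power falls quadratically with the supply voltage even before any frequency reduction.

        def dynamic_power(alpha, cap, vdd, freq):
            """First-order CMOS dynamic power: P = alpha * C * Vdd^2 * f."""
            return alpha * cap * vdd ** 2 * freq

        # Illustrative numbers: 20% activity, 1 nF switched capacitance, 1 GHz clock.
        base = dynamic_power(0.2, 1e-9, 1.0, 1e9)
        scaled = dynamic_power(0.2, 1e-9, 0.7, 1e9)   # Vdd scaled from 1.0 V to 0.7 V

        print(f"P at 1.0 V: {base:.3f} W")
        print(f"P at 0.7 V: {scaled:.3f} W ({scaled / base:.0%} of baseline)")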

    Bio. Kaushik Roy received B.Tech. degree in electronics and electrical communications engineering from the Indian Institute of Technology, Kharagpur, India, and Ph.D. degree from the electrical and computer engineering department of the University of Illinois at Urbana-Champaign in 1990. He was with the Semiconductor Process and Design Center of Texas Instruments, Dallas, where he worked on FPGA architecture development and low-power circuit design. He joined the electrical and computer engineering faculty at Purdue University, West Lafayette, IN, in 1993, where he is currently a Professor and holds the Roscoe H. George Chair of Electrical & Computer Engineering. His research interests include Spintronics, device-circuit co-design for nano-scale Silicon and non-Silicon technologies, low-power electronics for portable computing and wireless communications, and new computing models enabled by emerging technologies. Dr. Roy has published more than 600 papers in refereed journals and conferences, holds 15 patents, graduated 56 PhD students, and is co-author of two books on Low Power CMOS VLSI Design (John Wiley & McGraw Hill).

    Dr. Roy received the National Science Foundation Career Development Award in 1995, IBM faculty partnership award, ATT/Lucent Foundation award, 2005 SRC Technical Excellence Award, SRC Inventors Award, Purdue College of Engineering Research Excellence Award, Humboldt Research Award in 2010, 2010 IEEE Circuits and Systems Society Technical Achievement Award, Distinguished Alumnus Award from Indian Institute of Technology (IIT), Kharagpur, and best paper awards at 1997 International Test Conference, IEEE 2000 International Symposium on Quality of IC Design, 2003 IEEE Latin American Test Workshop, 2003 IEEE Nano, 2004 IEEE International Conference on Computer Design, 2006 IEEE/ACM International Symposium on Low Power Electronics & Design, the 2005 IEEE Circuits and Systems Society Outstanding Young Author Award (Chris Kim), the 2006 IEEE Transactions on VLSI Systems best paper award, and the 2012 ACM/IEEE International Symposium on Low Power Electronics and Design best paper award. Dr. Roy was a Purdue University Faculty Scholar (1998-2003). He was a Research Visionary Board Member of Motorola Labs (2002) and held the M.K. Gandhi Distinguished Visiting Faculty position at the Indian Institute of Technology (Bombay). He has been on the editorial boards of IEEE Design and Test, IEEE Transactions on Circuits and Systems, IEEE Transactions on VLSI Systems, and IEEE Transactions on Electron Devices. He was Guest Editor for Special Issue on Low-Power VLSI in the IEEE Design and Test (1994) and IEEE Transactions on VLSI Systems (June 2000), IEE Proceedings -- Computers and Digital Techniques (July 2002), and IEEE Journal on Emerging and Selected Topics in Circuits and Systems (2011). Dr. Roy is a fellow of IEEE.


    Robert Sargent, Syracuse, [introductory] Validating Models

    Sargent

    Summary. This course will start off with a review of the different types of models, followed by a clear definition of terms such as verification and validation. We will then proceed to discuss validation of system theories and models, with special emphasis on the verification and validation of simulation models. The different approaches to deciding model validity will be described, and different paradigms that relate verification and validation to the model development process will be presented. The various validation techniques will be defined. Conceptual model validity, model verification, operational validity, and data validity will be discussed in detail. The discussion of operational validity will include the validation of models of both observable and non-observable systems, using both objective and subjective decision approaches. The statistical approaches for operational validity will cover both theoretical reference distributions, such as the t and F distributions, and reference distributions built from data generated by a simulation model. Documentation of validation results and a recommended procedure for model validation will be presented.
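
    As a small example of an objective statistical comparison for operational validity (my illustration with synthetic numbers, not the course's recommended procedure), the sketch below compares output from a simulation model with data observed on the real system using a two-sample t test.

        from scipy import stats

        # Hypothetical average waiting times (minutes): real system vs. simulation model.
        system_data = [4.2, 3.9, 4.5, 4.1, 4.4, 3.8, 4.3, 4.0]
        model_data = [4.0, 4.3, 4.1, 4.6, 3.9, 4.2, 4.4, 4.1]

        t_stat, p_value = stats.ttest_ind(system_data, model_data)
        print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

        # Failing to reject the hypothesis of equal means is only one piece of
        # evidence for operational validity, not a proof that the model is valid.
        if p_value > 0.05:
            print("No significant difference detected at the 5% level.")
        else:
            print("Means differ significantly; investigate the model.")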

    References

    Bio. Dr. Robert G. Sargent is a Professor Emeritus of Syracuse University. At Syracuse, Dr. Sargent held appointments in different departments and interdisciplinary programs in the L. C. Smith College of Engineering and Computer Science and was Director of the Simulation Research Group. Professor Sargent received his education at The University of Michigan. Dr. Sargent’s current research interests include the methodology areas of modeling and discrete event simulation, model validation, and performance evaluation. Professor Sargent has made numerous research contributions in his career. He was one of the first individuals to initiate the modeling of computer systems for performance evaluation. Most of his research contributions have been in the methodology areas of simulation. Dr. Sargent is especially well known for his work in validation of simulation models. Professor Sargent has performed considerable professional service. He has received numerous awards for his various contributions.


    Douglas C. Schmidt, Vanderbilt, [intermediate] Patterns and Frameworks for Concurrent and Networked Software

    Schmidt

    Summary. This course focuses on pattern-oriented software architecture for concurrent and networked software. Concurrent software can simultaneously run multiple computations that potentially interact with each other. Networked software uses protocols that enable computing devices to exchange messages and perform services remotely. The principles, methods, and skills required to develop concurrent and networked software can be greatly enhanced by understanding how to create and apply patterns and frameworks.

    A pattern describes a reusable solution to a commonly occurring problem within a particular context. When related patterns are woven together they form a pattern language that provides a vocabulary and a process for the orderly resolution of software development problems. A framework is an integrated set of software components that collaborate to provide a reusable architecture for a family of related applications. Frameworks can also be viewed as concrete realizations of pattern languages that facilitate direct reuse of design and code.

    This course describes how to apply patterns and frameworks to alleviate many accidental and inherent complexities associated with developing and deploying concurrent and networked software.  We'll explore several case studies from the domains of mobile apps, web servers, object request brokers, and avionic control systems to showcase a time-tested pattern-oriented software design and programming method for concurrent and networked software.
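
    As a taste of this material, the sketch below is a minimal event-demultiplexing loop in the spirit of the Reactor pattern, one of the classic patterns for networked software; the code is my own illustration built on Python's selectors module, not an excerpt from ACE or the course.

        import selectors
        import socket

        sel = selectors.DefaultSelector()      # synchronous event demultiplexer

        def accept(server_sock):
            """Event handler for new connections."""
            conn, _addr = server_sock.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ, echo)

        def echo(conn):
            """Event handler for readable client sockets: echo data back."""
            data = conn.recv(1024)
            if data:
                conn.sendall(data)
            else:
                sel.unregister(conn)
                conn.close()

        server = socket.socket()
        server.bind(("localhost", 9999))
        server.listen()
        server.setblocking(False)
        sel.register(server, selectors.EVENT_READ, accept)

        # The reactor loop: wait for events, then dispatch to registered handlers.
        while True:
            for key, _mask in sel.select():
                handler = key.data
                handler(key.fileobj)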

    References

    Although the lectures are designed to be largely self-contained, it's recommended (but not required) that students refer to the following books:

    Much of this material is available online at http://www.dre.vanderbilt.edu/~schmidt/tutorials.html.

    Bio. Dr. Douglas C. Schmidt is a Professor of Computer Science, Associate Chair of the Computer Science and Engineering program, and a Senior Researcher at the Institute for Software Integrated Systems, all at Vanderbilt University.  Dr. Schmidt has published 10 books and more than 500 technical papers on software-related topics, including patterns, optimization techniques, and empirical analyses of object-oriented frameworks and domain-specific modeling environments that facilitate the development of distributed real-time and embedded (DRE) middleware and mission-critical applications running over data networks and embedded system interconnects.  For the past two decades Dr. Schmidt has also led the development of ACE, TAO, and CIAO, which are pattern-oriented DRE middleware frameworks used successfully by thousands of projects worldwide in many domains, including national defense and security, datacom/telecom, financial services, medical engineering, and massively multiplayer online gaming.


    Bart Selman, Cornell, [intermediate] Fast Large-scale Probabilistic and Logical Inference Methods

    Selman

    Summary. In recent years, we have seen tremendous progress in inference technologies, both in logical and probabilistic domains. Until the mid-nineties, inference beyond hundred-variable problems appeared infeasible. Since then, we have witnessed a qualitative change in the field: current reasoning engines can handle problems with over a million variables and several millions of constraints. This has opened up a wide range of new application domains. I will discuss what led to such a dramatic scale-up, emphasizing recent advances on combining probabilistic and logical inference techniques.
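
    To make the notion of a reasoning engine concrete, here is a tiny DPLL-style SAT solver (a pedagogical sketch of the classical algorithm; modern solvers add clause learning, watched literals and restarts, which is where much of the dramatic scale-up comes from).

        def dpll(clauses, assignment=None):
            """Clauses are lists of non-zero ints; -x denotes the negation of variable x.
            Returns a satisfying set of true literals, or None if unsatisfiable."""
            if assignment is None:
                assignment = set()
            # Unit propagation: repeatedly assign literals forced by unit clauses.
            changed = True
            while changed:
                changed = False
                for clause in clauses:
                    if any(l in assignment for l in clause):
                        continue                        # clause already satisfied
                    free = [l for l in clause if -l not in assignment]
                    if not free:
                        return None                     # conflict: clause falsified
                    if len(free) == 1:
                        assignment.add(free[0])         # forced (unit) literal
                        changed = True
            # Branch on the first unassigned literal found in any clause.
            for clause in clauses:
                for l in clause:
                    if l not in assignment and -l not in assignment:
                        for choice in (l, -l):
                            result = dpll(clauses, assignment | {choice})
                            if result is not None:
                                return result
                        return None
            return assignment                           # every clause is satisfied

        # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
        print(dpll([[1, 2], [-1, 3], [-2, -3]]))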

    Bio. Bart Selman is a professor of computer science at Cornell University. His research interests include efficient reasoning procedures, planning, knowledge representation, and connections between computer science and statistical physics. He has (co-)authored over 150 papers, which have appeared in venues spanning Nature, Science, Proc. Natl. Acad. of Sci., and a variety of conferences and journals in AI and Computer Science. He has received six Best Paper Awards, and is an Alfred P. Sloan Research Fellowship recipient, a Fellow of AAAI, a Fellow of AAAS, and a Fellow of the ACM.


    Mubarak Shah, Central Florida, [intermediate/advanced] Visual Crowd Surveillance

    Shah

    Summary. I will start with a brief overview of current video surveillance systems, which typically follow these steps: detection of moving objects, tracking of those objects from frame to frame, categorization of objects into different classes, and recognition of their behavior. Next, I will introduce visual analysis of crowded scenes and present our framework employing a hydrodynamics approach. In particular, I will discuss solutions to three problems: crowd flow segmentation and stability analysis, tracking in high-density crowds using floor field models, and identifying behaviors in crowded scenes through stability analysis for dynamic scenes. Finally, I will introduce the wide area surveillance problem and present our approach for many-many correspondence for tracking swarms of objects in wide area aerial videos. If time permits, I will discuss our work on detecting motion patterns in crowded scenes.

    Bio. Dr. Mubarak Shah, Agere Chair Professor of Computer Science, is the founding director of the Computer Visions Lab at UCF. He is a co-author of three books (Motion-Based Recognition (1997), Video Registration (2003), and Automated Multi-Camera Surveillance: Algorithms and Practice (2008)), all by Springer. He has published extensively on topics related to visual surveillance, tracking, human activity and action recognition, object detection and categorization, shape from shading, geo-registration, photo-realistic synthesis, visual crowd analysis, biomedical imaging, etc. Dr. Shah is a fellow of IEEE, IAPR, AAAS and SPIE. In 2006, he was awarded the Pegasus Professor award, the highest award at UCF, given to a faculty member who has made a significant impact on the university, has made an extraordinary contribution to the university community, and has demonstrated excellence in teaching, research and service. He is an ACM Distinguished Speaker. He was an IEEE Distinguished Visitor speaker for 1997-2000, and received the IEEE Outstanding Engineering Educator Award in 1997. He received the Harris Corporation's Engineering Achievement Award in 1999, the TOKTEN awards from UNDP in 1995, 1997, and 2000; Teaching Incentive Program awards in 1995 and 2003, Research Incentive Award in 2003 and 2009, Millionaires' Club award, University Distinguished Researcher award in 2007 and 2012, SANA award in 2007, an honorable mention for the ICCV 2005 Where Am I? Challenge Problem, and was nominated for the best paper award at the ACM Multimedia Conference in 2005. He is an editor of the international book series on Video Computing, editor-in-chief of the Machine Vision and Applications journal, and an associate editor of the ACM Computing Surveys journal. He was an associate editor of the IEEE Transactions on PAMI, and a guest editor of the special issue of the International Journal of Computer Vision on Video Computing. He was the program co-chair of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.


    Ron Shamir, Tel Aviv, [introductory] Revealing Structure in Disease Regulation and Networks

    Shamir

    Summary. We shall describe and demonstrate computational methods that analyze large-scale genomics and proteomics data in order to reveal structure in biological systems and networks. We shall describe methods for identifying modules in gene expression data using clustering and biclustering algorithms tailored specifically for these kinds of data. We shall show how to use expression data and protein interaction networks together in order to find active subnetworks in the data. For case-control expression data we shall show methods for revealing modules dysregulated in the disease patients compared to the controls. On the regulation side, we shall describe algorithms for de novo reconstruction of binding motifs based on co-regulated gene sets.

    The emphasis in the description will be on the algorithmic methods. Our approach combines techniques from graph algorithms, probability and machine learning. The biological context will be motivated and performance will be demonstrated on real biological datasets. Many of the algorithms are integrated into the Expander suite, which provides a one-stop shop for gene expression analysis.
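
    As a minimal example of the module-finding step (a generic illustration using SciPy's hierarchical clustering on synthetic data, not the tailored algorithms or the Expander suite mentioned above), the sketch below clusters genes by the correlation of their expression profiles.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        rng = np.random.default_rng(1)
        # Synthetic expression matrix: 30 genes x 12 conditions, two planted modules.
        profile_a = rng.normal(size=12)
        profile_b = rng.normal(size=12)
        genes = np.vstack([profile_a + 0.3 * rng.normal(size=(15, 12)),
                           profile_b + 0.3 * rng.normal(size=(15, 12))])

        # Average-linkage hierarchical clustering with a correlation-based distance.
        Z = linkage(genes, method="average", metric="correlation")
        labels = fcluster(Z, t=2, criterion="maxclust")
        print(labels)   # genes from the same planted module should share a label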

    References

    Bio. Prof. Ron Shamir leads the Computational Genomics group at the Blavatnik School of Computer Science, Tel Aviv University (TAU). He is the head of the Edmond J. Safra Center for Bioinformatics at TAU and holds the Raymond and Beverly Sackler Chair in Bioinformatics. He develops novel algorithmic methods in Bioinformatics and Systems Biology. His research interests include gene expression analysis, modeling and dissection of molecular networks, gene regulation and cancer genomics. Methods and software tools developed by Shamir’s group are in use by many laboratories around the world.

    Prof. Shamir received a BSc in Mathematics and Physics from the Hebrew University, and a PhD in Operations Research from UC Berkeley in 1984. He has been on the faculty of TAU since 1987. He has published over 230 scientific works, including 17 books and edited volumes, and has supervised 45 graduate students. He is on the editorial board of eleven scientific journals and series, and was on the RECOMB Conference series steering committee for thirteen years. In 2000 he founded the Bioinformatics undergraduate program at TAU. He co-founded the Israeli Society of Bioinformatics and Computational Biology, and was society president in 2004-2006. He is a recipient of the 2011 Landau Prize in Bioinformatics, and a Fellow of the International Society for Computational Biology and of the Association for Computing Machinery.



    Dawn Xiaodong Song, Berkeley, [introductory] Introduction to Software Security, Web Security and Mobile Security

    Song

    Summary. In this course, we will cover foundations of software security, web security and mobile security. We will learn about vulnerabilities, attacks and defenses in these areas, covering basic concepts as well as state-of-the-art techniques and research results.

    Bio. Dawn Song is Associate Professor of Computer Science at UC Berkeley. Prior to joining UC Berkeley, she was an Assistant Professor at Carnegie Mellon University from 2002 to 2007. Her research interest lies in security and privacy issues in computer systems and networks, including areas ranging from software security, networking security, database security, distributed systems security, to applied cryptography. She is the recipient of various awards including the MacArthur Fellowship, the Guggenheim Fellowship, the NSF CAREER Award, the Alfred P. Sloan Research Fellowship, the MIT Technology Review TR-35 Award, the IBM Faculty Award, the George Tallman Ladd Research Award, the Okawa Foundation Research Award, the Li Ka Shing Foundation Women in Science Distinguished Lecture Series Award, and Best Paper Awards from top conferences.


    Mike Thelwall, Wolverhampton, [introductory] Sentiment Strength Detection for the Social Web

    Thelwall

    Summary. This course will offer an introduction to automatic sentiment analysis in the social web. It will equip participants to apply this kind of method in appropriate contexts and to understand a range of different approaches to sentiment analysis, with a focus on the SentiStrength application. The course will also discuss the particular problems associated with sentiment analysis in the social web, as well as strategies to take advantage of features like emoticons to improve the automatic prediction of online sentiment. The main topics will be:

    1. Introduction to sentiment strength detection: introduction to SentiStrength for sentiment strength detection in short informal text, discussion of major issues in sentiment strength detection online, and ways to exploit non-standard methods of expressing emotion online.
    2. Sentiment analysis applications: cross-validation as an evaluation strategy, feature selection, creating a gold standard, and accuracy measures; evaluation of SentiStrength and other systems.
    3. Adapting systems for different domains and languages: the importance of domain in sentiment analysis, methods for transferring a method from one domain to another, domain-independent methods, and language issues and methods to translate a sentiment analysis method from one language to another.
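
    To fix ideas, the sketch below is a tiny lexicon-based sentiment strength estimator in the general spirit of such tools (a simplified illustration with made-up word lists, not SentiStrength itself): each text receives a positive strength from 1 to 5 and a negative strength from -1 to -5, with booster words and emoticons adjusting the scores.

        # Tiny illustrative lexicons; a real system uses far larger, carefully tuned ones.
        lexicon = {"love": 3, "great": 2, "good": 1, "hate": -3, "awful": -3, "bad": -2}
        boosters = {"very": 1, "really": 1}
        emoticons = {":)": 2, ":(": -2, ":d": 3}

        def sentiment_strength(text):
            """Return (positive, negative) strengths on the scales +1..+5 and -1..-5."""
            pos, neg, boost = 1, -1, 0
            for token in text.lower().split():
                if token in boosters:
                    boost += boosters[token]
                    continue
                score = lexicon.get(token, 0) + emoticons.get(token, 0)
                if score > 0:
                    pos = max(pos, min(5, score + boost + 1))
                elif score < 0:
                    neg = min(neg, max(-5, score - boost - 1))
                boost = 0
            return pos, neg

        print(sentiment_strength("I really love this :)"))   # -> (5, -1)
        print(sentiment_strength("this is awful :("))        # -> (1, -4)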

    References

    (The first three downloadable from: http://www.scit.wlv.ac.uk/~cm1993/mycv.html)

    Bio. Mike Thelwall is Professor of Information Science and leader of the Statistical Cybermetrics Research Group at the University of Wolverhampton, UK, and a research associate at the Oxford Internet Institute. He has developed tools for gathering and analysing web data, including hyperlink analysis, sentiment analysis and content analysis for Twitter, YouTube, blogs and the general web. His publications include 152 refereed journal articles, seven book chapters and two books, including Introduction to Webometrics. He is an associate editor of the Journal of the American Society for Information Science and Technology and sits on three other editorial boards.


    Julita Vassileva, Saskatchewan, [introductory/intermediate] Engaging Users in Social Computing Systems

    Vassileva

    Summary. Social computing systems are becoming ubiquitous and have the potential to influence people’s behaviour to do good. Methods and applications aimed at influencing user behaviour have been developed in two emerging research areas: Persuasive Computing and Social Computing. The focus of Persuasive Computing is on influencing the behaviour of an individual for her own benefit (e.g. eat healthier, exercise, quit smoking), while the focus of Social Computing is mostly on harvesting “the wisdom of crowds”, for example by providing personalized recommendations of products, movies, books, or news based on correlating user ratings and tags. A strong market is pushing these two areas: there are many health-tracking gadgets (e.g. the Fitbit); health-tracking and exercise apps have appeared on the app stores of the major mobile platforms; and nearly any shopping or news site on the web offers personalized recommendations (e.g. Netflix, Amazon, and others). There has been a growing overlap between these two areas in two directions:

    This class focuses on the first of these directions. This leads to the question of how to motivate people to engage in desirable, pro-social behaviour. The main learning objective of this course is to understand the theoretical foundations and the practical strategies for motivating users and engaging them in beneficial behaviours. It will be organized in three parts:

    1. Theories of motivation and behaviour change.
    2. Practical strategies for user engagement and behaviour change: persuasion and gamification.
    3. Incentive mechanism design: personalizing and adapting rewards to the needs of the community; personalizing persuasive interventions.

    References

    Bio. Dr. Julita Vassileva is Professor of Computer Science at the University of Saskatchewan, Canada. She received her PhD in 1992 from the University of Sofia, Bulgaria. She has been an active researcher in a wide range of applied computing areas, including AI and Education, User Modeling and Personalization, Multi-Agent and Peer-to-Peer Systems, Trust and Reputation Mechanisms, Agent Negotiation and Social Computing. She has published more than 150 peer-reviewed articles, which have been cited over 4000 times. She has given many keynote and invited talks at international conferences, workshops and seminars. She serves as associate editor of the International Journal of Continuing Engineering Education, and on the editorial boards of User Modeling and User-Adapted Interaction, Computational Intelligence and IEEE Transactions on Learning Technologies. She is a member of the Advisory board of UM Inc, and of the Executive Committee of the International Society for AI in Education. She held the prestigious Canadian NSERC Chair for Women in Science and Engineering from 2005 until 2011.


    Philip Wadler, Edinburgh, [introductory] Topics in Lambda Calculus and Life

    Wadler

    Summary. Four talks will cover a range of topics:

    References

    Bio. Philip Wadler likes to introduce theory into practice, and practice into theory. An example of theory into practice: GJ, the basis for Java with generics, derives from quantifiers in second-order logic. An example of practice into theory: Featherweight Java specifies the core of Java in less than one page of rules. He is a principal designer of the Haskell programming language, contributing to its two main innovations, type classes and monads.

    Wadler is Professor of Theoretical Computer Science at the University of Edinburgh. He is an ACM Fellow and a Fellow of the Royal Society of Edinburgh, served as Chair of ACM SIGPLAN, and is a past holder of a Royal Society-Wolfson Research Merit Fellowship. Previously, he worked or studied at Stanford, Xerox Parc, CMU, Oxford, Chalmers, Glasgow, Bell Labs, and Avaya Labs, and visited as a guest professor in Copenhagen, Sydney, and Paris. He has an h-index of 59, with more than 17,500 citations to his work according to Google Scholar. He is a winner of the POPL Most Influential Paper Award, has contributed to the designs of Haskell, Java, and XQuery, and is a co-author of Introduction to Functional Programming (Prentice Hall, 1988), XQuery from the Experts (Addison Wesley, 2004) and Generics and Collections in Java (O'Reilly, 2006). He has delivered invited talks in locations ranging from Aizu to Zurich.


    Gio Wiederhold, Stanford, [introductory] Software Economics: How Do the Results of the Intellectual Efforts Enter the Global Market Place

    Wiederhold

    Summary. This course is intended for students and professionals interested in the software industry at all levels. Software products have value, but their originators are rarely involved in assessing that value; it is left to business people, economists, lawyers, and promoters to judge their benefits. The course will present the available computational methods for valuation, and their suitability and reliability in various settings. It will provide an understanding of how software products are moved into the marketplace and how the resulting intellectual property is exploited, introducing the necessary concepts and business terms. Spreadsheet computations will be used to quantitatively compare alternatives. The understanding gained has broader applicability than just software, contributing to informed decision-making in high-tech product design, acquisition, production, marketing, selection of business structures, outsourcing, offshoring, and the assessment of taxation policies.
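
    As a small illustration of the kind of spreadsheet-style computation involved, the Python sketch below compares two hypothetical commercialization alternatives by net present value. The cash-flow figures and the 12% discount rate are invented, and discounted cash flow is only one of the valuation methods the course may discuss.

        # Illustrative discounted-cash-flow comparison of two hypothetical ways
        # of taking the same software product to market. All figures (projected
        # cash flows, 12% discount rate) are invented for illustration only.

        def npv(cash_flows, rate):
            """Net present value of year-end cash flows for years 1, 2, ..."""
            return sum(cf / (1 + rate) ** year
                       for year, cf in enumerate(cash_flows, start=1))

        DISCOUNT_RATE = 0.12   # assumed risk-adjusted cost of capital

        # Alternative A: perpetual licenses (front-loaded revenue, then decay)
        license_model = [500_000, 350_000, 250_000, 150_000, 100_000]

        # Alternative B: subscriptions (slower start, steadier growth)
        subscription_model = [200_000, 300_000, 380_000, 430_000, 460_000]

        print(f"NPV, license model:      {npv(license_model, DISCOUNT_RATE):,.0f}")
        print(f"NPV, subscription model: {npv(subscription_model, DISCOUNT_RATE):,.0f}")

    In the course such comparisons are done in spreadsheets; the code form above is only meant to make the arithmetic explicit.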

    References

    Syllabus

    Pre-requisites. Since this material is not covered in traditional academic computer science curricula, the only prerequisite is an interest in the economics of computing products.

    Bio. Gio Wiederhold is professor emeritus of Computer Science at Stanford University. He holds a PhD from the University of California, San Francisco and an honorary D.Sc. from the National University of Ireland, Galway. He has been elected as a fellow of the ACM, the IEEE, and the ACMI.

    Gio was born in Italy and lived in Germany and The Netherlands. He emigrated to the United States in 1958. He worked sixteen years in industry as a programmer, software designer, manager, and division director. He became a professor at Stanford in 1976, advising PhD students in several departments and integrating concepts from multiple disciplines. His students now have responsible positions in finance, academia, and industry. He currently teaches two courses at Stanford: Business on the Internet and Software Economics.

    Breaks from Stanford include tours as visiting professor at IIT Kanpur in India, at EPFL in Switzerland, representing UNDP at projects in Asia, as a researcher at IBM Germany, and as program manager at DARPA of the US Department of Defense, initiating the Intelligent Integration of Information (I3) program. I3 supported initiatives like the Digital Library, leading to several significant Internet applications and companies, including Google.


    Limsoon Wong, National Singapore, [introductory/intermediate] The Use of Context in Gene Expression and Proteomic Profile Analysis

    Wong

    Summary. The possibility of using gene expression profiling by microarrays for diagnostic and prognostic purposes has generated much excitement and research in the last ten years. Many approaches have been proposed for the inference of differentially expressed genes that are useful in the diagnosis of diseases and the prognosis of treatment responses. However, the statistical significance of the selected genes and the reproducibility of the resulting diagnostic systems carry a high degree of uncertainty. Furthermore, the transition from the selected genes to an understanding of the sequences of causative molecular events is unclear. Therefore, it is often necessary to analyze microarray experiments together with biological information to make better biological inferences. In the first part of this course, we review approaches that make use of biological pathways (e.g., enzymatic pathways, gene regulatory pathways, and protein interaction networks) for improving gene selection and for transitioning from the selected genes to an understanding of the sequences of causative molecular events.
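
    As one simple example of combining a selected gene list with pathway information (not necessarily one of the specific approaches reviewed in the course), the Python sketch below runs a hypergeometric over-representation test; all gene counts are invented for illustration and SciPy is assumed to be available.

        # A hypergeometric over-representation test: given a list of selected
        # genes, is a pathway hit more often than chance would predict?
        # All counts below are invented for illustration.
        from scipy.stats import hypergeom

        M = 20000   # genes measured on the array
        n = 150     # genes annotated to the pathway of interest
        N = 300     # genes selected as differentially expressed
        k = 12      # selected genes that fall in the pathway

        # P(X >= k) under random selection of N genes out of M
        p_value = hypergeom.sf(k - 1, M, n, N)
        print(f"over-representation p-value: {p_value:.2e}")

    Network- and topology-aware methods go well beyond such counting, but the test above shows the basic shift from individual genes to pathway-level statements.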

    Similarly, current advances in proteomics offer enhanced opportunities in functional analyses and biomarker/drug discovery. This platform provides direct information on over 500k moieties and is able to reveal critical variations at the post-translational stage. But harnessing the opportunities afforded by proteomics is a non-trivial challenge. Instrument sensitivity, dynamic range limitations and sample complexity hamper our ability to properly sample the proteome space in a controlled and consistent manner. To overcome this, a proper combination of computational biology and proteomics is required, which in turn offers unparalleled power of analysis. Unfortunately, in many instances the experimental setup and procedures render the data less than optimal for proper statistical and computational analysis. Moreover, there are several pitfalls at each stage of the analysis that the researcher ought to be aware of, and proper caution should be taken in order to maximize the analytical outcome of the time and cost invested in performing the biological experiment. The second part of this course examines the various stages of setting up and planning a proteomics experiment, the associated pitfalls, and ways in which appropriate computational analysis can be deployed, given time and resource limitations.

    Bio. Limsoon Wong is a provost's chair professor of computer science and a professor of pathology at the National University of Singapore. He currently works mostly on knowledge discovery technologies and their application to biomedicine. Some of Limsoon’s papers are among the best cited in their respective fields. He serves or has served on the editorial boards of Information Systems, Journal of Bioinformatics and Computational Biology, Bioinformatics, IEEE/ACM Transactions on Computational Biology and Bioinformatics, Drug Discovery Today, and Journal of Biomedical Semantics. He co-founded and is chairman of Molecular Connections, a provider of data curation services employing over 700 curators, analysts, and engineers. He received his BSc(Eng) in 1988 from Imperial College London and his PhD in 1994 from University of Pennsylvania.


    Michael Wooldridge, Oxford, [introductory] Autonomous Agents and Multi-Agent Systems

    Wooldridge

    Summary. Over the past two decades, the field of multi-agent systems has grown into one of the largest sub-fields of artificial intelligence. Contemporary research on multi-agent systems draws upon a host of other research areas, including subjects as diverse as robotics and game theory. The aim of this course is to give a high-level introduction to the aims, scope, and methods of contemporary multi-agent systems research.

    References

    Syllabus

    Pre-requisites. The course assumes a knowledge of computer science that would be gained through an undergraduate computing degree or similar, and ideally, some knowledge of artificial intelligence and discrete math notation (set notation, very basic logic).

    Bio. Michael Wooldridge is a Professor in the Department of Computer Science at the University of Oxford.  He has been active in multi-agent systems research since 1989, and has published over three hundred articles in the area.  His main interests are in the use of formal methods for reasoning about autonomous agents and multi-agent systems.  Wooldridge was the recipient of the ACM Autonomous Agents Research Award in 2006. He is an associate editor of the journals "Artificial Intelligence" and "Journal of AI Research (JAIR)". His introductory textbook "An Introduction to Multiagent Systems" was published by Wiley in 2002 (Chinese translation 2003; Greek translation 2008; second edition 2009). He will be general chair of IJCAI-2015, to be held in Buenos Aires, Argentina.


    Ronald R. Yager, Iona, [introductory/intermediate] Fuzzy Sets and Soft Computing

    Yager

    Summary. We review the basic foundations for constructing multi-criteria decision functions using fuzzy set methods. We emphasize the ability of this approach to enable the modeling of linguistically specified decision functions. We discuss the inclusion of criteria importance weights. We look at various types of valuation of criteria by an alternative: numeric, ordinal, intuitionistic, and interval-valued. We look at generalized aggregation methods for constructing multi-criteria decision functions using the OWA operator and the Choquet integral.
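
    As a concrete illustration of one of these aggregation methods, the OWA operator sorts the criterion satisfactions in descending order and then applies a fixed weight vector to the sorted values. The Python sketch below, with invented scores and weights, shows how different weight vectors move the aggregation between "and"-like (min), "or"-like (max), and plain averaging behaviour.

        # Ordered weighted averaging (OWA): sort the criterion scores in
        # descending order, then take a weighted sum with a weight vector
        # that sums to 1. Scores and weights below are invented.
        def owa(scores, weights):
            assert len(scores) == len(weights) and abs(sum(weights) - 1.0) < 1e-9
            return sum(w * s for w, s in zip(weights, sorted(scores, reverse=True)))

        scores = [0.9, 0.6, 0.3]                  # satisfaction of three criteria

        print(owa(scores, [0.0, 0.0, 1.0]))       # weight on the worst score: "and"-like (min) -> 0.3
        print(owa(scores, [1.0, 0.0, 0.0]))       # weight on the best score:  "or"-like (max)  -> 0.9
        print(owa(scores, [1/3, 1/3, 1/3]))       # uniform weights: plain average              -> 0.6

    Choosing weight vectors between these extremes is how linguistically specified quantifiers such as "most" or "at least a few" can be modeled.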

    Computer-mediated social networks are now an important technology for worldwide communication, interconnection and information sharing. Another goal here is to enrich social network modeling by introducing ideas from fuzzy sets. We approach this extension in two ways. One is the introduction of fuzzy graphs to represent the networks. This generalizes the type of connection between nodes in a network from simply connected or not to weighted or fuzzy connections. A second, and perhaps more interesting, extension is the use of the fuzzy-set-based paradigm of computing with words to provide a bridge between a human network analyst's linguistic description of social network concepts and the formal model of the network. We will also describe some methods for sharing information obtained from these types of networks; in particular, we discuss linguistic summarization and tagging methods.
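
    As a small sketch of the first extension (with invented nodes and membership degrees), a fuzzy graph replaces 0/1 adjacency with connection strengths in [0, 1], and one simple notion of connectedness takes a path to be only as strong as its weakest link.

        # A tiny fuzzy social network: ties carry strengths in [0, 1] instead
        # of being simply present or absent. Names and strengths are invented.
        strength = {("ann", "bob"): 0.9, ("bob", "carla"): 0.6, ("ann", "carla"): 0.2}

        def tie(u, v):
            return strength.get((u, v), strength.get((v, u), 0.0))

        def connectedness(u, v, nodes):
            """Best path strength between u and v, allowing one intermediary;
            a path is only as strong as its weakest tie (max-min)."""
            best = tie(u, v)
            for k in nodes - {u, v}:
                best = max(best, min(tie(u, k), tie(k, v)))
            return best

        nodes = {"ann", "bob", "carla"}
        print(connectedness("ann", "carla", nodes))   # 0.6 via bob, stronger than the direct 0.2 tie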

    Bio. Ronald R. Yager is Director of the Machine Intelligence Institute and Professor of Information Systems at Iona College. He is editor-in-chief of the International Journal of Intelligent Systems. He has published over 500 papers and edited over 30 books in areas related to fuzzy sets, human behavioral modeling, decision-making under uncertainty and the fusion of information. He is among the world’s top 1% most highly cited researchers, with over 31,000 citations in Google Scholar. He was the recipient of the IEEE Computational Intelligence Society Pioneer Award in Fuzzy Systems. He received the special honorary medal of the 50th Anniversary of the Polish Academy of Sciences. He received the Lifetime Outstanding Achievement Award from the International Fuzzy Systems Association. He recently received honorary doctorate degrees, honoris causa, from the State University of Information Technologies, Sofia, Bulgaria and the Azerbaijan Technical University in Baku. Dr. Yager is a fellow of the IEEE, the New York Academy of Sciences and the Fuzzy Systems Association. He has served at the National Science Foundation as program director in the Information Sciences program. He was a NASA/Stanford visiting fellow and a research associate at the University of California, Berkeley. He has been a lecturer at NATO Advanced Study Institutes. He is a visiting distinguished scientist at King Saud University, Riyadh, Saudi Arabia. He is a distinguished honorary professor at Aalborg University, Denmark. He received his undergraduate degree from the City College of New York and his Ph.D. from the Polytechnic Institute of New York University.


    Philip S. Yu, Illinois Chicago, [advanced] Mining Big Data

    Yu

    Summary. The problem of big data has become increasingly important in recent years. On the one hand, big data is an asset that can potentially offer tremendous value or reward to the data owner. On the other hand, it poses tremendous challenges to realize that value. The very nature of big data poses challenges not only due to its volume and the velocity at which it is generated, but also its variety and veracity. Here variety means that the data collected from various sources can have different formats, from structured data to text to network/graph data to images, and also different degrees of cleanness and completeness. It can also mean variability or non-uniformity of the data collection. Veracity concerns the trustworthiness of the data, as the various data sources can have different reliability. In this course, we will discuss these issues and the various approaches to address them.

    Bio. Philip S. Yu is currently a Professor in the Department of Computer Science at the University of Illinois at Chicago and also holds the Wexler Chair in Information Technology. He spent most of his career at IBM Thomas J. Watson Research Center and was manager of the Software Tools and Techniques group. His research interests include data mining, privacy preserving data publishing, data stream, social networking, and database systems. Dr. Yu has published more than 720 papers in refereed journals and conferences with an h-index of 100. He holds or has applied for more than 300 US patents. Dr. Yu is a Fellow of the ACM and the IEEE. He is currently the Editor-in-Chief of ACM Transactions on Knowledge Discovery from Data and has served as the Editor-in-Chief of IEEE Transactions on Knowledge and Data Engineering from 2001 to 2004. He was an IBM Master Inventor. Dr. Yu received a Research Contributions Award from IEEE Intl. Conference on Data Mining in 2003. Dr. Yu received the B.S. Degree in E.E. from National Taiwan University, the M.S. and Ph.D. degrees in E.E. from Stanford University, and the M.B.A. degree from New York University.