Upcoming Seminars / Seminars This Week

Fri 2/23/18 12:30 pm, Gerstenzang 123
Molecular and Cell Biology & Neuroscience Student Seminars

Fri 2/23/18 2 pm, Mandel G12
Special Seminar (Science Policy Initiative)
Irene Bosch (Mount Sinai School of Medicine; MIT)
Public Health, Dengue Fieldwork, and Government Policy
Refreshments will be served

Mon 2/26/18 11 am, Volen 201
Computational Neuroscience Journal Club
Paul Miller (Brandeis University)
Part 2: Homeostasis with single or multiple mechanisms -- preliminary results
Pizza will be served.

Mon 2/26/18 12 noon, Volen 101
Computer Science Seminar (Machine Learning / Data Science Faculty Search)
Dr. Jie Liu (University of Wisconsin-Madison)
Machine Learning for Understanding the Dynamics of Cell Populations
Abstract: New technologies allow us to understand many biological processes at the molecular level but require principled machine learning methods to capture the underlying dynamics of cell populations. In this talk, I present two projects. In the first project, we design a dynamic graphical model to jointly analyze different types of genomic aberrations from multi-location/multi-time biopsies of metastatic breast cancer. The model allows us to accurately characterize genomic aberrations and understand oncogenic processes from next-generation sequencing data at a significantly larger scale. In the second project, we propose a dimensionality reduction approach to recover intrinsic biological structure from single-cell Hi-C contact maps. With mouse ES cells, our dimensionality reduction approach successfully recovers the intrinsic cell-cycle manifold and demonstrates robustness to the number of contacts.

Speaker Bio: Dr. Jie Liu received his Ph.D. in computer science from the University of Wisconsin-Madison. He is currently a Moore/Sloan Data Science Postdoctoral Fellow in the Genome Sciences Department and eScience Institute at the University of Washington. His research interests are machine learning and its applications in biomedical informatics.

Hosted by Pengyu Hong

Mon 2/26/18 12 noon, Gerstenzang 124
Molecular Genetics Journal Club
Travis Rogers (Sengupta Lab)
Molecular Determinants of the Regulation of Development and Metabolism by Neuronal eIF2a Phosphorylation in Caenorhabditis elegans
Nikita Alimov (Goode Lab)
Extracellular matrix stiffness and cell contractility control RNA localization to promote cell migration

Tue 2/27/18 12 noon, Gerstenzang 121
M.R. Bauer Colloquium Series
Jeff Holt (Children's Hospital)
Function, Dysfunction and Restoration of Auditory Sensory Transduction Channels
Hosted by Piali Sengupta and Paul Garrity

Tue 2/27/18 2 pm, Goldsmith 317
Thesis Seminar (Graduate Program in Mathematics)
Devin Murray (Brandeis University)
The Morse Boundary and CAT(0) Geometry
Hosted by Prof. Ruth Charney

Tue 2/27/18 4 pm, Abelson 131
Physics Department Colloquium
Philip Nelson (University of Pennsylvania)
The physics, biology, and technology of resonance energy transfer
Resonance energy transfer has become an indispensable experimental tool for single-molecule and single-cell biophysics, and a conceptual tool to understand bioluminescence and photosynthesis. Its physical underpinnings, however, are subtle: It involves a discrete jump of excitation from one molecule to another, and so we regard it as a strongly quantum-mechanical process. And yet its first-order kinetics differ from what many of us were taught about two-state quantum systems; quantum superpositions of the states do not seem to arise; and so on. The key step involves acknowledging quantum decoherence. Ref: P C Nelson, Biophys J in press (2018).

Hosted by Jané Kondev

Wed 2/28/18 12 noon, Volen 101
Computer Science Seminar (Machine Learning / Data Science Faculty Search)
Yingzhen Yang (University of Illinois)
Pursuit of Low-Dimensional Structures for Machine Learning
Abstract: Low-dimensional structures widely exist in high-dimensional data. It is important for machine learning methods to exploit such structures to effectively reveal the nature of high-dimensional data and achieve compelling performance in practice. Representative low-dimensional structures are subspaces and manifolds. In this talk, I will mainly present my research in subspace learning, manifold learning, and methods for learning low-dimensional structures in deep convolutional neural networks. I will first introduce L0-induced sparse subspace clustering (L0-SSC), which has been widely regarded as an important work in the subspace learning literature. In contrast to the assumptions on subspaces and data required by most existing sparse subspace clustering methods, L0-SSC can provably recover arbitrary distinct underlying subspaces in high-dimensional data. I will then introduce our manifold learning methods for sparse graphs and sparse coding, namely the neighborhood regularized sparse graph and support regularized sparse coding. Both are motivated by the principle of manifold learning and encourage locally smooth data representations. In addition, a neural network is designed to approximate the support regularized sparse codes, revealing the benefits and potential of combining manifold learning and deep learning. I will finally present a compact architecture for deep convolutional neural networks named 3D-FilterMaps. 3D-FilterMaps learns low-dimensional structures in the parameter space of convolutional neural networks and enjoys a much smaller parameter space while maintaining performance comparable to the baseline network. The above methods in pursuit of low-dimensional structures have demonstrated superior performance compared to competing methods and are expected to generate broader impact in the future.

Speaker Bio: Yingzhen Yang is a research scientist at Snap Inc. Research. He received his Ph.D. degree from the University of Illinois at Urbana-Champaign in 2016. His research interests lie in statistical machine learning, deep learning and optimization for machine learning. Most of his research work focuses on pursuit of low-dimensional structures in high-dimensional data, such as subspace learning and manifold learning, with applications to computer vision. His work on sparse subspace clustering is among the best paper finalists at European Conference on Computer Vision 2016. He received the Carnegie Institute of Technology Dean's Tuition Fellowship in 2010. He has served as program committee member and reviewer for several leading conferences and journals in the fields of machine learning, computer vision and artificial intelligence, including NIPS, JMLR, CVPR and IJCAI.

Hosted by Pengyu Hong

Wed 2/28/18 12 noon, Gerstenzang 124
Neurobiology Journal Club
Dingbang Ma (Rosbash Lab)
VRILLE Controls PDF Neuropeptide Accumulation and Arborization Rhythms in Small Ventrolateral Neurons to Drive Rhythmic Behavior in Drosophila
Ref: Kusan et al. Curr Biol. 2017 Nov 20;27(22):3442-3453.e4.
Meredith Blankenship (Katz Lab)
Experience-Dependent Plasticity of Odor Representations in the Telencephalon of Zebrafish
Ref: Jacobson et al (2018). Current Biology

Wed 2/28/18 12 noon, Goldsmith 300
Everytopic Seminar
Konstantin Matveev (Brandeis)
Towards a categorification of a projection from an affine to a finite Hecke algebra in type A
Braid groups are omnipresent both in topology and in representation theory. In recent decades it was realized that rich structures arising in the latter give many new constructions of knot invariants. All these constructions involve some form of categorification, i.e. extracting numerical information from higher-level structures in the categories of representations of quantum groups, Soergel bimodules, matrix factorizations and many other settings. We will try to elucidate a small part of this vast program. Work of Bezrukavnikov on the local geometric Langlands correspondence, and works of Gorsky, Neguţ, Rasmussen and Oblomkov, Rozansky on knot homology and matrix factorizations, suggest that there should be a categorical version of a certain natural homomorphism from an affine Hecke algebra to a finite Hecke algebra in type A, sending basis lattice elements on the affine side to Jucys-Murphy elements on the finite side. I will try to explain some of the structures involved and will talk about recent progress towards a construction of such a categorification in the setting of Hecke categories.
Hosted by Profs. Konstantin Matveev and Corey Bregman

Thu 3/1/18 4:30 pm, Goldsmith 317
Joint Mathematics Colloquium
Moon Duchin (Tufts University)
Some mathematically interesting aspects of redistricting
"Gerrymandering" is the careful selection of boundaries in a partition to skew the predominant properties of the pieces--a prominent example is selecting congressional district boundaries to advantage one political party over the other. An emerging scientific consensus tells us that the best way to identify gerrymanders is to study when a districting plan is an outlier in the space of constrained partitions; random walk methods (Markov chain Monte Carlo in particular) are ascendant for searching the space of possible districting plans. This is a very juicy problem for mathematicians, with nontrivial contributions to be made from combinatorics and network theory, geometry, topology, and dynamics. I will overview a mathematization of the redistricting problem and highlight some interesting questions and results from these various fields.
Refreshments at 4pm in Goldsmith 100
Hosted by Dr. An Huang

Fri 3/2/18 12:30 pm, Gerstenzang 123
Molecular and Cell Biology & Neuroscience Student Seminars
Justin Shin (Jadhav Lab)
Rylie Walsh (Rodal Lab)

Fri 3/2/18 3 pm, Volen 201
Cognitive Neuroscience Journal Club
Hosted by Laura Paige

Mon 3/5/18 12 noon, Volen 101
Computer Science Seminar (Machine Learning / Data Science Faculty Search)
Garrett Katz (University of Maryland)
Modeling embodied cognition in robots and humans
Abstract: Embodied cognitive models with biological relevance are significant for numerous application areas, ranging from disaster recovery, assisted living, and unmanned exploration in robotics, to prediction and treatment of cognitive disorders in humans. In this talk I will present two branches of my research in this area. First, I will describe the methodology behind, and empirical evaluation of, a cognitive robotics framework based on cause-effect reasoning and imitation learning. Second, I will present a novel numerical technique for analyzing recurrent neural network dynamics that are relevant to cognitive processing. Finally, I will discuss ongoing and future work to combine these branches, by re-implementing the cognitive framework using recurrent neural dynamics, and I will describe how the resulting system is being used to study the neurological basis of cognitive deficits in patients with post-traumatic stress disorder.

Speaker Bio: Garrett Katz is a postdoctoral researcher in the Department of Computer Science at the University of Maryland, College Park. He received his Ph.D. in Computer Science from UMD in 2017. Before UMD, he received his M.A. in Mathematics from the City College of New York and his B.A. in Philosophy from Cornell University. His research interests include machine learning, artificial intelligence, robotics, and neural computation.

Hosted by Olga Papaemmanouil

Tue 3/6/18 12:30 pm, Gerstenzang 121
M.R. Bauer Colloquium Series
Antonio Giraldez (Yale)
Uncovering new regulatory codes during development: genome activation and post-transcriptional regulation
Hosted by Sebastian Kadener

Wed 3/7/18 12 noon, Volen 101
Computer Science Seminar (Machine Learning / Data Science Faculty Search)
Kwang-Sung Jun (University of Wisconsin-Madison)
Adaptive Machine Learning with Multi-Armed Bandits
Abstract: We are in the middle of the AI revolution with the success of AlphaGo, image classification, and speech recognition. However, this success relies on large amounts of data, which raises numerous challenges for novel tasks since the data is usually not readily available and takes money and time to collect. How can we minimize data collection costs and train models efficiently with insufficient data? In this talk, I will present novel adaptive data collection and learning algorithms arising from the so-called multi-armed bandit framework and show their theoretical guarantees and their effectiveness in real-world applications. Specifically, I will show that my algorithms can quickly recommend personalized products to a novel user in a scalable way via a novel extension of online optimization algorithms. I will also discuss how biological experiments can be performed on a reduced budget by adaptively selecting which experiments to run next.

Speaker Bio: Kwang-Sung Jun is a postdoctoral researcher at the University of Wisconsin-Madison Wisconsin Institute for Discovery, advised by Profs. Robert Nowak, Rebecca Willett, and Stephen Wright. His research focuses on adaptive and interactive machine learning that arises in real-world and interdisciplinary applications. Specifically, he works on multi-armed bandits, online optimization, and cognitive modeling, which has applications in personalized recommendation, adaptive biological experiments, and psychology. He received a Ph.D. in Computer Science from the University of Wisconsin-Madison under the supervision of Prof. Xiaojin (Jerry) Zhu.

Hosted by Jordan Pollack

Fri 3/9/18 12:30 pm, Gerstenzang 123
Molecular and Cell Biology & Neuroscience Student Seminars
Weijin Xu (Rosbash Lab)
Katie Kimbrell (Paradis/Katz Lab)

Tue 3/13/18
Chemistry Department Colloquium
Susan Marqusee (UC Berkeley)
Hosted by K. Schmidt-Rohr & T. Pochapsky

Tue 3/13/18 12:30 pm, Gerstenzang 121
Joint Biology/Neuroscience Colloquium (Life Sciences Distinguished Speaker)
Thomas Park (University of Illinois at Chicago)
Hosted by Amy Lee

Tue 3/13/18 4 pm, Abelson 131
Physics Department Colloquium
Rachel Mandelbaum (Carnegie Mellon University)
Cosmology with the Hyper Suprime-Cam (HSC) survey
Abstract: Hyper Suprime-Cam (HSC) is an imaging camera mounted at the Prime Focus of the Subaru 8.2-m telescope operated by the National Astronomical Observatory of Japan on the summit of Maunakea in Hawaii. A consortium of astronomers from Japan, Taiwan and Princeton University is carrying out a three-layer, 300-night, multiband survey from 2014-2019 with this instrument. In this talk, I will focus on the HSC survey Wide Layer, which will cover 1400 square degrees in five broad bands (grizy), to a 5 sigma point-source depth of r~26. We have covered 240 square degrees of the Wide Layer in all five bands, and the median seeing in the i band is 0.60 arcseconds. This powerful combination of depth and image quality makes the HSC survey unique compared to other ongoing imaging surveys. I will describe the HSC survey dataset and the completed and ongoing science analyses with the Wide Layer, including galaxy studies and strong and weak gravitational lensing, with an emphasis on weak lensing. I will demonstrate the level of systematics control, the potential for competitive cosmology constraints, and some early results, and describe some lessons learned that will be of use for other ongoing and future lensing surveys.

Speaker Bio: Rachel Mandelbaum is an Associate Professor of Physics at Carnegie Mellon University (CMU). Her research focus within the field of observational cosmology is on measurements of structure growth and the halo-galaxy connection with weak gravitational lensing and galaxy clustering. Currently, she serves as co-chair of the weak lensing working group of the Hyper Suprime-Cam (HSC) survey, and as the Analysis Coordinator of the Large Synoptic Survey Telescope (LSST) Dark Energy Science Collaboration (DESC), tasked with overseeing the preparations for dark energy measurements with LSST. Mandelbaum received her PhD in physics from Princeton University, then took a Hubble Fellowship to the Institute for Advanced Study (IAS), followed by a research staff position in the Department of Astrophysical Sciences at Princeton University before moving to CMU in 2012. She has received the Annie Jump Cannon Prize from the American Astronomical Society, an Alfred P. Sloan fellowship, and a Department of Energy (DOE) Early Career Award. In addition to her scientific activities, she engages in several activities aimed at benefiting the broader astronomical community, including serving as a member of the Cosmic Visions Dark Energy group (for the broader DOE dark energy community) and as vice-chair of the Astronomy & Astrophysics Advisory Committee (AAAC).

Hosted by Marcelle Soares-Santos

Wed 3/14/18 1 pm, Volen 201
Safety Training
Andy Finn
Lab Safety Training

Thu 3/15/18 2 pm, Abelson 229
IGERT Speaker
Jonathan Touboul (Brandeis University, Mathematics)
Collective dynamics of random neural networks: complexity, synchronization and insights from random matrix theory
Neurons are electrically excitable cells that collectively process information to respond in a suitable, fast and adaptive manner to stimuli. I will present here a few thoughts and models on effective mathematical descriptions of large-scale neuronal networks and on the role of microscopic network parameters on collective dynamics of large neuronal networks.

Neural networks, with their asymmetric interactions, communication delays and spatial extension, display dynamics vastly distinct from classical models of equilibrium statistical physics. Deriving limits of large-scale networks and investigating their dynamics, I will exhibit in particular a mysterious and somewhat paradoxical result: neural networks may synchronize when noise or disorder exceeds a specific value. Along the same lines, I will come back to a much more classical but equally mysterious transition, exhibited some 30 years ago by Sompolinsky and co-workers, from a fixed-point regime to a chaotic regime as disorder increases. Using random matrix theory, I will show that this transition is related to an exponential explosion of fixed points, and that the complexity happens to be equal to the Lyapunov exponent of the chaotic dynamics, suggesting a possible microscopic explanation for the emergence of chaos in these networks.

If time allows, I will show that neural networks with balanced excitation and inhibition have collective dynamics governed by the real or complex nature of the extreme eigenvalues of the connectivity matrix, drawing on new results we developed on the characterization of real eigenvalues of non-symmetric random matrices.

Hosted by Albion Lawrence

Fri 3/16/18 12:30 pm, Gerstenzang 123
Molecular and Cell Biology & Neuroscience Student Seminars
Munzareen Khan (Sengupta Lab)
Wenbo Tang (Jadhav Lab)

Tue 3/20/18 12:30 pm, Gerstenzang 121
M.R. Bauer Colloquium Series
Shawn Xu (University of Michigan)
Hosted by Piali Sengupta

Tue 3/20/18 4 pm, Abelson 131
Physics Department Colloquium
David Pritchard (Massachusetts Institute of Technology)
How 10 Years of Education Research Revealed My 40 Years of Bad Assumptions
Once upon a time, I thought our final exam measured what students should learn. But further investigations of exactly what students learned and what they learned it from, how much they remembered as seniors, the role of homework copying, the limitations of partial credit grading, and the disparity between what physics teachers want to teach and what their students want to learn have been disquieting. I shall discuss how we can help students learn what they should learn, and describe a new classroom pedagogy that helps students to become more expert. Then I'll describe how education research, development, and online learning might be combined to spread better learning universally.
Hosted by Matthew Headrick

Thu 3/22/18 12 noon, Lurias, Hassenfeld Conference Center
Martin Weiner Lecture Series in Psychology (NIGMS Brain, Body & Behavior Training Grant)
Dr. Richard N. Aslin (Yale University)
Learning and Attention in Infants: The Importance of Prediction in Development
I will review three lines of research from my lab that have implications for the normative course of development and for the diagnosis of deficits or delays in development among special populations. (1) Statistical learning is a rapid form of implicitly extracting information from the environment. It has been shown to be robustly present in infants, children, and adults. Children with Specific Language Impairment and adults with Autism Spectrum Disorder show different patterns of statistical learning. It may, therefore, serve as both a diagnostic tool and as a potential mechanism that underlies some developmental disorders. (2) The allocation of attention to gather information via statistical learning is controlled by both low-level stimulus salience and by predictive mechanisms. Infants allocate their attention to visual and auditory events so that they ignore both overly simple and overly complex information, while focusing mostly on information of medium complexity. Deviations from this normative pattern of allocating attention may contribute to some developmental disorders. (3) The infant brain must make predictions about upcoming stimuli. We have shown using a brain imaging technique called functional near-infrared spectroscopy (fNIRS) that an auditory cue can predict a visual stimulus, and even in the absence of the visual stimulus this prediction will elicit a brain response in the visual cortex. A follow-up study of prematurely born infants revealed that this brain signature of prediction is absent, despite these at-risk infants (tested at their corrected age) showing predictions at the behavioral level.
Hosted by Bob Sekuler

Fri 3/23/18 11:15 am
Biochemistry-Biophysics Friday Lunchtime Pizza Talks
Owen McManus (Q-State Biosciences)
Hosted by Chris Miller

Fri 3/23/18 12:30 pm, Gerstenzang 123
Molecular and Cell Biology & Neuroscience Student Seminars
Brenda Lemos (Haber Lab)
Chenghao Liu (Nelson/Katz Lab)

Mon 3/26/18
Chemistry Department Colloquium
Wenshe Liu (Texas A&M)
Hosted by I. Krauss & L. Hedstrom

Tue 3/27/18 12:30 pm, Gerstenzang 121
M.R. Bauer Colloquium Series
Sarah Ross (University of Pittsburgh)
Hosted by Sue Paradis

Tue 3/27/18 4 pm, Abelson 131
Physics Department Colloquium
Anushya Chandran (Boston University)
Hosted by Matthew Headrick

Thu 3/29/18 12 noon, Lurias, Hassenfeld Conference Center
Psychology Department Colloquium
Dr. Joshua O. Goh (National Taiwan University)
Hosted by Angela Gutchess

Thu 3/29/18 2 pm, Abelson 229
IGERT Speaker
Pathikrit Bhattacharya (Tufts University)
Hosted by Albion Lawrence

Fri 3/30/18 12:30 pm, Gerstenzang 123
Molecular and Cell Biology & Neuroscience Student Seminars
Roshan Nanu (Lisman/Jadhav/Katz Labs)
Raul Ramos (Turrigiano Lab)

415 South Street, Waltham, MA 02453 (781) 736-2000