Catalog Advanced Search

  • Development of an Automated High Throughput CHO Stable Pool Platform for Generating Large Protein Collections

    Contains 1 Component(s) Recorded On: 02/05/2018

    Innovative solutions will be presented, ranging from a new software Dashboard to manage projects and execute processes to a recently developed non-invasive Flask Density Reader and an upgraded harvest and purification system compatible with magnetic beads.

    Recombinant protein expression and purification is a central process in biomedical research, and Chinese hamster ovary (CHO) cells are a primary workhorse for protein production from mammalian cells. GNF has developed a robust suite of software and automated systems to support high throughput CHO (HT-CHO) stable pool establishment, archiving of cell banks and protein purification. Pools are established in 96-well plates, maintained until they are ready for scale up, and then expanded into an AutoFlask™. Once cells reach the desired density, cell bank archives are created and one or more batch production AutoFlasks™ are inoculated, depending on the amount of protein requested. As an example, a single 50 mL culture expressing a human IgG1 antibody typically yields 10 milligrams of protein. Innovative solutions will be presented, ranging from a new software Dashboard to manage projects and execute processes to a recently developed non-invasive Flask Density Reader and an upgraded harvest and purification system compatible with magnetic beads. This platform enables cost-effective, facile production of proteins at quantities and quality useful for early stage drug discovery tasks such as screening, protein engineering and even in vivo studies.

    Paul Anderson

    GNF Systems

    BS Chemical Engineering, UCSB, 2003; MS Biomedical Engineering, Case Western Reserve University, 2005; Senior Automation Engineer, Genomics Institute of the Novartis Research Foundation, 2005-Present

  • Modeling the contribution of common variants to schizophrenia risk

    Contains 1 Component(s) Recorded On: 02/05/2018

    Schizophrenia (SZ) is a debilitating psychiatric disorder for which the complex genetic mechanisms underlying the disease state remain unclear. Whereas highly penetrant variants have proven well-suited to human induced pluripotent stem cell (hiPSC)-based models, the power of hiPSC-based studies to resolve the much smaller effects of common variants within the size of cohorts that can be realistically assembled remains uncertain.

    Schizophrenia (SZ) is a debilitating psychiatric disorder for which the complex genetic mechanisms underlying the disease state remain unclear. Whereas highly penetrant variants have proven well-suited to human induced pluripotent stem cell (hiPSC)-based models, the power of hiPSC-based studies to resolve the much smaller effects of common variants within the size of cohorts that can be realistically assembled remains uncertain. We identified microRNA-9 as having significantly downregulated levels and activity in a subset of SZ hiPSC-derived neural progenitor cells (NPCs), a finding that was corroborated by a larger replication cohort and further validated by an independent gene-set enrichment analysis of the largest SZ genome-wide association study (GWAS) to date. Overall, this demonstrated a remarkable convergence of independent hiPSC- and genetics-based discovery approaches. In developing this larger case/control SZ cohort of hiPSC-derived NPCs and neurons, we identified a variety of sources of variation, but by reducing the stochastic effects of the differentiation process, we observed a significant concordance with two large post mortem datasets. We predict a growing convergence between hiPSC and post mortem studies as both approaches expand to larger cohort sizes. Meanwhile, we have been integrating CRISPR-mediated gene editing, activation and repression technologies with our hiPSC-based neural platform, in order to develop a scalable system for testing the effects of manipulating the growing number of SZ-associated variants and genes in NPCs, neurons and astrocytes. Altogether, our objective is to understand the cell-type specific contributions of SZ risk variants to disease predisposition.

    Kristen Brennand

    ISMMS

    Kristen Brennand, PhD is an Associate Professor of Genetics and Genomics, Neuroscience and Psychiatry at the Icahn School of Medicine at Mount Sinai, in New York, New York. She trained in developmental and stem cell biology at Harvard University and in neurobiology during her postdoctoral training at the Salk Institute for Biological Studies. By combining expertise in stem cell biology and neurobiology, she has pioneered a new approach by which to study psychiatric disease. Dr. Brennand’s work is funded by the National Institutes of Health, the New York Stem Cell Foundation, the Brain Research Foundation and the Brain and Behavior Research Foundation.

  • MALDI-TOF-MS - A label free technology for high throughput screening

    Contains 1 Component(s) Recorded On: 02/05/2018

    In the past, the throughput of MS-based assay technologies was limited, but recent developments in the field of MALDI-TOF-MS devices and spotting technologies have substantially improved the miniaturization and speed of such approaches. The talk will shed light on challenges in this process and present results of this application in high throughput screening projects.

    Mass spectrometry (MS) is an emerging technology for identifying and characterizing molecules that modulate biological targets, offering a label free, direct detection method. This technology enables the application of more physiologically relevant assays and reduces time and costs compared to current classical approaches increasing the efficiency of the drug discovery process.

    In the past, the throughput of MS-based assay technologies was limited, but recent developments in the field of MALDI-TOF-MS devices and spotting technologies have substantially improved the miniaturization and speed of such approaches. However, the application of MALDI depends on a matrix-compatible sample preparation step and is limited to a certain space of analytes. This requires the identification of MALDI-compatible, physiologically relevant assay conditions, as well as the development of fast and reproducible liquid handling procedures. The talk will shed light on challenges in this process and present results of this application in high throughput screening projects.

    Frank Buettner

    Boehringer-Ingelheim Pharma GmbH & Co.KG

    Laboratory Leader

  • SLAS2018 Innovation Award Finalist: Optical tools for single-cell manipulations and sequencing

    Contains 1 Component(s) Recorded On: 02/05/2018

    Here we describe cell labelling via photobleaching (CLaP), a method that enables instant, specific tagging of individual cells based on a phenotypic classification. This technique uses laser irradiation to crosslink biotin on the plasma membrane of living cells, which is then labeled with fluorescent streptavidin conjugates.

    Classical examination of tissue and cellular samples heavily relies on microscopy platforms, where molecular probes and a myriad of contrast agents are routinely used to investigate the molecular biology of cells. Nevertheless, a versatile, efficient and non-invasive approach to tag individual cells chosen upon observation is still lacking.

    Here we describe cell labelling via photobleaching (CLaP), a method that enables instant, specific tagging of individual cells based on a phenotypic classification. This technique uses laser irradiation to crosslink biotin on the plasma membrane of living cells, which is then labeled with fluorescent streptavidin conjugates. Furthermore, the very same instrument used to image cells can tag them based on their morphological characteristics, dynamic behavior and localization within the sample at a given time, or any visible feature that distinguishes particular cells from the ensemble. The incorporated mark is stable, non-toxic, retained for several days, and transferred by cell division but not to adjacent cells in culture. We combined CLaP with microfluidics-based single-cell capture followed by PCR assays and transcriptome-wide next-generation sequencing. We computed a number of quality control metrics to verify that CLaP does not interfere with protocols of sample preparation for transcriptomic experiments. To the best of our knowledge, CLaP is the first simple technology that allows correlating spatial and molecular information visible under a microscope when cells are individually sequenced.

    Santiago Costantino

    University of Montreal

    Santiago Costantino received his PhD in ultrafast lasers from the Physics Department of the University of Buenos Aires in 2003. He moved to Canada for his postdoctoral training in microscopy and neuroscience at McGill University. He established his biophotonics lab at the Maisonneuve-Rosemont Hospital, Montreal University, in 2007. He is now an associate professor and his current research spans microengineering, image analysis and the development of medical tools for vision health.

  • Collaborative Phenotyping at King's College London: HipSci and the Stem Cell Hotel

    Contains 1 Component(s) Recorded On: 02/05/2018

    This presentation will review in particular the characterisation of a large panel of human induced pluripotent stem cells, focusing on the integration of high content imaging data with genomics.

    We work in the framework of the Human Induced Pluripotent Stem Cells Initiative (HipSci) project, funded by the Wellcome Trust and MRC (www.hipsci.org). Here, we will present in particular the characterisation of a large panel of human induced pluripotent stem cells, focusing on the integration of high content imaging data with genomics. Imaging over 100 human iPS cell lines from healthy donors, we have observed evidence for inter-individual variability in cell behaviour. Cells were plated on different concentrations of fibronectin, and phenotypic features describing cell morphology, proliferation and adhesion were obtained by high content imaging as in our previously reported method. Furthermore, we have used dimensionality reduction approaches to understand how different extrinsic (fibronectin concentration), intrinsic (cell line or donor) and technical factors affected variation. With our platform, we have identified specific RNAs associated with intrinsic or extrinsic factors, and single nucleotide variants that account for outlier cell behaviour. We will also mention significant progress in the integration of dynamic imaging data with other datasets. By leveraging the expertise derived from this project, we now provide internal and external scientists with a dedicated laboratory space for collaborative cell phenotyping, to study how intrinsic and extrinsic signals impact on human cells, to develop assays for disease modeling and drug discovery, and to identify new disease mechanisms.
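    The dimensionality-reduction step mentioned above can be sketched with a plain PCA computed via SVD. This is a generic illustration on a synthetic feature matrix, not HipSci data or the authors' actual analysis; the cell and feature counts are arbitrary.

```python
import numpy as np

def pca(features):
    """Project a (cells x features) matrix onto its principal components.

    Returns per-cell scores and the fraction of variance explained per
    component (components come out sorted by decreasing variance).
    """
    X = features - features.mean(axis=0)           # centre each feature
    # SVD of the centred matrix yields the principal axes directly
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt.T                              # coordinates in PC space
    explained = s**2 / np.sum(s**2)                # variance ratio per PC
    return scores, explained

# Synthetic stand-in for per-cell morphology/proliferation/adhesion features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 1] = 3 * X[:, 0] + 0.1 * X[:, 1]              # one correlated feature pair
scores, explained = pca(X)
```

    In a setting like the one described, loadings and scores of such components would then be compared against extrinsic (fibronectin), intrinsic (donor/line) and technical factors.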

    Davide Danovi

    King's College London

    Davide Danovi holds an MD from University of Milan and a PhD in Molecular Oncology from the European Institute of Oncology where he demonstrated the causative role of the HdmX protein in human cancer. He completed his postdoctoral training working with Prof. Austin Smith and Dr. Steve Pollard at the University of Cambridge and at University College London where he developed a screening platform to isolate compounds active on human neural stem cells from normal or brain tumour samples. Prior to his current role, he worked as principal scientist at a novel biotechnology company founded to isolate drugs for regenerative medicine using innovative stem cell technologies.

  • Identification of new negative regulators of ciliogenesis in breast cancer cells through high-throughput siRNA screening

    Contains 1 Component(s) Recorded On: 02/05/2018

    Three-dimensional spheroid assays are considered valid models to recapitulate features of tumors and, combined with new technologies of automated imaging and analysis, will contribute to a better understanding of ciliogenesis and breast cancer and to an important step in anticancer drug research.

    Breast cancer is a major cause of death in women worldwide. The basal subtypes, also recognized as triple negative breast cancers (TNBC), are the most aggressive type and account for the highest mortality rate in patients. Currently, there are no FDA approved targeted therapies for TNBC, and innovative approaches are necessary to develop new therapeutic options. The primary cilium is a membrane-bound, cell surface projection assembled from centrosomes and singularly expressed in the majority of cells in the human body, serving as a cellular 'antenna' in the recognition and transduction of extra-cellular stimuli, such as growth factors. This organelle forms during cellular quiescence and disassembles when cells enter the cell cycle and proliferate. Interestingly, primary cilia are frequently lost in malignant tumors, such as breast tumors. Thus primary cilia may play a repressive role in regulating cell proliferation and could slow breast cancer development. In order to identify negative regulators of ciliogenesis that could represent targets for new drugs, we performed a high content screen using an arrayed library containing pooled siRNAs targeting 23,000 human genes in triplicate on Hs578T cells, a basal B breast cancer cell line which forms cilia at low frequency. Detecting cilia by automated immunofluorescence staining and imaging, we identified 350 candidate genes (~1-2%) that increased the number of ciliated cells. Candidate genes were retested in secondary screens in additional cell lines to distinguish the genes involved in cilia formation common to all cell lines from the ones specific to the (sub)types of (breast) cancer. There is overwhelming evidence that in vitro three-dimensional tumor cell cultures more accurately reflect the complex in vivo microenvironment than simple two-dimensional cell monolayers. In order to test the candidate genes from the 2D cell culture experiments in a tertiary screen and assess their effect on tumor growth, migration and invasion, we grew Hs578T cells in ultra-low attachment (ULA) 96-well round-bottomed plates, where tumor cell suspensions formed a three-dimensional structure within 24 h. Three-dimensional spheroid assays are considered valid models to recapitulate features of tumors and, combined with new technologies of automated imaging and analysis, will contribute to a better understanding of ciliogenesis and breast cancer and to an important step in anticancer drug research.
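    Hit calling of the kind described (350 of ~23,000 genes, roughly 1.5%) is often framed as flagging wells whose ciliated-cell fraction deviates strongly from the plate baseline. Below is a minimal sketch using a median/MAD robust z-score on synthetic plate data; the threshold, well count and spiked values are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def robust_z(values):
    """Median/MAD-based z-scores, less sensitive to screening outliers."""
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    return (values - med) / (1.4826 * mad)   # 1.4826: Gaussian consistency factor

def call_hits(pct_ciliated, threshold=3.0):
    """Flag wells whose ciliated-cell fraction is unusually high."""
    z = robust_z(pct_ciliated)
    return np.where(z > threshold)[0]

# Synthetic plate: most wells near a baseline ciliation rate, a few strong hits
rng = np.random.default_rng(1)
plate = rng.normal(loc=5.0, scale=1.0, size=384)   # % ciliated cells per well
plate[[10, 42, 99]] += 12.0                        # spiked "hit" wells
hits = call_hits(plate)
```

    The robust (median/MAD) statistics matter here because a handful of genuine hits would otherwise inflate a plain standard deviation and mask themselves.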

    Marion Failler

    NYU Perlmutter Cancer Center

    Since my Pharmacy studies, I have wanted to work in basic research. I did an internship in the Neuropharmacology Center of the Pharmacy University of Milan, where I learned basic proteomic research (Mallei A, et al., 2014). During my Master’s degree, I was in charge of the validation of a small scale siRNA screen on ciliogenesis. During my Ph.D., I focused on the characterization of two new Nephronophthisis candidate genes (Failler M et al., 2014). I used high resolution imaging (SIM and STED microscopy) and participated in setting up this imaging platform at our institution (Alby C et al., 2015). I now wish to continue investigating the role of ciliary dysfunction in cancer. Under the supervision of my mentor, I performed a high-throughput siRNA screen in breast cancer cell lines and identified candidate genes that allow cilia growth in these cells.

  • Integrating high resolution mass spectrometry with cheminformatics for standardized, routine non-targeted metabolomics

    Contains 1 Component(s) Recorded On: 02/05/2018

    Over the past 20 years, metabolomics has evolved into using either multi-targeted assays, usually with nominal mass resolution spectrometers, or non-targeted approaches with high resolution mass spectrometry. We will show how to merge targeted approaches with high quality non-targeted discovery metabolomics.

    Over the past 20 years, metabolomics has evolved into using either multi-targeted assays, usually with nominal mass resolution spectrometers, or non-targeted approaches with high resolution mass spectrometry. We will show how to merge targeted approaches with high quality non-targeted discovery metabolomics. We will highlight the importance of advanced, open access data processing, the proper use of quality controls and internal standards, and full reporting of raw data as well as result data. At the NIH West Coast Metabolomics Center, we use 17 mass spectrometers in the central facility for providing data, informatics services and collaborative research for over 400 projects and more than 25,000 samples per year. These services include commercial assays for plasma analytics, the p180 kit, in addition to steroid, bile acid and oxylipin assays for more than 100 target compounds. Most projects, however, use our three integrated non-targeted metabolomics assays: primary metabolism for up to 200 identified compounds per study using GC-TOF MS, complex lipids for more than 600 identified lipids per study using high resolution liquid chromatography / tandem mass spectrometry, and more than 150 identified compounds per study for biogenic amines using hydrophilic interaction chromatography / high resolution mass spectrometry.

    We use standardized data processing in the free-access MS-DIAL 2.0 software, which is far superior to standard solutions with respect to data deconvolution, compound identification and false positive/false negative peak detection. This software is now integrated with MS-FINDER 2.0 software for predicting and annotating spectra of biomarkers with unknown chemical structures. Both programs work excellently for high resolution GC-MS and LC-MS data. In addition, we harness the power of legacy data from more than 2,000 projects we have acquired since 2004, available to the biomedical and biological research community at large through the BinVestigate interface to our BinBase metabolome database. We showcase how the integrated use of these resources identified novel epimetabolites in cancer metabolism, both on a prospective cohort scale (in lung cancer) and as new epitranscriptome metabolites from modified RNA molecules (in a range of cancers except for liver cancer).

    Oliver Fiehn

    UC Davis, NIH West Coast Metabolomics Center

    Prof. Oliver Fiehn has pioneered developments and applications in metabolomics with over 220 publications to date. He aims at understanding metabolism on a comprehensive level. In order to leverage data from these diverse sets of biological systems, his research laboratory focuses on standardizing metabolomic reports and establishing metabolomic databases, for example the MassBank of North America that hosts over 200,000 public metabolite mass spectra and BinBase, a resource of over 100,000 samples covering more than 2,000 studies. He develops and implements new approaches and technologies in analytical chemistry for covering the metabolome, from increasing peak capacity by ion mobility to compound identifications through cheminformatics workflows and software. He collaborates with a range of investigators in human diseases through statistics, text mining and pathway-based mapping. He studies fundamental biochemical questions from metabolite damage repair to the new concept of epimetabolites.

  • Supervising the Unsupervised: Maximizing Biological Impact in Cellular Imaging

    Contains 1 Component(s) Recorded On: 02/05/2018

    Avoiding “black box” algorithms, instead favouring those which could be interrogated by biological and data scientists alike, led to faster and more relevant analysis cycles, and helped cement a “marriage” between statistical significance and biological relevance. Here, we discuss the analytical methodologies invoked to achieve this.

    The exciting challenge of imaging data is the sheer number of options to recognize and retrieve meaningful content; while some turn to the ever-growing algorithmic tool-shed of machine learning, others utilize a priori knowledge of the biology at hand to arrive at the answer. With a balance between these two paramount, we implemented a hybrid workflow to re-analyse compound data in a phenotypic COPD screen. Allowing biological subject matter expertise to guide data-driven decisions, and vice-versa, we used a combination of knowledge-based, supervised, and unsupervised methods to de-convolute patient-derived macrophages into patient-specific subpopulations. At this level of granularity, we could discern previously masked effects of compounds on healthy and diseased cells, both in their physical properties and population makeup. These differences proved to be key when understanding the underlying phenotypic changes. Avoiding “black box” algorithms, instead favouring those which could be interrogated by biological and data scientists alike, led to faster and more relevant analysis cycles, and helped cement a “marriage” between statistical significance and biological relevance. Here, we discuss the analytical methodologies invoked to achieve this.
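    The unsupervised part of a hybrid workflow like the one above can be illustrated, in miniature, by clustering cells into subpopulations in a feature space. The synthetic data, the plain k-means choice, and the two-subpopulation setup are assumptions for this sketch, not GSK's actual methodology.

```python
import numpy as np

def kmeans(X, k=2, iters=50, seed=0):
    """Plain k-means: partition cells into k subpopulations by their features."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each cell to its nearest centre
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each centre from its current members
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two synthetic "macrophage subpopulations" in a 2-feature space
rng = np.random.default_rng(2)
pop_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
pop_b = rng.normal(loc=[4.0, 4.0], scale=0.5, size=(100, 2))
X = np.vstack([pop_a, pop_b])
labels, centers = kmeans(X, k=2)
```

    In practice, one would then compare population makeup per patient and per compound across the resulting labels, which is where the previously masked compound effects described above become visible. The interpretability of such a simple method, as opposed to a black box, is exactly what the talk argues for.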

    Finnian Firth

    GlaxoSmithKline

    Finnian Firth holds a degree in mathematics and computational biology from Cambridge University and started at GSK in October 2016.

  • Developing and Implementing a Scientific Data Strategy for Pharma

    Contains 1 Component(s) Recorded On: 02/05/2018

    As the use of predictive modeling, analytics and machine learning increases to address the challenges of declining R&D productivity and increasing pressures for demonstrating product value, a cohesive scientific data strategy and scalable approaches are required to handle the ever-increasing variety of data types, data sources, data models and analytics patterns.

    The discovery research paradigm requires integration of a broad range of human biology data and knowledge in order to generate and explore diverse hypotheses. Scientists often spend a significant amount of their time and resources in analytics and informatics projects trying to find, access, understand, curate and integrate data. While scientific information is generally managed effectively for its primary use, it often lacks the accessibility and context that facilitate secondary use and cross-functional integration on demand. As a result, much of the research informatics effort across the pharmaceutical industry is focused on creating single point solutions to these challenges within a particular problem space or functional area. As the use of predictive modeling, analytics and machine learning increases to address the challenges of declining R&D productivity and increasing pressures for demonstrating product value, a cohesive scientific data strategy and scalable approaches are required to handle the ever-increasing variety of data types, data sources, data models and analytics patterns. It also calls for a reevaluation of data access rules, accountability, and data stewardship culture to realize business strategic goals while managing risk.

    Nicole Glazer

    Merck

    Nicole is currently a director in Merck's Scientific Information Management organization. She is an epidemiologist by training and began her career in academia conducting large-scale observational research studies before joining Merck. She now leads the Scientific Data Development team at Merck, responsible for defining and executing a data strategy to improve the utility of Merck’s scientific information across the company’s drug development pipeline through data-centric, analytics-focused solutions.

  • Novel Graphene Field Effect Biosensing Technology for Binding Kinetics

    Contains 1 Component(s) Recorded On: 02/05/2018

    We introduce a breakthrough electrical label-free biosensor that provides a new approach to measuring binding kinetics. This approach uses a label-free technique called Field Effect Biosensing (FEB) to measure biomolecular interactions.

    We introduce a breakthrough electrical label-free biosensor that provides a new approach to measuring binding kinetics. This approach uses a label-free technique called Field Effect Biosensing (FEB) to measure biomolecular interactions. Field effect biosensors use a semiconducting material to monitor changes in binding potential of biomolecules such as proteins, nucleotides, peptides, and small molecules conjugated to the semiconductor surface. Practical use of this technology for biology requires a biocompatible semiconductor such as graphene. Graphene is a 2-dimensional sheet of sp2 hybridized carbon that is well known for its excellent electrical conductivity, high surface area, and unique biocompatibility. Basic electronic devices using graphene were first demonstrated in 2004; this work won the Nobel Prize in 2010. In nanotechnology labs, graphene biosensors have pushed existing limits of detection for label-free sensors and have shown the ability to measure a large range of biochemical interactions, from detecting DNA SNPs to small molecules binding to GPCRs.

    We will present our architecture and implementation of graphene-based FEB biosensors for label-free kinetics. In our architecture, FEB measures the current through a graphene biosensor with targets conjugated to the surface and used as a functional, active-biology gate dielectric. Any interaction or binding that occurs with the target causes a change in conductance that is monitored in real time. We will also present data from our recently published research demonstrating sensitivity into the pM range for inflammation markers (IL-6) and Zika viral antigen (ZIKV NS1). High precision measurements of protein kinetics captured using this technology, commercially available as the Agile R100, are comparable to both ELISA and standard label-free biomolecule characterization tools. Specifically, we show an improvement in signal-to-noise and in lower limit of detection. These results demonstrate that graphene-based platforms are highly attractive biological sensors for next generation kinetics characterization.
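    Real-time label-free traces like those described are commonly analyzed with a 1:1 interaction model to extract kinetics. A minimal sketch under that assumption, with illustrative rate constants and a noiseless synthetic trace (not the Agile R100's fitting code or data):

```python
import numpy as np

# Assumed 1:1 binding model: during the association phase,
#   signal(t) = R_eq * (1 - exp(-k_obs * t)),  with  k_obs = k_on*[A] + k_off

def association(t, r_eq, kobs):
    """Sensor response during the association phase of a 1:1 interaction."""
    return r_eq * (1.0 - np.exp(-kobs * t))

def estimate_kobs(t, signal, r_eq):
    """Recover k_obs from a clean trace via a log-linear fit through the origin."""
    y = np.log(1.0 - signal / r_eq)     # equals -k_obs * t for an ideal trace
    return -np.sum(t * y) / np.sum(t * t)

t = np.linspace(0.01, 60.0, 200)                  # time points, seconds
k_on, k_off, conc = 1e5, 1e-3, 1e-9               # illustrative values (1/M/s, 1/s, M)
true_kobs = k_on * conc + k_off                   # observed association rate, 1/s
trace = association(t, r_eq=100.0, kobs=true_kobs)
est = estimate_kobs(t, trace, r_eq=100.0)
```

    Repeating such a fit across analyte concentrations separates k_on from k_off, which is how kinetic constants are conventionally reported for label-free biosensors.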

    Brett Goldsmith

    Nanomedical Diagnostics

    Late Night with LRIG