Genome Editing: A Practical Guide to Research and Clinical Applications
Ebook · 512 pages · 8 hours

About this ebook

Genome Editing: A Practical Guide to Research and Clinical Applications is geared towards investigators interested in learning how to use CRISPR-Cas9-based technologies, with a focus on cardiovascular research and clinical applications. Covering a range of topics from the basics of genome editing to design considerations, to assessments and applications, this reference allows readers to get started and establish a full workflow from the beginning of the project to its full completion. With worked examples drawn from real-life experiments, as well as troubleshooting and pitfalls to avoid, the book serves as an essential reference for researchers and investigators in both cardiovascular and biomedical research.
  • Helps readers become familiar with the variety of genome-editing approaches being applied in cardiovascular research and medicine, covering both research applications and clinical applications
  • Builds understanding of genome editing through worked examples (based on real-life experiments) in which CRISPR-Cas9 is employed, online tools to design CRISPR-Cas9 reagents, methods to interpret data from genome-editing experiments, and the downsides of genome-editing technology, including both the scientific and ethical pitfalls to avoid
  • Written in an easy-to-follow manner, guiding readers from the design of the project to its completion
  • Includes unpublished and new methods
Language: English
Release date: Mar 5, 2021
ISBN: 9780128236475
Author

Kiran Musunuru

Kiran Musunuru, MD, PhD, MPH, ML received his medical degree from Weill Cornell Medical College, his PhD from The Rockefeller University, and his Master of Public Health from Johns Hopkins Bloomberg School of Public Health. He trained in Internal Medicine at Brigham and Women’s Hospital and Cardiovascular Medicine at Johns Hopkins Hospital, followed by postdoctoral work at Massachusetts General Hospital and the Broad Institute of MIT and Harvard. Dr Musunuru’s research focuses on the genetics of cardiovascular and metabolic diseases and seeks to identify naturally occurring genetic variants that predispose to or protect against disease and can be used to develop therapies to protect the entire population. His expertise includes the use of human pluripotent stem cells as a platform for disease modelling and the use of genome-editing tools such as CRISPR-Cas9 for research and therapeutic applications. In 2016, he received the Presidential Early Career Award for Scientists and Engineers from U.S. President Obama, as well as the American Heart Association’s Award of Meritorious Achievement. He became Editor-in-Chief of Circulation: Genomic and Precision Medicine (an American Heart Association journal) in 2018.




    Preface

    The inspiration for Genome Editing: A Practical Guide to Research and Clinical Applications was a series of interactive flipped classroom boot camps I began organizing at scientific conferences in 2016, not long after CRISPR-Cas9 genome editing had burst onto the scene as a transformative technology in biomedical research. The concept was to have attendees watch short didactic videos about genome editing prior to the conference and then sit at round tables and work in small groups through online exercises—practical examples of the use of genome editing tools—with guidance from instructors circulating around the room, followed by a group discussion of the solutions to the exercises. Because of the excitement about genome editing, the rooms were always full, with participants ranging from young trainees to senior principal investigators who wanted to find out about this groundbreaking new technique they had heard so much about. The boot camp format proved to be very popular, and feedback from the participants was invariably positive, but there were numerous requests to cover X, Y, and Z applications of genome editing in future boot camps. Keeping pace with the breathtaking progress in the genome editing field, the boot camp expanded and then fissioned into two boot camps, a basic version and an advanced version, several hours in duration. Even that was not enough—at the end of each boot camp, a long line of questioners would come up to me to ask for advice about applying genome editing tools to their own projects.

    This book has emerged from an effort to codify all the materials presented in the boot camps—the didactics, the exercises, and the explanations—as well as all the additional tips and tricks I have to offer in a readable and accessible format to serve the needs of researchers just getting started with genome editing and other editing approaches. This book also draws from my experience directing and teaching in an annual graduate-level course in gene therapy and genome editing at the University of Pennsylvania, where so many innovations in these fields have emerged. Keeping up with the genome editing field has been a daunting task—I have had to revise the text up until the very last moment to cover important new advances being reported on what seems like a monthly or even weekly basis—and I have no illusions that when finally published the book will be fully up-to-date. But that is okay, because the point of the book is not to cover the very latest in genome editing technology. Rather, it is to impart the principles of genome editing, the process of designing experiments that take the fullest advantage of the available tools. Even though it has been almost a decade since CRISPR-Cas9 was first used for the genome editing of mammalian cells, the same exact tools developed back then are still very much in use today and can often get the job done perfectly well.

    As a final note, many of the practical examples in this book are drawn from work in which I was directly involved or with which I am deeply familiar. There is no better way for me to be able to explain and to dissect both the strengths and the shortcomings of various genome editing applications, having experienced or witnessed them firsthand. As such, reflecting my own background as a cardiologist and a cardiovascular investigator, the examples tend to involve cardiovascular and metabolic research questions. Despite this predilection, this book serves as a general guide on how to do genome editing, with the lessons broadly applicable across all fields of biomedical research. Anybody with a basic biomedical research background should be able to get a lot out of the book. Knocking out a gene in the heart is not all that different from knocking out a gene in any other organ, after all. And because genome editing promises to increasingly touch people’s lives, there is some discussion of history and of ethics to round out the learning experience.

    Happy editing!

    Kiran Musunuru

    Chapter 1

    A brief history and primer on genome editing

    Abstract

    As is the case with most scientific advances, the development of genome editing as a technology involved painstaking work by numerous talented researchers. There are too many to fairly acknowledge in the brief history recounted in this chapter, but it is important to appreciate that all of these researchers, collectively, were responsible for the genome editing revolution.

    Keywords

    Genome editing; homologous recombination; knockout mice; homology-directed repair; non-homologous end joining; zinc-finger nuclease; transcription-activator like effector nuclease; TALEN; clustered regularly interspaced short palindromic repeats; CRISPR

    As is the case with most scientific advances, the development of genome editing as a technology involved painstaking work by numerous talented researchers. There are too many to fairly acknowledge in the brief history recounted in this chapter, but it is important to appreciate that all of these researchers, collectively, were responsible for the genome editing revolution.

    Homologous recombination

    The first successful attempts at practical genome editing involved an approach that, once developed, would prevail for decades and even now remains in broad use in the biomedical community—homologous recombination. This approach was pioneered in mouse embryonic stem cells, which are pluripotent and have the dual capacities to reconstitute a full-grown living mouse and, in principle, to be differentiated into any individual somatic cell type in vitro. Drs. Martin Evans and Matthew Kaufman at the University of Cambridge and, independently, Dr. Gail Martin at the University of California, San Francisco, were the first to establish mouse embryonic stem cell lines, reported in 1981 (Evans and Kaufman, 1981; Martin, 1981). This entailed the creation of mouse embryos via in vitro fertilization, isolation of pluripotent cells from the embryos, and culturing of the pluripotent cells into clonal cell lines. Over the next few years, Evans and his colleagues demonstrated that they could incorporate mouse embryonic stem cells into separately derived, intact mouse embryos, creating chimeric mice in which the stem cells contributed to all of the different cell types in the adult body (Bradley et al., 1984). They then used viruses to integrate foreign DNA into the genome in mouse embryonic stem cells, permanently modifying those cells. Upon addition of the modified stem cells into mouse embryos, the foreign DNA was present in many different cell types in the resulting chimeric mice, including germ cells that transmitted the foreign DNA to the next generation of mice (Robertson et al., 1986). This accomplishment opened the door to making genetically modified mice, i.e., knockout and knock-in mice—if a means for controllably modifying the mouse’s own genes could be devised.

    In parallel, during the 1980s, Dr. Mario Capecchi at the University of Utah and Dr. Oliver Smithies at the University of Wisconsin–Madison were independently working to modify genes directly in mouse embryonic stem cells (Doetschman et al., 1987; Thomas and Capecchi, 1987). They each had worked out the process known as homologous recombination, in which a stretch of DNA sequence in the genome in a cell could be replaced with a synthetic version of the same DNA sequence in which a desired alteration had been made (Smithies et al., 1985; Thomas et al., 1986). The synthetic version was made in the laboratory and introduced into cells as a double-stranded DNA vector. As long as there was enough matching between the endogenous DNA sequence and the synthetic DNA sequence—within two homology arms flanking the altered sequence—a spontaneous swapping could occur, tricking the cell into incorporating the alteration into its genome. This process proved to be extremely inefficient, with only about one in a thousand cells receiving the DNA vector being correctly modified.

    The inefficiency of homologous recombination centered on the necessity for an initiating event to start the process—it was not enough to simply have the juxtaposition of closely matching endogenous and synthetic DNA sequences. This initiating event was a double-strand break in the chromosome within the endogenous DNA sequence (Szostak et al., 1983). Only then would homologous recombination and incorporation of the desired alteration ensue. In Capecchi’s and Smithies’ works—and all subsequent attempts at homologous recombination with endogenous genes in mammalian cells for the next 15 years—the process relied on spontaneous double-strand breaks, as there was no way to deliberately induce the breaks. The low likelihood of the necessary double-strand break occurring in any given cell accounted for the rarity of correctly modified cells.

    Capecchi’s group devised a strategy to improve the yield of homologous recombination: the use of an antibiotic drug that ordinarily was lethal to mouse embryonic stem cells. They placed an antibiotic resistance gene in the synthetic DNA sequence. Any cell that correctly swapped in the synthetic DNA sequence would now survive exposure to the antibiotic. Capecchi’s group also selected against incorrect incorporation of the entire DNA vector into a random site in the genome by including a sensitivity gene for a second antibiotic in the backbone of the vector, separate from the region flanked by the homology arms that was intended for recombination. By using this dual positive-negative selection strategy, the investigators were able to increase the efficiency of gene targeting by up to three orders of magnitude (Mansour et al., 1988). Even with modern genome-editing tools, antibiotic selection strategies are still useful and sometimes necessary to achieve desired genetic alterations.
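    The arithmetic behind the dual-selection enrichment can be sketched in a few lines. The numbers below are illustrative assumptions (only the roughly one-in-a-thousand recombination rate comes from the text; the random-integration rate and negative-selection efficiency are hypothetical), but they show how positive plus negative selection can approach the reported three orders of magnitude.

```python
# Back-of-envelope sketch of dual positive-negative selection.
# Assumed (hypothetical) numbers except the ~1/1000 recombination rate.
cells_with_vector = 1_000_000
p_targeted = 1 / 1000          # correct homologous recombination (from text)
p_random   = 1 / 50            # random whole-vector integration (assumed)

targeted   = cells_with_vector * p_targeted
random_int = cells_with_vector * p_random

# Positive selection: only cells with an integrated resistance gene survive.
survivors_pos = targeted + random_int
# Negative selection: random integrants retain the backbone sensitivity gene
# and die under the second drug; assume it kills 99% of them (hypothetical).
survivors_both = targeted + random_int * 0.01

frac_before = targeted / cells_with_vector
frac_after  = targeted / survivors_both
print(f"targeted fraction: {frac_before:.4%} -> {frac_after:.1%}")
print(f"enrichment: {frac_after / frac_before:.0f}x")
```

With these assumed rates, the targeted fraction rises from 0.1% of vector-receiving cells to roughly 83% of double-selection survivors, an enrichment on the order of the up-to-1000-fold reported by Mansour et al.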

    By the late 1980s, the work of Evans, Capecchi, and Smithies had converged to make it possible to generate knockout mice and knock-in mice, with specific alterations in endogenous genes. Investigators could now carry out an extended sequence of steps to make gene-modified mice. They could grow mouse embryonic stem cells in the laboratory; use homologous recombination to inactivate a gene; introduce a specific mutation into a gene, or correct a gene mutation in a cell’s genome; grow a gene-modified stem cell into a large number of cells; add some of those cells into mouse embryos; use the embryos to establish pregnancies; and await the birth of mice in which some (but not all) cells had the gene modification. The resulting mice would be chimeras. It would be hoped that some of the germ cells in the chimeras were derived from the added stem cells and would therefore carry the gene modification. If so, breeding of the chimeras with other mice might yield some offspring that carried the gene modification in all of their cells—knockout or knock-in mice.

    In 1989, four different groups of researchers were the first to use the methods pioneered by Evans, Capecchi, and Smithies to generate gene-modified mice (Koller et al., 1989; Thomas and Capecchi, 1990; Thompson et al., 1989; Zijlstra et al., 1989). Many more successful demonstrations soon followed. While the process is time-intensive—it typically takes one to two years from start to finish—thousands of different gene-modified mice have been made, revolutionizing biomedical research. If a specific gene mutation has been linked to a disease in humans, it is often possible to make knock-in mice with the analogous mutation. If those mice have a disease similar to the human disease, then investigators can intensively study the disease in the mice in a way that is not feasible or ethical in humans. Investigators can test experimental therapies in the gene-modified mice to obtain important information about the effectiveness and safety of the therapies before planning any clinical trials involving human patients.

    Despite the enormous value of genetically modified mice for biomedical research, they are often limited in their ability to faithfully replicate human physiology and disease. This is particularly the case for cardiovascular biology: mice have substantially different lipoprotein metabolism from humans, and they do not naturally develop coronary atherosclerosis. Some human genes operate quite differently from their mouse orthologs or, in some cases, lack mouse orthologs altogether. In these circumstances, ideally investigators would be able to leverage model systems that better reflect the human condition, whether they be larger animals that are more similar to humans or whether they be human cells maintained in culture. Unfortunately, the refined methodologies developed over the course of decades for the routine manipulation of the mouse genome—centered on well-characterized and well-behaved mouse embryonic stem cell lines—are not so readily applied to other species. Nor have human cells proven to be as amenable to spontaneous homologous recombination as mouse embryonic stem cells, especially human somatic cell types.

    A solution to this problem has emerged through the development of successive generations of genome-editing tools. With the limiting step of genetic modification via homologous recombination being the occurrence of a double-strand break at the right place at the right time, these tools largely achieve the same purpose—allowing investigators to controllably introduce double-strand breaks at desired locations in the genome. Hereafter follows a description of the origins and applications of the most commonly used genome-editing technologies: zinc-finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated (Cas) systems.

    Zinc-finger nucleases (ZFNs)

    Arguably the first step towards modern genome editing came when Sir Aaron Klug at the Medical Research Council Laboratory of Molecular Biology in Cambridge, England, published the first description of zinc fingers in 1985 (Miller et al., 1985). His group had been studying the protein transcription factor IIIA (TFIIIA) in Xenopus laevis oocytes. TFIIIA was known to bind to a transcription control element in the gene coding for 5S RNA. The investigators determined that the protein carried about 7 to 11 zinc ions required to maintain its structure. With proteolytic digestion, they found that TFIIIA contained multiple small domains about 3 kilodaltons in size, and in studying the protein’s amino acid sequence, they determined that it harbored nine repeats of a roughly 30-amino-acid unit, each with two conserved cysteine residues and two conserved histidine residues. They therefore proposed a model in which each unit folded into a finger with a zinc ion at its base, coordinated with the cysteine and histidine residues in a tetrahedral arrangement. They speculated that each of these fingers recognized and bound to a DNA sequence, in this case, within the transcription control element of the 5S RNA gene, although they held out the possibility that similar zinc fingers might be found in many different DNA-binding proteins with distinct DNA ligands. This soon proved to be the case, with zinc fingers identified in numerous protein families widely distributed across eukaryotic species.

    Confirmation of Klug’s model came in 1991 with the publication by Drs. Carl Pabo and Nikola Pavletich at Johns Hopkins University School of Medicine of an X-ray crystallographic structure of three zinc fingers from the mouse Zif268 protein bound to a DNA ligand (Pavletich and Pabo, 1991). They found that each zinc finger recognized a specific sequence of three DNA base pairs, with key contacts occurring across one of the DNA strands in the double helix. The array of three consecutive zinc fingers recognized a full nine DNA base pairs in a row, with the protein domains arranged in the opposite orientation to the DNA sequence, i.e., antiparallel binding. Residues in three key positions in a given zinc finger appeared to confer its DNA specificity. In light of this observation, the investigators held out the possibility that it might be possible to identify existing zinc fingers or create new zinc fingers that could recognize any of the 64 possible sequences of three consecutive DNA base pairs. If so, one might be able to mix and match zinc fingers to create a synthetic protein that could attach to any desired DNA sequence of any desired length, as long as it was a multiple of three base pairs. In 1994, after moving to the Massachusetts Institute of Technology, Pabo reported a phage display technique that allowed his team to alter Zif268 zinc fingers so that they could bind and recognize different three-base-pair sequences—an important step toward building the full library of zinc fingers for all 64 sequences (Rebar and Pabo, 1994; Rebar et al., 1996).
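    The mix-and-match arithmetic can be sketched in a few lines. Because binding is antiparallel, the N-terminal finger contacts the 3'-most triplet, so an array's 5'-to-3' target reads as the finger list in reverse. The specific finger-to-triplet assignments below are hypothetical, chosen only for illustration.

```python
def zf_target(fingers):
    """DNA target (5'->3') of a zinc-finger array.

    Binding is antiparallel: the N-terminal finger contacts the 3'-most
    triplet, so the 5'->3' target is the finger list in reverse order."""
    return "".join(reversed(fingers))

# Hypothetical three-finger array; triplets listed N- to C-terminus.
fingers = ["GAA", "TGG", "GCG"]
print(zf_target(fingers))   # GCGTGGGAA -- 9 bp, one triplet per finger

# 64 possible triplets per position -> 64**3 candidate three-finger arrays.
print(64 ** 3)              # 262144
```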

    Contemporaneously, several other investigators, including Klug, reported similar techniques to build new zinc fingers. Klug’s group used their technique to build a protein with three zinc fingers that together recognized a specific nine-base-pair sequence in the BCRABL fusion oncogene (Choo et al., 1994). Upon introducing the engineered protein into mouse cells carrying the oncogene, they found that the protein silenced the transcription of the gene and inhibited the unregulated growth of the cells. Several years later, the team of Dr. Carlos F. Barbas III at the Scripps Research Institute was able to assemble a protein with six zinc fingers that together recognized an 18-base-pair sequence in the 5' untranslated region of the erbB-2/HER-2 proto-oncogene; upon fusion of the zinc fingers to a transcriptional activator domain or repressor domain, they were able to controllably modulate gene expression (Beerli et al., 1998).

    At around the same time, Dr. Srinivasan Chandrasegaran at Johns Hopkins University was studying the bacterial restriction enzyme Fok I, which had the property of recognizing a particular DNA sequence and producing a double-strand break at the site. In 1992, Chandrasegaran’s group reported that different domains of the Fok I enzyme were responsible for the DNA sequence recognition versus the DNA cleavage (Li et al., 1992). They reasoned that they could take advantage of this modularity by linking the Fok I cleavage domain to different DNA-binding domains. Aware of the ongoing work with zinc fingers, they set out to create a totally new kind of restriction enzyme by fusing three zinc fingers to the Fok I cleavage domain. They made two such proteins, each recognizing a different DNA sequence, and they found that each of the proteins could cut synthetic versions of its target DNA sequence in vitro. They reported their work on these two proteins—the very first zinc-finger nucleases (ZFNs)—four years later (Kim et al., 1996).

    In parallel with these scientific developments, other investigators were investigating how cells repair double-strand breaks in DNA. Dr. Maria Jasin at the Sloan Kettering Institute in New York City had been working with the yeast enzyme I-SceI, which recognizes a specific 18-base-pair DNA sequence and creates a double-strand break in that sequence. Jasin’s team inserted this specific DNA sequence into the genome of mouse cells. In what arguably represented the first instance of deliberate mammalian genome editing, they were then able to introduce the I-SceI enzyme into the modified cells and cause double-strand breaks in the target sequence. The idea was to add a synthetic DNA matching the sequence around the I-SceI target site, with the hope of causing homologous recombination in some of the cells. In 1994, they reported that when they used I-SceI and the synthetic DNA together, there was a more than 100-fold increase in the rate of homologous recombination compared to the use of the synthetic DNA alone (Rouet et al., 1994). This showed the power of using a DNA-cleaving enzyme to introduce a double-strand break to kick-start homologous recombination, rather than hoping the necessary double-strand break would fortuitously happen on its own. This repair mode for intentional double-strand DNA breaks resulting from genome editing is commonly referred to as homology-directed repair (HDR) (Fig. 1.1).

    Figure 1.1 Cellular repair mechanisms for double-strand breaks and their outcomes. Upon generation of a double-strand break, potential repair mechanisms include canonical non-homologous end joining (NHEJ), microhomology-mediated end joining (MMEJ), and homology-directed repair (HDR), which have different editing outcomes.

    Yet Jasin’s group saw that a second repair mode was simultaneously occurring in the mouse cells targeted with I-SceI. In some cases, the two free ends of DNA generated by the double-strand break were being rejoined in a non-homologous fashion. This entailed either the simple ligation of the two free ends—known as canonical non-homologous end joining (NHEJ)—or end-joining mechanisms involving short sequence homologies that aligned the free ends—known as microhomology-mediated end joining (MMEJ) or alternative non-homologous end joining (Alt-NHEJ) (Fig. 1.1).

    Compared to HDR, NHEJ is a relatively crude repair mechanism, since there is no involvement of any DNA sequence as a repair template. Nonetheless, the original pre-cleavage sequence is usually restored. Occasionally an error occurs with NHEJ, typically a small insertion or deletion (indel mutation) of one or a few base pairs. In contrast, MMEJ often leads to larger deletions driven by homologies of several base pairs surrounding the site of the double-strand break. By its nature, MMEJ is not a random process but a semi-random process, producing stereotyped errors during genome editing.

    The 1994 report from Jasin’s group provided evidence of both NHEJ and MMEJ at work at the I-SceI cleavage site in the genome (Rouet et al., 1994). There were deletions as small as one base pair and as large as 52 base pairs. In one case, there was an insertion of one extra base pair into the sequence. The smallest indels did not appear to be driven by homologies and thus were suggestive of NHEJ. Larger deletions were clearly defined by microhomologies—examples included a nine-base-pair deletion coinciding with GGGA sequences flanking the cleavage site and exactly nine base pairs apart, and a 22-base-pair deletion coinciding with homologous GTAC sequences.
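    Microhomology-driven deletions of this kind are predictable enough to enumerate. The sketch below (a hypothetical helper on a toy sequence, not from the text) scans for repeated k-mers flanking a cut site and reports the deletion that MMEJ annealing at each repeat pair would produce; mirroring the Rouet et al. observation, two GGGA repeats with starts exactly 9 bp apart yield a 9-bp deletion.

```python
def mmej_deletions(seq, cut, k=4, window=15):
    """Enumerate deletions that microhomology-mediated end joining (MMEJ)
    could produce: a k-bp repeat on each side of the cut lets the resected
    ends anneal, deleting one repeat copy plus the intervening sequence.
    Returns (motif, deletion_size, deleted_sequence) tuples."""
    left = range(max(0, cut - window), cut - k + 1)
    right = range(cut, min(len(seq) - k, cut + window) + 1)
    hits = []
    for i in left:
        for j in right:
            if j > i and seq[i:i + k] == seq[j:j + k]:
                deleted = seq[i + k:j + k]   # one repeat copy survives
                hits.append((seq[i:i + k], j - i, deleted))
    return hits

# Toy locus: GGGA repeats whose starts are 9 bp apart, straddling a
# double-strand break between them.
seq = "AACTAGGGACCCCCGGGATTGCA"
for motif, delsize, deleted in mmej_deletions(seq, cut=11):
    print(motif, delsize, deleted)   # GGGA 9 CCCCCGGGA
```

The deletion size equals the distance between the repeat starts, which is why MMEJ errors are stereotyped rather than fully random.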

    Jasin’s team showed that HDR, NHEJ, and MMEJ were all being used to heal DNA breaks in the genome in the I-SceI-treated cells—competing, side-by-side. This phenomenon foreshadowed one of the principal challenges of genome editing—the difficulty in controlling the repair outcome after the deliberate introduction of a DNA double-strand break with a genome-editing tool. If the goal is to simply mutagenize a genomic site, with no particular preference for the exact mutations that result, then NHEJ and MMEJ are well suited to the task. No synthetic DNA repair template is necessary, since a template is required only for HDR. But if the goal is to produce a precise change at the genomic site, it typically requires HDR and the obligate repair template bearing the desired change. Unfortunately, there is no straightforward way to turn off NHEJ and MMEJ so as to predominantly favor HDR. There will be some proportion of cells that will acquire semi-random indel mutations at the target site rather than the desired change—possibly a very large proportion of cells accruing unwanted changes. Even worse, HDR is only active in cells during late S phase and G2 phase, when there is double the usual DNA content and, thus, a redundant copy of every genomic sequence on a sister chromatid that can serve as a natural template for accurate HDR repair of a double-strand break. Since late S phase and G2 phase typically occur only in proliferating cells, precise HDR outcomes are challenging to achieve in non-proliferating cells.

    The convergence of three different lines of work—the discovery of zinc fingers and the creation of zinc finger libraries, the fusing of zinc fingers with the DNA cleavage domain of the Fok I enzyme, and an understanding of the outcomes of the cell’s attempts to repair double-strand breaks in genomic DNA—ushered in the modern era of genome editing. Dr. Dana Carroll at the University of Utah was the first to achieve the editing of an endogenous gene in the genome of cells, reported in 2002. These cells were within living Drosophila fruit flies at the larval stage, and the targeted gene, yellow, influences body color (Bibikova et al., 2002). Carroll’s group used custom-made ZFNs to make double-strand DNA breaks in the gene, which were then repaired by NHEJ (and MMEJ, but for the purposes of discussion, I will hereafter collectively refer to canonical NHEJ and MMEJ as NHEJ). As expected, the resulting indels in yellow in some of the flies were semi-random in size, but they all prevented the gene from working properly and changed the body color.

    In 2003, Carroll’s team followed up on their work with a similar study—also in Drosophila, targeting the yellow gene—that demonstrated HDR when a synthetic DNA matching the gene’s sequence was introduced into the cells (Bibikova et al., 2003). Simultaneously, Drs. Matthew Porteus and David Baltimore at the California Institute of Technology reported genome editing with ZFNs in human cells, though they targeted a foreign sequence that had been inserted into the human genome, rather than targeting an endogenous gene (Porteus and Baltimore, 2003). ZFN editing of an endogenous gene in human cells was not reported until 2005, achieved by a team at Sangamo BioSciences (now Sangamo Therapeutics), a company founded a decade earlier with the goal of developing zinc finger technology for commercial and therapeutic uses. The Sangamo team used HDR to introduce a precise change in IL2RG, mutations of which cause X-linked severe combined immune deficiency, at up to 20% efficiency in human cells (Urnov et al., 2005). (It was this paper that introduced the term genome editing into the scientific lexicon.) Having developed a versatile and proprietary ZFN platform, Sangamo teamed up with Sigma-Aldrich Corporation to provide researchers with custom-made ZFNs that could target any desired DNA sequence in just six to eight weeks, at a price of US$25,000 to US$35,000.

    A group of academic laboratories responded by forming the Zinc Finger Consortium. Led by Dr. J. Keith Joung at Massachusetts General Hospital and including Dr. Daniel Voytas at Iowa State University and Porteus, among others, the Zinc Finger Consortium published a platform that allowed rapid production and testing of custom-made ZFNs (Wright et al., 2006). The platform was freely available to any interested academic researchers. The hope was to make the assembly of zinc fingers highly modular and, as a result, very straightforward and inexpensive. But the Consortium ran into a major problem. It turned out that simply taking a series of zinc fingers that recognized consecutive three-base-pair DNA sequences and fusing them together usually did not yield working ZFNs. ZFN assembly was not so modular, after all, and there appeared to be a poorly understood context dependence that determined whether or not adjacent zinc fingers could act as a unit to effectively bind an extended DNA sequence.

    The solution was to develop whole libraries of zinc fingers that could bind to any given three-base-pair DNA sequence. This provided flexibility in mixing and matching zinc fingers and allowed for testing of a huge number of combinations of zinc fingers, with large-scale screening to empirically determine which one worked best to bind an extended DNA sequence. In 2008, the Zinc Finger Consortium published a new library-based, combinatorial platform called OPEN that allowed academic researchers to screen through thousands of possible ZFNs and find the ones with the most activity (Maeder et al., 2008). Soon thereafter, Joung, Porteus, and Dr. Linzhao Cheng at Johns Hopkins University School of Medicine used the OPEN platform to create ZFNs that improved the efficiency of homologous recombination in human embryonic stem cells and human induced pluripotent stem cells by 200-fold to 1400-fold (Zou et al., 2009), a notable achievement given that these cell types had previously proven resistant to genetic modification. It opened the door to disease modeling with human cells. Shortly thereafter, working with Sangamo scientists, Dr. Rudolf Jaenisch at Whitehead Institute for Biomedical Research published a similar study documenting the ability of ZFNs to efficiently modify genomic loci in human stem cells (Hockemeyer et al., 2009).

    By this time, ZFNs were typically used as pairs, with one protein positioned to recognize a sequence on the sense strand, the other protein a sequence on the antisense strand (Fig. 1.2). In order to achieve a full double-strand break, the Fok I cleavage domains on the two ZFNs needed to assemble into a dimer. This required the ZFNs to be appropriately positioned in tandem along the double helix, flanking the intended target site with optimal spacing between them, such that the Fok I cleavage domains could properly dimerize and cut both DNA strands at the target site. Because of the nature of the cleavage domains, the post-cleavage free ends of DNA each have four-base-pair 5' overhangs, a property that influences the outcomes of attempted genome editing with ZFNs (e.g., the range of indels that result from
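    The paired-site geometry described here can be checked mechanically. The helper below is a hypothetical sketch (the function, the binding sites, and the 5-7 bp spacer range are illustrative assumptions, not a real design tool): it looks for the left ZFN's 9-bp site on the sense strand and the right ZFN's site on the antisense strand, separated by a spacer short enough for the Fok I domains to dimerize.

```python
def revcomp(s):
    """Reverse complement of a DNA string."""
    return s.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_zfn_pair(seq, left_site, right_site, spacer=(5, 7)):
    """Find layouts where a ZFN pair could dimerize: left_site on the
    sense strand and right_site on the antisense strand (so its reverse
    complement appears on the sense strand), with an allowed spacer
    length between. Returns (left_start, spacer_len, right_start) tuples."""
    hits = []
    i = seq.find(left_site)
    if i == -1:
        return hits
    for gap in range(spacer[0], spacer[1] + 1):
        j = i + len(left_site) + gap
        if seq[j:j + len(right_site)] == revcomp(right_site):
            hits.append((i, gap, j))
    return hits

# Hypothetical 9-bp binding sites flanking a 6-bp spacer.
seq = "AAAAGACGCTGCTAATTCGGGACATGCCAAAA"
print(find_zfn_pair(seq, "GACGCTGCT", "GGCATGTCC"))   # [(4, 6, 19)]
```

Cleavage within the spacer leaves the four-base-pair 5' overhangs mentioned above, which is part of what shapes the resulting indel spectrum.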
