Photonics in Switching

About this ebook

Photonics in Switching provides a broad, balanced overview of the use of optics or photonics in switching, from materials and devices to system architecture. The chapters, each written by an expert in the field, survey the key technologies, setting them in context and highlighting their benefits and possible applications. This book is a valuable resource for those working in the communications industry, at either the professional or student level, who do not have extensive background knowledge of the underlying physics of the technology.
Language: English
Release date: Dec 2, 2012
ISBN: 9780080924748

    Photonics in Switching - John E. Midwinter

    Preface

    The staggering success of optical fiber as a new information transmission medium has led many to assume that optics or photonics could be similarly successful in telecommunications switching. This two-volume book is an attempt to bring together many of the ideas that have been proposed in the field and set them into some general context. What emerges is that photonics is in no sense a simple replacement technology for electronics in switching and that any attempt to use photonics in routing requires a cool look at what is good and bad about optics.

    In general, one finds that the technologies of electronics and optics have orthogonal properties, the direct result of the fact that electrons interact strongly and photons do not. This basic fact makes the implementation of optical logic difficult and perhaps self-defeating. However, the attempt to use optoelectronic devices, such as SEEDs, in which photons interact with electrons that in turn provide the logical processing and return the results to the optical domain, provides an interesting area of study. Other areas of study involve the attempt to use the massive bandwidth or spectrum-space in optical transmission media to ease or solve routing problems; studies of WDM techniques, for example, look very promising. Then, at the apparently mundane level, we find that optical interconnects within otherwise electronic switching processors are a growing application for photonics in switching.

    Consequently, we have deliberately chosen the title of this book to be Photonics in Switching rather than Photonic Switching since we believe there is no realistic possibility of optics entirely replacing electronics in switching as it has effectively done in transmission. Having disposed of that block to thinking and discussion, we can consider quite openly and freely what advantages photonics can offer and how they can be exploited. Within the constraints inevitably imposed by a multi-author volume, we have attempted to do this, and we hope that the reader will find this book to be a useful compendium of recent results and ideas as well as a source of new ideas.

    The book is aimed at the professional engineer working in the communications industry who wants to know more about photonics in switching. Vast amounts of background knowledge should not be required, because this book is not so much a treatise on the underlying physics of the technology as a discussion of today’s experimental reality. We hope that its publication will bring a little nearer the day when optical technology plays a full role in routing as well as transmission.

    Chapter 1

    Why Consider Photonics in Switching?

    John E. Midwinter, University College London, England

    1.1 Introduction

    No one can doubt that there is great interest today in the possibility of using photonics technology in information switching or routing systems. To confirm this claim, one only has to look at conferences on optical communications or photonic switching or look in the learned journals to discover a wealth of evidence showing that many workers in R&D facilities the world over are studying this subject. To find out how this has come about, we need to look back in time about 10 years and trace the way ideas and problems have developed in a number of different fields of activity to establish where this new subject has emerged from and why.

    Broadly speaking, we must examine trends in at least five different fields: optical communications, telecommunications networks and services, telecommunications switching, high-complexity and high-speed electronics, and, last but not least, optical computing, as shown in Fig. 1.1. We will examine how each has contributed in turn to the development of photonics technology.

    Fig. 1.1 The ingredients behind photonics in switching.

    1.2 The Background Ingredients

    1.2.1 Optical Communications

    The role of optical communication is probably, for most people, the most obvious. By the start of the 1980s, optical fiber was emerging as the supreme transmission medium for applications as diverse as intercity and transoceanic communication as well as metropolitan and local area networks (MANs and LANs). The combination of very low signal distortion and attenuation, small size, light weight, and ruggedness, coupled with low cost, swept other cable media out of contention in one application after another, a trend that continues to this day. Photonics had arrived in telecommunications and, in so doing, had hastened another revolution that was already brewing, namely, the digital revolution, with the introduction of fully digital transmission and switching networks carrying services in digital form from sender to receiver. But optical communications did something else. It provided long-haul transmission bandwidth at rapidly falling prices, in the process encouraging thoughts of higher-bandwidth services than the standard ISDN ones, perhaps using switched transmission blocks of 2 Mbit/s or higher.

    Despite this early excitement, the real potential for extending the capacity of existing optical transmission systems is only now being recognized as optical amplifiers become a reality. One begins to sense that the day may not be far off when regenerators on existing fiber cable routes are replaced by simple optical amplifiers, yielding, in place of 140 or 560 Mbit/s circuits, data-transparent pipes between cities. This promises to allow every installed cable to carry 10 or 100 times the traffic it was originally designed for by making extensive use of dense wavelength-division multiplexing, in which multiple-wavelength sources and receivers are attached to each fiber. The result of this revolution will be a massive expansion in transmission capacity at very low marginal cost. Perhaps more significantly, for the first time this will be accomplished by a means other than higher levels of time multiplexing (i.e., higher bit rates), because the terminal electronics increasingly provides the limitation. This limitation is now driving the next phase of transmission development, but it also seems likely to force major changes elsewhere in the total network. The staggering rate of progress of optical communications is summarized in Figs. 1.2 and 1.3. Figure 1.2 shows approximate figures for the achieved values (often in the laboratory) of repeater spacing versus bit rate for the various technological advances of the last decade.

    Fig. 1.2 Typical unrepeatered range for fiber systems versus bit rate as a result of the major technological developments during the 1980s.

    Fig. 1.3 The approximate attenuation of 50 km of single-mode fiber superimposed on the gain spectrum of a single EDFA. Note that a zero-attenuation window of about 6000-GHz spectral width is obtained, about 20 times the whole radio and microwave spectrum.

    To appreciate the astounding implication of repeaterless transmission systems using erbium-doped fiber amplifiers (EDFAs) to replace repeaters on inland routes, one has only to examine Fig. 1.3, which shows the zero-attenuation spectral window achieved by combining an amplifier with about 50 km of fiber. The time frame within which the most exciting of these developments will become engineered reality is hard to estimate, but it seems likely to be not more than about five years from the time of this writing, i.e., around 1998.
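
    The 6000-GHz figure of Fig. 1.3 is easy to sanity-check. A minimal sketch follows, assuming the window is roughly 48 nm wide and centered near 1550 nm (both values are assumptions chosen to match the figure, not numbers taken from the text); the conversion is delta_f = c * delta_lambda / lambda^2.

        # Back-of-the-envelope check of the ~6000-GHz window in Fig. 1.3.
        # The 1550-nm center and ~48-nm width are illustrative assumptions.
        C = 3.0e8  # speed of light, m/s

        def window_in_hz(center_nm, width_nm):
            """Convert a wavelength window (nm) to a frequency window (Hz)."""
            lam = center_nm * 1e-9
            return C * (width_nm * 1e-9) / lam**2

        print(f"{window_in_hz(1550.0, 48.0) / 1e9:.0f} GHz")  # ~5994 GHz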

    In parallel with this, we have seen subjects such as planar integrated optics develop to demonstrate, first, efficient single electrically controlled 2 × 2 crosspoints and, later, complex arrays of such devices, up to about 16 × 16. Most recently, large numbers of such arrays have been used in the laboratory to demonstrate 128 × 128 nonblocking guided-single-mode switch arrays. Other optical technologies, such as arrays of electrically controlled semiconductor laser amplifiers, have also been used to produce crosspoint arrays. These all provide space switches whose setup time is likely to be slow by comparison with the bit interval of the traffic they carry, since many of them can pass transmission bandwidths that are massive by any electrical standard.

    An alternative approach to routing, with no real analogue in the electrical domain, is to use the wavelength dimension to make multiterminal networks in which routing is carried out by associating each source or destination with a unique or assignable wavelength. Such networks act as powerful switches in their own right. In particular, this technology seems more readily scaled to larger dimensions than the guided-wave space switches, which run into increasing problems of insertion loss and crosstalk as their size increases. However, it would be fair to say that the jury is still out on this issue.
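
    To make the scaling concern concrete, the rough sketch below estimates how insertion loss and worst-case crosstalk might accumulate in a space switch built from 2 × 2 guided-wave crosspoints arranged, for illustration, as a Benes network (a rearrangeably nonblocking arrangement; the text does not specify an architecture). The 1-dB loss and -20-dB crosstalk per stage are illustrative assumptions, not measured figures from the text.

        import math

        # Rough scaling sketch for a guided-wave space switch built from
        # 2 x 2 crosspoints arranged as a Benes network.  The per-stage
        # loss and crosstalk figures are illustrative assumptions.
        LOSS_PER_STAGE_DB = 1.0
        XTALK_PER_STAGE_DB = -20.0

        def benes_stages(n):
            """A Benes network on n ports has 2*log2(n) - 1 stages of 2x2 elements."""
            return 2 * int(math.log2(n)) - 1

        for n in (16, 64, 128, 256):
            stages = benes_stages(n)
            loss = stages * LOSS_PER_STAGE_DB
            # First-order worst case: crosstalk contributions add in power.
            xtalk = XTALK_PER_STAGE_DB + 10 * math.log10(stages)
            print(f"{n:>4} ports: {stages:>2} stages, ~{loss:.0f} dB loss, "
                  f"~{xtalk:.0f} dB worst-case crosstalk")

    Even with these generous per-stage figures, the loss and crosstalk budgets tighten quickly as the port count grows, which is the scaling problem the wavelength-routed approach is claimed to sidestep.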

    In each case, the optical community has pointed to the massive data throughput that such subsystems could offer (e.g., the data bandwidth available per optical carrier or channel) but has generally been less clear about precisely how this might fit into a real network and how it would be controlled. Scaling any of these proposed switches to 256 × 256 dimension, for example, represents a major feat of optoelectronic engineering and, to this author’s knowledge, has not actually been achieved yet.

    There has also been much publicity given recently to some impressive demonstrations of the technology proposed originally for optical (digital) computers (see Section 1.2.4) as the basis for complex, synchronously clocked, largely optical digital switches, but, again, it remains to be seen whether they can cross the potential barrier presented to any new technology and establish themselves as viable, cost-effective solutions in a real application. This technology is designed to exploit the ability of relatively simple imaging systems (lenses, holograms, etc.) to handle in parallel, through free space, very large numbers of discrete channels (e.g., imaging an array of 30 × 30 spots or discrete data channels is not particularly difficult).

    All of these ingredients have encouraged interest in photonics in switching, but it is probably, above all else, the overwhelming success of optical communications that stimulated the desire to study photonics in switching more closely. However, without a need it would not have happened with such speed, and it is for this reason that we place telecommunications networks and services second in our discussion.

    1.2.2 Telecommunications Networks and Services

    The digital revolution started with the planned introduction of the Integrated Services Digital Network (ISDN) and its basic building block, the 2B + D channel of 144 kbit/s (= 2 × 64 + 16 kbit/s) per termination, which in turn generated tremendous pressure for new digital transmission media and switching systems. Initially, this development was based on electronics and electrical transmission media (coaxial cable or microwave relay) and could be implemented without photonics. However, as optical fiber became more pervasive, the cost of long-distance digital services fell, and this in turn led to rapid growth in demand, which started to place pressure on the switching fabrics.

    In parallel with this, the sheer flexibility and power of digital (electronic) technology encouraged the development and deployment of new services, such as automatic call redirection, automatic long-distance routing for the lowest cost through competing networks, etc., all of which dramatically increased the control complexity just as the underlying data flows were also increasing rapidly. Moreover, whilst the past had been dominated by POTs (plain old telephones!), the availability of cheap transmission bandwidth encouraged planners to envisage a much greater mix of services in the future, such as videophone, super facsimile, entertainment, and high-definition TV, so generating the Broadband ISDN concept (B-ISDN) much favored in Europe and Japan. The greater flexibility of digital systems also encouraged the belief that in the future, services such as telex and packet data should be carried on a single network rather than on separate overlays within the overall network. Some of these developments are summarized in Fig. 1.4, which also draws attention to the astonishing spread in message size as well as potential bit rate for new services.

    Fig. 1.4 Plot of typical data rates versus call connection time for a variety of different services now used or being discussed, showing the astonishingly large spread of message length!

    These developments in networks and services stimulated a great debate during the 1980s on how to mix such apparently diverse traffic on a single network to achieve the greatest flexibility. The objective was to move toward a single telecommunications service outlet socket on the wall into which any service could be plugged and instantly operate with any other in the network, regardless of the bandwidth or data format required (subject, of course, to the restriction that a suitable terminal be present at the other end). In addition to bandwidth, this objective implies much intelligence, because the communicating terminals must establish each other’s nature and preferred language before conversing!

    A major step toward this objective was the establishment of the SONET (Synchronous Optical NETwork) protocol in the United States and a closely related one, STM (Synchronous Transfer Mode), within CCITT. Both envisage a time-divided transmission highway rather similar to those used in the time-division multiplex (TDM) regime, but with some fundamental differences in how the available bandwidth is allocated. In the simplest form, the underlying approach of both is to assume that all services will be packetized, so a user wishing to send a message, whether it is speech, videophone, or whatever, will break the message into standard-size packets, each with an electronic address attached, and post them at a rate appropriate to the demands of the service. Thus one would envisage a videophone posting packets at a greater rate than an ordinary phone, but both at a relatively steady rate, while a computer would more likely converse with another machine through bursts of packets interspersed with relatively long periods of thinking.
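
    The posting rates involved are easy to estimate. The sketch below assumes the 48-byte packet payload of the ATM cell described a little later in this section; the 64-kbit/s voice and 2-Mbit/s videophone rates are assumptions chosen only to make the comparison concrete.

        # Illustrative packet "posting rates" for steady services, assuming
        # a 48-byte payload per packet (the ATM cell described below).  The
        # service bit rates are illustrative assumptions.
        PAYLOAD_BITS = 48 * 8

        def packets_per_second(service_bit_rate):
            return service_bit_rate / PAYLOAD_BITS

        for name, rate in [("voice, 64 kbit/s", 64e3),
                           ("videophone, 2 Mbit/s", 2e6)]:
            print(f"{name}: {packets_per_second(rate):.0f} packets/s")
        # voice: ~167 packets/s; videophone: ~5208 packets/s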

    Recognizing this development brings out a major new element in the problems faced by the designers of switching systems. Not only will the raw data rates emerging from optical fibers escalate continually, but the nature of the traffic they carry will most probably change as well. The standard TDM service allocated time slots (bytes) on a wideband highway in a regularly repeated fashion that varied only slowly. The control electronics thus had good warning that two parties wanted to converse: it allocated the appropriate time slot 8000 times per second throughout the route, set up the control data to connect the parties at the nodes, and only then established the connection, which remained in place for the duration of the conversation, typically several minutes. In the new regime, the switch takes each arriving packet (time slot) on its own, reads the destination code, and only then starts to work out where it should go and how. The result is a massive increase in the control load per Mbit/s of traffic just as the number of Mbit/s is also escalating rapidly. Taken together, these developments lead to major new problems for switching system designers and the technology they must use.

    Before leaving this subject, a few numbers may be of value to the reader who is unfamiliar with telecommunications traffic statistics. As we stated above, a digital ISDN channel is composed of two blocks of 64 kbit/s plus one of 16 kbit/s. The 64 kbit/s channel is composed of 8000 samples/second, each of 1 byte (8 bits) made up of 7 data bits plus 1 control bit. These channels are then time multiplexed together to form a hierarchy of multiplex levels. We list these for Europe in Table I. In North America and Japan the details are different but follow the same general pattern.

    Table I

    The European Time-Division Multiplex Hierarchy

        Level   Bit rate (Mbit/s)   Voice channels
        1       2.048               30
        2       8.448               120
        3       34.368              480
        4       139.264             1920
        5       564.992             7680
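
    The channel arithmetic above is simple enough to verify directly; a minimal sketch (all numbers are from the text):

        # Verifying the ISDN channel arithmetic quoted above.
        SAMPLES_PER_S = 8000
        BITS_PER_SAMPLE = 8                    # 7 data bits + 1 control bit

        b_channel = SAMPLES_PER_S * BITS_PER_SAMPLE   # bit/s
        basic_rate = 2 * b_channel + 16000            # 2B + D
        print(b_channel, "bit/s per B channel")            # 64000
        print(basic_rate / 1000, "kbit/s per termination") # 144.0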

    The ATM packet is defined as 53 bytes, composed of 48 bytes of data and a 5-byte header that contains the destination and sender addresses. The SONET protocol proposes that the transmission be assembled using frames that can be envisaged as two-dimensional assemblies of message and control information. The STS-1 frame, for example, contains nine rows, each of 90 bytes, of which 87 bytes in each row are available for messages. A data rate of 51.84 Mbit/s is generated by 8000 such frames per second, while the next level up (STS-3) corresponds to three times that rate, 155.52 Mbit/s. Higher levels are also envisaged, reaching up to multi-gigabit-per-second rates. This is discussed in greater detail in Section 3.8 of Chapter 3.
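
    A minimal sketch checking the frame arithmetic quoted above (all numbers are from the text):

        # Checking the SONET frame arithmetic: an STS-1 frame is 9 rows of
        # 90 bytes, sent 8000 times per second; 87 bytes per row carry
        # messages.
        ROWS, COLS, MSG_COLS, FRAMES_PER_S = 9, 90, 87, 8000

        sts1 = ROWS * COLS * 8 * FRAMES_PER_S          # bit/s
        print(f"STS-1: {sts1 / 1e6:.2f} Mbit/s")       # 51.84
        print(f"STS-3: {3 * sts1 / 1e6:.2f} Mbit/s")   # 155.52
        payload = ROWS * MSG_COLS * 8 * FRAMES_PER_S
        print(f"STS-1 message capacity: {payload / 1e6:.2f} Mbit/s")  # 50.11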

    1.2.3 Telecommunications Switching

    The interest of switching system designers stems largely from the realization that they face this rapid growth in both message rate and control complexity, and from the evident success of photonics in revolutionizing transmission. It is much less obvious to the writer that it stems from any demonstrated capability on the part of photonics to actually solve these problems; perhaps it comes from a blind faith that if optics is so good for transmission, it must be able to do something for switching. The challenge is to identify what! Perhaps the outcome will be that both fields will change.

    To an outsider, the (linear) projections for the complexity of future big switching systems and the scale of the expected software problems leave one with the overwhelming feeling that linear extrapolation is clearly not the way forward. Evidence is now growing to show that the slow realization of what optics can realistically do, both good and bad, may lead to a major reappraisal of how networks as a whole are constructed and operated. What the switching community has begun to do is to bring home to the optical R&D workers the sheer complexity of the problems they are currently dealing with, as outlined in greater detail in the next chapter, and this has bred a certain modesty among the switching opticians that is often less evident among their brothers in computing where similar meetings between today’s electronic experts and the optics community seem less frequent!

    1.2.4 Optical Computing

    The field of optical computing can be traced back to the early 1960s or even earlier when the discovery of the laser generated massive interest on the part of the military in the possibility of exploiting the Fourier transform relationships inherent in coherent optical imaging systems to carry out operations such as convolution and correlation. It was quickly shown that these operations could be carried out with great speed on data presented in imaging optical format to a bulk optical system. Such processors were inherently analog in operation and as such always ran into difficulty maintaining sufficient dynamic range and signal-to-noise (S/N) ratio, limiting their usefulness. History seems to show that despite many very impressive demonstrations, silicon digital electronic processing has almost always been selected for the final production equipment. Nevertheless, high levels of interest continue to this day and it is undeniably true that such specialized machines can achieve very high equivalent digital computation rates.

    The interest was further boosted in the early 1980s by the use of nonlinear optical devices to demonstrate the basic digital processing functions of AND, OR, NAND, NOR, etc., again in imaging optical format, whereby a single lens could, in principle, image a very large number of parallel channels from a 2-D array of devices. This led to claims that future high-speed computers would exploit massively parallel digital-optical processing to achieve speeds far in excess of what is possible using electronics. A number of major R&D programs followed that were aimed at exploiting this opportunity. Such claims are based on simple assumptions of potential digital throughput, but conveniently ignore the huge practical problems inherent in their implementation. For example, virtually all optical logic devices discussed in the literature employ threshold logic, implying extremely tight control of the optical power level throughout a complex multichannel system, since it is the analog optical signal level that signifies the digital state. One can reasonably question whether this has any likelihood of realization, given that unexpected 3-dB insertion losses are very easily gained in complex systems of mirrors, holograms, lenses, etc. However, it should be pointed out that the symmetric-SEED approach developed at AT&T Bell Labs, and discussed elsewhere, does provide an elegant solution to this problem by using a dual-rail optical signaling system. Other serious objections center on the fact that the best optical logic devices turn out to be optically triggered electronic devices, and large ones at that, since light has to get into them and optical wavelengths are quite large by electronic standards. Other simple issues conveniently overlooked by many center on the practical problems of assembling high-resolution imaging optical systems: the shallow depth of field, the precision implied in the lenses (in terms of focal length), and the mechanical tolerances and large dimensions of assembled systems. However, perhaps one should say that none of these drawbacks proves it cannot be done, but only increases the difficulty and, as some of the results discussed later in this book will show, truly impressive experimental systems based on free-space optics can certainly be assembled in the laboratory.
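
    The following toy sketch illustrates why dual-rail signaling eases the threshold problem; it shows the encoding idea only, not the SEED physics. The digital state is carried by the contrast between two beams, so a loss common to both rails leaves the decision unchanged, whereas a single-rail threshold detector is easily fooled.

        # Toy illustration of dual-rail signaling.  Power levels are in
        # arbitrary units; the 3-dB loss stands in for the "unexpected"
        # insertion loss mentioned above.
        def decide_single_rail(power, threshold=0.5):
            return 1 if power > threshold else 0

        def decide_dual_rail(p_true, p_false):
            return 1 if p_true > p_false else 0

        p_true, p_false = 0.8, 0.2   # a logical 1 on the dual-rail pair
        loss = 0.5                   # an unexpected 3-dB insertion loss

        print(decide_single_rail(0.8 * loss))   # 0 -- misread after the loss
        print(decide_dual_rail(p_true * loss, p_false * loss))  # 1 -- still correct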

    One other embarrassing problem facing the designer of the all-optical synchronous switch is that, for most practical purposes, there is no such thing as optical memory in a form that might serve as a FIFO buffer. Archival memories are at an advanced stage of development based on optically written and read disks, as developments of the well-known CD-ROM-type system indicate, but these do not seem appropriate. The alternative is optical fiber used as a delay line memory (5 ns/m delay) and perhaps switched with 2 × 2 couplers in a binary length sequence to establish variable delay. However, while the latter might be used for some purposes, such as approximate alignment of SONET frames at the entrance to an optical switch, it is more difficult to see it replacing electronic buffers for a more complex switching situation. The question then immediately arises, if one has gone electronic for one’s buffer memory, why not just stay electronic!
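
    To give a feel for the numbers, the sketch below sizes a fiber delay-line buffer using the 5 ns/m figure quoted above. The 155.52-Mbit/s line rate and 53-byte cell are borrowed from the SONET/ATM discussion earlier; using them as buffer parameters here is purely an illustrative assumption.

        # Sizing a fiber delay-line buffer at 5 ns/m (figure quoted above).
        DELAY_NS_PER_M = 5.0

        def fiber_length_m(bits, bit_rate):
            """Fiber length needed to hold `bits` in flight at `bit_rate` (bit/s)."""
            return (bits / bit_rate) * 1e9 / DELAY_NS_PER_M

        cell = 53 * 8                            # one ATM cell, in bits
        unit = fiber_length_m(cell, 155.52e6)
        print(f"{unit:.0f} m of fiber per cell")  # ~545 m

        # A binary length sequence (L, 2L, 4L, ...) switched by 2 x 2
        # couplers gives any delay from 0 to (2**k - 1) cell slots.
        k = 4
        print(f"{k} stages span 0..{(2**k - 1) * unit:.0f} m "
              f"({2**k - 1} cell slots)")

    The hundreds of meters of fiber implied per buffered cell make plain why such delay lines suit coarse tasks like frame alignment rather than general-purpose buffering.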

    Turning to more commercial issues, one should also note that unlike the field of photonic switching and the telecommunications industry, the computing companies have shown remarkably little interest in optical computing and the field has been left almost exclusively to defense-funded research workers in universities and defense contractors whose major interest is research rather than building computers. For whatever reason, there seems to have been much less hard-headed discussion and evaluation of the subject compared to photonics in switching, but despite this, it has undoubtedly provided more input to our subject. After all, if photonics could build a world-beating digital computer, then surely it must also be capable of building a world-beating digital switch since the technologies have so much in common. Unfortunately, it may be that photonics cannot build such a computer in the absence of electronics, so we should tread with care.

    What is abundantly clear is that the debate stimulated by digital all-optical computing studies has focused much attention on the shortcomings of both optical and electronic logic and, at the same time, has highlighted the free-space and guided-wave interconnect capability of optics. Moreover, the great interest in free-space imaging interconnect inherent in the optical computing studies has stimulated intense interest in the potential for integrating free-space imaging optical pin-out capability between arrays of optoelectronic I/O elements. But as with any such new development, the packaging problems inherent in its implementation look formidable and remain largely unsolved at present. Beyond that, many possibilities remain to be quantified fully but, above all else, the major switching problems remain unresolved.

    1.2.5 Advanced Electronics

    Finally, we come to the undisputed king of information processing, electronics. No one with any sensory perception can fail to recognize that it is electronic, not optical, technology that dominates information processing today, so any serious proposal to replace electronics with an alternative must take a very long, hard look at what electronics has to offer. This is usually a daunting experience (to an optician) but not entirely unrewarding. For example, it rapidly becomes clear that while electronics is tremendously impressive in the processing role, it does have severe communication limitations. Essentially all logical information processing today relies on the interaction of electrons, whether in silicon electronics or in nonlinear optical materials, since, for practical purposes, photons do not interact. Hence, in a very fundamental sense, electrons are custom designed for interaction (logic) while photons are ideal for information transmission. The best optical logic gates are almost all electronic in operation and are simply optically triggered electronic devices. However, when information transmission is considered, electrons rapidly show their less helpful side. Charge storage in capacitors, charge interactions (crosstalk), and charge loss or attenuation (resistance), not to mention such mundane issues as the real estate requirements for laying down conductors, all serve to constrain the electronic circuit designer. Photons may offer a way around these problems, although they offer it at a price.

    Electronic bandwidth is also very finite and shrinks into insignificance when set alongside the optical carrier frequency of order 10¹⁵ Hz, so that optics offers a new dimension to the communication engineer: almost unlimited spectrum. The challenges that emerge from a serious look at today’s problems are more subtle than simply deciding how to build a super-optical 5-ESS within 10 years. The problem is much broader and is really concerned with the issue of how to exploit all that photonics technology undoubtedly offers, how to mix it with all that electronics technology offers, and how to meld these in an optimum manner to provide the most powerful and flexible communications networks in the future. Optical transmission was simple by comparison. All that was necessary was to build plug-in replacement transmission systems fitting into well-defined interfaces (e.g., 140 or 280 Mbit/s). The problem we now face has many more dimensions to its solution, literally and figuratively, and the ground rules for all are changing.
