System Description
Software Release 2.0
009-3209-000 - Revision A
December 16, 2011
Copyright© 2011 Ciena® Corporation. All Rights Reserved.
Ciena® cannot be responsible for unauthorized use of equipment and will not make allowance or credit for
unauthorized use or access.
Contacting Ciena
For additional office locations and phone numbers, please visit the Ciena web site at www.ciena.com.
BY INSTALLING OR USING THE EQUIPMENT, YOU ACKNOWLEDGE THAT YOU HAVE READ THIS
AGREEMENT AND AGREE TO BE BOUND BY ITS TERMS AND CONDITIONS.
1. Right to Use License; Restrictions. Subject to these terms, and the payment of all applicable license fees, Ciena
grants to you, as end user, a non-exclusive license to use the Ciena software excluding open source components (the
"Software") in object code form solely in connection with, and as embedded within, the Equipment. You shall have the
right to use the Software solely for your own internal use and benefit. You may make one copy of the Software and
documentation solely for backup and archival purpose, however you must reproduce and affix all copyright and other
proprietary rights notices that appear in or on the original. You may not, without Ciena's prior written consent, (i)
sublicense, assign, sell, rent, lend, lease, transfer or otherwise distribute the Software; (ii) grant any rights in the
Software or documentation not expressly authorized herein; (iii) modify the Software nor provide any third person the
means to do the same; (iv) create derivative works, translate, disassemble, recompile, reverse engineer or attempt to
obtain the source code of the Software in any way; or (v) alter, destroy, or otherwise remove any proprietary notices or
labels on or embedded within the Software or documentation. You acknowledge that this license is subject to Section
365 of the U.S. Bankruptcy Code and requires Ciena's consent to any assignment related to a bankruptcy proceeding.
Sole title to the Software and documentation, to any derivative works, and to any associated patents and copyrights,
remains with Ciena or its licensors. Ciena reserves to itself and its licensors all rights in the Software and
documentation not expressly granted to you. You shall preserve intact any notice of copyright, trademark, logo, legend
or other notice of ownership from any original or copies of the Software or documentation. Ciena does not place any
restrictions on the open source components that may be distributed along with the Software. Any applicable open
source licenses will be distributed to recipient separately.
2. Audit. Upon Ciena's reasonable request, but not more frequently than annually without reasonable cause, you shall
permit Ciena to audit the use of the Software at such times as may be mutually agreed upon to ensure compliance with
this Agreement.
3. Confidentiality. You agree that you will receive confidential or proprietary information ("Confidential Information") in
connection with the purchase, deployment and use of the Equipment. You will not disclose Confidential Information to
any third party without prior written consent of Ciena, will use it only for purposes for which it was disclosed, use your
best efforts to prevent and protect the contents of the Software from unauthorized disclosure or use, and must treat it
with the same degree of care as you do your own similar information, but with no less than reasonable care. You
acknowledge that the design and structure of the Software constitute trade secrets and/or copyrighted materials of
Ciena and agree that the Equipment is Confidential Information for purposes of this Agreement.
4. U.S. Government Use. The Software is provided to the Government only with restricted rights and limited rights.
Use, duplication, or disclosure by the Government is subject to restrictions set forth in FAR Sections 52.227-14 and
52.227-19 or DFARS Section 252.227-7013(c)(1)(ii), as applicable. The Equipment and any accompanying technical
data (collectively "Materials") are commercial within the meaning of applicable Federal acquisition regulations. These
Materials were developed fully at private expense. U.S. Government use of the Materials is restricted by this
Agreement, and all other U.S. Government use is prohibited. In accordance with FAR 12.212 and DFAR Supplement
227.7202, software delivered to you is commercial computer software and the use of that software is further restricted
by this Agreement.
5. Term of License. This license is effective until terminated. Customer may terminate this license at any time by
giving written notice to Ciena and destroying or erasing all copies of Software including any documentation. Ciena may
terminate this Agreement and your license to the Software immediately by giving you written notice of termination in
the event that either (i) you breach any term or condition of this Agreement or (ii) you are wound up other than
voluntarily for the purposes of amalgamation or reorganization, have a receiver appointed or enter into liquidation or
bankruptcy or analogous process in your home country. Termination shall be without prejudice to any other rights or
remedies Ciena may have. In the event of any termination Customer will have no right to keep or use the Software or
any copy of the Software for any purpose and Customer shall destroy and erase all copies of such Software in its
possession or control, and forward written certification to Ciena that all such copies of Software have been destroyed
or erased.
7. Limitation of Liability. ANY LIABILITY OF Ciena SHALL BE LIMITED IN THE AGGREGATE TO THE AMOUNTS
PAID BY YOU FOR THE SOFTWARE. THIS LIMITATION APPLIES TO ALL CAUSES OF ACTION, INCLUDING
WITHOUT LIMITATION BREACH OF CONTRACT, BREACH OF WARRANTY, NEGLIGENCE, STRICT LIABILITY,
MISREPRESENTATION AND OTHER TORTS. THE LIMITATIONS OF LIABILITY DESCRIBED IN THIS SECTION
ALSO APPLY TO ANY THIRD-PARTY SUPPLIER OF Ciena. NEITHER Ciena NOR ANY OF ITS THIRD-PARTY
SUPPLIERS SHALL BE LIABLE FOR ANY INJURY, LOSS OR DAMAGE, WHETHER INDIRECT, SPECIAL,
INCIDENTAL OR CONSEQUENTIAL INCLUDING WITHOUT LIMITATION ANY LOST PROFITS, CONTRACTS,
DATA OR PROGRAMS, AND THE COST OF RECOVERING SUCH DATA OR PROGRAMS, EVEN IF INFORMED
OF THE POSSIBILITY OF SUCH DAMAGES IN ADVANCE.
8. General. Ciena may assign this Agreement to any Ciena affiliate or to a purchaser of the intellectual property rights
in the Software, but otherwise neither this Agreement nor any rights hereunder may be assigned nor duties delegated
by either party, and any attempt to do so will be void. This Agreement shall be governed by the laws of the State of
Maryland (without regard to the conflict of laws provisions) and shall be enforceable in the courts of Maryland. The
U.N. Convention on Contracts for the International Sale of Goods shall not apply hereto. This Agreement constitutes
the complete and exclusive statement of agreement between the parties relating to the license for the Software and
supersedes all proposals, communications, purchase orders, and prior agreements, verbal or written, between the
parties. If any portion hereof is found to be void or unenforceable, the remaining provisions shall remain in full force
and effect. The source code for open source components distributed to an end user is available upon request.
Preface
Overview
This manual describes the Ciena® 5400 Reconfigurable Switching System. The 5400
Switch features automated provisioning and self-discovery, and is fully interoperable with the Ciena
CoreDirector® Family of Multiservice Optical Switches. The 5410 Reconfigurable Switching System,
5430 Reconfigurable Switching System, and the CoreDirector Family of Multiservice Optical
Switches comprise the Ciena Intelligent Optical Switching Network.
Disclaimer
While every effort has been made to ensure that this document is complete and accurate at the time
of printing, the information that it contains is subject to change. Ciena® is not responsible for any
additions to or alterations of the original document. Networks vary widely in their configurations,
topologies, and traffic conditions. This document is intended as a general guide only. It has not been
tested for all possible applications, and it may not be complete or accurate for some situations.
Trademark Acknowledgements
• CoreDirector® FS Multiservice Optical Switch and CoreDirector® FSCI Multiservice
Optical Switch are registered trademarks of Ciena Corporation.
• ON-Center® Network & Service Management Suite is a registered trademark of Ciena
Corporation.
• CoreDirector Designer™ Software Tool (CDD) is a trademark of Ciena Corporation.
• Windows® NT, Windows® XP, and Windows® 7 are registered trademarks of the Microsoft
Corporation.
• UNIX® is a registered trademark licensed exclusively through X/Open Company, Ltd.
• Microsoft® is either a registered trademark or trademark of Microsoft Corporation in the
United States and/or other countries.
• Sun, Sun Microsystems, JAVA, Java Secure Socket Extension, and JAVAX are
trademarks or registered trademarks of Sun Microsystems, Inc. in the U.S. or other
countries.
• This product includes code licensed from RSA Data Security.
Intended Audience
This document is intended for customers, certified system installation technicians, test engineers,
technical support technicians, and other personnel responsible for using the 5400 Switch. All 5400
Switch personnel are required to read, understand, and observe the safety precautions described in
the appropriate product manuals.
Release History
The following information lists the release history of this document.
Related Documentation
All Ciena documentation is available both in hard copy and on CD-ROM. The following list provides
a brief description of the related documents. Additional supporting documentation is available
through the Ciena web site at http://www.ciena.com.
• 5400 Reconfigurable Switching System System Description Manual (009-3209-000)
• 5400 Reconfigurable Switching System 5430 Switch Hardware Installation Manual
(009-3209-001)
• 5400 Reconfigurable Switching System 5410 Switch Hardware Installation Manual
(009-3209-018)
• 5400 Reconfigurable Switching System Turn-up and Test Manual (009-3209-002)
• 5400 Reconfigurable Switching System Alarm and Trouble Clearing Manual
(009-3209-003)
• 5400 Reconfigurable Switching System Service Manual (009-3209-004)
• 5400 Reconfigurable Switching System Node Manager User Guide (009-3209-005)
• 5400 Reconfigurable Switching System TL1 Command Reference (009-3209-006)
• Ciena Standard Cleaning and Equipment Safety Practices (009-2003-121)
• Ciena Installation Workmanship Standards (009-7B03-000)
Document Comments
Ciena appreciates all comments that help us improve our documentation quality. Comments can be
submitted through the Ciena web site (http://www.ciena.com) or with the Documentation
Improvement Request form included with this document.
Table of Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i
Table of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Software Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
5400 Switch Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
5400 Switch Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
5400 Switch Software & Hardware Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Software Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Software Architecture Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Control and Timing Module Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Line Module Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
OTN Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Hardware Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
The 5410 Switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
The 5430 Switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Line/Control and Timing Module Shelves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Switch Module Shelf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
5400 Switch Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Line Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Control and Timing Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Switch Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Hardware Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Data Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Timing Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Timing References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Resource Naming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
System Shelves and Fan Shelves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5400 Switch Module Slots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
System Ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Termination Ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Fault Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Alarm Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Alarm Severities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Alarm Surveillance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Alarm Integration and Decay Times . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Diagnostics and Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Tandem Connection Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Circuit Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Loopbacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Test Access Port (TAP) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Remote TAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Hardware Diagnostics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Data Plane Fault Isolation (DPFI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Control Plane Fault Isolation (CPFI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Timing Plane Fault Isolation (TPFI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Hold in Reset (HIR) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
ACO/Alarm Connector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Configuration/Inventory Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Equipment Inventory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Control Timing Module (CTM) Branding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
How CTM Branding Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Cross Connect Provisioning and Connection Management . . . . . . . . . . . . . . . . . . . . . . . . . . 44
End Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
DCC/GCC Transparency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Protection Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
1+1 Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
1:N Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Linear Protection Switching Capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Virtual Line Switched Ring (VLSR) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
VLSR Capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Four Fiber Bidirectional Line Switched Ring/Four Fiber - MS-SPRING Protection . . . . . . . . 50
BLSR Capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Flexible Cross Connects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Arbitrary SNCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
A-SNCP Over APS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Network Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Line Timing with SSM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Synchronization Status Message Translations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Management Gateway Network Element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
SONET/SDH IP over DCC and OTN IPoGCC with GNE support . . . . . . . . . . . . . . . . . . 58
OTN IPoGCC with GNE Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Open Shortest Path First (OSPF) Over DCC/GCC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
OTU2 Auto-Discovery With The 4200 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Performance Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Performance Monitoring (PM) Scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Thresholds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Real-time and Historical Statistics Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
OTN Section Monitor/Tandem Connection Monitor/Path Monitor Layer Statistics . . . . . 70
SONET/SDH Physical/Section/Line Layer, and Path Statistics . . . . . . . . . . . . . . . . . . . . 71
Configuration/Inventory Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
OSRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Link Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Constraint-Based Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Network (Topology) Autodiscovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
End-to-End Provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Reversion and Reversion Timer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Max Delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Max Admin Weight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Manual Switch and Regroom . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Retry Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Multiple Protection Bundle ID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Hooks for Global Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
OSRP Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
OSRP Interfaces and Communication Channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Homogeneous Out of Band OSRP (OOB OSRP) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
SONET/SDH Virtual Channel Homogeneous OOB OSRP . . . . . . . . . . . . . . . . . . . . . . . 86
FastMesh Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Unique FastMesh Capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Efficiency and Speed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Reversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Bandwidth Pre-allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
NE Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Subnetwork Connection Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
UPSR/SNCP Capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
UPSR/SNCP Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
UPSR/SNCP Automatic Switching Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Facility Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Equipment Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Simple Hubbing Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
UPSR/SNCP Port to a Non-UPSR/SNCP Port (Add/Drop Port) . . . . . . . . . . . . . . . . . . . 92
UPSR/SNCP Port to Another UPSR/SNCP Port on the Same Ring . . . . . . . . . . . . . . . . 93
UPSR/SNCP Port to Another UPSR/SNCP Port on a Different Ring . . . . . . . . . . . . . . . 94
UPSR/SNCP Port to an APS 1+1 Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Auto Cross Connects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
5430 Switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
5430 Switch Shelves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
5430 Switch Fan Tray Shelves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
5430 Switch Line/Control and Timing Module Shelves (A and C) . . . . . . . . . . . . . . . . . . 99
5430 Switch Switch Module (SM) Shelf B . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
5430 Switch PDU Shelf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
5430 Switch Power Distribution Unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Controls and Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Power Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Display Panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Controls and Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
5430 Switch Connectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Fan Tray Assembly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Input/Output Panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
5430 Switch I/O Connectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
CTM Alarm and Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Backplane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Loyalty Feature Upgrade Software Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Maintenance Upgrade Software Package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Illustrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
GLOSSARY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
INDEX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Overview
This chapter provides overview information about the Ciena® 5400 Reconfigurable Switching
System, hereafter referred to as the 5400 Switch.
The 5400 Switch features automated provisioning and self-discovery, and is fully interoperable with
the Ciena CoreDirector® Family of Multiservice Optical Switches. The 5410 Reconfigurable
Switching System, 5430 Reconfigurable Switching System, and the CoreDirector Family of
Multiservice Optical Switches comprise the Ciena Intelligent Optical Switching Network.
The CoreDirector Family of Multiservice Switches is hereinafter referred to collectively as the
CoreDirector Switch. Unless otherwise specified, references to a 5400 Switch within this document
refer to an interconnected network of 5410 Switches and 5430 Switches.
This chapter contains the following sections:
• About the 5400 Switch on Page 1
• Software Overview on Page 5
• Software Architecture on Page 7
• OTN Support on Page 10
• Hardware Overview on Page 11
• Hardware Architecture on Page 16
• Resource Naming on Page 22
Until smaller-diameter fibers are commercially available, engineering guidelines recommend
installing no more than four 48-port line modules (LMs) in each LM section:
• 5410 Switch: A-1 - A-5; A-6 - A-10
• 5430 Switch: A-1 - A-7; A-8 - A-15; C-1 - C-7; and C-8 - C-15
The networking intelligence embedded in each 5410 Switch, 5430 Switch, and CoreDirector Switch
enables scalable and real-time, end-to-end provisioning and autodiscovery of network connectivity
and topology. The 5400 Switch software also includes protection and restoration capabilities that
combine the power of point-to-point SNCP protection with the efficiencies and speed of end-to-end
connection-level mesh restoration. The scalability, flexibility, and advanced networking capabilities
of the 5400 Switch dramatically reduce costs associated with deploying, operating, and scaling
optical networks.
Switching Capacity
The 5400 Switch handles Optical Transport Network (OTN), Synchronous Optical Network
(SONET), Synchronous Digital Hierarchy (SDH), and packet traffic. Table 1-1 shows switching
capacity characteristics of the 5410 Switch and 5430 Switch.
• Total bandwidth of nonblocking, multiservice cross connects: 1.2 Tbps (5410 Switch); 3.6 Tbps (5430 Switch)
• Total available circuits (ODU0, STS-1/VC-3) between ports in a fully configured system: 11,520 (5410 Switch); 34,560 (5430 Switch)
The 5410 Switch supports all the SONET/SDH and OTN capabilities as they exist on the 5430
Switch platform. These capabilities include provisioning, fault management, performance
management, and protection applications. The 5410 Switch is the medium chassis version of the
5400 family and supports all the line modules that are supported on the 5430 Switch.
Management
The 5410 Switch and 5430 Switch can be managed on site or from a central site through a fully
redundant DCN interface based on 10/100 Mb/s Ethernet. The 5400 Switch commands and
responses, network alarm messages, and other management communication can be integrated into
network administration software. A northbound TL1/TCP/IP interface from the 5410 Switch and 5430
Switch allows seamless integration with Telcordia operational support systems, the Ciena ON-Center®
Network & Service Management Suite, Ciena Node Manager, TMF814, and the CLI (node
commissioning), as well as other operational support systems that require a direct Transaction
Language One (TL1) interface to network elements.
Software Overview
This section summarizes the features of the operating system software for the 5400 Switch.
Operating System
The 5400 Switch embedded operating software contains all the features and functionality necessary
for the 5410 Switch and 5430 Switch to be used in a variety of network applications. The 5400 Switch
provides the flexibility to support a wide range of network applications by offering multiple protection
and restoration schemes, as well as intelligent, end-to-end service provisioning across multiple
bandwidth sizes, ranging from ODU0 to ODU3 or STS-1/VC-3 to STS-768c/VC-4-256c.
The 5400 Switch software capabilities are available through Base Software Packages (Figure 1-3),
with an optional suite of Intelligent Optical Services software offerings for mesh networks. Only one
type of base package can be ordered for a given system. A Right To Use (RTU) fee is assessed on
a per-port basis for the software capabilities supported by the base packages; the Intelligent Optical
Services offerings are charged for all ports in the system regardless of use.
[Figure 1-3: Base Software Packages — a per-port Base License (XCON, APS/SNCP, multicast, security, auto-discovery, and other base features) and a per-port Base Mesh License (latency routing, admin weight routing, restoration, LSMR, DTL and auto routing, MR SNCP)]
Software Architecture
Figure 1-4 shows the logical software architecture for the 5400 Switch.
[Figure 1-4: logical software architecture — element management, equipment management, SNC, call processing, cross-connect control, OTN/SONET/SDH and Ethernet interfaces, timing control, system integrity, intra-switch communications, and persistent storage]
The 5400 Switch operating software consists of the following functional blocks:
• Common Object Request Broker Architecture (CORBA) - Provides the software interface to the
element management. CORBA supports all external clients, including the TL1 agent and
ON-Center Suite.
• Equipment management - Automatically discovers all 5410 Switch and 5430 Switch inventory
and provides the associated state information.
• Element management - Provides access to and from external software clients through the
Ethernet and serial ports. Reports events, alarms, and logging to the external clients, as well as
handling configuration management activities.
• Subnetwork Connection (SNC) - Controls automated provisioning of end-to-end connections;
reroutes and tracks working and protection connections for each connection originated.
• Call processing - Manages circuits traversing the 5400 Switch and performs the following 5400
Switch-specific circuit procedures:
• Creates endpoints and cross connects
• Sets up active connections
• Releases connections
• Commits and de-commits connections
• Routing - Discovers network (5400 Switch and CoreDirector Switch) topology, disseminates
network topology information, and computes the shortest path route of a connection through
the 5400 Switch network.
• Signaling - Provides the ability to establish point-to-point connection requests across a network
of 5410 Switches, 5430 Switches, and CoreDirector Switches.
• Facility interfaces - Handles the configuration, fault, and performance monitoring of the facility
lines and trunks in the 5410 Switch and 5430 Switch.
• Timing - Handles the timing reference configuration and automatic reference selection used
throughout the 5400 Switch.
• Cross connect control - Creates and removes port-to-port cross connects and logical
connections, rebuilds connections in case of a control and timing module (CTM) reset or switch
to standby.
• Protection - Provides protection switching control for both external events and internal 5410
Switch and 5430 Switch faults (module faults in a switch module or line module).
• Switch control - Provides basic switch control by controlling the switching fabric hardware. Sets
up and tears down cross connections or switches over a group of connections from one line to
another line during protection events, and is responsible for responding to switch module
failures.
• System integrity - Detects failures and causes a failover switch to the secondary or standby
equipment when the primary equipment fails. Announces failover switches and node-down
conditions.
• Intraswitch communications - Provides interprocessor communications for the 5410 Switch and
5430 Switch.
• Persistent storage - Contains the necessary information needed to support recovery of
established connections in the event of a CTM switchover or system initialization. After a
switchover occurs, each subsystem retrieves relevant information from the synchronized
primary and secondary persistent storage. Data is maintained real time between CTMs.
[Figure: software function distribution — the CTMs host the control functions (management, TL1, CORBA, call processing, signaling/routing, SNC, switch and cross-connect control, timing control, protection, persistence, and system integrity); each LM hosts the OTN/SONET/SDH interfaces, optical interfaces, GCC/DCC/LAPD processing, switch interface, and system integrity functions, and connects to the switch modules (SMs)]
OTN Support
Starting with Release 2.0, the 5400 Switch supports Optical Transport Network (OTN) switching, as
defined by ITU-T G.709, providing transparent mapping of clients into OTN containers, common
end-to-end management regardless of client type, and more efficient bandwidth utilization within the
network. Along with standard G.709-based mapping and multiplexing capabilities, Release 2.0 also
supports OTN Subnetwork Connections (SNCs), which leverage Ciena's long history of control
plane leadership and experience. This capability enables the user to create, manage, and protect
ODUk/j SNCs from TSLM and/or OSLM line modules within a Ciena control plane enabled network
(5400 Switch and/or CoreDirector Switch).
OTN SNCs operate very similarly to traditional SONET/SDH SNCs: SONET/SDH SNCs originate
from STS-Xc/VC-Xc based Connection Termination Points (CTPs) and are routed over the network
via the 5400 Switch's standards-compliant routing and signaling protocol. With the creation of OTN
services, SNCs are now able to originate and terminate on ODUk/j CTPs. ODU0 through ODU3
SNCs are supported.
An OTN SNC can originate on an OTN, SONET/SDH, or Ethernet interface. For OTUk client
interfaces, the ODUk/j timeslots play a role similar to SONET/SDH STS-Xc timeslots. For a SONET,
SDH, or Ethernet client interface, the entire client is transparently mapped into an ODUk/j, allowing
for a true end-to-end reliable and transparent service. A SONET or SDH client interface is mapped
transparently into an OPU1 (OC-48/STM-16) or OPU2 (OC-192/STM-64), while an Ethernet client
is mapped into an OPU0 (GbE) or OPU2 (10GbE).
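The client-to-container mappings above can be expressed as a simple lookup. This is a minimal illustration in Python; the client names used as keys are informal labels chosen for this sketch, not actual provisioning identifiers:

```python
# Client-to-OPU mapping as described in the text. Key names are informal
# labels for this sketch, not product configuration values.
CLIENT_TO_OPU = {
    "GbE": "OPU0",      # Gigabit Ethernet
    "OC-48": "OPU1",    # SONET 2.5G client
    "STM-16": "OPU1",   # SDH 2.5G client
    "10GbE": "OPU2",    # 10GbE LAN PHY
    "OC-192": "OPU2",   # SONET 10G client
    "STM-64": "OPU2",   # SDH 10G client
}

def opu_for_client(client):
    """Return the OPU container a client is transparently mapped into."""
    return CLIENT_TO_OPU[client]

print(opu_for_client("OC-48"))  # OPU1
print(opu_for_client("10GbE"))  # OPU2
```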
The OTN SNCs use a standards-compliant G.ASON/GMPLS control plane, the same control plane
used by SONET/SDH SNCs, but operating as a separate instance. OTN links use the OTN GCC
overhead to transport control plane messages, and SONET/SDH lines use the SONET/SDH DCC
overhead. These two independent instances can coexist within the same node as well as within the
same network, operating independently as two distinct layers. SONET/SDH is supported together
with OTN in Release 2.0.
Because there are multiple layers in the OTN hierarchy, there are three separate communications
channels which can be used: GCC0 on the OTU and GCC1 or GCC2 on the ODU. Depending on
the configuration, the user may choose to run OSRP over the GCC0, GCC1, GCC2, or a combined
GCC1/GCC2 channel. OTN links advertise the number of ODU3/2/1/0 supported on the line.
SONET/SDH links advertise bandwidth in terms of the number of STS-1/VC-3 timeslots available.
A given OTN interface may support both SONET/SDH and OTN connections, but the SONET/SDH
portion of that bandwidth must be allocated by the user and is not determined by the control plane.
An OTN SNC that carries SONET/SDH traffic consumes OTN bandwidth. Likewise, there is no need
to define the SONET/SDH bandwidth on intermediate nodes, only at the originating and terminating
points.
The flexible nature of the architecture allows a given port to be provisioned to support one of the
following interfaces:
• Optical Transport Unit (OTU) - When provisioned in this mode, 40G ports operate as OTU3
interfaces and 10G ports operate as OTU2 interfaces. The 5400 multiplexes and demultiplexes
the underlying ODU2 and/or ODU1 and/or ODU0 units and cross connects those ODUk/js
directly. The following services can be carried transparently within the OTN container:
• Gigabit Ethernet (GbE)
• OC-48/STM-16 (Transparent or CBR mode)
• 10GbE LAN PHY (10GbE)
• OC-192/STM-64 (Transparent or CBR mode)
Hardware Overview
This section provides an overview of the physical components in the 5410 Switch and the 5430
Switch. Chapter 4, The 5400 Switch Rack and Chassis on Page 97 provides detailed descriptions
of the 5410 Switch rack and chassis and the 5430 Switch rack and chassis. Chapter 5, 5400 Switch
Hardware Modules on Page 123 provides detailed descriptions of the available system modules
for both the 5410 Switch and the 5430 Switch.
filter to provide forced airflow from the front-center of the rack through the shelves, then exhaust the
warm air out through top and bottom vents at the rear of the rack/chassis. Two line module shelves
and one switch module shelf are located between the fan shelves and interface a common
backplane. The I/O module is located on the side of the rack/chassis and accessed at the rear of the
rack/chassis.
A fully populated 5430 Switch contains 30 line modules, 2 control and timing modules, and 9 switch
modules.
[Figure: 5430 Switch rack layout, top to bottom — PDU shelf assembly, display panel, upper fan shelf, shelf A (line module/CTM shelf), shelf C (line module/CTM shelf), and lower fan shelf]
Line Modules
LMs are located in the 5410 Switch service shelf A and the 5430 Switch service shelves (A and C).
The 5410 Switch shelf A holds any mix of up to 10 LMs. Each 5430 Switch shelf holds any mix of up
to 15 LMs, either OTN Services LMs (OSLMs) or TDM Services LMs (TSLMs). The 5400 Switch
LMs provide bidirectional ports through replaceable transceivers.
• OSLM-3 line module - Supports up to three 40G optical ports for OTN services only.
• 1x40G_OTN_PT21
• OSLM-3M line module - Supports up to three 40G optical ports for OTN services only.
• 1x40G_OTN_PT21
• OSLM-12 line module - Supports up to 12 10G optical ports for OTN services only.
• 1x10GbE
• 1x10G_CBR
• 1x10G_OTN
• OSLM-48 line module - Supports up to 48 2.5G optical ports for OTN services only.
• 4x250M_CBR
• 4x1GbE
• TSLM-12 line module - Supports up to 12 10G optical ports for OTN and SONET/SDH
services.
• 1x10GbE_SONET_SDH
• 1x10G_OTN_SONET_SDH
• 1x10GbE
• 1x10G_CBR
• 1x10G_OTN
• 1x10G SONET/SDH
• TSLM-48 line module - Supports up to 48 2.5G optical ports for OTN and SONET/SDH
services.
• 4x250M_SONET_SDH
• 4x1 GbE
• 4x250M_CBR
• SSLM-12 line module - Supports up to 12 10G optical ports.
• 1x10GbE_SONET_SDH
• 1x10G_OTN_SONET_SDH
• 1x10G_SONET_SDH
• SSLM-48 line module - Supports up to 48 2.5G optical ports.
• 4x250M_CBR
• 4x1GbE
• 4x250M_SONET_SDH
Line Modules on Page 125 provides more information about the line modules.
Switch Modules
The 5410 Switch uses a 1:4 protected switch module configuration and the 5430 Switch uses a 1:8
protected switch module configuration. The switch modules provide the central part of the system
switching fabric. (The other part of the switching fabric is on the LMs.) Each switch module has paths
to all LMs. On the 5410 Switch, the switch modules are installed in the center of the shelf, below the
CTMs. In the 5430 Switch, the switch modules are installed in a chassis-wide shelf between the LM
shelves.
Switch Module on Page 149 provides more information about the switch modules.
Hardware Architecture
The 5400 Switch is an Optical-Electrical-Optical (OEO) switch with a state-of-the-art data plane,
timing plane, and control plane.
As shown on the left and right sides of Figure 1-8, traffic enters and leaves the 5400 Switch data
plane by way of fixed or replaceable optics mounted on LMs and passes through the central switch
fabric. All LMs perform front-end OTN/SONET/SDH/Ethernet functions, such as framing, alarm
detection, and performance monitoring.
The 5400 Switch CTMs (top and bottom of Figure 1-8) are responsible for timing plane and control
plane functions.
[Figure 1-8: data plane architecture — LM ports on either side connect through switch modules 1 through 9; the A-CTM and C-CTM provide the timing and control functions]
Figure 1-9 shows the three logical planes in the 5400 Switch hardware architecture.
[Figure 1-9: the three logical planes — data plane, timing plane, and control plane]
Data Plane
The 5400 Switch data plane is a three stage, non-blocking, uni- and/or bidirectional switching fabric
capable of switching from any input port to any other output port. The data plane carries OTN,
SONET/SDH, and Ethernet line and path layer entities from input port to output port. The 5400
Switch supports both non-concatenated and concatenated payloads.
The switch fabric uses ODU packets for OTN and Transport Bandwidth Units (TBUs) for
SONET/SDH and Ethernet. Each TBU corresponds to a clear-channel bandwidth of approximately
52.56 megabits per second (Mb/s). The use of oversized TBUs in the switch matrix enables support
for OTN and Ethernet services. For SONET interfaces, each TBU supports one STS-1/AU3 signal.
In a fully configured 5410 Switch with SONET/SDH interfaces, a total of 11,520 STS-1/VC-3 circuits
can be established between ports. In a fully configured 5430 Switch with SONET/SDH interfaces, a
total of 34,560 STS-1/VC-3 circuits can be established between ports.
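The circuit counts in Table 1-1 are consistent with a simple timeslot budget. The sketch below is illustrative only: the assumptions that each 10G port carries 192 STS-1/VC-3 timeslots, that a fully configured system holds ten (5410) or thirty (5430) twelve-port LMs, and that each circuit consumes a timeslot on two ports are inferred for this example, not stated specifications.

```python
def sts1_circuits(num_lms, ports_per_lm=12, timeslots_per_port=192):
    """Total STS-1/VC-3 circuits between ports in a fully configured system.

    Assumes 10G ports (192 STS-1 timeslots each) and that each circuit
    occupies a timeslot on two ports -- both are illustrative assumptions.
    """
    total_timeslots = num_lms * ports_per_lm * timeslots_per_port
    return total_timeslots // 2  # one circuit spans two port timeslots

print(sts1_circuits(10))  # 5410 Switch (10 LMs): 11520
print(sts1_circuits(30))  # 5430 Switch (30 LMs): 34560
```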
Timing Plane
In parallel with the data plane, the timing plane collects and distributes timing information in the 5400
Switch. Timing information is collected from external BITS interfaces, processed internally, and
distributed to external BITS interfaces. The collection, processing, and distribution of timing allows
a 5400 Switch to be integrated into OTN and synchronous digital networks.
The timing plane is responsible for:
• Selecting an external BITS and/or line interface for use as an input timing reference
• Filtering jitter and wander from the reference and generating an internal clock that is frequency-
locked to the reference
• Smoothing phase transients that occur during timing rearrangements
• Redistributing the clock to the SONET/SDH line interface and to the BITS interface
• Providing a reference for OTN clients in order to provide full timing transparency
• Providing a stable internal clock to be used when suitable external references are lost
(holdover)
Timing References
The 5400 Switch operates in a mixed timing mode that can use BITS/Station clock, line inputs, or
the internal Stratum 3E source to provide holdover or free run timing. The 5400 Switch supports two
modes of operation/filtering: Stratum 3E/G.812 Type III and G.813 SEC. The 5400 Switch uses the
best reference available from the provisioned references. The most commonly used clock mode is
the externally timed mode. An internal clock source is frequency-locked to the selected reference.
Wander and jitter are filtered from the clock source, and the filtered timing reference is forwarded to
the output line interfaces, where it is used to time the outgoing serial bit stream. The BITS input is a
signal formatted as either a SONET T1 (1.544 Mb/s) or SDH E1 (2.048 Mb/s) signal that carries no
user payload. The CTM internally recovers the 8-kHz frame rate of the BITS signal and uses it as a
frequency reference for the internal clock source.
The free run/holdover clock mode is used when no external references are suitable for use. In
holdover mode, the internal clock free-runs; no input is selected by the internal clock to be used as
a reference. In holdover, the CTM clock retains the frequency of the external source prior to the
transition to holdover.
The line-timed clock mode uses a clock reference that is derived from any one of up to four line
signals. The 5400 Switch also supports line-timed mode with Sync Status Messaging (SSM). SSMs
are also supported for externally timed (BITS) references.
Mixed mode timing enables either of the external and line timing references to be used as the timing
reference, allowing the 5400 Switch to automatically select and synchronize to either external or line
timing references.
Mixed timing mode has the following feature requirements and behaviors:
• Mixed timing mode is only available on the Timing Input Protection Group (PG).
• Any of the four Protection Units (PUs) (REF_1, REF_2, REF_3 or REF_4) are eligible to be
selected automatically as the active timing reference for the PG. The other non-failed
protection units not selected as the active timing reference have standby Protecting states.
• The PreferSyncMode (Manual or Forced) allows any of the four PUs to be selected as the
preferred timing reference.
• Any port from an LM can be used as a reference; however, only one can be used at a time. The
same reference can be used by multiple PGs.
• Every PU in a PG is monitored and alarmed independently. RefFailed alarms may exist against
each of the four PUs in the PG.
• The priority of PUs in a PG is dictated solely by the priority attribute of the PU in the given PG.
The reference selection algorithm chooses the highest priority PU as the active reference.
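The priority-based selection described above can be sketched as follows. This is a minimal illustration: the PU field names and the convention that a higher number means higher priority are assumptions for this example, and the real algorithm also honors PreferSyncMode and SSM quality.

```python
from dataclasses import dataclass

@dataclass
class ProtectionUnit:
    name: str       # e.g. "REF_1"
    priority: int   # higher value = higher priority (assumed convention)
    failed: bool    # True if a RefFailed alarm is active against this PU

def select_active_reference(pus):
    """Choose the highest-priority non-failed PU, or None (holdover)."""
    candidates = [pu for pu in pus if not pu.failed]
    if not candidates:
        return None  # no usable reference: clock falls back to holdover
    return max(candidates, key=lambda pu: pu.priority)

pus = [ProtectionUnit("REF_1", priority=3, failed=True),
       ProtectionUnit("REF_2", priority=2, failed=False),
       ProtectionUnit("REF_3", priority=1, failed=False)]
print(select_active_reference(pus).name)  # REF_2: highest-priority non-failed PU
```

The non-selected, non-failed PUs would remain in standby Protecting states, as the list above notes.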
The 5400 Switch Timing Generation Subsystem (TGS) is designed for compliance with Telcordia
and ITU node clock requirements. Most 5400 Switch applications are timed by way of GPS and use
the holdover characteristics of the Network Element (NE) in the event of a reference failure. This
SEC mode is optional and is enabled using the 5400 Switch Command Line Interface (CLI).
Changing between TGS modes requires a timing plane reset, which is accomplished by way of the
CLI.
The reference output from both CTMs is forwarded to all LMs present in a system. On the LMs, the
distribution reference from the active CTM is selected, multiplied in frequency by a local Voltage
Controlled Crystal Oscillator (VCXO)-based PLL, and forwarded to all line outputs resident on the
LM.
The 5400 Switch supports T1 (1.544 Mb/s) and E1 (2.048 Mb/s) output timing from the 5400 Switch
BITS interfaces. Output timing implementation allows the independent configuration of BITS
outputs. Each BITS output can be configured to provide a timing reference sourced from a hierarchy
of 5400 Switch line inputs.
Timing Source
The CTM contains a timing source of Stratum 3E quality. The quality designation refers to the
free-run accuracy of the source, the stability (oscillator drift during holdover), and the wander
characteristics of the source during timing rearrangements. A rearrangement occurs when the
reference is changed or when the source enters or leaves holdover.
The clock source consists of a highly accurate and stable Oven-Controlled Crystal Oscillator
(OCXO) and a digital phase-lock loop. In mixed timing mode, the timing source phase-locks to the
selected reference. In holdover mode, the timing source maintains the last frequency that was
available before entering holdover. Recovery from holdover mode is automatic. Each CTM
performs periodic checks on the health of incoming references, and of the other CTM, by sampling
the frequency with reference to the OCXO.
Control Plane
The control plane provides a platform for all software-driven functions of the 5400 Switch. The
control plane provides provisioning, control, status retrieval, and maintenance of performance
statistics for the 5400 Switch. The control plane also provides an infrastructure for network-layer
tasks, such as topology discovery and call routing, and performs protection switching in the event of
failures. The control plane initiates and gathers the results of diagnostic tests, both autonomously
and in response to requests by the ON-Center software.
The control plane is responsible for:
• Initializing all 5400 Switch functions at system power up or when a field-replaceable component
is inserted
• Allowing user configuration of the data and timing planes
• Computing routes and reconfiguring the switch fabric during call setup and release, as well as
during protection switching
• Providing a computational platform for network layer tasks, such as topology discovery and call
routing
• Providing intermodule communications through the internal control network
• Providing an infrastructure for NE/NE and NE/Operating System communications, both in-band
(OTN GCCs, SONET/SDH DCCs) and out-of-band (Ethernet)
• Handling the protection switching protocols with adjacent 5410 Switches, 5430 Switches,
CoreDirector Switches, and with other NEs
• Performing automatic reconfiguration to protect user traffic in the event of local equipment
failures
• Collecting and relaying maintenance information, such as alarms and performance monitoring
data, in response to ON-Center Management Suite queries
• Gathering and aggregating performance statistics and providing nonvolatile (flash memory)
storage for statistics
• Activating alarm relay contacts and visual indicators in response to equipment failures and user
input
• Hosting local Craft interfaces for workstation access
• Maintaining part number, revision, and serial number information for each module in Electrically
Erasable Programmable Read Only Memory (EEPROM) storage (This information is retrieved
through use of the ON-Center Management Suite.)
The LM processors collect operating temperature data from temperature sensors located on their
associated circuit boards. They report this data at intervals to the CTM through the internal control
network. Temperature sensors located on the CTM are also directly accessible by the CTM
processor. The CTM processor uses this data to control the speed of the cooling fans.
In addition to these tasks, the CTM and LM processors are responsible for fault-monitoring the
circuitry resident on their respective modules.
Each field-replaceable module has a serial EEPROM for the storage of part number, revision, and
serial number information. The Common Language Equipment Identification (CLEI) code may also
be stored in the EEPROM. This information is obtained by the user through ON-Center queries. The
EEPROM is programmed with the information when the module is manufactured and is never
changed in the field.
Serial EEPROMs located on the backplane, PDU, and fans are readable by the CTMs through
dedicated serial links. The backplane EEPROM carries the part number, revision, and serial number
information for the rack or chassis assembly as a whole.
Resource Naming
Locations of the hardware components in the 5410 Switch and 5430 Switch chassis are identified
as a combination of bay, shelf, slot, and subslot (or port) references (for example, 1-A-1-8). The bay
designation is optional and not used in the following paragraphs.
The following sections describe how the hardware components are identified looking at the front of
the system:
• System Shelves and Fan Shelves
• 5400 Switch Module Slots on Page 24
• System Ports on Page 25
• Termination Ports on Page 26
[Figure: shelf designations, top to bottom — shelf CFUA, shelf A (line module/CTM shelf), shelf B (switch module shelf), shelf C (line module/CTM shelf), and shelf CFUB]
[Figure: slot designations — CTM slots (A-CTM, C-CTM), switch module slots (B-1 through B-9), line module slots (A-1 through A-15 on the 5430 Switch; A-1 through A-10 on the 5410 Switch), the I/O module, and CFUB slots 1 through 5]
System Ports
Ports are identified by their position in a line module and by the slot in which the line module resides.
The optical modules or ports in the module are numbered from top to bottom; however, cards
installed in the 5430 Switch C shelf are installed "upside down" such that they read from bottom to
top. All port numbers are identified by silk screening on the faceplate.
• On the LM-3 line modules, the optical ports are numbered 1 through 3.
• On the LM-12 line modules, the optical ports are numbered 1 through 12.
• On the LM-48 line modules, the optical ports are numbered 1 through 48.
This numbering scheme applies to the 5410 Switch shelf A and both shelf A and shelf C in the 5430
Switch. For example, the optical ports on an LM-48 module in Slot A-1 are identified as A-1-1 through
A-1-48. On an OSLM-12 module in 5430 Switch Slot C-1, the optical ports are numbered bottom to
top, this time as C-1-1 (bottom) through C-1-12 (top). Figure 1-13 illustrates the numbering scheme
in the LM-12 and LM-48 line modules.
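The shelf-slot-port naming described above can be sketched as a small helper. This is illustrative Python; the function name and the optional bay argument are assumptions for this example, and the reversed physical numbering in the 5430 shelf C affects faceplate order only, not the identifier format.

```python
def port_id(shelf, slot, port, bay=None):
    """Build a resource identifier such as 'A-1-48' (or '1-A-1-8' with a bay)."""
    parts = [shelf, str(slot), str(port)]
    if bay is not None:
        parts.insert(0, str(bay))  # the bay designation is optional
    return "-".join(parts)

# Ports on an LM-48 in slot A-1:
print(port_id("A", 1, 1))   # A-1-1
print(port_id("A", 1, 48))  # A-1-48
# Top port of an OSLM-12 in 5430 Switch slot C-1 (numbered bottom to top):
print(port_id("C", 1, 12))  # C-1-12
# With the optional bay designation:
print(port_id("A", 1, 8, bay=1))  # 1-A-1-8
```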
[Figure 1-13: faceplate port numbering on the LM-12 (ports 1 through 12) and LM-48 (ports 1 through 48) line modules]
Termination Ports
The 5400 Switch termination points are shown in Figure 1-14 and described in Table 1-2.
[Figure 1-14: termination points — Line Termination Points (LTPs), originating and terminating Connection Termination Points (CTPs), and cross connects (XC) on optical module (OM) ports (for example, A-2-1 and C-2-1) between Customer Premise Equipment (CPE) endpoints]
Overview
This section describes the features of the Ciena® 5400 Reconfigurable Switching System
Infrastructure Software Packages in terms of Operations, Administration, Maintenance, and
Provisioning (OAM&P) functionality. The 5400 Reconfigurable Switching System is hereinafter
referred to as the 5400 Switch.
The 5400 Switch OAM&P feature descriptions are arranged into the four key network management
functional areas of fault management, configuration management, account and security
management, and performance management.
• Fault management (below) - Features that enable detection, isolation, and correction of
abnormal operation. These include alarm and event detection, monitoring, forwarding, logging,
correlation, diagnostics, fault isolation, threshold violations, availability reporting, and audit
trails.
• Configuration/inventory management (Page 43) - Features that provision and configure the
5400 Switch equipment, termination points, cross connects, and protection groups. Features
include resource and inventory discovery, software download, configuration status,
configuration backup and restoration, resource provisioning and modification, restoration
scheme configuration, and gateway network element.
• Account and security management (Page 67) - Features that address security issues that are
essential to secure the 5400 Switch from unauthorized access. These include user
authorization, authentication, and access control.
• Performance management (Page 69) - Features that collect and evaluate network element
performance parameters. These include measurement, gathering, collection, consolidation,
reporting, and monitoring of statistics.
Fault Management
The 5400 Switch software fault management detects, isolates, and corrects system faults. Fault
Management includes event and alarm reporting, logging, filtering, and mediation features. Fault
information is reported to the user through the Transaction Language One (TL1) management
interface, the Node Manager Graphical User Interface (GUI), and is forwarded to ON-Center®
Network & Service Management Suite. Fault management requirements are aligned with the
methods described in ITU-T/ISO and Telcordia
specifications. Appendix B, Specifications and Standards, 5400 Switch Standards on Page 224
provides more information.
Fault management consists of the following functions:
• Notifications (below)
• Alarm Monitoring (below)
• Logs on Page 33
• Diagnostics and Troubleshooting on Page 33
• Data Plane Fault Isolation (DPFI) on Page 40
• Control Plane Fault Isolation (CPFI) on Page 41
• Timing Plane Fault Isolation (TPFI) on Page 42
Notifications
Notifications inform the user about 5400 Switch status and alert the user to potentially critical events.
The user can collect specific information from all system-generated events.
Management events provide a means of monitoring the network. Node Manager and ON-Center
Suite display the events in real time to the user and maintain the event history. The following types
of notifications are supported:
• Alarms
• Standing Conditions
• Transient Conditions
• Warnings
Notifications, logged with a time stamp, notify the Node Manager about potentially critical
occurrences that should be verified and resolved by a technician (for example, when a card is
unexpectedly removed). The 5400 Switch supports categories as suggested by International
Telecommunication Union (ITU) standards. Values are Communications, Environmental,
Equipment, Processing, Service Report, Standing Condition, Quality of Service (QOS), and Usage
Report.
Alarm Monitoring
Alarm monitoring functions include determining alarm severity based on the traffic affecting nature
of the alarm, alarm surveillance, and alarm integration and decay times.
Alarm Severities
The 5400 Switch software facility alarm reporting provides accurate information concerning alarm
severity and whether service is affected.
Table 2-1 summarizes the 5400 Switch software alarm behavior with respect to the port type, the
existing traffic type, the alarm type, and whether the interface is protected by protection switching or
path protection.
The first column of the table indicates the port type. The port type is either a drop-side port or a
line-side port (trunk). A Subnetwork Connection (SNC) contains two drop-side interface ports; all of
the intermediate ports of an SNC are line-side ports. Cross Connects span all ports.
The second column of the table indicates the traffic type configured on the interface. The traffic type
can be a cross connect, Permanent Subnetwork Connection (P-SNC), SNC (or some combination),
or no traffic. If multiple traffic types exist on the interface port, the alarm Service Affecting/Non
Service Affecting (SA/NSA) status and severity reflect the worst-case status. For example, the table
shows that a failed, unprotected line-side port containing an SNC should be indicating a facility alarm
as minor and non-service affecting. If that same port contained a P-SNC, it would indicate a facility
alarm as critical and service-affecting. If that port contained both a P-SNC and an SNC, it would
indicate the facility failure as critical and service-affecting. (In the worst case, the P-SNC could not
reroute around the facility failure.)
The column labeled Protected indicates whether another 5400 Switch interface is currently
protecting the port. If the entry indicates Yes, the interface port is currently protected by linear
protection switching or path protection port. If the entry indicates No, either the port is not configured
for a protection port or the protection port is unavailable.
The event column indicates either Failed or Degraded. The term Failed means that an LOS, LOF,
AIS-L/MS-AIS/OTU-AIS, or BER-SF condition has been detected. The term Degraded means that a
BER-SD (Signal Degrade) or DEG (Degrade) condition has been detected.
The last two columns indicate the resulting SA/NSA status and severity of the alarm based on the
previously described parameters. This alarm indication behavior affects only line facility alarms
(LOS, LOF, AIS-L/MS-AIS/OTU-AIS, BER-SF, DEG, and BER-SD). It does not include any change
of behavior for path alarms, cross connect alarms, or SNC alarms.
As Table 2-1 shows, there are four input parameters that determine the SA/NSA status and alarm
severity. Because the parameters can change after the alarm posts, 5400 Switch software updates
alarm status and severity to reflect the change of any of the input parameters. For example, if a cross
connect is on a port that is failed but is being protected by another port, the facility alarm is marked
as NSA and major. If after the alarm posts, the port can no longer be protected, the facility alarm is
updated to indicate SA and critical. If at that point, the event condition changes from failed to
degraded, the facility alarm status updates to indicate SA and major. If the port can be protected
again before the alarm condition clears, the alarm status should return to NSA and major.
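The worst-case rule and the dynamic re-evaluation described above can be sketched as follows. The table entries below are illustrative assumptions, not the actual contents of Table 2-1:

```python
# Illustrative sketch of the worst-case SA/NSA status and severity rule.
# The per-traffic-type entries are assumptions for the example, not the
# actual contents of Table 2-1.

SEVERITY_RANK = {"minor": 1, "major": 2, "critical": 3}

# (port_type, traffic_type, protected, event) -> (SA/NSA, severity)
ALARM_TABLE = {
    ("line", "SNC",   False, "failed"):   ("NSA", "minor"),
    ("line", "P-SNC", False, "failed"):   ("SA",  "critical"),
    ("line", "XCON",  True,  "failed"):   ("NSA", "major"),
    ("line", "XCON",  False, "failed"):   ("SA",  "critical"),
    ("line", "XCON",  False, "degraded"): ("SA",  "major"),
}

def port_alarm(port_type, traffic_types, protected, event):
    """Return the worst-case (SA/NSA, severity) across all traffic types
    configured on the port, as the text describes."""
    worst, worst_key = None, (False, 0)
    for traffic in traffic_types:
        sa, sev = ALARM_TABLE[(port_type, traffic, protected, event)]
        key = (sa == "SA", SEVERITY_RANK[sev])
        if key > worst_key:
            worst, worst_key = (sa, sev), key
    return worst
```

Re-running `port_alarm` whenever protection state or the event condition changes mirrors how the software updates an already-posted alarm.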
Alarm Surveillance
Alarm surveillance monitors the health of in-service or standby equipment and notifies users when
faults occur. Alarm surveillance and maintenance signals occur on a per-optical interface basis.
They include:
• Facility Alarms
• Protection Switching events
• Hardware/equipment
• Configuration changes
• Standing conditions
• Defect status
• Failure status
• Failure count
Table 2-1 describes the alarm notification types used in the 5400 Switch.
Normally, critical and major alarms are service-affecting and minor alarms are non-service-affecting.
5400 Switch software applies appropriate Service-Affecting/Non-service-Affecting (SA/NSA)
assignments and appropriate severities to facility alarms as described in Alarm Severities on
Page 30.
Logs
A log records a sequence of events (for example, user actions, configuration changes, and alarm
conditions) to aid network operators. The 5400 Switch logs are predefined revolving (or circulating)
databases.
The following logs are defined:
• Alarm Log - Logs all generated alarm events
• Severity
• Time Stamp
• Probable Cause
• Alarm Type
• Event Log - Logs all generated events
• Event type
• Time stamp
• Audit Log - Logs all generated audit trail events
• User
• Command
• Time Stamp
• Security Log - Logs all user administration and log-in events
• Additional diagnostic logs are available through Command Line Interface (CLI) for Ciena
support staff.
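The revolving (circulating) database behavior of these logs can be modeled as a fixed-capacity buffer that overwrites its oldest record when full. A minimal sketch, where the capacity and field names are assumptions for illustration:

```python
from collections import deque
import time

class RevolvingLog:
    """Fixed-capacity log: once full, appending a new record silently
    discards the oldest one (revolving/circulating behavior)."""
    def __init__(self, capacity):
        self._records = deque(maxlen=capacity)

    def append(self, **fields):
        fields.setdefault("time_stamp", time.time())
        self._records.append(fields)

    def records(self):
        return list(self._records)

# Hypothetical alarm-log usage with the fields listed above.
alarm_log = RevolvingLog(capacity=3)
for cause in ("LOS", "LOF", "BER-SD", "DEG"):
    alarm_log.append(severity="major", probable_cause=cause,
                     alarm_type="facility")
# With capacity 3, the oldest entry (LOS) has been rotated out.
```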
Circuit Test
A circuit test port designates an optical module port to function as a testing port. A circuit test port
performs circuit connectivity verification and circuit quality testing using an end-to-end connection
(such as a Subnetwork Connection (SNC)) and injecting a path trace message. A successful circuit
test yields no near-end path errors (B3 errors).
• Path trace (set in the Path Overhead (POH)) can be used to verify end-to-end connection
continuity and protect against possible misconnections in the network.
• A path trace message is sent at the originating node, and the expected message is
configured on the destination node for comparison with the received message.
• If any discrepancies are observed between the two nodes, the end-to-end connection is not
complete and requires further debugging.
• The 5400 Switch detects and monitors near-end and far-end coding violations, errored
seconds, severely errored seconds, unavailable seconds, and failure counts.
The 5400 Switch software supports the SDH equivalent of SONET section and path level tracing in
addition to trace mismatch detection. The SDH standards for trace messages differ slightly from
SONET standards and are supported accordingly. Specifics of the SDH trace capabilities include the
following:
• Regenerator section trace (J0). Configurable as 16-byte Access Point Identifier (API)
• Path trace (J1). Configurable as:
• 16-byte API (SDH)
• 62 user-defined bytes (null padded) (SONET) with Carriage Return (CR)/Line Feed (LF)
delimiter
• Provisionable outgoing trace messages
• Provisionable expected trace messages
• Alarmable condition on mismatch
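The provision-and-compare behavior above (a provisioned expected trace versus the received trace, with an alarmable condition on mismatch) might be sketched as follows. The helper names and the 64-byte SONET framing are illustrative assumptions:

```python
def sonet_j1_trace(message, length=64):
    """Build a SONET J1 trace: up to 62 user bytes, null padded, with a
    CR/LF delimiter (per the format described above)."""
    body = message.encode("ascii")[: length - 2].ljust(length - 2, b"\x00")
    return body + b"\r\n"

def trace_mismatch(expected, received):
    """An alarmable mismatch condition exists when the received trace
    differs from the provisioned expected trace."""
    return expected != received

received = sonet_j1_trace("NODE-A:PORT-3")  # as sent by the far end
assert not trace_mismatch(sonet_j1_trace("NODE-A:PORT-3"), received)
assert trace_mismatch(sonet_j1_trace("NODE-B:PORT-1"), received)
```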
Loopbacks
Terminal Loopbacks
Terminal loopbacks configure a port so that the receive direction functionally receives data from the
transmit direction. Conceptually, this is similar to creating a loopback by using a fiber connected to
the Transmit and Receive ports. Terminal loopbacks are achieved in the Serializer/Deserializer
(SERDES) in the line module (Figure 2-2).
Facility Loopbacks
In a facility loopback, the transmit direction functionally receives data from the receive direction.
Conceptually, a facility loopback acts like a mirror by taking data coming into a port and transmitting
it back out that same port. Facility loopbacks are either performed in the SERDES or in the Framer.
For SONET/SDH, the 5400 Switch supports facility and facility-with-framer loopbacks (Figure 2-3).
Note: Facility loopbacks with framer depend on the traffic configuration in the framer. A cross
connect must be configured for a port before that port can effectively carry framed
facility loopback traffic.
Figure 2-5 illustrates two CTPs (CTP1 and CTP2) provisioned on a node as an unprotected
bidirectional circuit. The four conceivable types of loopbacks involving the CTPs (CTP1, and CTP2)
are also displayed in this figure. Each CTP has an F-side (or the facility side), and an E-side
(equipment side). The E-side corresponds to the switch fabric side. The types of loopbacks applied
to a CTP are termed facility and equipment loopbacks, and are defined as follows:
• Facility Loopback - refers to the loopback in which the data received by way of the CTP is
transmitted back out the same CTP. AIS propagation happens downstream.
• Equipment Loopback - refers to the loopback in which the data that is transmitted onto the CTP
is treated as the input received by way of that CTP and transmitted downstream. AIS is
propagated towards the facility side.
The CLL functionality for Unprotected circuits, linear protected circuits, and signaled SNCP circuits
is the same.
• Loopback is applied to the CTP before the selector. If there is a switch to protection, the
loopback is no longer applied and traffic flows as if there were no loopback.
• Loopback is applied to the CTP after the selector. If there is a switch to protection, the
loopback is still applied, traffic is still looped back, and AIS-P is sent downstream.
Remote TAP
The 5400 Switch supports the Remote TAP feature, which enables the user to monitor a circuit in
any node from a remote node. The remote TAP is constructed by creating a virtual Test Access Link
(TAL) and then connecting an SNC from a remote node to the virtual TAL. The virtual TAL consists
of one or two connection points depending on the mode selected by the user.
Remote TAP involves the following.
• The user selects the cross connect or the A-SNCP and locks it for test.
Note: If the cross connect that is tapped is deleted for any reason, the TAP connection
is also deleted.
Note: Qualifiers SELF and COMMON are supported for all TAP monitoring modes on all
protection schemes.
Note: Qualifier PROTECT is supported for TAP on linear protected circuits.
Note: SPLIT modes are allowed for TAP on all unprotected or linear protected circuits.
Note: SPLIT modes are allowed for TAP on path protected circuits with COMMON qualifier
only.
Hardware Diagnostics
The 5400 Switch software equipment diagnostics include detecting equipment failures and
performing Control Timing Module (CTM) standby switchovers and CTM identification.
Equipment Failures
The 5400 Switch software detects and reports the following equipment failures:
• Fuse/power circuit failures
• CFP/XFP/SFP failures
• Switching matrix failures
• Internal communications circuitry/hardware
• Timing circuitry/hardware
CTM Switchovers
The 5400 Switch software automatically switches from the primary CTM to the secondary CTM
when a failure on the primary is discovered or when communication with the primary is lost.
CTM redundancy behavior has the following characteristics:
• If the primary CTM fails, the secondary CTM becomes primary.
Note: Mesh restoration is unavailable for a short period of time during the CTM switchover.
The CTM takes several minutes to assume the primary CTM tasks.
(Figure: A-CTM and C-CTM timing and control functions, each connected to Switch Modules 1
through 9 and their ports.)
ACO/Alarm Connector
The I/O module has a DB-15 Alarm connector that provides alarm outputs from the CTM Modules to
the IOM. The alarm outputs are visual major, minor, and critical alarm signals and a summary audio
alarm. Ordering Guide on Page 161 provides alarm cable ordering information.
In addition to the visual and audible alarm connector, an Alarm Cutoff (ACO) button on the 5400
Switch display panel is provided to inhibit the audible outputs.
Configuration/Inventory Management
The 5400 Switch software configuration/inventory management includes equipment inventory
management and provisioning. Configuration and inventory information can be accessed using the
TL1 management interface or the Node Manager Graphical User Interface (GUI) and is forwarded
to the ON-Center Suite.
Current configuration information for the 5400 Switch includes the following:
• Node and node-relevant parameters
• Module inventory and configuration
• Physical port configurations
• Cross Connect information
• Protection configurations (linear protection switching)
Configuration/inventory management includes the following functions:
• Equipment Inventory on Page 43
• Control Timing Module (CTM) Branding on Page 43
• Cross Connect Provisioning and Connection Management on Page 44
• Protection Configuration on Page 45
• Network Synchronization on Page 56
• SONET/SDH IP over DCC and OTN IPoGCC with GNE support on Page 58
Equipment Inventory
Equipment inventory provides physical inventory knowledge of the following switch components:
• Module Access Identifier (AID) (by bay, shelf, slot, and subslot)
• Module information, including card type, serial number, Common Language Equipment
Identification (CLEI), and firmware and software revision
• Fan tray assembly (monitoring of speed and load)
• Power Distribution Unit (PDU) (monitoring on each powered arm)
This information is available through Node Manager, TL1, and ON-Center Suite.
End Points
The end point of a cross connect can be either a CTP or a GTP. A CTP is the transport entity that
terminates a path-level connection, such as an ODU1/STS-1/VC3. A CTP is contained by a lower
level TTP; for example, an OTN/SONET/SDH line TTP can contain multiple OTN/SONET/SDH path
CTPs. Multiple CTPs can be preconfigured for the same time slot on a single port.
A GTP (SONET/SDH only in R2.0.0) is a collection of similar CTPs that are treated as a single
administrative object. GTPs are used to model ODUn/STS-n/VCn end points, which are considered
a collection of n ODU1/STS-1/VC3 CTPs. The GTP is used to represent the end point of a bundled
connection in which all constituent CTPs are routed, provisioned, and restored as a single
connection.
DCC/GCC Transparency
Some application configurations require the interconnection of DCC/GCC message traffic even
when the ability to perform standard message routing of the protocols carried over the DCC/GCC is
not supported. The 5400 Switch provides DCC/GCC transparency capabilities by cross connecting
any incoming DCC/GCC channels to any outgoing DCC/GCC channel.
The 5400 Switch extracts DCC/GCC traffic from the incoming facility interface connected to any
ingress port on any line module and transfers that traffic to the backplane Ethernet communications
channel connecting the line module to the controller module. The traffic is routed to the same or
another line module through the communications Ethernet switch on the controller. The egress line
module then inserts the DCC/GCC traffic onto the outgoing facility interface connected to the
selected egress port.
The transparent connection feature provides the following capabilities:
• Bidirectional transparent DCC connectivity between any two SONET/SDH ports.
Note: DCC transparency and 4F-BLSR OSI transparency are not supported at the same time on the
same SONET/SDH port.
• Transparent connections are supported on all OC-48 and OC-192 line terminations including
those directly connected to the physical interface and for OC-192 embedded within OTN
connections.
• Transparent connections are supported on any SONET/SDH TTP (OC-3/12/48/192/768 or
STM-1/4/16/64/256).
• Transparent connections are configurable for section or line DCC channels.
A TTP can be associated with only one transparent connection.
Protection Configuration
SONET/SDH line protection can be applied between two interconnected 5430 Switches and/or
CoreDirector Switches at line rates from OC-3/STM-1 to OC-192/STM-64. In linear 1+1 protection,
the same signal is sent (bridged) to two separate SONET/SDH lines and the receiving 5430 Switch
selects the best signal to use. In linear 1:N protection, any number of working lines (N) share one
protection line, which can also be used to carry extra traffic when the protection line is unused for
protection switching.
Protection switching events automatically occur under two general categories of conditions: signal
fail and signal degrade. The signal fail is a hard failure condition such as loss of signal, loss of frame,
and AIS-L/MS-AIS. In addition, signal fail is declared when the line Bit Error Rate (BER) exceeds a
user-defined threshold of between 10⁻³ and 10⁻⁵. The signal degrade condition is a soft failure
condition triggered when the line BER exceeds a user-defined threshold of between 10⁻⁵ and 10⁻⁹.
These threshold settings are associated with individual SONET/SDH lines connected to the 5430
Switch ports.
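The two trigger categories can be expressed as a simple threshold comparison. A minimal sketch, using example thresholds chosen from within the provisionable ranges above:

```python
def line_condition(ber, sf_threshold=1e-4, sd_threshold=1e-7):
    """Classify a line BER against user-defined thresholds. SF is
    provisionable between 1e-3 and 1e-5 and SD between 1e-5 and 1e-9;
    the 1e-4 and 1e-7 defaults here are example settings."""
    if ber >= sf_threshold:
        return "signal fail"       # hard failure category
    if ber >= sd_threshold:
        return "signal degrade"    # soft failure category
    return "ok"
```

Both classifications would trigger a protection switching event; only the category differs.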
1+1 Protection
The 5430 Switch supports 1+1 line level protection for drop-side traffic. In 1+1 protection, one
protection line is assigned to each working line. The payload (traffic) is always sent (bridged) on the
protection as well as the working line. This has the disadvantage of preventing the protection line
from carrying extra traffic, but has the advantage of fast protection switching times as well as
interoperating with most other networking equipment. The 5430 Switch supports both bidirectional
and unidirectional switching. The unidirectional mode of 1+1 operation does not require using the
protection switching K byte protocol; that is, no coordination is required between the two end
systems. In this case, both end systems transmit two identical signals on separate SONET or SDH
lines and choose the better of the two received signals. 1+1 protection is supported on both the line
and drop (client) sides.
A system using 1+1 protection operates by default in a unidirectional mode. In this mode, the
switching is complete when a channel in the failed direction is switched to the protection line.
However, a bidirectional mode can be provided as a user-configurable option. In this mode, a
channel is switched to the protection line in both directions. Switching in one direction only is not
allowed.
A 1+1 system also uses nonrevertive switching as a default. In nonrevertive switching, a switch to
the protection line is maintained even when the working line has recovered from the failure.
Revertive switching can be a user-configurable option. Traffic is switched back to the working line
when the working line has recovered from the failure or when the manual command is cleared.
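The default nonrevertive behavior and the optional revertive behavior can be sketched as a small selector state machine. This is an illustration of the rules just described, not the APS K-byte protocol itself:

```python
class OnePlusOneSelector:
    """1+1 selector sketch: traffic is always bridged to both lines, so
    only the receive selector moves. Nonrevertive by default, as the
    text describes."""
    def __init__(self, revertive=False):
        self.revertive = revertive
        self.selected = "working"

    def on_working_failed(self):
        self.selected = "protection"

    def on_working_recovered(self):
        # Nonrevertive: the switch to protection is maintained even
        # after the working line recovers.
        if self.revertive:
            self.selected = "working"
```

With the default settings the selector stays on protection after recovery; with `revertive=True` traffic returns to the working line.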
Protection switching events automatically occur under two general categories of conditions: signal
fail and signal degrade. The signal fail is a hard failure condition such as loss of signal, loss of frame,
and Alarm Indication Signal Line (AIS-L). In addition, signal fail is declared when the line Bit Error
Rate (BER) exceeds a user-defined threshold of between 10⁻³ and 10⁻⁵. The signal degrade
condition is a soft failure condition triggered when the line BER exceeds a user-defined threshold of
between 10⁻⁵ and 10⁻⁹. These threshold settings are associated with individual SONET/SDH lines
connected to the 5430 optical modules, which interoperate with CoreDirector Switches.
1:N Protection
5430 Switch Linear 1:N protection allows up to 15 working lines to be protected by 1 protection line.
The 5430 Switch supports standard 1:N (N≤14) and a proprietary 1:15 configuration. In standard 1:N
(N≤14), the protection line can also be used to carry traffic when the line is not used for protection
purposes. This traffic is referred to as extra traffic in the various APS or MSP standards and is
subject to preemption if it is necessary for the protection line to protect one of the working lines. In
this proprietary implementation (1:15), the protection line cannot be used to carry traffic.
For each working channel, the user can assign a protection priority (high or low). This priority is used
to determine which requests for protection take precedence in the APS or MSP protocol. In the case
of equal priorities, the channel with the lowest APS or MSP channel number is given priority under
certain circumstances. The APS or MSP channel number is a user-definable protection attribute for
the line and is distinct from other line identifiers or labels. It must be consistently set on each end of
a SONET/SDH line. The 5430 Switch supports bidirectional 1:N protection switching. This means
that in case of a failure condition in either receive or transmit direction, both directions switch to the
protection line. This is the preferred behavior if the failure is an equipment-related failure because
the protection switch removes all traffic from the failed line and maintenance can proceed without
additional service interruptions.
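The priority arbitration described above can be sketched as follows. The function name is an assumption; the tie-break shown is the lowest-channel-number rule that the text says applies under certain circumstances:

```python
def select_protected_channel(requests):
    """Choose which working channel is granted the protection line.

    requests: iterable of (aps_channel_number, priority) pairs, where
    priority is "high" or "low". Higher priority wins; with equal
    priorities, the lowest APS/MSP channel number wins.
    """
    rank = {"high": 1, "low": 0}
    channel, _priority = min(requests, key=lambda r: (-rank[r[1]], r[0]))
    return channel
```

For example, a high-priority request from channel 14 preempts a low-priority request from channel 11, while two high-priority requests are resolved in favor of channel 11.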
In a 1:N architecture, all switching is revertive to free up the protection line for subsequent failures,
if and when they occur.
The APS or MSP protocols require assigning channel numbers to all working lines in an APS or MSP
group. The numbers between 1 and 15 are assigned to the working lines independently of their port
numbers. The number 0 is reserved for the protection channel (sometimes referred to as the null
channel). Figure 2-6 illustrates an example of channel numbering in a 1:N group involving a
protection channel and two working channels. The operator must ensure that the same channel
number is assigned to both ends of the line that connect the 5430 nodes when using TL1 commands
or the graphical Craft Interface to manually assign channel numbers. In Figure 2-6, SONET lines
between the Node A and Node C are designated by a shelf-slot-port convention. For example, Port
A-1-3 is the third port on the LM occupying the first slot on Shelf A. The shelf lettering, slot numbers,
and port numbers may be different for different 5430 Switch models and configurations.
(Figure 2-6: Working Channel 11 between ports A-1-3 and A-7-4, Working Channel 14 between
ports A-3-2 and C-1-6, and the Protect Channel between ports C-5-7 and A-5-1.)
(Figure: two interconnected rings, Ring 1 and Ring 2, with member NEs 1 through 5.)
VLSR Capabilities
The VLSR Protection Scheme provides the following capabilities:
• Single-rate OC-192/STM-64 VLSR rings (equivalent to 4-fiber BLSR/MS-SPRing protection)
• Maximum of 16 nodes per ring
• Automatic ring map generation and distribution to each member of a closed VLSR Ring
• 5400 Switch and CoreDirector participation in multiple distinct rings (up to 64 possible, certified
for up to 16)
• Manual and forced span and ring switching in addition to other maintenance operations
• Extra traffic on the protection lines
• Interoperation with OSRP (That is, VLSR supports OSRP-based connection establishment,
tear-down, and restoration.)
• End-to-end protection of extra traffic that is preempted during a span or ring switch, using
FastMesh mesh restoration
• Mesh protection to restore any extra traffic on the protection channels that cannot be restored
by VLSR protection mechanisms (or restoration by OSRP if the extra traffic was provisioned as
SNCs)
• Mesh protection to restore any working traffic that cannot be restored by VLSR protection
mechanisms (for example, due to multiple span failures) (or restoration by OSRP if the traffic
was provisioned as SNCs)
• Support for node deletion from a ring. If there is no spare network capacity, the node deletion
disrupts service during the time that the ring is reconfigured. If there is spare network capacity,
SNCs mesh restore during the reconfiguration process.
• Support for revertive switching with support for a wait-to-restore mechanism that can be set to
infinite
• Intersection of multiple VLSR/MS-SPRing rings at a 5400 Switch and CoreDirector Switch
nodes
• No need for matched nodes, simplifying ring interconnection
• 5400 Switch and CoreDirector Switch participation in multiple independent VLSR rings
• Alarms for ring misconfiguration. For example, an alarm is raised if:
• Ring ID is not identical for all members of the ring
• Incorrect or inconsistent neighbors are identified on the ring
• XCONs supported over VLSR rings
• Support for bidirectional switching only
• All SONET or all SDH facilities in the ring (They cannot be mixed.)
(Figure: ring of NEs 1 through 5 with working and protect fiber pairs between adjacent nodes.)
BLSR Capabilities
The 4F-BLSR/MS-SPRing Protection Scheme provides the following capabilities:
• 10G OC-192/STM-64 BLSR/MS-SPRings
• Standards based maximum of 16 nodes per ring
• Automatic ring map generation and distribution to each member of a closed 4F-BLSR/MS-
SPRing
• 5400 Switch and CoreDirector participation in multiple distinct rings (up to 64 possible, certified
for up to 16)
• Manual and forced span and ring switching in addition to other maintenance operations
• Extra traffic on the protection lines
• Support for revertive switching with support for a wait-to-restore mechanism that can be set to
infinite
• Intersection of multiple BLSR/MS-SPRing rings at a 5400 Switch and CoreDirector Switch
nodes
• No need for matched nodes, simplifying ring interconnection
• 5400 Switch and CoreDirector Switch participation in multiple independent 4F-BLSR/MS-
SPRing rings
• Alarms for ring misconfiguration. For example, an alarm is raised if:
• Ring ID is not identical for all members of the ring
• Incorrect or inconsistent neighbors are identified on the ring
• Support for bidirectional switching only
(Figure: VCP_A_0 configured for multicast. The SNC originates on an autocreated CTP-VCP pair,
allowing the CTPs to be modified without affecting the SNC and to change freely on mesh
restoration.)
Arbitrary SNCP
Arbitrary SNCP (A-SNCP) allows for a path protection unit (PU) in which the working/protect CTPs
can be arbitrarily added to and removed from the PU. The only restrictions are that the CTPs have the
same concatenation and exist on the same NE. There is no restriction relative to line rate or timeslot.
Provisioning "back-to-back" A-SNCP (Figure 2-14) is also supported where both the drop side and
the network side of the connection are protected. This back-to-back feature is useful for protected
ring-to-ring interconnect and is compatible with SONET UPSR.
A-SNCP can be used in combination with linear protection switching. A-SNCP is supported on
SONET/SDH lines embedded on an OTN interface with no additional restrictions. Unlike Signaled
SNCP, SNCs cannot terminate on an A-SNCP protected CTP. A-SNCP protection is only supported
on the client side of the network for OSRP mesh protected applications.
Routing A-SNCP protected paths over APS protection groups is well suited for submarine systems.
The APS lines are not diversely routed, thus the APS protection is used to protect against equipment
failures and not fiber failures. The A-SNCP protection is used to protect against fiber and/or path
failures. The A-SNCP paths can be routed over APS 1+1 or 1:N protection groups. See A-SNCP
Over APS on Page 55 for additional information.
An arbitrary SNCP connection is illustrated in Figure 2-15. In this illustration, an SNCP protected
service originating on a DSLAM is carried back to the 5430 Switch by way of a fully equipment and
facility diverse path. The traffic is carried on separate rings of differing bandwidth and no restriction
is made for which timeslot the traffic is carried in.
(Figure 2-15: A-SNCP protected service carried over separate OC-48/STM-16 and
OC-192/STM-64 rings.)
Note: SONET/SDH A-SNCP paths routed over embedded SONET/SDH protection groups on
an OTN interface are not supported.
• When A-SNCP protected paths are routed over one or more APS protected lines, the
protection switch criteria depend on the SONET/SDH path and line defects, respectively.
Because a SONET/SDH line defect also causes an SNCP path defect, both switch criteria are
met and both switch selectors switch. However, if the user provisions a hold-off timer on the A-
SNCP protection switch for a period longer than the APS restoration time, the A-SNCP does
not switch. If the APS cannot restore traffic within the hold-off period and the path defect is
still present, the A-SNCP selector switches.
• CTP test access is supported in Self Mode and Common Mode; however, CTP test access is
not supported when the CTP is in Protection Mode.
• TTP Connection level loopbacks and line level loopbacks are supported.
Network Synchronization
The 5400 Switch software supports Building Integrated Timing Supply (BITS), which uses
Synchronization Status Messaging (SSM) to assist in selecting the proper reference.
The 5400 Switch software supports either DS-1 or E1 BITS input references when the proper I/O
Panel is installed. The 5400 Switch software supports Mixed Timing Mode, in which both BITS and
line timing sources are used.
The 5400 Switch software provides hitless switchover support in the event of a CTM timing failure.
The 1:1 equipment redundancy ensures that a maintenance shutdown or failure of a CTM results in
smooth transition to the alternate CTM without hits on data traffic.
The 5400 Switch supports ITU timing requirements. The CTM supports E1 timing references. The
E1 signal contains a Frame Alignment Signal (FAS) frame alternated with a Non-FAS (NFAS) frame
with a user-selectable location (SA4, SA5, SA6, SA7, or SA8) for the SSM message encoding. The
section Line Timing with SSM (below) provides more information about line timing with SSM.
Incoming references are either T1/DS1 or E1.
Table 2-5 lists the values of E1 and SDH SSM formats as defined by G.704.
The 5400 Switch selects the reference with the highest quality SSM value as the synchronization
reference. When an SSM source changes the SSM value, the 5400 Switch software evaluates the
change to see if the changed SSM value is better than the currently selected reference. Based on
the value, the 5400 Switch can respond in one of the following ways:
• Switch the timing reference
• Enter holdover/free-run clock mode
• Recover from holdover/free-run clock mode
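The SSM-driven selection just described (pick the reference with the best advertised quality; fall into holdover/free-run when nothing usable remains) can be sketched as follows. The numeric quality ranking is an assumption for illustration, not the Table 2-5/2-7 encodings themselves:

```python
# SDH-style quality level names, ranked for the sketch (lower = better).
SSM_QUALITY_RANK = {"PRC": 0, "SSU-A": 1, "SSU-B": 2, "SEC": 3, "DNU": 9}

def select_reference(references):
    """references: dict of reference name -> received SSM quality level.
    Returns the highest-quality usable reference, or None to indicate
    that the node should enter holdover/free-run mode."""
    usable = {name: q for name, q in references.items() if q != "DNU"}
    if not usable:
        return None
    return min(usable, key=lambda name: SSM_QUALITY_RANK[usable[name]])
```

Re-running the selection whenever a received SSM value changes models how the switch evaluates whether the changed value is better than the currently selected reference.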
Table 2-7 lists the SSM translation from SDH definitions to SONET definitions.
are OTN links. Although a single link is shown between NEs and 5400 Switches, there might actually
be more than one fiber pair between them, and each may have a GCC session on it. The
connections between DCN and NEs (including 5400 Switches) may be WAN or LAN connections.
Note: When a 5400 Switch is connected to 4200 equipment, both the 5400 Switch and 4200
devices must be provisioned to use GCC0.
As Figure 2-16 shows, not all NEs have DCN connectivity. The IPoGCC feature gives the 5400
Switch the capability to establish and maintain a communication channel between OSS
(Management Station) and the NEs.
The far left configuration in Figure 2-16 shows the 5400 Switch connected to a gateway network
element. This application is aimed at managing subtended NEs that participate in a UPSR/SNCP
ring. The middle configuration shows a 5400 Switch that is itself part of a ring. Note that in the first
configuration the 5400 Switch does not actually participate in the ring. The third configuration shows
a redundant connection to END NE that has one connection to the DCN. Also it shows an END NE
being managed through a GNE in a linear configuration. This application is aimed at managing
subtended NEs that are not a part of the ring.
• The 5400 Switch IPoGCC feature is supported by 5400 Switch TSLM-12/OSLM-12 and TSLM-
48/OSLM-48 line modules.
• The 5400 Switch IPoGCC feature can easily manage subtended 4200 rings.
• The 5400 Switch IPoGCC feature assigns external management IP packet processing the
lowest possible priority, where possible.
The IPoGCC feature has the following operational requirements:
• There can be only one GCC channel per port.
• IP over GCC requires a DCN topology with unique IP addresses.
• IP addresses must be static.
• IP addresses of non-4200 GNEs must be on the same subnet.
• IPoGCC requires static routes in routers for all directly connected target 5400 Switches. In a
4200 ring, there must be both a static route for the 4200 Platform and a subnet mask.
• 4200 nodes on a ring must be within the same subnet as the 4200 GNE and be specified by a
subnet mask provisioned in the 5400 Switch.
• Subtended NEs may not be multi-homed.
• IP over GCC is subject to throttling, which limits the number of packets presented to a line
module to 2400 per second. This limit affects the amount of IPoGCC traffic that the GNE can
forward to the target NE.
• When IPoGCC is used to support Automatic Protection Switching (APS) for 1+1 configurations,
the protection group must be set to bi-directional. Uni-directional APS 1+1 groups are not
supported when using IPoGCC.
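The 2400 packets-per-second limit above can be sketched as a fixed-window rate limiter. The window mechanics here are an assumption for illustration; the text specifies only the per-line-module limit:

```python
import time

class PacketThrottle:
    """Sketch of the per-line-module IPoGCC throttle: at most `rate`
    packets are admitted per one-second window; the excess is dropped."""
    def __init__(self, rate=2400):
        self.rate = rate
        self.window_start = time.monotonic()
        self.count = 0

    def admit(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self.window_start >= 1.0:
            self.window_start, self.count = now, 0   # new one-second window
        if self.count < self.rate:
            self.count += 1
            return True
        return False                                 # over the limit: drop

throttle = PacketThrottle(rate=2400)
# Offer 3000 packets within a single window; only 2400 are admitted.
admitted = sum(throttle.admit(now=0.0) for _ in range(3000))
```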
(Figure: Node Manager connected through an Ethernet network to an OC-48 ring of NE-1
(10.20.25.11/24), NE-2 (10.20.25.12/24), NE-3 (10.20.25.13/24), and NE-4 (10.20.25.14/24).)
Traffic Flow:
Packets from EMS to an NE reach RouterA (RouterA has advertised the subnet to the IP
network). RouterA in turn sends an ARP request for the destination addresses. GNE1 and GNE2
are both configured to respond to ARP requests for all the NEs in Area1. The most recent response
to the ARP request determines the GNE selected as gateway.
Both GNEs present themselves as default gateways to the data network (by configuring default
route). This causes NEs to send packets destined to EMSs to the closest GNE. For example, the
EMS request to NE-5 can go through NE-1 or through NE-2.
Redundancy:
As shown in Figure 2-18, when one GNE fails the other GNE acts as the gateway. However, the
router connected to the GNEs (RouterA) does not know about the GNE failure and continues to
forward packets to the failed GNE. Routers are configured to age out the ARP entries after a certain
period (normally 30 minutes). Once an ARP entry is aged out, RouterA sends out a new request.
This causes the healthy GNE to respond to the ARP request, which in turn results in gateway
switchover. RouterA's ARP aging period can be set to a smaller value to reduce the switchover time.
Traffic Flow:
RouterA and RouterB advertise subnet 10.20.25.0/24 as reachable. Depending on the external IP
network's configuration, packets from the EMS to NEs can go through RouterA, RouterB, or both
(load balancing).
Both GNEs present themselves as default gateways to the data network (the user must create static
default routes). This causes NEs to choose the closest GNE as the gateway to the EMS. For
example, an EMS request to NE-5 can go through NE-1 while responses go through NE-2.
In some network configurations it may be desirable for all packets to go through one GNE in both
directions (shortest path bypass). For this reason, users can set a Weight when creating static
default routes. NEs select the default route with the lowest Weight even though it is not the shortest
path. NEs switch to the other default route when the preferred route is not reachable.
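The weighted default-route selection above can be sketched as follows. This is a hypothetical helper, not the NE's actual implementation; the dictionary fields are illustrative.

```python
# Sketch of weighted static default-route selection: an NE prefers the
# default route with the lowest Weight and falls back to another route
# only when the preferred route is unreachable.
def select_default_route(routes):
    """routes: list of dicts with 'gateway', 'weight', 'reachable' keys."""
    usable = [r for r in routes if r["reachable"]]
    if not usable:
        return None
    return min(usable, key=lambda r: r["weight"])

routes = [
    {"gateway": "GNE1", "weight": 10, "reachable": True},
    {"gateway": "GNE2", "weight": 20, "reachable": True},
]
assert select_default_route(routes)["gateway"] == "GNE1"  # lowest Weight wins
routes[0]["reachable"] = False                            # preferred route fails
assert select_default_route(routes)["gateway"] == "GNE2"  # fall back
```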
Redundancy:
As shown in Figure 2-19, RouterA and RouterB are directly (not routed through a hub or switch)
connected to GNE1 and GNE2, respectively. When a GNE reboots, the Ethernet connection to the
neighboring router goes down. The neighbor router then announces the network as unreachable
from its end. This causes the IP network to forward the packets to the other GNE.
This method works as long as the internal network is not disjoint. As shown in Figure 2-20, NE-2,
NE-3 and NE-5 are not reachable from GNE-1. However, RouterA will not be aware of this
information. This will cause the packets destined to NE-2, NE-3 and NE-5 to be dropped (provided
that RouterA was selected as the shortest path).
• The 5400 Switch allows the user to enable or disable password authentication on a
line-by-line basis.
• The 5400 Switch provides the ability for the user to manually configure a default route to an
external router on an external DCN network connected to one of its Ethernet ports. The 5400
Switch advertises itself to the rest of the OSPF network as one of the default gateways to the
DCN. A 5400 Switch with a connection to the external DCN network is called a Gateway NE
(GNE).
Security Features
The 5400 Switch software includes security enhancements at the NE level, including one-way
password encryption using an improved 3-DES encryption algorithm and an automated time-out on
inactive logon sessions.
The major security management features are listed below:
• User authentication - Ensures that people who are not granted explicit access to a specific
5410 Switch or 5430 Switch are prevented from performing or viewing anything on the switch.
• Security log - Provides a detailed record of all user activity including security breaches, invalid
user logons, and failed attempts.
• Security alarms/alerts
• User authentication attempts, whether successful or not
• Session terminations, whether initiated by the user, by an administrator or automatically due
to time out or failures
• Changes to the security settings
• Autonomous actions that may affect the operational continuity of the system, such as loss
and restoration of management communications, CTM switchover, initialization, software
release upgrade or reversion
• Multiple distinct role-based user authorization levels - At the node level, users are configured
with access to an individual 5410 Switch or 5430 Switch. This makes it possible for a person to
have access to only part of the entire network. The system supports up to 512 different user
logon identities; however, this does not imply that all 512 users can be connected to the 5410
Switch or 5430 Switch at the same time.
• Predefined access levels - At the node level, users are configured on a per-switch basis with
the appropriate authorization level(s) as defined in Table 2-8 on Page 67.
• Auto logoff - The 5400 Switch software monitors the activity of each logged on user. If a user is
inactive for a configurable period of time, the user is automatically logged off. Inactivity is
defined as absence of user-initiated modification (configure, create, and so forth) to the 5410
Switch or 5430 Switch.
• Administrators can change the inactive time-out. The default value is 60 minutes and the
valid range for user inactive time out is from 0 to 999 minutes. Setting the inactive time-out
to 0 disables auto logoff.
• When an inactive time out occurs, the 5400 Switch sends an event notification to the client
logged off, indicating the user has been logged off.
• Password deactivation/account lockout - After several unsuccessful logon attempts, the 5400
Switch software deactivates the user account for a certain period of time. To prevent hacking,
the 5410 Switch or 5430 Switch prevents user logon even if the right password is entered within
the deactivated period of time.
• The number of unsuccessful logon attempts is configurable by the administrator. The default
value is five, with a valid range of zero to ten (a value of zero means never disabled).
• The 5400 Switch software reports an event notification in response to a configured number
of unsuccessful logon attempts to the switch. The event notifies the Account Administrator
about possible intrusion, and all attempts are recorded in the audit log for documentation
and audit trail purposes. CLI logons are tracked separately from other logons.
• The 5400 Switch software disables the user account for a period configured by the
administrator. The valid range for user inactive time is from 0 to 30 minutes. If an inactive
time is set to zero, the password deactivation function is disabled. The default value is one
minute.
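The lockout behavior described in the bullets above can be sketched as a small state machine. This is a hypothetical illustration of the stated rules (default of five failed attempts, 0 meaning never disabled; default one-minute lockout, 0 disabling deactivation), not the 5400 software itself.

```python
# Hypothetical sketch of password deactivation/account lockout: after a
# configured number of failed logons, the account rejects even a correct
# password until the lockout period has elapsed.
class Account:
    def __init__(self, password, max_failures=5, lockout_minutes=1):
        self.password = password
        self.max_failures = max_failures      # 0 = account is never disabled
        self.lockout_minutes = lockout_minutes  # 0 = deactivation disabled
        self.failures = 0
        self.locked_until = None              # minutes; None = not locked

    def logon(self, password, now):
        if self.locked_until is not None and now < self.locked_until:
            return False          # locked: even the right password fails
        if password == self.password:
            self.failures = 0
            return True
        self.failures += 1
        if self.max_failures and self.failures >= self.max_failures:
            if self.lockout_minutes:
                self.locked_until = now + self.lockout_minutes
            self.failures = 0
        return False

acct = Account("s3cret")
for _ in range(5):
    acct.logon("wrong", now=0)    # five failures trigger the lockout
assert acct.logon("s3cret", now=0) is False   # correct password still rejected
assert acct.logon("s3cret", now=1) is True    # lockout period has elapsed
```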
Performance Management
Performance management consists primarily of performance monitoring and threshold alarming and
notification. The objective of performance management is to collect data continuously to evaluate
individual connection performance. Performance management provides Near End and Far End
performance monitoring data on OTN interfaces for Section Monitor (SM), Tandem Connection
Monitor (TCM), and Path Monitor (PM). For SONET/SDH, Near End and Far End PM is provided for
Section/RS, Line/MS, and Path. The 5400 Switch allows an operator to configure performance
thresholds for each circuit.
The 5400 Switch software provides Path PM data collection, TCAs, drop-side Path PM, and alarms
on the SNC drop-sides by default. Path PM collection and TCAs are disabled by default on
participating CTPs to improve PM data management.
Thresholds
Performance management thresholds are categorized into the following two groups:
• Current 15-minute threshold
• Current 24-hour threshold
Current 15-minute and current 24-hour thresholds are used for TCAs. Operators can configure
current 15-minute and current 24-hour thresholds for each section, line, and path layer statistic.
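A minimal sketch of the TCA check implied above follows; the key names and dictionary layout are illustrative assumptions, not the 5400 data model.

```python
# Sketch of threshold crossing alerts (TCAs): a TCA is raised when a
# statistic in the current 15-minute or current 24-hour bin reaches
# its configured threshold.
def check_tcas(counts, thresholds):
    """counts/thresholds map (statistic, bin) -> value; returns crossed keys."""
    return [key for key, value in counts.items()
            if key in thresholds and value >= thresholds[key]]

thresholds = {("line-ES", "15min"): 10, ("line-ES", "24hr"): 100}
counts = {("line-ES", "15min"): 12, ("line-ES", "24hr"): 40}
# Only the 15-minute bin has crossed its threshold.
assert check_tcas(counts, thresholds) == [("line-ES", "15min")]
```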
Statistics
Statistics collected as part of 5400 Switch performance monitoring functions include real-time and
historical statistics for the physical layer, section layer, line layer, and path.
Software Upgrade
5400 Switch software provides two types of in-service software upgrade that do not affect traffic.
The Failover Upgrade procedure is used when the Operating System (OS) and related software
need to be upgraded. The In Place Upgrade procedure is used when only application subsystem
software needs to be upgraded or when the OS in the new software is
backwards compatible with the earlier OS. The upgrade procedure used is dictated by the content
of the release and is determined automatically by the release image based on software compatibility
considerations.
The upgrade procedure performs the following functions:
• A new software image folder is created on CTM flash memory.
• The software image is downloaded from the FTP server to the new folder.
• The software image is distributed to the Secondary CTM and LMs and updated on the CTM
flash memory on those processors.
• The NE database is copied from the old software image folder and synced to the Secondary
CTM.
• The CTM boot image path is set to the new folder on all the modules.
• The old software image folder is left in place until deleted by the user.
The Primary CTM sequentially restarts its application subsystems and then restarts the subsystems
on all the LMs and on itself simultaneously. This upgrades the subsystem software on all the
cards.
The upgrade is always executed on the primary CTM regardless of where the primary CTM resides
(A-CTM1 or A-CTM2 for the 5410 Switch or A-CTM or C-CTM for the 5430 Switch).
The CTM supports three software images for optimal system performance. Old packages should be
removed by the administrator using the CLI.
Software packages are copied onto the standby CTM after the package has been unarchived and
validated on the Primary CTM. If the software has already been unarchived and validated and a
CTM switch occurs before the Switch to Upgrade command has been invoked, the upgrade cannot
be continued on the newly active CTM. However, the upgrade process can be restarted from the
newly active CTM.
[Figure: Main and Aux connections (MAIN 1, AUX 1, MAIN 2, AUX 2, WAN). (5430-09100)]
This chapter describes the features of the Ciena® 5400 Reconfigurable Switching System Base
Mesh Infrastructure Software Package that expand beyond the 5400 Switch Base Software. The
5400 Reconfigurable Switching System is hereinafter referred to as 5400 Switch.
The features of the 5400 Switch Base Mesh Infrastructure Software Package are described in terms
of Operation, Administration, Maintenance, and Provisioning (OAM&P) functionality. The feature
descriptions are organized into the four key network management functional areas of fault
management, configuration/inventory management, account and security management, and
performance management.
• Fault management (below) - In addition to the 5400 Switch Base Software Fault
Management features (Page 29), the 5400 Switch Base Mesh package contains Subnetwork
Connection (SNC) diagnostics (Page 75), which report the cause of SNC-related failures.
• Configuration and inventory management (below) - In addition to the 5400 Switch Base
Software Configuration and Inventory Management features (Page 43), the 5400 Switch Base
Mesh package has the following:
• Optical Signaling and Routing Protocol (OSRP on Page 76), which provides the
autodiscovery and autoprovisioning features that provision and configure the 5400 Switch
equipment, termination points, cross connects, and protection groups.
• Mesh restoration which re-routes the connections using any spare capacity within the
network, provided through FastMesh software (Page 86)
• Account and security management - The 5400 Switch Base Software Account and Security
Management features (Page 67) also apply to the 5400 Switch Base Mesh package.
• Performance management (Page 95) - In addition to the 5400 Switch Base Software
Performance Management features (Page 69), the 5400 Switch Base Mesh package
performs drop-side path performance monitoring.
Configuration/Inventory Management
The 5400 Switch Base Software Configuration and Inventory Management features (Page 43) also
apply to the 5400 Switch Mesh and Hybrid packages. The Mesh and Hybrid packages add OSRP
and FastMesh Software to the configuration management features.
OSRP
OSRP applies the principles of connection-oriented technology to the problems associated with
provisioning and protection/restoration switching. OSRP automates the necessary but
time-consuming operational tasks of connection provisioning and grooming in optical networks.
OSRP components include the following:
• An optical routing protocol that gathers and disseminates network topology and resource
usage information to all 5400 Switch NEs in a connected network.
• Call processing intelligence in the 5400 Switch software that computes (or receives from the
system) optimal routes for connections within the constraints of the parameters of the individual
connection.
• An optical signaling protocol to convey the connection information from 5400 Switch node to
5400 Switch node along the route furnished by call processing. The required cross connections
are established in each 5400 Switch node so that traffic can then flow from end to end.
OSRP allows networked 5400 Switch nodes to communicate, share topology information, and
calculate routes for individual connections. OSRP is a key technology component behind FastMesh
Software (Page 86) and automatic circuit provisioning capability. OSRP features include:
• Link Aggregation (below)
• Constraint-Based Routing (below)
• Network (Topology) Autodiscovery on Page 79
• End-to-End Provisioning on Page 80
• Reversion and Reversion Timer on Page 81
• Max Admin Weight on Page 82
• Manual Switch and Regroom on Page 82
• Retry Policy on Page 82
• Multiple Protection Bundle ID on Page 82
• Hooks for Global Optimization on Page 84
• OSRP Administration on Page 84
Link Aggregation
OTN and SONET/SDH link aggregation facilitates management of large-scale networks to improve
network scalability and performance. Multiple parallel lines between adjacent 5400 Switches/
CoreDirector Switches can be aggregated into a single link from the routing and signaling
perspective. The OTN and SONET/SDH link aggregation combines up to 20 OTN or SONET/SDH
OSRP links into a single object called a Link Termination Point (LTP) that a 5400 Switch and the
network treat as a single connection. Each link within the LTP starts at a common node and ends at
the same neighboring node. The LTP requires less time and resources to manage than the
corresponding links that the LTP includes.
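The LTP aggregation rules above (up to 20 links, all sharing the same pair of endpoints) can be sketched as follows. The class and field names are hypothetical, not the 5400 object model.

```python
# Sketch of the Link Termination Point aggregation rule: up to 20 OSRP
# links between the same pair of neighboring nodes combine into a single
# LTP that routing and signaling treat as one object.
class LinkTerminationPoint:
    MAX_LINKS = 20

    def __init__(self, local_node, remote_node):
        self.local_node = local_node
        self.remote_node = remote_node
        self.links = []

    def add_link(self, link):
        # Every member link must start and end at the LTP's two endpoints.
        if (link["a"], link["z"]) != (self.local_node, self.remote_node):
            raise ValueError("link endpoints do not match the LTP")
        if len(self.links) >= self.MAX_LINKS:
            raise ValueError("LTP already holds the maximum of 20 links")
        self.links.append(link)

ltp = LinkTerminationPoint("NE-1", "NE-2")
for i in range(20):
    ltp.add_link({"a": "NE-1", "z": "NE-2", "id": i})
assert len(ltp.links) == 20
try:
    ltp.add_link({"a": "NE-1", "z": "NE-2", "id": 20})  # 21st link rejected
except ValueError:
    pass
```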
Constraint-Based Routing
OSRP supports a constraint-based routing algorithm that is used in both the automated routing and
provisioning process and in end-to-end mesh restoration. The route is initially computed as part of
the connection provisioning process. A Dijkstra constraint-based routing algorithm attempts to route
a connection along a path of the least delay or administrative cost/weight, subject to a set of user-
defined constraints. Each OSRP routing link delay can be measured or can be assigned an
administrative weight ranging from 1 to 65,535, providing a wide range of values for supporting
various cost accounting strategies. All route computations attempt to minimize overall delay or cost,
subject to the other constraints described on the following pages.
Other routing constraints and connection parameters are considered as well, providing finer control
over how an optical connection is routed. These constraints include SNC connection type, user-
specified explicit routes, and reversion and reversion timer as explained in the following paragraphs.
Figure 3-1 shows a simple example of constraint-based optimal routing, considering administrative
cost. Even though a path with fewer hops was available, the administrative weights on the OSRP
links resulted in the connection taking a 3-hop path through the network.
[Figure 3-1: Constraint-based routing example between endpoints X and Y across NE 1 through
NE 5. OSRP Link A: delay 90 µs, admin cost 800; Link B: delay 70 µs, admin cost 500; Link C:
delay 30 µs, admin cost 1000; Link D: delay 30 µs, admin cost 1000; Link E: delay 40 µs, admin
cost 2000; Link F: delay 30 µs, admin cost 500. Minimum delay route NE 1 > NE 2 > NE 5:
delay 70 µs + 40 µs = 110 µs.]
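The least-cost computation can be sketched with a plain Dijkstra search over administrative weight. This is illustrative Python, not OSRP code; the topology and costs are assumptions read from the Figure 3-1 description (Link C is omitted because its exact endpoints are not fully recoverable from the figure).

```python
# Dijkstra over administrative weight (assumed topology from Figure 3-1:
# A = NE1-NE3, B = NE1-NE2, D = NE3-NE4, E = NE2-NE5, F = NE4-NE5).
import heapq

LINKS = {  # (node, node): administrative cost
    ("NE1", "NE3"): 800,   # Link A
    ("NE1", "NE2"): 500,   # Link B
    ("NE3", "NE4"): 1000,  # Link D
    ("NE2", "NE5"): 2000,  # Link E
    ("NE4", "NE5"): 500,   # Link F
}

def neighbors(node):
    for (a, z), cost in LINKS.items():
        if a == node:
            yield z, cost
        elif z == node:
            yield a, cost

def least_cost_path(src, dst):
    """Standard Dijkstra; admin weights range from 1 to 65,535."""
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, link_cost in neighbors(node):
            if nxt not in seen:
                heapq.heappush(heap, (cost + link_cost, nxt, path + [nxt]))
    return None

cost, path = least_cost_path("NE1", "NE5")
# The 3-hop path wins on cost despite the 2-hop alternative via NE2.
assert path == ["NE1", "NE3", "NE4", "NE5"] and cost == 2300
```

Run against the figure's values, the algorithm reproduces the manual's result: the 3-hop path (cost 2300) beats the 2-hop path via NE2 (cost 2500).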
Latency-based Routing
As an alternative to selecting a route based on administrative cost, the 5400 Switch can select the
route for an individual OTN SNC based on the total latency of a given path. When a user selects
latency-based routing for an SNC, it will be assigned the route with the least latency, rather than the
lowest administrative weight.
The 5400 Switch node calculates the latency of each available path as the sum of a link latency value
for each hop, plus 15µs of latency for each node including the originating and terminating nodes. To
measure the latency of individual links, hardware can automatically ping supported peer nodes with
an in-band, non-intrusive signal using a bit in the ODUk overhead. This automatic measurement is
repeated every 15 minutes, or when manually triggered by a user. If a node does not support latency
measurement, the 5400 Switch node provides a default value which the user can override.
The latency for an OSRP link is the maximum latency of all lines in the link. Link latency, not line
latency, is used for SNC path latency calculations.
An optional, user-configurable Maximum Latency restriction prevents an SNC from choosing a route
that exceeds a specified latency threshold. If no path can be found that meets the Maximum Latency
restriction, the SNC remains in the Starting State until a path can be found. A user can configure
Maximum Latency separately for each SNC.
Figure 3-2 illustrates the selection of a route from endpoint X to endpoint Y to satisfy a configured
maximum latency of 200µs.
• The route with the lowest administrative cost follows the path NE1:LinkA > NE3:LinkD >
NE4:LinkF > NE5, for a cost of 2300 but a delay of (90 + 30 + 30 + 4*15) µs, or 210 µs.
• The route with the least latency follows the path NE1:LinkB > NE2:LinkE > NE5, for an
administrative cost of 2500 but a delay of (70 + 40 + 3*15) µs, or 155 µs.
[Figure 3-2: Latency-based route selection between endpoints X and Y, with the same link delays
and administrative costs as Figure 3-1. Minimum delay route NE 1 > NE 2 > NE 5:
delay 70 + 40 = 110 µs. (5430-10053)]
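The path-latency formula stated above (sum of per-hop link latencies plus 15 µs for every node on the path, including the originating and terminating nodes) can be checked with a short sketch. The helper name is illustrative; the link latencies follow the Figure 3-2 values.

```python
# Sketch of the stated path-latency formula: link latencies summed over
# the hops, plus 15 µs per node including the end nodes.
NODE_LATENCY_US = 15

def path_latency(link_latencies_us, node_count):
    return sum(link_latencies_us) + NODE_LATENCY_US * node_count

# Lowest-admin-cost route NE1 > NE3 > NE4 > NE5: links A, D, F; 4 nodes.
assert path_latency([90, 30, 30], node_count=4) == 210   # exceeds the 200 µs cap
# Least-latency route NE1 > NE2 > NE5: links B, E; 3 nodes.
assert path_latency([70, 40], node_count=3) == 155       # meets the cap
```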
End-to-End Provisioning
Automatic end-to-end circuit provisioning allows optical capacity to be delivered simply and quickly
across a network of 5400 Switch nodes. The 5400 Switch software automatic circuit provisioning
model creates circuits based on ingress end point, egress end point, capacity requirements, and
optional service parameters. This model saves significant time and effort over the traditional model
of manually configuring each system along a particular circuit path.
Note: The maximum range for SNCs is 19 hops or 20 nodes; that is, an SNC can traverse 20
nodes, including originating and terminating nodes.
Connections originate at an originating node where an optimal route is computed. After a connection
is routed, a connection is provisioned using a signaling protocol and cross connects are made along
the route. Figure 3-3 illustrates how a connection is configured and provisioned. This example
shows a request to connect Endpoint X and Endpoint Y. The connection is configured at 5400 Switch
node B. OSRP computes the route and automatically creates cross connects on nodes B, C, and E
as it sends traffic to the destination port.
[Figure 3-3: End-to-end provisioning example. A connection from endpoint X to endpoint Y is
configured at node B; OSRP computes the route and creates cross connects on nodes B, C, and E
of the five-node network A through E. (5430-09130)]
Max Delay
For latency based routing, Max Delay enables the user to assign a numerical value to an SNC,
indicating the maximum acceptable total delay of paths the connection can be routed on. Any
attempt at establishing the SNC on a path with a total delay over the assigned maximum is rejected.
An exception is made for the case of an SNC with assigned working and protect user-defined
Designated Transit Lists (DTLs) to be used exclusively. In this case it is assumed that the user is
forcing certain paths with disregard of Max Delay; therefore, setting a Max Delay is not permitted.
Retry Policy
During the provisioning process, it is possible that capacity is not available in the network or that the
routing constraints cannot be met at that current time. In these situations, OSRP continuously
attempts to establish or re-establish the connection, using an exponential back-off timer that starts
at 1 second and ranges up to 30 seconds. The connection is continuously attempted until the
connection is successfully provisioned or until the operation is cancelled.
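The retry back-off described above can be sketched as follows. The text states only that the timer is exponential, starting at 1 second and ranging up to 30 seconds; the doubling factor used here is an assumption.

```python
# Sketch of the OSRP retry back-off: an exponential timer starting at
# 1 second and capped at 30 seconds between connection attempts.
# The doubling factor is an assumption; the manual only states the range.
def backoff_delays(attempts, start=1, cap=30):
    delay, delays = start, []
    for _ in range(attempts):
        delays.append(delay)
        delay = min(delay * 2, cap)   # grow each retry, capped at 30 s
    return delays

assert backoff_delays(7) == [1, 2, 4, 8, 16, 30, 30]
```

Attempts continue indefinitely at the 30-second cap until the connection is provisioned or the operation is cancelled.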
• Working DTL - The Working DTL is used during SNC creation and during regroom operations
to determine or influence the home route of an SNC. A DTL list must contain at least a Working
DTL, which is identified by its position as the first entry in the DTL list.
• Hierarchical Protect DTL - Hierarchical Protect DTLs represent a hierarchy or prioritized list of
routes to try in the event of a mesh restoration. In the event of a mesh restoration, the SNC
attempts each valid hierarchical protect DTL from the list looking for a path to restore on.
Hierarchical Protect DTLs defined in a given DTL list can range from 0 through 19. Both
Hierarchical and Associated Hop DTLs cannot be combined in a DTL list.
• Associated Hop Protect DTL - Associated Hop Protect DTLs are used to identify a restoration
path which is relative to a particular link failure within the network. An Associated Hop DTL
combines a defined restoration path, in the form of a DTL, with a specific hop in the network. In
the event of a mesh restoration, the release message identifies the blocked link in the network.
The DTL list is consulted for a match between this blocked link and one of the Associated Hop
DTLs in the list. Given a match, an attempt is made to restore the SNC using the specific
Associated Hop Protect DTL. Associated Hop Protect DTLs defined in a DTL list range from 0
through 19. Both Hierarchical and Associated Hop DTLs cannot be combined in a DTL list.
• Manual Switch to Protect - The Manual Switch to Protect DTL is used to identify a user
specified route for an SNC during manual switch to protect operations. This allows the user to
specify a route which is diverse from a specific link or multiple links in the network. The manual
switch to protect DTL is identified by a type field assigned to the DTL. Only a single Manual
Switch to Protect DTL may be specified in a DTL set.
• Pre-Computed Protect DTL - The Pre-Computed Protect DTL represents the least cost path
which is bundle diverse from the Current Route DTL of a given SNC. If an SNC is not currently
on its Home Route and the Home Route is available, the Pre-Computed Protect DTL is
assigned the route of the Home Route. The Pre-Computed Protect DTL is added to the DTL list
of all high priority SNCs and is updated by the 5400 Switch software every 30 seconds. For an
SNC which fails while on its Home Route, the Pre-Computed DTL is placed last in the DTL list
hierarchy. For an SNC which fails while on a route other than its Home Route, the Pre-
Computed DTL is placed first in the DTL list hierarchy. A single Pre-Computed DTL is
calculated for high priority SNCs and is used in conjunction with both Hierarchical and
Associated Hop Protect DTLs.
• Current Route DTL - The Current Route DTL identifies the path which an SNC is currently
routed on.
• Home Route DTL - The Home Route DTL identifies the path where an SNC was originally
created and represents the path to which it will try to revert.
The use of the Exclusive flag and the contents of the DTL Set identifies three types of applications.
These three applications are referred to here as Exclusive Home and Protect, Exclusive Home, and
Preferred routes.
Note: Regroom is possible. However, the SNC will regroom only to a user specified
Working-DTL route.
Note: Regroom is possible. If the SNC is currently not on the user specified working-DTL and
this route is available, then the SNC regrooms to user specified working-DTL route. If
the user specified working-DTL is not available, then the SNC regrooms to any better
route available through routing.
Note: If a Working DTL is specified for an Exclusive-DTL SNC that is mesh restorable, then
the SNC attempts establishment on only that DTL. It attempts restoration on
automatically computed and/or Associated Hop DTLs. To retain the same functionality
of an Exclusive-DTL SNC with only one DTL specified, the user provisions a permanent
SNC (P-SNC).
OSRP Administration
OSRP administration consists of the following areas:
• Switch Name - The switch name is a user-configurable, alphanumeric string of up to 63
characters. Contained within the OSRP routing message is the Information Group (IG). The IG
is particularly useful for the purpose of switch naming; for example, all switches residing in Los
Angeles could have user-friendly, grouped names beginning with LA.
• OSRP Node ID - The OSRP node ID is a 22-byte unique identifier of the node represented as a
managed object name. Generally, Node IDs are used internally for the routing portion of the
protocol. Node IDs are especially useful to the user when looking for routing SNCs using
routing profiles.
• IP Address - The Internet Protocol (IP) address is a 32-bit address defined by the IP, which is
represented in dotted decimal notation.
• Subnet Mask Address - The subnet mask address is the portion of a network that shares a
network address with other subnets, but is distinguished by a unique subnet number.
• Link Termination Point Label - The link termination point label is a user-configurable,
alphanumeric string of up to 63 characters that identifies the originating link end point.
• Bundle ID - Bundle ID is the identification used to bundle OSRP links that are likely to be
impacted by a single span failure. Bundle ID is used to determine a physically diverse
protection path. A connection’s diverse protection path attempts to avoid any OSRP links that
share the same Bundle ID as the working/active route. This is a read/write field. The default is
zero, which indicates that a link does not belong to any protection bundle. Multiple Protection
Bundle ID on Page 82 provides additional information.
• Administrative Weight - The administrative weight is a numerical value of cost assigned to an
OSRP link for routing purposes and set by the network operator. It is used to indicate the
relative desirability of using a link or node for a network operator's purpose. The administrative
weight applies from the advertising node to the remote end of the OSRP entity, reachable
address, or transit network for the specified service categories. The higher the administrative
weight, the higher the cost of routing over that link. The range is 1 to 65,535; the default is 5040. When
the administrative weight is updated on a remote node, the eventual discovery of that change
causes this attribute to automatically update on the local node. The attribute change generates
a notification only on the node where the weight was changed.
Although the framer can be programmed to use any specific bytes in the TOH, network
management simplifies this by creating four user-configurable groups of bytes and enabling
two of these groups for in-band OSRP.
When multiple wavelengths or facilities exist between two neighboring 5400 Switch nodes, OSRP
uses a round-robin algorithm (Glossary on Page 229 provides more information) on the GCC or
DCC. This provides redundancy across the span in the event of a facility failure.
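The round-robin distribution across parallel GCC/DCC channels can be sketched as below. This is a hypothetical helper illustrating round robin with failover, not the OSRP implementation.

```python
# Sketch of round-robin distribution of OSRP control traffic across the
# parallel GCC/DCC channels between two neighbors, skipping failed
# facilities so the span survives a single facility failure.
import itertools

def round_robin_sender(channels):
    cycle = itertools.cycle(channels)
    def send(packet):
        for _ in range(len(channels)):
            ch = next(cycle)
            if ch["up"]:
                return ch["name"]      # deliver on the next healthy channel
        return None                    # every facility on the span is down
    return send

chans = [{"name": "GCC-1", "up": True}, {"name": "GCC-2", "up": True}]
send = round_robin_sender(chans)
assert [send(p) for p in range(4)] == ["GCC-1", "GCC-2", "GCC-1", "GCC-2"]
chans[0]["up"] = False                 # facility failure on GCC-1
assert send("x") == "GCC-2"            # traffic continues on the survivor
```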
FastMesh Software
Transport networks rely on mesh restoration to automatically re-route connections after a failure.
Unlike linear Protection Switching systems that rely on dedicated, redundant capacity, mesh
restoration re-routes the connections using any spare capacity within the network. A failed
connection is re-routed over the next best available path, operating on unused or reserved
bandwidth. By providing fast, efficient, end-to-end service restoration, mesh restoration allows the
carriers to maximize bandwidth efficiency. The connection-level mesh protection provided by the
5400 Switch software is called FastMesh.
FastMesh automatically re-routes individual connections by priority, giving the more critical
connections preferential treatment. During catastrophic failure in which not all connections can
survive, mesh restoration provides graceful service degradation. FastMesh re-routes circuits that
are still affected by a failure using any suitable path available in a 5400 Switch/CoreDirector Switch
network. FastMesh restoration also provides low-priority traffic restoration after the low-priority traffic
gets bumped from the protection bandwidth during a linear or ring protection switching event.
Reversion
FastMesh restoration can be configured as revertive in behavior. After a working path is calculated
and provisioned for a connection, that path behaves as an exclusive route. Multiple failures in the
network may cause the connection to reroute along alternate mesh paths, but the original
provisioned working path remains the same. This enables the 5400 Switch node to revert the
connection back to the original working path.
Bandwidth Pre-allocation
Bandwidth Pre-allocation enhances Subnetwork Connection (SNC) setup performance by allowing
bandwidth to be reserved across the path from originating to terminating node to ensure that the
bandwidth will not be taken by another circuit being provisioned or being mesh-restored.
Each interface reserves the amount of bandwidth (not actual time slots) during the setup phase. The
actual time slot selection and switch programming still occur during the connect phase. If multiple
setup messages are trying to oversubscribe a link, most messages get rejected as soon as the first
few setup messages have exhausted the bandwidth. Bandwidth contention is solved using a high-
low bandwidth allocation strategy based on the node ID.
During the setup phase, provisional connections are set up, and during the connect phase, the
provisional connections are committed to make real connections. The provisional connections act
as a placeholder to help fast-release. Provisional connections do not have any associated time slots.
Oversubscribing the available bandwidth with provisional connections is allowed as long as the total
bandwidth allocated to real connections does not exceed the line size.
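The two-phase allocation above can be sketched as a small model: provisional connections reserve bandwidth (no time slots) during setup and may oversubscribe, while committing a real connection must fit within the line size. Class and method names are illustrative.

```python
# Sketch of bandwidth pre-allocation: setup-phase reservations are
# placeholders that may oversubscribe the link; the connect phase commits
# only while real connections stay within the line size.
class Link:
    def __init__(self, line_size):
        self.line_size = line_size
        self.real = 0          # bandwidth committed to real connections
        self.provisional = []  # placeholder reservations from setup phase

    def setup(self, bw):
        # Setup phase: provisional reservations may exceed the line size.
        self.provisional.append(bw)
        return True

    def connect(self, bw):
        # Connect phase: commit only if real bandwidth fits the line size.
        if self.real + bw > self.line_size:
            return False
        self.provisional.remove(bw)
        self.real += bw
        return True

link = Link(line_size=100)
link.setup(60)
link.setup(60)                          # provisionally oversubscribed (120 > 100)
assert link.connect(60) is True         # first commit fits
assert link.connect(60) is False        # second would exceed the line size
```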
NE Configuration
Mesh Restoration on Signal Degrade can be provisioned at the node level at the OSRP Defaults
Node Manager screen. This feature is enabled or disabled, and the Raise/Clear seconds are
provisioned at the node level. The default values for Raise and Clear are 30 seconds and 10 minutes
respectively. This feature can either be enabled or disabled for the entire node. The default setting
is disabled.
The node-level settings are inherited by newly created TTPs. These values are persisted and
remain available across an NE Reset, NE Reboot, and NE Upgrade.
All selection decisions are made by the drop port without regard to the opposite direction's path
around the ring. In other words, all selection decisions are unidirectional. Because of this behavior,
no communication protocol is needed to pass information among members of the ring. Also,
because traffic is bridged at the entry node on both rings, the full capacity of one ring protects the
full capacity of the other ring. Thus, there are no time slots available to carry extra traffic.
UPSR/SNCP Capabilities
• The 5430 Switch implementation of UPSR/SNCP is interoperable with all equipment in
compliance with Telcordia GR-1400/ITU G.841.
• All protection switches occur within 50 ms following detection.
• Line rates from 155 Mb/s and 622 Mb/s up to 10 Gb/s are supported.
• A single UPSR/SNCP ring can contain up to 16 nodes.
• A single NE can support up to 128 UPSR/SNCP rings.
• Ring interconnections within the 5430 Switch are supported.
• All UPSR/SNCP rings require that the east transmit and receive ports reside on the same
Optical Module (OM) and that the west transmit and receive ports reside on the same OM.
This is typical of normal configurations.
• There is no limitation to the number of add/drop ports on a UPSR/SNCP ring. In an OC-192/
STM-64 ring, it is possible to create 192 different add/drop connections to 192 separate add/
drop ports.
• UPSR/SNCP paths are not modified or terminated.
• UPSR/SNCP connection and switching granularity is at the STS-1 or VC-3 level.
• Time slots for a particular connection are the same on both the working and protect path
around the ring for all nodes in the ring. In other words, only cross connects connecting the
same set of time slots are allowed.
• Basic bridging and selecting is supported.
• Pass-through connections are supported.
UPSR/SNCP Requirements
• Auto-cross connects are enabled with nine primary SMs for a 5430 Switch. Normal UPSR/
SNCP rings are supported by the normal SM protection scheme.
• Drop-terminated connections are not possible because the 5430 Switch is currently not Path
Terminating Equipment (PTE).
• Orderwire channels are not supported.
• Connections cannot be dual homed or broadcast (as defined by GR-1400/ITU G.841, meaning
that subtended NEs cannot have their East and West ports connected to different 5430
Switches).
• Connections must not already exist when creating new UPSR/SNCP groups.
• The protected/unprotected state of the CTP determines whether the connection type is a
required field for entry.
• Only bridged and selected CTPs are specified as protected; all other CTPs are unprotected.
This field is write-enabled until a cross connect is made.
• The CTP provisioning mechanism includes Signal Fail (SF) and Signal Degrade (SD)
integration and decay intervals similar to the AIS-P, LOP-P, and RDI-P thresholds.
• SF-P and SF-D both have an alarm inhibit flag similar to the existing AIS-P and LOP-P alarms.
The default value is Inhibited.
• SNCs are not provisionable on a UPSR/SNCP ring because of the complexities of OSRP.
• SNCs are able to originate or terminate to a ring port if the path is being dropped. No new SNC
provisioning information is required.
Facility Criteria
Configuring SF-P and SD-P thresholds is independent of UPSR/SNCP. Thresholds are treated
similarly to the line level configuration of SF-L and SD-L on the Trail Termination Point (TTP), but
are configured on a CTP basis.
Order-of-magnitude levels of SD-P are applied in making automatic switching decisions.
All line or MS-level defects will be treated as a failure on every path with the exception of line Bit
Error Rate (BER). SF-L and SD-L will not be treated as SF conditions because SF-P and SD-P are
more accurate and precise.
The following switching criteria are supported for UPSR SONET:
• AIS-P, LOP-P, UNEQ-P
• SF-P
• PDI-P
• SD-P
The following switching criteria are supported for SDH/SNCP:
• SNC/I:
• AIS-P, LOP-P
• SNC/N:
• AIS-P, LOP-P, UNEQ-P, TIM-P
• SF-P
• SD-P
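The criteria above can be combined into a simple classifier sketch. The defect names come from the lists above; the priority ordering and the BER thresholds (1e-3 for SF-P, 1e-6 for SD-P) are illustrative assumptions, not documented defaults.

```python
# Hypothetical sketch of per-path switching-criteria evaluation.

HARD_DEFECTS = {"AIS-P", "LOP-P", "UNEQ-P", "PDI-P"}

def path_condition(defects: set, ber: float,
                   sf_threshold: float = 1e-3,
                   sd_threshold: float = 1e-6) -> str:
    """Classify a path as SF, SD, or OK from its defects and BER."""
    if defects & HARD_DEFECTS or ber >= sf_threshold:
        return "SF"        # hard defects and SF-P BER are signal fail
    if ber >= sd_threshold:
        return "SD"        # SD-P: degraded but not failed
    return "OK"

assert path_condition({"AIS-P"}, 0.0) == "SF"
assert path_condition(set(), 1e-5) == "SD"
assert path_condition(set(), 1e-9) == "OK"
```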
Equipment Criteria
CFP/SFP/XFP and line module pulls and failures result in a protection switch in both directions.
Working and protection ports should be on different line modules for better equipment protection, but
this separation is not required.
Administrative locks of the Optical Module also result in a protection switch in both directions (a
bidirectional switch).
If the system detects that a connection is being made from a UPSR/SNCP port to an add/drop port,
a bridge and selection is made. For all connections added to the UPSR/SNCP ring, a bridge is made
from the add/drop port to both the east and west UPSR/SNCP ports. For all connections dropped off
the UPSR/SNCP ring, a selection must be made to specify the east or west path. The cross connect
defines which time slots are bridged and which time slots are selected as well as the concatenation
information for both the east and west ports. The time slots being bridged are identical in both
directions.
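The bridge-and-select rule above can be sketched as follows; the data shapes and function names are illustrative only, not the 5430 cross-connect model.

```python
# Hypothetical sketch: adds are bridged identically to east and west;
# drops name one path (east or west) explicitly.

def add_to_ring(timeslot: int) -> dict:
    """Bridge the same time slot from the add/drop port to both rings."""
    return {"east": timeslot, "west": timeslot}

def drop_from_ring(timeslot: int, selected: str) -> dict:
    """Select the east or west path for a time slot dropped off the ring."""
    assert selected in ("east", "west")
    return {"port": selected, "timeslot": timeslot}

bridge = add_to_ring(5)
assert bridge["east"] == bridge["west"] == 5   # identical both ways
assert drop_from_ring(5, "west")["port"] == "west"
```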
Figures: UPSR/SNCP add/drop bridge and select, showing the add/drop port and the west UPSR port (drawings 5430-09118, 5430-09120, and 5430-09121).
Note: Historical alarm does not show the correct selectors after SNCP auto-switch on an SNCP
MR. When the SNCP work leg mesh restores, the work path is deleted and a new work
leg is created. In this process, the PU is disabled and then is re-enabled. When the PU
is enabled, monitoring starts on the previous and new active paths. Leg removal does
not count towards the previous path update.
• Path PM data collection is enabled by default for all CTPs for 15-minute periods.
• TCAs are enabled by default for SNC CTP originating and terminating drop-side Far End Path.
• SNC reroutes do not affect the originating drop-side CTP Path PM collection and Path TCA
settings.
• When an SNC reroutes, the terminating drop-side CTP Path PM collection and Path TCA
settings reset to their respective default values (enabled).
• SNC reroutes do not clear the originating and terminating CTPs historical PM data unless PM
collection was explicitly turned off prior to the reroute.
• The originating drop-side CTPs PM and TCA settings are retained when upgrading from a prior
software release.
• The Failure Count (FC) path threshold setting default is seven to provide earlier problem
indications.
Overview
This chapter provides descriptions of the 5410 Reconfigurable Switching System chassis (5410
Switch on Page 112) and the 5430 Reconfigurable Switching System system rack and chassis
(5430 Switch, below). The rack and chassis provide power distribution, lighted system operation
indicators, and backplane communication for the 5400 Switch modules. The hardware for the 5430
Switch is described first, followed by descriptions of the 5410 Switch hardware. The 5400
Reconfigurable Switching System is hereinafter referred to as 5400 Switch.
The removable system modules are described in Chapter 5, 5400 Switch Hardware Modules on
Page 123.
5430 Switch
The 5430 Switch assembly consists of a 7-foot rack (Figure 4-1) with various removable
assemblies. Removable metal doors and a fixed metal door on the front protect system components
and optical cabling.
The system rack provides a mechanical structure to support a power distribution unit (PDU) shelf,
fan shelves, and shelves for the system modules. The rack also provides a display panel, electrical
backplane, and Input/Output (I/O) connectors for interfacing the 5430 Switch to other network
elements. The system rack has:
• 5430 Switch Shelves on Page 98
• 5430 Switch Power Distribution Unit on Page 101
• Display Panel on Page 104
• 5430 Switch Connectors on Page 106
• Fan Tray Assembly on Page 106
• Input/Output Panel on Page 109
• Backplane on Page 112
Figures: 5430 Switch rack, front view (PDU shelf assembly, display panel, upper fan shelf, Shelf A line module/CTM shelf, Shelf C line module/CTM shelf, lower fan shelf) and rear view (fan units CFUA-1 through CFUA-5 and CFUB-1 through CFUB-5, A-CTM, Shelf A, Shelf B, C-CTM, Shelf C, B-side, PDU-A).
There are two PDU configurations (Figure 4-4) available for the 5430 Switch:
• Fused Disconnect
• Breaker Disconnect
The PDU provides monitored protection and distribution of circuits from two -48 VDC facility power
sources (Power A and Power B). The Power A source provides eight feeds to PDU-A. The Power B
source provides eight feeds to PDU-B. The fused version PDU is designed for use with 60A Telecom
TLS (or TPS) fuses. The fuses are mounted in compliant panel mounted holders for easy access.
The circuit breaker version PDU uses 60A magnetic type DC circuit breakers. PDU-A and PDU-B
are each capable of supporting the worst-case full load current of 384A.
The eight Power A feeds connect to the rear of PDU-A and the eight Power B feeds connect to the
rear of PDU-B. These power feeds are connected to the stud terminal blocks located on the rear of
the PDU. Protective covers are installed over the terminal blocks.
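As a quick arithmetic check on the figures above (eight 60 A feeds per PDU against a 384 A worst-case full load); the even per-feed split is an assumption for illustration:

```python
# Capacity check using only values stated in the text.

feeds_per_pdu = 8
feed_rating_a = 60            # 60 A fuse or breaker per feed
worst_case_load_a = 384       # worst-case full load per PDU side

total_capacity_a = feeds_per_pdu * feed_rating_a
assert total_capacity_a == 480 and total_capacity_a >= worst_case_load_a
# Spread evenly, the worst case is 48 A per feed, inside the 60 A rating.
assert worst_case_load_a / feeds_per_pdu == 48.0
```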
Feed jumper kits are available for the 5430 Switch partial power configuration. The partial power
configuration can be used where:
• the full capacity of the 5430 Switch is not initially required (the system can be upgraded later to full power): use the dual feed kit if only up to 0.48 Tbps of switching is required, or the quad feed kit if only up to 1.4 Tbps of switching is required
• the site cannot supply the full number of power feeds
• the site is charged on a per-power-feed basis
• lab installations
The fully configured 5430 Switch requires eight power feeds to each PDU for a system total of
sixteen power feeds. Installing power feed jumpers reduces the number of total power feeds from
sixteen feeds to eight feeds or four feeds. The Ordering Guide on Page 161 provides ordering
information.
The PDU is designed with soft start circuitry to limit inrush current upon application of -48 volts. This
circuitry allows the filter capacitors to charge slowly for approximately five seconds after initial power
is applied. After power is removed, it takes less than one minute for the capacitors in the panel to
discharge and the soft-start circuitry to completely reset.
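An illustrative RC-style model of that soft-start interval is sketched below. The 1 s time constant is chosen only so the capacitors are essentially fully charged at five seconds; actual component values are not documented here.

```python
import math

def charge_fraction(t_s: float, tau_s: float = 1.0) -> float:
    """Fraction of final capacitor voltage reached after t_s seconds."""
    return 1.0 - math.exp(-t_s / tau_s)

assert charge_fraction(5.0) > 0.99   # essentially charged at five seconds
assert charge_fraction(0.5) < 0.5    # still charging shortly after power-on
```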
-48 VDC facility power feeds are filtered and routed to the equipment shelves through connectors by
way of two power harnesses for PDU-A and two power harnesses for PDU-B.
The PDU functions include the following:
• Redundant over-current protection for 16 feed circuits (eight A side (PDU-A), as well as eight B
side (PDU-B)) using magnetic circuit breakers or field replaceable fuses.
• System level over voltage transient protection from the CO environment.
• System level surge current protection from the CO environment.
• System level common mode filtering to control emission levels and to ensure conducted
immunity from the CO environment.
• System level three pole differential mode filtering to control emission levels from the 5430
Switch as well as to ensure conducted emission immunity for the rack.
• The PDU works with subsystem modules to filter and meet all conducted emission compliance
requirements.
• Soft-start feature to limit the current surge otherwise caused by the PDU differential mode filter
capacitor charging circuits.
• The PDU is designed with two independent filter modules that are hot swappable and each
module is capable of supplying power to all modules and fan tray assemblies in the 5430
Switch.
• The PDU consists of two independent power/filter modules that mount in a tray from the rear of
the 5430 Switch and plug into cable assemblies that attach to the two fuse and alarm indicator
fan backplanes.
Power Distribution
The PDU receives -48 VDC power inputs (Power A and B) from the facility Battery Distribution Fuse
Bay (BDFB). The DC power wiring attaches to the PDU terminal blocks on the rear of the 5430
Switch rack/chassis (Figure 4-4). The PDU has safety grounds, which are isolated from the -48 VDC
and returns.
The BDFB Power A provides power to the PDU-A inputs, and Power B provides power to the
PDU-B inputs. Each input (PDU-A feed 1 through 8 and PDU-B feed 1 through 8) goes through a
60-ampere circuit breaker or fuse in the PDU. The circuit breakers or fuses function as on/off
switches and protect the source from overcurrent conditions in the 5430 Switch. The CTMs sense
and report a tripped circuit breaker.
Either the PDU-A inputs (A1 - A8) or the B inputs (B1 - B8) can supply all required power to a fully
populated 5430 Switch. Table B-7, 5430 Switch TDM Module Power Specifications on Page 210
provides the circuit pack and module power requirements. Table B-9, 5430 Switch Power Feed
Slot Matrix on Page 211 describes the power, current, and quantity/type of module each feed
supplies.
In Table B-9, 5430 Switch Power Feed Slot Matrix on Page 211, LM slot numbers A1 through A15
are for the upper LM shelf. LM slot numbers C1 through C15 are for the lower LM shelf. Fan tray
assemblies CFUA-1 through CFUA-5 are the upper fan trays and fan tray assemblies CFUB-1
through CFUB-5 are the lower fan tray assemblies.
Each CTM has A-battery and B-battery voltage sense circuits for feeds A1, B1, A8, and B8. The
voltage sensing function is based on the slot position in the 5430 Switch and therefore, is
independent of whether a control and timing module is in the primary or secondary role in the switch.
When one of the voltage sense circuits is active, a PDU Alarm is generated.
The -48 VDC inputs and returns are ORed together on the system modules; therefore, if either feed
fails, power is supplied automatically to the module by the other feed.
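The ORed dual-feed behavior reduces to a simple rule, sketched below with illustrative names: either healthy feed powers the module, so a single feed failure is transparent.

```python
def module_powered(feed_a_ok: bool, feed_b_ok: bool) -> bool:
    """ORed -48 VDC inputs: either healthy feed powers the module."""
    return feed_a_ok or feed_b_ok

assert module_powered(True, True)
assert module_powered(False, True)    # feed A fails, feed B carries it
assert module_powered(True, False)    # feed B fails, feed A carries it
assert not module_powered(False, False)
```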
On each module (PDU-A and PDU-B), a power in-rush circuit (hot-swap circuit) allows the -48 VDC
to ramp up slowly when a module is inserted. This prevents current surge problems on the -48 VDC
bus due to excessive in-rush currents.
Display Panel
The display module on the top front of the 5430 Switch provides system-level error indications, PDU
feed circuit status indications, internal PDU fault indications, and an alarm shutoff and indicator test.
The display panel displays node alarm status (critical, major, minor), feed circuit status, and PDU
internal circuit fault using LEDs in the front PDU tray mounted display module.
When the Alarm Cutoff (ACO) pushbutton is held for three seconds or longer, it toggles the display panel bicolored LEDs between red and green
every second. The pushbutton lights blue when the alarm cutoff is active. Activation of this switch
does not affect the visual alarms. A new alarm condition reactivates the proper audible alarm,
regardless of the state of the ACO switch, and turns off a lit ACO indicator light. The new alarm can
be silenced by pushing the button again.
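The ACO behavior described above can be captured as a small state sketch; the class and attribute names are illustrative, not part of the 5430 software.

```python
# Pressing the button silences the audible alarm and lights the blue
# ACO indicator; a new alarm re-sounds the audible alarm and clears
# the indicator (visual alarms are unaffected and not modeled here).

class AcoPanel:
    def __init__(self):
        self.audible = False    # audible alarm sounding
        self.aco_lit = False    # blue ACO indicator lit

    def raise_alarm(self):
        # A new alarm reactivates the audible alarm regardless of the
        # ACO state and turns off a lit ACO indicator.
        self.audible = True
        self.aco_lit = False

    def press_aco(self):
        if self.audible:
            self.audible = False
            self.aco_lit = True

panel = AcoPanel()
panel.raise_alarm()
panel.press_aco()
assert not panel.audible and panel.aco_lit
panel.raise_alarm()                    # new alarm while silenced
assert panel.audible and not panel.aco_lit
```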
The indicators are activated by the control and timing modules. Except for power failure or card
removal, LED operation is controlled by software; hardware on the CTM controls the feed
indicators for feeds 1 and 8.
Table 4-1 describes the indicators on the PDU.
Each fan tray assembly contains four fan impellers, a backplane interface card and a fan interface
card. The backplane interface card provides input power fusing, transient suppression, hot-swap/
inrush control, filtering, fan speed control, and power feed alarm processing functions. The fan
interface card provides power conversion circuitry, the fan speed control circuitry, and the fan fault
response and alarm processing circuitry. The fan interface card contains two independent sets of
power conversion and fan control circuitry, with one set supporting impellers one and three and the
other set supporting impellers two and four. This interleaving of impeller control is done for fault
tolerance; in the event of a power circuit or controller failure, one front and one rear impeller would
still be operational within the fan tray assembly.
The CTM controls fan speed based upon input from temperature sensors located on all modules in
the chassis. In the event of a fan tray assembly failure or removal, the speed of the other fan
assemblies can increase to compensate as required. A rotation sensor in the fan enables the CTMs
to detect fan failure. As a fail-safe feature, fans enter an autonomous mode when control input is not
received from the active CTM. In this mode, fan speed increases while the fan trays detect an over-temperature condition. Fan speed returns to normal when the CTM establishes communications with
the fan and environmental conditions permit. At system turn-up, fans run at 4500 rpm.
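The control behavior above can be sketched as follows. Only the 4500 rpm turn-up speed is stated in the text; the temperature mapping, autonomous-mode speed, and 55 °C threshold are illustrative assumptions.

```python
TURN_UP_RPM = 4500

def fan_speed(ctm_alive: bool, max_temp_c: float,
              over_temp_c: float = 55.0) -> int:
    """Return a target rpm from CTM liveness and hottest module temp."""
    if not ctm_alive:
        # Fail-safe autonomous mode: speed up on over-temperature.
        return 9000 if max_temp_c >= over_temp_c else TURN_UP_RPM
    # Normal mode: illustrative ramp above a 25 C baseline.
    return TURN_UP_RPM + max(0, int((max_temp_c - 25.0) * 100))

assert fan_speed(True, 25.0) == TURN_UP_RPM
assert fan_speed(False, 60.0) > fan_speed(False, 40.0)
```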
Each fan unit has a bi-color (red/green) LED (Table 4-2).
Input/Output Panel
The 5430 Switch I/O module is mounted at the rear of the 5430 Switch chassis. It provides electrical
connectors for all system I/O signals. The T1 I/O module has a metal protective cover that, when
removed, exposes the BITS-1 and BITS-2 wire wrap pins for access. Connections are made from
the I/O module to the backplane by board-stacking connectors. The I/O signals are distributed to the
modules by the backplane.
Two types of rear I/O modules are available:
• The DS1 (T1) version (Figure 4-8) supports the Telcordia I/O requirements for SONET
communications.
• The E1 version (Figure 4-9) supports the ITU requirements for SDH communications.
Figures 4-8 and 4-9: DS1 (T1) and E1 I/O panels, showing the console port, DCN ports, alarm connector, and BITS-1/BITS-2 timing connections (tip, ring, and shield).
The T1 I/O module provides DS1 functionality and the E1 I/O module provides E1 functionality.
The only physical difference between the two is that the DS1 (T1) I/O module has four sets of wire wrap pins
for the timing interface to the Building Integrated Timing Supply (BITS) connectors, and the E1 has
replaced those connectors with four BNC timing interface connectors.
The I/O module has the following features:
• Connectors for rear access to external electrical I/O
• 10/100 autonegotiating Ethernet ports (DCN1 MAIN, DCN1 AUX, DCN2 MAIN, and DCN2
AUX)
• RS-232 Data Terminal Equipment (DTE) (Console port CON_PORT_1)
• Alarm I/O port
• DS1 (T1) I/O module only: Four sets of three-pin Wire Wrap headers (in and out BITS1 pins
and in and out BITS2 pins). The BITS input signals are distributed to both control and timing
modules.
• E1 I/O module only: Four 75-ohm BNC coaxial connectors for E1 timing interfaces.
The 5430 Switch I/O module is a field-replaceable unit.
• Four Ethernet connectors provide redundant DCN1 and DCN2 connections to the CTMs. The
connectors are designated as DCN1 Main and DCN1 AUX, which connect to A-CTM, and
DCN2 Main and DCN2 AUX, which connect to C-CTM. The far-end Ethernet port connected
to this port must be set to autonegotiate for speed and duplex operation.
• BITS (timing interface) connectors to provide input for an external timing source. The T1 I/O
module uses four three-pin wire wrap connectors. The E1 I/O module uses four BNC connectors.
• RS-232 connector to provide a communications path for Craft Interface dial-up access to the
5430 Switch.
• Provide over- and under-voltage protection for external fuse fail alarm interfaces
• Provide shielding and cable shield termination to limit EMI radiation from cables
Backplane
The 5430 Switch backplane provides electrical interface and fused power for the system modules.
The I/O connections on the backplane provide communication paths between the modules and the
I/O module. The 5430 Switch backplane also provides the interface connections required for the I/O
module and the DC power cables from the PDU.
The backplane provides fused power to each CTM, LM, SM, and fan slot. Fuses are located at the
rear of the 5430 Switch, behind the fan and power input section back plane covers.
The backplane has the following features.
• Support for all intramodule electrical interconnections for data, timing, and control
• Connection of external electrical I/O from the I/O module to the modules
• Electrically Erasable Programmable Read Only Memory (EEPROM) storage for manufacturing
data, accessible by the control and timing module
• Dual -48 VDC power distribution
• Enhanced fault isolation
• Individual 20 amp fuses for Feed A and Feed B to each Shelf A and Shelf C LM module
• Individual 10 amp fuses for Feed A and Feed B to each Shelf B SM module and the Shelf A
CTM and Shelf C CTM
• Individual 12 amp fuses for Feed A and Feed B to each upper and lower fan module
5410 Switch
The 5410 Switch assembly consists of a 22RU half rack chassis (Figure 4-11) with various
removable assemblies. A removable metal door on the front protects system components and optical
cabling.
The system chassis provides a mechanical structure to support two fully redundant power
distribution units (PDUs), a fan shelf, and an equipment shelf for the system modules. The rack also
provides an electrical backplane and Input/Output (I/O) connectors for interfacing the 5410 Switch
to other network elements. The system chassis has:
• 5410 Switch Shelves on Page 113
• 5410 Switch Power Distribution Unit on Page 114
• 5410 Switch Display Panel on Page 117
• 5410 Switch Connectors on Page 117
• Fan Tray Assembly on Page 117
• 5410 Switch Input/Output Panel on Page 118
• Backplane on Page 121
Figures: 5410 Switch chassis shelves (CFU shelf, Shelf A, PDU shelf) and Shelf A module layout (A-CTM1, A-CTM2, line module slots A-1 through A-10, switch module slots A-SM1 through A-SM4, PDU A, PDU B, I/O panel).
Figure 4-13. 5410 Switch Power Distribution Unit (Fused PDU shown)
There are two PDU configurations (Figure 4-14) available for the 5410 Switch:
• Fused Disconnect
• Breaker Disconnect
The PDU provides monitored protection and distribution of circuits from two -48 VDC facility power
sources (Power A and Power B). The Power A source provides three feeds to PDU-A. The Power B
source provides three feeds to PDU-B. The fused version PDU is designed for use with 60A Telecom
TLS (or TPS) fuses. The fuses are mounted in compliant panel mounted holders for easy access.
The circuit breaker version PDU uses 60A magnetic type DC circuit breakers. PDU-A and PDU-B
are each capable of supporting the worst-case full load current of 144A.
The three Power A feeds connect to PDU-A and the three Power B feeds connect to PDU-B. These
power feeds are connected to the stud terminal blocks located on the rear of the PDU. Protective
covers are installed over the terminal blocks.
The PDU is designed with soft start circuitry to limit inrush current upon application of -48 volts. This
circuitry allows the filter capacitors to charge slowly for approximately five seconds after initial power
is applied. After power is removed, it takes less than one minute for the capacitors in the panel to
discharge and the soft-start circuitry to completely reset.
-48 VDC facility power feeds are filtered and routed to the equipment shelves through connectors by
way of two power harnesses for PDU-A and two power harnesses for PDU-B.
The PDU functions include the following:
• Redundant over-current protection for six feed circuits (three A side (PDU-A), as well as three
B side (PDU-B)) using magnetic circuit breakers or field replaceable fuses.
• System level over voltage transient protection from the CO environment.
• System level surge current protection from the CO environment.
• System level common mode filtering to control emission levels and to ensure conducted
immunity from the CO environment.
• System level three pole differential mode filtering to control emission levels from the 5410
Switch as well as to ensure conducted emission immunity for the rack.
• The PDU works with subsystem modules to filter and meet all conducted emission compliance
requirements.
• Soft-start feature to limit the current surge otherwise caused by the PDU differential mode filter
capacitor charging circuits.
• The PDU is designed with two independent filter modules that are hot swappable and each
module is capable of supplying power to all modules and fan tray assemblies in the 5410
Switch.
• The PDU consists of two independent power/filter modules that mount in a tray from the rear of
the 5410 Switch and plug into cable assemblies that attach to the two fuse and alarm indicator
fan backplanes.
The BDFB Power A provides power to the PDU-A inputs, and Power B provides power to the
PDU-B inputs. Each input (PDU-A feed 1 through 3 and PDU-B feed 1 through 3) goes through a
60-ampere circuit breaker or fuse in the PDU. The circuit breakers or fuses function as on/off
switches and protect the source from overcurrent conditions in the 5410 Switch. The CTMs sense
and report a tripped circuit breaker.
Either the PDU-A inputs (A1 - A3) or the B inputs (B1 - B3) can supply all required power to a fully
populated 5410 Switch. Table B-4, 5410 Switch TDM Module Power Specifications on Page 208
provides the circuit pack and module power requirements and the power, current, and quantity/type
of module each feed supplies. Table B-6, 5410 Switch Power Feed Slot Matrix on Page 209 lists
which modules and slots each feed supplies.
Each CTM has A-battery and B-battery voltage sense circuits for feeds A1, B1, A3, and B3. The
voltage sensing function is based on the slot position in the 5410 Switch and therefore, is
independent of whether a control and timing module is in the primary or secondary role in the switch.
When one of the voltage sense circuits is active, a PDU Alarm is generated.
The -48 VDC inputs and returns are ORed together on the system modules; therefore, if either feed
fails, power is supplied automatically to the module by the other feed.
On each module (PDU-A and PDU-B), a power in-rush circuit (hot-swap circuit) allows the -48 VDC
to ramp up slowly when a module is inserted. This prevents current surge problems on the -48 VDC
bus due to excessive in-rush currents.
Cooling air moves upward and exhausts to the rear of the unit. Intake air for the fan tray assemblies is drawn through
the air filter at the bottom front of the 5410 Switch chassis. The assemblies operate independently
from one another and are designed to be individually hot-swappable.
Each fan tray assembly contains four fan impellers, a backplane interface card and a fan interface
card. The backplane interface card provides input power fusing, transient suppression, hot-swap/
inrush control, filtering, fan speed control, and power feed alarm processing functions. The fan
interface card provides power conversion circuitry, the fan speed control circuitry, and the fan fault
response and alarm processing circuitry. The fan interface card contains two independent sets of
power conversion and fan control circuitry, with one set supporting impellers one and three and the
other set supporting impellers two and four. This interleaving of impeller control is done for fault
tolerance; in the event of a power circuit or controller failure, one front and one rear impeller would
still be operational within the fan tray assembly.
Fan speed is controlled by the control and timing module and varies based on environmental
conditions. The temperature sensors for gathering this data are located on all modules.
In the event of a fan tray assembly failure or removal, the speed of the other fan assemblies can
increase to compensate, if needed. A rotation sensor in the fan enables the CTMs to detect fan
failure. As a fail-safe feature, the fans revert to high speed if no control input is received from the
active CTM. Fan speed returns to normal when the CTM establishes communications with the fan.
At system turn-up, the fans run at 4500 rpm; fan speed climbs with temperature, up to the maximum
speed.
Each fan unit has a bi-color (red/green) LED (Table 4-2).
Figure: 5410 Switch I/O panel, showing the alarm connectors and the BITS 1 and BITS 2 in/out timing connections (tip, ring, and shield).
The T1 I/O module provides DS1 functionality and the E1 I/O module provides E1 functionality.
The only physical difference between the two is that the DS1 (T1) I/O module has four sets of wire wrap pins
for the timing interface to the Building Integrated Timing Supply (BITS) connectors, and the E1 has
replaced those connectors with four BNC timing interface connectors.
The I/O module has the following features:
• Alarm I/O port connector
• DS1 (T1) I/O module only: Four sets of three-pin Wire Wrap headers (in and out BITS1 pins
and in and out BITS2 pins). The BITS input signals are distributed to both control and timing
modules.
• E1 I/O module only: Four 75-ohm BNC coaxial connectors for E1 timing interfaces.
The 5410 Switch I/O module is a field-replaceable unit.
I/O Connectors
Each 5410 Switch I/O module has the following external I/O connectors (Figure 4-17):
• The DB-15 Alarm connector provides alarm outputs from CTM Modules to the IOM. The alarm
outputs are visual major, minor, critical alarm signals and a summary audio alarm. Ordering
Guide on Page 161 provides alarm cable ordering information.
• Four Ethernet connectors to provide a communications path to the Node Manager application
(by way of a router or switch). The Ethernet connector pinout is the standard used for RJ-45
terminal connections. The four Ethernet connectors provide redundant DCN1 and DCN2
connections to the CTMs. Ethernet connectors are designated as DCN1 Main and DCN1 AUX
which connect to A-CTM and DCN2 Main and DCN2 AUX which connect to C-CTM. The far-
end Ethernet port connected to this port must be set to autonegotiate for speed and duplex
operation.
• BITS (timing interface) connectors to provide input for an external timing source. The T1 I/O
module uses four three-pin wire wrap connectors. The E1 I/O module uses four BNC connectors.
Backplane
The 5410 Switch backplane provides electrical interface and power for the system modules. The
I/O connections on the backplane provide communication paths between the modules and the I/O
module. The 5410 Switch backplane also provides the interface connections required for the I/O
module and the DC power cables from the PDU.
The backplane has the following features.
• Support for all intramodule electrical interconnections for data, timing, and control
• Connection of external electrical I/O from the I/O module to the modules
• Electrically Erasable Programmable Read Only Memory (EEPROM) storage for manufacturing
data, accessible by the control and timing module
• Dual -48 VDC power distribution
System Interfaces
Optical Interfaces
SFPs, XFPs, and CFPs on Page 146 provides a general description of SFPs/XFPs supported by
the 5400 Switch. SFP/XFP/CFP Specifications on Page 213 provides the 5400 Switch SFP/XFP
specifications.
Note: The SFPs and XFPs are listed in the Chapter 6, Ordering Guide on Page 161. Fiber
bender kits are available for all SFP/XFP transceivers.
Chapter 5, 5400 Switch Hardware Modules on Page 123 provides detailed descriptions of the
5400 Switch modules.
Ethernet Interfaces
On the 5410 Switch CTM and 5430 Switch I/O module there are two external Ethernet ports for each
control and timing module, making a total of four interfaces. The Ethernet ports allow 5400 Switch
nodes to be connected to one or more externally switched or routed networks.
Each interface is auto-sensing for 10 or 100 Mbps operation in half-duplex or full-duplex modes. The
Ethernet connectors are shielded RJ-45 connectors and must be connected to the router or switch
using a Shielded Twisted Pair (STP) cable. For 100 Mbps operation, Category 5 or better cable is
required.
Craft Interfaces
There is one RJ-45 craft interface on each control and timing module front panel and one DB-9
male connector on the I/O module to provide Craft access. These interfaces allow a craftsperson to
log on to a 5410 Switch or 5430 Switch for management.
CLEI Codes
The chassis and each removable module have a CLEI label that can be read by a bar code scanner.
The CLEI codes are registered with Telcordia and comply with Telcordia guidelines.
Overview
This chapter provides descriptions of the system modules available for the Ciena® 5400
Reconfigurable Switching System. All system modules are plug-in assemblies that provide various
system functions and interfaces. This chapter contains the following:
• Module LED Operation (below)
• Line Modules on Page 125
• SFPs, XFPs, and CFPs on Page 146
• Switch Module on Page 149
• Control and Timing Modules on Page 151
• LM, SM, and CTM Blanks on Page 159
Dimensions of all system modules are listed in Appendix B, Specifications and Standards.
The 5400 Reconfigurable Switching System is hereinafter referred to as 5400 Switch.
Line Modules
The 5400 Switch line modules (LMs) provide the optical interface for the switch with pluggable CFP/
SFP/XFP transceivers. An ejector and screw at the top and bottom of the module faceplate retain
each line module in the equipment shelf of the 5400 Switch. Line modules host the SONET/SDH and
OTN interfaces and provide the ingress and egress portions of the switch fabric. These modules
contain on-board processors to perform control and monitoring functions and to assist the control
and timing module in performing distributed processing tasks.
The 5400 Switch supports three types of line modules: TDM Services line module (TSLM), OTN
Services line module (OSLM), and SONET/SDH Services line module (SSLM).
The following types of 5400 Switch line modules are available:
• OSLMs:
• OSLM-3 line module for use with up to three 40G CFPs (Page 126)
• OSLM-3M line module for up to three 40G MSA based optical interfaces (Page 128)
• OSLM-12 line module for use with up to 12 10G XFPs/10 GbE XFPs (Page 130)
• OSLM-48 line module for use with up to 48 2.5G SFPs, 155/622M SFPs, GbE SFPs
(Page 133)
• TSLMs:
• TSLM-3 line module for use with up to three 40G CFPs (Page 126)
• TSLM-12 line module for use with up to 12 10G XFPs/10 GbE XFPs (Page 136)
• TSLM-48 line module for use with up to 48 2.5G SFPs, 155/622M SFPs, GbE SFPs
(Page 139)
• SSLMs:
• SSLM-12 line module for use with up to 12 10G XFPs (Page 142)
• SSLM-48 line module for use with up to 48 2.5G SFPs, or 155/622M SFPs (Page 139)
Specific features of individual line modules are described in the sections that follow.
In addition to the common line module features (Line Module Features on Page 125), the OSLM-
3 line module has the following features:
• Up to three OTU3 interfaces
• GCC insertion/termination:
• Insertion and termination of OTU3 and GCC0
• Insertion and termination of ODU3 GCC1, GCC2, and GCC1/2
• OTN switching features for ODU3, ODU2, ODU1 and ODU0
• OTU3 OSRP links
• OTN SNC, MR SNC, Permanent SNC, Signaled SNC-P, MR-SNCP, and MR-SNCP with ARD.
• Remote TAP for ODU3, ODU2, ODU1 and ODU0
• Connection-level loopback for ODU3, ODU2, ODU1 and ODU0
• Latency and Distance-based routing based on OTU links
• Single-stage mapping of ODU0 to ODU3, ODU1 to ODU3 and ODU2 to ODU3
• TCM functionality on all ODUk levels and associated TCMs
• TCM with two out of six selectable channels on all ODUk levels facing the optical interface and
the switch fabric
The OSLM-3 module (Figure 5-2) has one red, yellow, and green LED to indicate module status and
one red/yellow/green tri-color LED for each port to indicate port status. Table 5-3, Line Module LED
States on Page 126 and Table 5-4, Port LED States on Page 126 describe the operation of these
LEDs.
[Figure 5-2: OSLM-3 faceplate, showing the ACT/MNT/FLT module status LEDs and the status indicators for ports 1, 2, and 3 (drawing 5400-11003)]
In addition to the common line module features (Line Module Features on Page 125), the
OSLM-3M line module has the following features:
• Up to three OTU3 interfaces
• GCC insertion/termination:
• Insertion and termination of OTU3 and GCC0
• Insertion and termination of ODU3 GCC1, GCC2, and GCC1/2
• OTN switching features for ODU3, ODU2, ODU1 and ODU0
• OTU3 OSRP links
• OTN SNC, MR SNC, Permanent SNC, Signaled SNC-P, MR-SNCP, and MR-SNCP with ARD.
• Remote TAP for ODU3, ODU2, ODU1 and ODU0
• Connection-level loopback for ODU3, ODU2, ODU1 and ODU0
• Latency and Distance-based routing based on OTU links
• Single-stage mapping of ODU0 to ODU3, ODU1 to ODU3 and ODU2 to ODU3
• TCM functionality on all ODUk levels and associated TCMs
• TCM with two out of six selectable channels on all ODUk levels facing the optical interface and
the switch fabric
The OSLM-3M module (Figure 5-4) has one red, yellow, and green LED to indicate module status
and one red/yellow/green tri-color LED for each port to indicate port status. Refer to Line Module
Status Indicators on Page 126 for LM status indicator description.
[Figure 5-4: OSLM-3M faceplate, showing ports 1-3 and the ACT/MNT/FLT module status LEDs]
In addition to the common line module features (Line Module Features on Page 125), the OSLM-12 line module has the following features:
• Up to 12 OTU2/10GbE/OC192/STM64/CBR10G interfaces
• GCC insertion/termination:
• Insertion and termination of OTU2 and GCC0
• Insertion and termination of ODU2 GCC1, GCC2, and GCC1/2
• OTN switching features for ODU0, ODU1, and ODU2
• OTU2 OSRP links
• OTN SNC, MR SNC, Permanent SNC, Signaled SNC-P, MR-SNCP, and MR-SNCP with ARD.
• Remote TAP for ODU0, ODU1, and ODU2
• Connection-level loopback for ODU0, ODU1, and ODU2
• Latency and Distance-based routing based on OTU links
• Multi-stage mapping of ODU0 to ODU1 to ODU2
• TCM with two out of six channels on all ODUk levels facing the optical interface and the switch
fabric
The OSLM-12 module (Figure 5-6) has one red, yellow, and green LED to indicate module status
and one red/yellow/green tri-color LED for each port to indicate port status. Table 5-3, Line Module
LED States on Page 126 and Table 5-4, Port LED States on Page 126 describe the operation of
these LEDs.
[Figure 5-6: OSLM-12 faceplate, showing the ACT/MNT/FLT module status LEDs and an ACT/FLT status indicator for each of ports 1-12]
In addition to the common line module features (Line Module Features on Page 125), the OSLM-48 line module has the following features:
• Up to 48 OC-48/STM-16/GE Interfaces (OTU1 future release)
• GCC insertion/termination:
• Insertion and termination of OTU1 and GCC0 (Future release)
• Insertion and termination of ODU1 GCC1, GCC2, and GCC1/2 (Future release)
• Transparent Mapping/Demapping of OC-48/STM-16 into ODU1 payload
• Transparent Mapping/Demapping of 2.5G CBR into ODU1 payload
• Transparent Mapping/Demapping of GE into ODU0 payload
• Transparent Mapping/Demapping of ODU0 into ODU1 payload
• OTN switching features for ODU0 and ODU1
• OTU1 OSRP links (future release)
• OTN SNC, MR SNC, Permanent SNC, Signaled SNC-P, MR-SNCP, and MR-SNCP with ARD
• Remote TAP for ODU0 and ODU1
• Connection-level loopback for ODU0 and ODU1
• Latency and Distance-based routing based on OTU links
• Multi-stage mapping of ODU0 to ODU1
• TCM with two out of six selectable channels on all ODUk levels facing the optical interface and
the switch fabric
The OSLM-48 module, shown in Figure 5-8, has one red, yellow, and green LED to indicate module status and one red/yellow/green tri-color LED for each port to indicate port status. Table 5-3, Line Module LED States on Page 126 and Table 5-4, Port LED States on Page 126 describe the operation of these LEDs.
[Figure 5-8: OSLM-48 faceplate, showing the ACT/MNT/FLT module status indicators and an ACT/FLT status indicator for each port; ports 7, 8, 47, and 48 labeled (drawing 5400-11022)]
In addition to the common line module features described previously, the TSLM-3 line module has
the following features:
• Up to three OTU3 interfaces
• GCC insertion/termination:
• Insertion and termination of OTU3 and GCC0
• Insertion and termination of ODU3 GCC1, GCC2, and GCC1/2
• OTN switching features for ODU3, ODU2, ODU1 and ODU0
• OTU3 OSRP links
• OTN SNC, MR SNC, Permanent SNC, Signaled SNC-P, MR-SNCP, and MR-SNCP with ARD.
• Remote TAP for ODU3, ODU2, ODU1 and ODU0
• Connection-level loopback for ODU3, ODU2, ODU1 and ODU0
• Latency and Distance-based routing based on OTU links
• Single-stage mapping of ODU0 to ODU3, ODU1 to ODU3 and ODU2 to ODU3
• TCM functionality on all ODUk levels and associated TCMs
• TCM with two out of six selectable channels on all ODUk levels facing the optical interface and
the switch fabric
The TSLM-3 module (Figure 5-9) has one red, yellow, and green LED to indicate module status and
one red/yellow/green tri-color LED for each port to indicate port status.
In addition to the common line module features (Line Module Features on Page 125), the TSLM-12 line module has the following features:
• Up to 12 OTU2/10GbE/OC-192/STM-64/CBR10G interfaces
OTN:
• GCC insertion/termination:
• Insertion and termination of OTU2 and GCC0
• Insertion and termination of ODU2 GCC1, GCC2, and GCC1/2
• OTN switching features for ODU0, ODU1, and ODU2
• OTU2 OSRP links
• OTN SNC, MR SNC, Permanent SNC, Signaled SNC-P, MR-SNCP, and MR-SNCP with ARD.
• Remote TAP for ODU0, ODU1, and ODU2
• Connection-level loopback for ODU0, ODU1, and ODU2
• Latency and Distance-based routing based on SONET/SDH links
• Latency and Distance-based routing based on OTU links
• Multi-stage mapping of ODU0 to ODU1 to ODU2
• Transparent Mapping/Demapping of OC-192/STM-64 into ODU2 payload
• Transparent Mapping/Demapping of 10G CBR into ODU2 payload
• Transparent Mapping/Demapping of 10GE into ODU2 payload
• TCM with two out of six selectable channels on all ODUk levels facing the optical interface and
the switch fabric
SONET/SDH:
• Framers that provide pointer processing, performance monitoring, and transport overhead
DCC insertion/termination:
• Switching to STS-1/VC-3
• SONET/SDH OSRP links
• SONET/SDH SNC, MR SNC, Permanent SNC, Signaled SNC-P, MR-SNCP, and MR-SNCP
with ARD.
• Remote TAP for OC-3/STM-1 to OC-192/STM-64
• Connection-level loopback to STS-1/AU-3
• Flexible Concatenation
• Subnetwork Connection Protection (SNCP) switching criteria based on monitoring of the C2
byte; this byte carries the unequipped (UNEQ-P) designation. Subnetwork Connection
Protection on Page 88 provides more information about this feature.
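The C2-byte criterion above can be sketched in a few lines. This is an illustrative fragment only, not the product's actual trigger logic (which also considers other path defects); the function name and the comparison value usage are assumptions, though 0x00 is the standard SONET/SDH path signal label for an unequipped path.

```python
# In SONET/SDH, the path signal label (C2 byte) value 0x00 designates an
# unequipped path (UNEQ-P). Detecting it on the working path while the
# protection path carries a valid label can trigger an SNCP switch.
UNEQ_P = 0x00

def switch_to_protection(working_c2: int, protection_c2: int) -> bool:
    """Switch when the working path reports unequipped but protection does not."""
    return working_c2 == UNEQ_P and protection_c2 != UNEQ_P

print(switch_to_protection(0x00, 0x02))  # True: working path is unequipped
print(switch_to_protection(0x02, 0x02))  # False: working path carries a valid label
```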
The TSLM-12 module (Figure 5-12) has one red, yellow, and green LED to indicate module status
and one red/yellow/green tri-color LED for each port to indicate port status. Table 5-3, Line Module
LED States on Page 126 and Table 5-4, Port LED States on Page 126 describe the operation of
these LEDs.
[Figure 5-12: TSLM-12 faceplate, showing the ACT/MNT/FLT module status LEDs and an ACT/FLT status indicator for each of ports 1-12]
In addition to the common line module features (Line Module Features on Page 125), the TSLM-48 line module has the following features:
OTN:
• GCC insertion/termination:
• Insertion and termination of OTU1 and GCC0 (future release)
• Insertion and termination of ODU1 GCC1, GCC2, and GCC1/2 (future release)
• Transparent Mapping/Demapping of OC-48/STM-16 into ODU1 payload
• Transparent Mapping/Demapping of 2.5G CBR into ODU1 payload
• Transparent Mapping/Demapping of GE into ODU0 payload
• Transparent Mapping/Demapping of ODU0 into ODU1 payload
• OTN switching features for ODU0 and ODU1
• OTU1 OSRP links (future release)
• OTN SNC, MR SNC, Permanent SNC, Signaled SNC-P, MR-SNCP, and MR-SNCP with ARD
• Remote TAP for ODU0 and ODU1
• Connection-level loopback for ODU0 and ODU1
• Latency and Distance-based routing based on SONET/SDH links
• Latency and Distance-based routing based on OTU links
• TCM with two out of six selectable channels on all ODUk levels facing the optical interface and
the switch fabric
SONET:
• Support for up to 48 individually removable 2.5G SFPs or 155/622M SFPs
• Framers that provide pointer processing, performance monitoring, and transport overhead
DCC insertion/termination
• Switching to STS-1/AU-3
The TSLM-48 module, shown in Figure 5-14, has one red, yellow, and green LED to indicate module status and one red/yellow/green tri-color LED for each port to indicate port status. Table 5-3, Line Module LED States on Page 126 and Table 5-4, Port LED States on Page 126 describe the operation of these LEDs.
[Figure 5-14: TSLM-48 faceplate, showing the ACT/MNT/FLT module status indicators and an ACT/FLT status indicator for each port; ports 7, 8, 47, and 48 labeled (drawing 5400-11024)]
In addition to the common line module features (Line Module Features on Page 125), the SSLM-12 line module has the following features:
• Up to 12 OC-192/STM-64 SONET/SDH XFPs
• Framers that provide pointer processing, performance monitoring, and transport overhead
DCC insertion/termination:
• Switching to STS-1/VC-3
• SONET/SDH OSRP links
• SONET/SDH SNC, MR SNC, Permanent SNC, Signaled SNC-P, MR-SNCP, and MR-SNCP
with ARD.
• Remote TAP for OC-3/STM-1 to OC-192/STM-64
• Connection-level loopback to STS-1/STM-0
• Latency and Distance-based routing
Table 5-18. SSLM-12 Capabilities
Line Interfaces:
• OC-192/STM-64
Optical Interfaces (see "XFPs" on page 175):
• XFP-OPT-SR (Extended Temp) 850nm (P/N 130-4904-900)
• XFP-OPT-LR (SR-1 Extended Temp) 10G SR-1/I-64.1 (1310nm, 2km) (P/N 130-4905-900)
• XFP-OPT-ER (IR-2 Extended Temp) 10G IR-2/S-64.2b (1550nm, 40km) (P/N 130-4906-900)
• XFP-OPT-UR (LR-2 Extended Temp) 10G LR-2c/L-64.2c (1550nm, 80km) (P/N 130-4907-900)
• XFP - C-Band Tunable (P/N 160-9002-900)
The SSLM-12 module (Figure 5-16) has one red, yellow, and green LED to indicate module status
and one red/yellow/green tri-color LED for each port to indicate port status. Table 5-3, Line Module
LED States on Page 126 and Table 5-4, Port LED States on Page 126 describe the operation of
these LEDs.
[Figure 5-16: SSLM-12 faceplate, showing the ACT/MNT/FLT module status LEDs and an ACT/FLT status indicator for each of ports 1-12]
In addition to the common line module features (Line Module Features on Page 125), the SSLM-48 line module has the following features:
• Support for up to 48 individually removable 2.5G SFPs or 155/622M SFPs
• Framers that provide pointer processing, performance monitoring, and transport overhead
DCC insertion/termination:
• Switching to STS-1/AU-3
• SONET/SDH OSRP links
• SONET/SDH SNC, MR SNC, Permanent SNC, Signaled SNC-P, MR-SNCP, and MR-SNCP
with ARD.
• Remote TAP for OC-3/STM-1 to OC-48/STM-16
• Connection-level loopback to STS-1/AU-3
• Flexible Concatenation
• Subnetwork Connection Protection (SNCP) switching criteria based on monitoring of the C2
byte; this byte carries the unequipped (UNEQ-P) designation. Subnetwork Connection
Protection on Page 88 provides more information about this feature.
The SSLM-48 module, shown in Figure 5-18, has one red, yellow, and green LED to indicate module status and one red/yellow/green tri-color LED for each port to indicate port status. Table 5-3, Line Module LED States on Page 126 and Table 5-4, Port LED States on Page 126 describe the operation of these LEDs.
[Figure 5-18: SSLM-48 faceplate, showing the ACT/MNT/FLT module status indicators and an ACT/FLT status indicator for each port; ports 7, 8, 47, and 48 labeled (drawing 5400-11024)]
• xSLM-3 CFPs:
• 40G CFP Serial G.693 (1550nm Serial 2km, 40GbE, OC768/STM256, OTU3) P/N: NTTA13EEE6
• 40GE CFP ER-4 IEEE802.3ba Pluggable Module, 40KM P/N: 160-9013-900
• xSLM-12 XFPs:
• XFP-OPT-SR (Extended Temp) 850nm P/N: 130-4904-900
• XFP-OPT-LR (SR-1 Extended Temp) 10G SR-1/I-64.1 (1310nm, 2km) P/N: 130-4905-900
• XFP-OPT-ER (IR-2 Extended Temp) 10G IR-2/S-64.2b (1550nm, 40km) P/N: 130-4906-900
• XFP-OPT-UR (LR-2 Extended Temp) 10G LR-2c/L-64.2c (1550nm, 80km) P/N: 130-4907-
900
• EXT TEMP (-5 TO 85C) XFP-EXT-UR, TRANSCEIVER, OPT, 1550NM XFP, OC192 LR-2/
10GBASE-80KM. (Release 1.0.0) (130-4907-900)
• XFP - C-Band Tunable P/N: 160-9002-900
• xSLM-48 SFPs:
• OPT-SR-1 Extended Temp OC3-OTU1 SR-1/I-1.1/ I-1.4/ I-1.16 (1310nm, 5 - 10 km) P/N: B-
700-1036-001
• OPT-IR-1 Extended Temp OC3-OTU1 IR-1/S-1.1 / S-1.4 / S-1.16 (1310nm, 20-30km) P/N:
B-730-0001-001
• OPT-LR-1 Extended Temp (155Mbps-2.67 Gbps) LR-1/L-1.1 / L-1.4 / L-1.16 (1550nm,
40km) P/N: 160-9011-900
• OPT-LR-2 Extended Temp 155Mbps-2.67 Gbps LR-2/L-2.1 / L-2.4 / L-2.16 (1550 nm, 80km)
P/N: 160-9012-900
• GigE 1000Base-SX SFP (850nm multimode fiber up to 500m) P/N: B-700-1016-001
• GigE 1000Base-LX SFP with LC Connector (1310nm for single-mode fiber, distances up to
5km) P/N: B-700-1016-002
• GigE 1000Base-ZX SFP (1550nm up to 80km) P/N: 162-0093-900
• GigE - ELT-BT, Electrical 10/100/1000Base-T SFP P/N: B-730-0004-001
Fiber bender kits are available for all CFP/XFP/SFP transceivers; their use is optional.
Transceivers are used as the optical front-end for the OSLM-3/TSLM-3, OSLM-12/TSLM-12/SSLM-
12 and OSLM-48/TSLM-48/SSLM-48 line modules, and perform the O/E and E/O conversions. Each
SFP/XFP/CFP is hot-swappable, in that it can be removed individually from the host card without
impacting traffic running on the adjacent interfaces plugged into the same host card.
The transceivers have the following features:
• Perform the optical/electrical and electrical/optical conversions
• Provide a serial interface for configuration and management
• Support digital diagnostics monitoring features to enable performance and fault monitoring
• Hot-swappable
• 30 pin 2-row electric connector (for SFP and XFP)
• 148 pin electrical connector (for CFP)
• Duplex LC connector receptacles
These transceivers are field replaceable and are listed in Chapter 6, Ordering Guide, Table 6-5 on Page 169. Fiber bender kits are available for all SFP/XFP transceivers.
Appendix B, Specifications and Standards, SFP/XFP/CFP Specifications on Page 213
provides additional information.
Switch Module
The switch module (SM), shown in Figure 5-22, provides the center stage of the three switch fabric stages. All SMs are identical in function with regard to the switching architecture.
A fully populated 5430 Switch contains nine SMs. Eight are working SMs and one is standby
(reserved for redundancy), which provides 1:8 equipment protection. A fully populated 5410 Switch
contains four SMs. Three are working SMs and one is standby (reserved for redundancy), which
provides 1:3 equipment protection.
As part of the switch fabric resiliency in a fully populated 5410 Switch or 5430 Switch, if two or more
SMs fail or are removed, the system enters degraded mode. In degraded mode each LM has
insufficient bandwidth to the center stage and, therefore, can only support a smaller number of
optical ports.
The number of supportable lines (or ports) per LM depends on the number of primary switch
modules in the 5410 Switch and 5430 Switch, as well as the type of LM. The number of supportable
lines per LM is defined as the number of ports on each LM, counting from the lowest port number to
highest, that can allow the laser to be turned on and carry traffic. The information in Table 5-21 maps
the number of primary SMs to the number of ports supported per LM type. For example, if a 5430
Switch has seven primary SMs, it can support 44 ports (ports 1 through 44) of an OSLM-48 LM and
11 ports (1 through 11) of an OSLM-12 LM.
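The Table 5-21 lookup described above can be modeled as a simple mapping. The Python sketch below is illustrative only; it is populated with just the single example quoted in the text (a 5430 Switch with seven primary SMs), and the function and table names are assumptions, not part of the product.

```python
# Hypothetical encoding of the Table 5-21 mapping for capacity planning.
# Only the one data point stated in the text is filled in; the remaining
# entries would come from the manual's Table 5-21.
SUPPORTED_PORTS = {
    # (LM type, number of primary SMs) -> highest usable port number
    ("OSLM-48", 7): 44,
    ("OSLM-12", 7): 11,
}

def usable_ports(lm_type: str, primary_sms: int) -> range:
    """Ports counted from the lowest number that may carry traffic."""
    highest = SUPPORTED_PORTS[(lm_type, primary_sms)]
    return range(1, highest + 1)

ports = usable_ports("OSLM-48", 7)
print(len(ports), ports[0], ports[-1])  # 44 ports, numbered 1 through 44
```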
[Figure 5-22: SM-5430 switch module faceplate, showing the ACT, MNT, and FLT LEDs (P/N 134-5420-901)]
Because the switch module has no local processor, LED behavior on this module is a special case.
The red FAIL LED is lit by hardware during and after module reset until a controlling CTM can send
a Processor Communications Channel (PCC) command. Table 5-22 describes the operation of the
LEDs.
[5410 Switch CTM faceplate (drawings 5410-11011, 5410-11012), showing: CONSOLE and DEBUG ports; DCN1/DCN2 ports with LINK/ACT LEDs; ES1/ES2/ES3 ports with LINK/ACT LEDs; USB port; CRITICAL, MAJOR, and MINOR alarm LEDs; PWR A and PWR B LEDs; ACO; and the MNT/ACT/FLT and PRI/SEC CTM status indicators (P/N 134-0183-900)]
The control and timing module follows the basic module states, module boot sequence, and
redundant module states patterns as described in Table 5-1, Basic Module States Pattern on
Page 124.
Table 5-23. 5410 Switch CTM Controls and Indicators
Console Ports: Provides CLI and debug access to the CTM.
DCN Ports: Provides 10/100 autonegotiating Ethernet ports (DCN1 and DCN2).
Expansion Ports (ES1, ES2, ES3): Provides expansion (IOADM) RJ connectors for 10/100/1000 TX support. Integrated LEDs in the connectors indicate Link (Green) and Activity (Yellow), consistent with Ethernet definitions.
CTM Status Indicators: Provides CTM Module status (Table 5-24 on Page 154):
• ACT (Green): Card-level indicator of module health; for example, the card is powered, out of reset, and fully operational (that is, Active) to SW.
• MNT (Yellow): Indicates the card is in maintenance mode.
• FLT (Red): Card-level indicator of module health; indicates that the card is not fully operational to SW (for example, not programmed, unable to access specific registers, or a detected HW failure).
Provides CTM redundancy status (Table 5-25 on Page 154):
• PRI (Green): Indicates that the CTM is the primary (operational) CTM.
• SEC (Green): Indicates that the CTM is the secondary CTM.
Table 5-26. 5410 Switch CTM Bay Status Summary LED States
Bay Alarm State: Three large indicators that show the alarm status of the 5430 Switch:
• CRITICAL (Red): Critical 5430 Switch system condition
• MAJOR (Red): Major 5430 Switch system condition
• MINOR (Yellow): Minor 5430 Switch system condition
PWR A, PWR B (Green/Red): One Power A LED and one Power B LED that light green when facility power is available on all PDU-A feeds and PDU-B feeds, and that light red when power is unavailable on one of the feeds.
[5430 Switch CTM faceplate (drawing 5430-09087), showing the CTM module status indicators and console port]
The 5430 Switch control and timing module follows the basic module states, module boot sequence,
and redundant module states patterns as described in Table 5-1, Basic Module States Pattern on
Page 124. The CTM controls and indicators are described in Table 5-27.
Table 5-27. 5430 Switch CTM Controls and Indicators
Switch Module (SM) Summary Status Indicators: LEDs that provide system SM summary status (Table 5-28 on Page 155):
• Red: Failed SM
• Yellow: One or more SMs are administered non-operational (in a maintenance mode)
• Green: OK
Expansion Ports: Provides expansion (IOADM) RJ connectors for 10/100/1000 TX support. Integrated LEDs in the connectors indicate Link (Green) and Activity (Yellow), consistent with Ethernet definitions.
CTM Module Status Indicators: Provides CTM Module status (Table 5-29 on Page 156):
• ACT (Green): Card-level indicator of module health; for example, the card is powered, out of reset, and fully operational (that is, Active) to SW.
• MNT (Yellow): Indicates the card is in maintenance mode.
• FLT (Red): Card-level indicator of module health; indicates that the card is not fully operational to SW (for example, not programmed, unable to access specific registers, or a detected HW failure).
Provides CTM redundancy status (Table 5-30 on Page 156):
• PRI (Green): Indicates that the CTM is the primary (operational) CTM.
• SEC (Green): Indicates that the CTM is the secondary CTM.
Console Port: Provides debug and CLI access to the CTM (CON_PORT_2).
The control and timing module provides the following timing and synchronization functions:
• Internal, mode-selectable Stratum 3E/G.812 Type III or G.813 SEC clock source featuring high
accuracy Oven Controlled Crystal Oscillator (OCXO) and digital phase-lock loop
• Support for two BITS T1 or E1 inputs
• Alarms detection on BITS inputs
• Jitter and wander filtering on internal clock reference
• Transmission of Synchronization Status Messages (SSM) by the BITS ESF, E1 Common
Channel Signaling (CCS), E1 Channel Associated Signaling (CAS)
• Collection of timing references from line modules and BITS inputs with selection of reference
under software control
• Qualification of incoming timing references
• Transmission of timing distribution references to line modules
• Fault detection and reporting capability
The 5400 Switch nodes operate in mixed mode timing where the timing reference is taken from a
configurable hierarchy of external and line references. Line timing only and external timing only can
be provisioned.
• Externally Timed mode (BITS or SSU), either DS1 or E1, that derives its clock reference from
oscillators that are locked to a Building Integrated Timing Supply (BITS) reference source.
• Line Timed mode derives clock from one of the OC-N or STM-N signals. The clock recovered
from this signal is fed into the timing modules, which provide timing to all outgoing OC-N or
STM-N signals.
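The mixed-mode behavior described above amounts to selecting the first usable reference from a provisioned hierarchy, falling back to internal holdover when none qualifies. The sketch below is illustrative only; the class and field names are assumptions, not the product's provisioning model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TimingRef:
    name: str        # e.g. "BITS-A" (external) or "OC-192 line 1" (line)
    kind: str        # "external" or "line"
    qualified: bool  # result of incoming-reference qualification

def select_reference(hierarchy: list[TimingRef]) -> Optional[TimingRef]:
    """Pick the first qualified reference in the configured hierarchy;
    return None to indicate holdover on the internal Stratum 3E/SEC OCXO."""
    for ref in hierarchy:
        if ref.qualified:
            return ref
    return None

refs = [TimingRef("BITS-A", "external", False),
        TimingRef("OC-192 line 1", "line", True)]
chosen = select_reference(refs)
print(chosen.name)  # the first qualified reference in the hierarchy wins
```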
Station Clocks
Station Clock interfaces consist of interface adaptors, DS1/E1 receivers and DS1/E1 transmitters.
The DS1/E1 receiver/transmitter accommodates frame format (DS1 D4/SF, DS1 ESF and E1) and
line coding (AMI, B8ZS and HDB3) to transform the synchronization signal between DS1/E1 and an
internal representation (8 kHz). DS1/E1 receivers also monitor input signal conditions such as LOS, AIS, and LOF. In addition, the DS1/E1 receiver/transmitter is responsible for extracting Synchronization Status Messages (SSMs) from, and sending SSMs to, station clock interfaces when the configured frame format supports SSM.
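The frame-format and line-coding combinations named above can be illustrated with a small provisioning check. This is a sketch, not the product's provisioning model: the format/coding pairings follow common T1/E1 practice (B8ZS is a DS1 coding, HDB3 is an E1 coding, AMI applies to both), and the SSM-capable set follows the CTM feature list earlier in this section (ESF and E1 can carry SSM; DS1 D4/SF cannot).

```python
# Illustrative pairing of the station-clock frame formats and line codings
# listed in the text. Function and table names are hypothetical.
VALID_CODINGS = {
    "DS1 D4/SF": {"AMI", "B8ZS"},
    "DS1 ESF":   {"AMI", "B8ZS"},
    "E1":        {"AMI", "HDB3"},
}
SSM_CAPABLE = {"DS1 ESF", "E1"}  # D4/SF has no overhead available for SSM

def check_port(frame_format: str, line_coding: str) -> dict:
    """Validate a format/coding pair and report whether SSM can be exchanged."""
    if line_coding not in VALID_CODINGS.get(frame_format, set()):
        raise ValueError(f"{line_coding} is not valid with {frame_format}")
    return {"format": frame_format, "coding": line_coding,
            "ssm": frame_format in SSM_CAPABLE}

print(check_port("DS1 ESF", "B8ZS"))   # SSM-capable
print(check_port("DS1 D4/SF", "AMI"))  # no SSM support
```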
Each I/O Panel contains two Station Clock Inputs (BITS In) and two Station Clock Outputs (BITS
Out).
• Station Clock Input refers to the port that connects the incoming timing signal from a BITS to
the I/O Panel.
• Each station clock input is distributed to both redundant CTMs within the system and can be
used as a reference for the node clock within the TimingInput protection group.
• Station Clock Output refers to the port that connects the outgoing timing signal.
• Each CTM drives a single station clock output associated with the TimingOutput1 and
TimingOutput2 protection groups, which provide a hierarchy of up to four line references. A
station clock output may also be placed in a monitor mode and provide an output referenced to
the internal node clock.
The console RJ-45 pinout is EIA compatible and can be converted to DB-9 (male or female) or Cisco
pinout with separately purchased Ciena conversion cables listed in Table 6-5, Spares and
Accessories on Page 169.
Note: All unoccupied slots must have an appropriate blank installed over them to maintain
proper airflow and cooling for the 5430 Switch.
LM blanks are available in two types: one with a blank faceplate (Figure 5-29), and an otherwise identical "fiber-capable" model that includes 12 unconnected fiber ports on its faceplate (Figure 5-30).
Overview
This chapter describes the available Ciena® 5400 Reconfigurable Switching System software and
field-replaceable parts:
• Software (below)
• Hardware Parts on Page 163
• Illustrations on Page 179
Note: This ordering guide should be used as a general guide only. For the latest ordering
information, contact the Ciena Account Team.
Note: A 5400 Switch can be custom-ordered to meet specific requirements. This chapter
describes a fully populated system and therefore, may describe parts that are not used
for all applications.
Software
The software capabilities of the 5400 Switch are available through Base Software Packages with an optional suite of Intelligent Optical Services software offerings for mesh networks. Only one type of base package can be ordered for a given system. A Right To Use (RTU) fee is assessed on a per-port basis for the software capabilities supported by the base packages; fees for the Intelligent Optical Services offerings are charged for all ports in the system, regardless of use.
A Software Installation Package (SIP) and a RTU certificate are shipped to customers who order a
5400 Switch software package. The SIP contains the 5400 Switch network element (NE) CD-ROM (with operational software and SRD) and the 5400 Switch Node Manager CD-ROM (with embedded management software and SRD). Table 6-1 lists the 5400 Switch infrastructure software packages.
Note: The final three digits of the part number indicate the release number. For the specific
part number needed, contact the Ciena Account Manager.
The user should contact the Ciena Account Team and refer to the current 5400 Switch software
release document.
Hardware Parts
Table 6-3 through Table 6-5 list the components (with model number) that can be ordered from
Ciena for initial installation, spares, and replacements for 5400 Switches. Each main system
component is illustrated in a figure (Figure 6-2 on Page 181 through Figure 6-30 on Page 196).
The figure number is listed in the last column of each table.
Illustrations
• Figure 6-1, 5410 Switch on Page 180
• Figure 6-2, 5430 Switch on Page 181
• Figure 6-3, 5430 Switch PDU Jumper Kit on Page 182
• Figure 6-4, Rack Base Covers on Page 182
• Figure 6-5, Upper PDU Exhaust Air Deflector on Page 183
• Figure 6-6, Raised Floor Cable Bracket Assemblies on Page 183
• Figure 6-7, 5410 Switch CTM on Page 184
• Figure 6-8, 5430 Switch CTM on Page 184
• Figure 6-9, 5410 Switch SM on Page 185
• Figure 6-10, 5430 Switch SM on Page 185
• Figure 6-12, xSLM-3M on Page 186
• Figure 6-13, xSLM-12 on Page 187
• Figure 6-14, xSLM-48 on Page 187
• Figure 6-15, Line Module Blank on Page 188
• Figure 6-16, 20-inch LM-12 Fiber Capable Blank (faceplate shown) on Page 188
• Figure 6-17, SFPs/XFPs on Page 189
• Figure 6-18, CFP on Page 189
• Figure 6-19, SFP/XFP/CFP Blanks on Page 190
• Figure 6-20, 5410 Switch I/O Modules on Page 190
• Figure 6-21, 5430 Switch I/O Modules on Page 191
• Figure 6-22, Fan Unit on Page 191
• Figure 6-23, Spare PDU Fuse on Page 192
• Figure 6-24, PDU Display Module on Page 192
• Figure 6-26, Interbay Management Panel Kit (Shown With End Guard Plate) on Page 193
• Figure 6-27, Fiber Management End Plate on Page 194
• Figure 6-28, Fiber Management End Guard on Page 195
• Figure 6-30, Fiber Extraction Tool on Page 196
[Figure callouts from the ordering-guide illustrations: 5410 Switch (CFU shelf, Shelf A, PDU shelf; drawing 5410-11004); 5430 Switch (PDU shelf assembly, display panel, upper fan shelf, Shelf A, line module/CTM shelves, Shelf C, lower fan shelf; drawing 5430-10005); typical SFPs, XFP, and CFP (drawings 5400-11035, 5400-11028); I/O panels (alarm cable, E1 and T1 I/O panel BITS connectors; drawings 5410-11001, 5430-09039); PDU display module ACO LED (drawing 5430-10010)]
Figure 6-26. Interbay Management Panel Kit (Shown With End Guard Plate)
Compliance Information
Part 15 of the Federal Communications Commission (FCC):
Interference
This equipment has been tested and found to comply with the limits for a Class A digital device,
pursuant to Part 15 of the FCC Rules. These limits are designed to provide reasonable
protection against harmful interference when the equipment is operated in a commercial
environment. This equipment generates, uses and can radiate unintentional radio frequency (RF)
energy and, if not installed and used in accordance with the instruction manual, may cause harmful
interference to radio communications. Operation of this equipment in a residential area is likely to
cause harmful interference in which case the user will be required to correct the interference at his
or her own expense.
CAUTION: If the device is changed or modified without permission from Ciena, the user
may void his or her authority to operate the equipment.
NOTE: This device complies with Part 15 of the FCC Rules. Operation is subject to the
following two conditions: (1) this device may not cause harmful interference, and (2) this
device must accept any interference received, including interference that may cause
undesired operation.
VCCI
It is further recommended that the owner of this equipment determine and ensure conformance with
any specific and applicable local regulations.
Toxic Emissions
Ciena equipment releases no toxic emissions.
• The metallic telecommunications interface should not leave the building premises unless
connected to telecommunication devices providing primary and secondary protection as
applicable.
• This product should only be operated from the type of power source indicated on the marking
label.
• The -48VDC input terminals are only provided for installations in Restricted Access Areas
locations.
• Do not use this product near water, for example in a wet basement.
• Do not install telecommunications wiring during a lightning storm.
• Do not touch un-insulated wiring or terminals carrying direct current or leave this wiring
exposed. Protect and tape wiring and terminals to avoid risk of fire, electric shock, and injury to
service personnel.
• Do not touch un-insulated wires or terminals unless the line has been disconnected at the
network interface.
Note: Removing or opening the module covers or opening the cassette covers on the
modules voids the customer warranty.
• To reduce the risk of electrical shock, do not disassemble this product. Only trained personnel
should perform service. Opening or removing covers and/or circuit boards may expose
you to hazardous energy or other risks. Incorrect re-assembly can cause electric shock or fire
when the unit is subsequently used.
• Ensure that there is no exposed wire when the input power cables are connected to the unit.
• Installation must include an independent frame ground drop to building ground.
• This equipment is to be installed only in Restricted Access Areas on business and customer
premises in accordance with Articles 110-16, 110-18 of the National Electrical Code, ANSI/
NFPA No. 70. Other installations exempt from the enforcement of the National Electrical Code
may be engineered according to the accepted practices of the local telecommunications utility.
• When connecting to the DC supply, a readily accessible disconnect device shall be
incorporated in the building installation wiring.
Voltage Precaution
DANGER: RISK OF PERSONAL INJURY
Negative 48 volts DC and 120 volts AC (if the optional AC outlet is installed in the bottom of the rack) are present in this equipment, and AC voltages may be present in some test equipment used with this system. Contact with these voltages can cause personal injury. Take appropriate safety precautions.
A voltage of -48 volts DC is present in the 5400 Switch as a power source for normal equipment
operation. Depending on the facility, AC voltages are likely present for test equipment, tools, lighting,
etc. Personnel should exercise safety precautions when connecting, measuring, and disconnecting
all voltage supply lines.
Observe the following precautions to avoid voltage shock:
• Never use both hands when working on or near a voltage source.
• Use the buddy system when working around voltage sources.
• Ensure that rescue and first aid equipment is available and accessible.
• Remove watches, rings, necklaces, and other conductive devices that might come in contact
with live voltages.
• Before activating circuits, ensure that other personnel are not in contact with voltage sources.
• Deactivate power whenever possible before performing maintenance on system components.
The 5400 Switch uses a dual -48 VDC power source (typically referred to as A-side and B-side).
Each source is protected by separate circuit breakers or fuses in the Power Distribution Unit and by
separate facility Battery Distribution Fuse Bay (BDFB) circuit breakers. Because the 5400 Switch
uses this redundant -48 VDC dual connection power configuration and is designed to operate fully
from only one -48 VDC power source, removing power from one source (either A-side or B-side
circuit breakers or fuses) does not remove power from the other source.
When removing power from the 5400 Switch, the user must ensure that power is removed
completely from both -48 VDC sources by turning off all circuit breakers/fuses.
Lift Precautions
DANGER: LIFT WARNING
The 5400 Switch is heavy. A 5410 Switch rack can weigh 430 lbs (195 kg) when fully
populated and 121 lbs (55 kg) when empty. A 5430 Switch rack can weigh 1347.7 lbs
(612 kg) when fully populated and 725 lbs (329 kg) when empty. Three people are
required to unpack and maneuver the rack.
Be very careful when moving the 5430 Switch rack around the installation area. Until the
5430 Switch rack is installed and secured in place, the rack is very unstable.
When lifting or handling materials manually, use only methods that ensure personal safety and
protection of the material. Never attempt to lift objects that are too bulky or heavy to handle safely.
Whenever possible, push loads instead of pulling them. Pushing uses the strong leg muscles,
whereas pulling uses the back muscles, which can be easily strained.
Observe the following precautions when lifting an object:
• Before lifting the load, inspect the route over which the load will be carried for obstructions or
spills that could cause slipping or tripping.
• Inspect the load for sharp edges before lifting.
• Identify good hand holds on the load; these hand holds must be able to support the full weight
of the object.
• Avoid twisting or bending when lifting, and carry the load close to the body.
• When team-lifting, ensure that the load is raised at the same rate and that it rides level to
ensure that each person carries equal weight.
A damaging static electrical charge can be generated by the rubbing and sliding of materials against
each other. Different materials have different potentials of generating and holding a static electric
charge. Plastic materials such as nylon and polyester can generate and hold a potentially large,
damaging static charge. Materials such as cotton typically do not generate and hold a charge.
The buildup of static electricity can be of a sufficient
potential to damage electronic circuitry. When working on Ciena equipment or any interconnecting
electrical/optical cabling, always wear an approved personnel ground device.
Industry experience has shown that all devices containing integrated circuits can be damaged by
static electricity that builds up on work surfaces and personnel. The effect of ESD damage may be
immediate failure or it may manifest itself as a latent failure affecting the reliability of the equipment.
The static charges and discharges are produced by various charging effects of movement and
contact with other objects. Dry air allows greater static charges to accumulate on a body.
Observe the following precautions to avoid static charges and discharges:
• Assume that all modules contain solid state electronic components that can be damaged by
ESD.
• Handle all modules by the faceplate or latch and by the top and bottom outermost edges.
Never touch the components, conductors, or connector pins.
• When handling modules (that is, storing, installing, removing, and so forth) or when working on
the backplane, always wear a grounded wrist strap or wear a heel strap and stand on a
grounded, static-dissipating floor mat.
• Observe all warning labels on bags and cartons.
• If possible, do not remove modules from antistatic packaging until they are ready for use.
• If possible, open all module packaging at a static-safe work station using properly grounded
wrist straps and static-dissipating table mats.
• Always store and transport modules in static-safe packaging.
• Keep all static-generating material, such as food wrappers, plastics, and styrofoam containers,
away from all modules.
• When removing modules from an enclosure, immediately place them in static-safe packages.
• Whenever possible, maintain relative humidity above 20 percent.
Overview
This section provides System Specifications (below) and SFP/XFP/CFP Specifications on
Page 213.
System Specifications
The tables in this section list the physical characteristics, rack weight, module power requirements,
system power requirements and SFP/XFP/CFP requirements for the Ciena® 5400 Reconfigurable
Switching System:
• Table B-1, Physical Characteristics - 5410 Switch on Page 205
• Table B-2, Physical Characteristics - 5430 Switch on Page 206
• Table B-3, 5400 Switch Weights on Page 207
• Table B-4, 5410 Switch TDM Module Power Specifications on Page 208
• Table B-5, 5410 Switch TDM Partial Power Specifications on Page 208
• Table B-6, 5410 Switch Power Feed Slot Matrix on Page 209
• Table B-7, 5430 Switch TDM Module Power Specifications on Page 210
• Table B-8, 5430 Switch TDM Partial Power Specifications on Page 211
• Table B-9, 5430 Switch Power Feed Slot Matrix on Page 211
• Table B-10, 5400 Switch Module Specifications on Page 212
• SFP/XFP/CFP Specifications on Page 213
• 5400 Switch Standards on Page 224
(1) The values listed are the total worst-case current on the 5410 PDU feeds, with each feed in
the PDU protected by a 60 amp breaker (or fuse) within the PDU.
(2) Minimum Facility Fuse Rating for Feed Protection, using an 80% fuse de-rating factor. For
example, a fuse of at least 60A in the BDFB should be used to protect feeds to the 5410 PDU in
the partial power configuration.
(3) Minimum Facility Breaker Rating for Feed Protection, using a 90% breaker de-rating factor.
Note that many breakers, such as hydro-magnetic breakers, do not require de-rating, based upon
recommendations contained in the manufacturers' documentation. For example, a 60A breaker in
the BDCBB should be used to protect Feeds 1 through 3 to the 5410 PDU.
5430 Switch Worst-Case TDM Power Specification @ 50°C (122°F) Max Ambient Temperature

Module     LM (with CFP)   SM    CTM   Fan Trays   Power (w)
Watts      358             75    80    195
Quantity   30              9     2     10
Total      10740           675   160   1950        13525
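The worst-case total follows from per-module wattage times quantity; a minimal sketch of the arithmetic (figures taken from the table above, script itself illustrative):

```python
# Worst-case 5430 TDM power budget at 50 degrees C maximum ambient.
# Per-module worst-case watts and quantities, from the table above.
modules = {
    "LM (with CFP)": (358, 30),
    "SM": (75, 9),
    "CTM": (80, 2),
    "Fan trays": (195, 10),
}

# Subtotal per module type, then the system total.
subtotals = {name: watts * qty for name, (watts, qty) in modules.items()}
total_w = sum(subtotals.values())

for name, w in subtotals.items():
    print(f"{name}: {w} W")
print(f"Total: {total_w} W")  # 13525 W
```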
(1) The values listed are the total worst-case current on the 5430 PDU feeds, with each feed in the PDU
protected by a 60 amp breaker (or fuse) within the PDU.
(2) Minimum Facility Fuse Rating for Feed Protection, using an 80% fuse de-rating factor. For example, a 70A
fuse in the BDFB should be used to protect each feed to the 5430 PDU in the dual feed configuration.
(3) Minimum Facility Breaker Rating for Feed Protection, using a 90% breaker de-rating factor. Note that many
breakers, such as hydro-magnetic breakers, do not require de-rating, based upon recommendations contained
in the manufacturers' documentation. For example, a 60A breaker in the BDCBB should be used to protect
Feeds 1 through 4 to the 5430 PDU in the dual feed configuration.
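The de-rating arithmetic in notes (2) and (3) can be sketched as follows. The 54 A feed current and the list of standard ratings are illustrative assumptions, not values taken from this manual:

```python
import math

def min_protection_rating(feed_current_a, derating_factor):
    """Smallest standard fuse/breaker rating whose de-rated
    capacity still covers the worst-case feed current."""
    standard_ratings = [30, 40, 50, 60, 70, 80, 90, 100]  # amps (illustrative)
    required = feed_current_a / derating_factor
    return next(r for r in standard_ratings if r >= required)

# Hypothetical worst-case feed current of 54 A:
fuse = min_protection_rating(54, 0.80)     # 54 / 0.8 = 67.5 -> 70 A fuse
breaker = min_protection_rating(54, 0.90)  # 54 / 0.9 = 60.0 -> 60 A breaker
```

With this example current, the 80% fuse rule leads to a 70A fuse and the 90% breaker rule to a 60A breaker, mirroring the ratings quoted in the notes.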
Table B-9. 5430 Switch Power Feed Slot Matrix

The B-side feeds B1 through B8 are redundant to the A-side feeds A1 through A8 and serve the
same slots.

Feed A1/B1: CTM slot A-CTM; LM slots A1, A2, A3; SM slots B1, B2; fan tray assemblies CFUA-1, CFUA-2
Feed A2/B2: LM slots A4, A5, A6, A7; SM slot B4; fan tray assembly CFUA-3
Feed A3/B3: LM slots A8, A9, A10, A11; SM slot B6; fan tray assembly CFUA-4
Feed A4/B4: LM slots A12, A13, A14, A15; SM slot B8; fan tray assembly CFUA-5
Feed A5/B5: CTM slot C-CTM; LM slots C1, C2, C3, C4; SM slot B3; fan tray assemblies CFUB-1, CFUB-5
Feed A6/B6: LM slots C5, C6, C7, C8; SM slot B5; fan tray assembly CFUB-2
Feed A7/B7: LM slots C9, C10, C11, C12; SM slot B7; fan tray assembly CFUB-3
Feed A8/B8: LM slots C13, C14, C15; SM slot B9; fan tray assembly CFUB-4
Module Specifications
Table B-10 lists specifications for the circuit packs used with the 5400 Switch.
Table B-10. 5400 Switch Module Specifications

5410 Switching Module (SM): height 6.46 in (164.08 mm), width 2.3 in (58.42 mm), depth 18.1 in
(459.74 mm); 8.0 lbs (3.7 kg); ports: none; power: 55w (typ), 75w (max)
5430 Switching Module (SM): height 6.46 in (164.08 mm), width 2.3 in (58.42 mm), depth 18.1 in
(459.74 mm); 5.4 lbs (2.5 kg); ports: none; power: 55w (typ), 75w (max)
5410 Control and Timing Module (CTM): height 19.96 in (506.98 mm), width 1.1 in (27.94 mm),
depth 18.1 in (459.74 mm); 8.0 lbs (3.7 kg); ports: 1 console, 4 expansion; power: 65w (typ),
80w (max)
5430 Control and Timing Module (CTM): height 19.96 in (506.98 mm), width 1.1 in (27.94 mm),
depth 18.1 in (459.74 mm); 7.4 lbs (3.4 kg); ports: 1 console, 4 expansion; power: 65w (typ),
80w (max)
Fan Tray: height 5.050 in (128.27 mm), width 12.800 in (325.12 mm), depth 22.310 in (566.67 mm);
35.5 lbs (16.1 kg); ports: none; power: 35w (typ), 195w (max)
PDU: height 5.050 in (128.27 mm), width 12.800 in (325.12 mm), depth 22.310 in (566.67 mm);
35.5 lbs (16.1 kg); ports: none; power: N/A
OSLM-3: height 19.96 in (506.98 mm), width 1.35 in (34.29 mm), depth 18.1 in (459.74 mm);
14.4 lb (6.5 kg); ports: 3 CFP; power: 180w (typ), 250w (max)
TSLM-3: height 19.96 in (506.98 mm), width 1.35 in (34.29 mm), depth 18.1 in (459.74 mm);
14.4 lb (6.5 kg); ports: 3 CFP; power: 230w (typ), 310w (max)
OSLM-3M: height 19.96 in (506.98 mm), width 1.35 in (34.29 mm), depth 18.1 in (459.74 mm);
14.2 lb (6.4 kg); ports: 3 CFP; power: 200w (typ), 275w (max)
OSLM-12: height 19.96 in (506.98 mm), width 1.35 in (34.29 mm), depth 18.1 in (459.74 mm);
14.8 lb (6.6 kg); ports: 12 XFP; power: 200w (typ), 260w (max)
SSLM-12: height 19.96 in (506.98 mm), width 1.35 in (34.29 mm), depth 18.1 in (459.74 mm);
14.8 lb (6.6 kg); ports: 12 XFP; power: 220w (typ), 310w (max)
TSLM-12: height 19.96 in (506.98 mm), width 1.35 in (34.29 mm), depth 18.1 in (459.74 mm);
14.8 lb (6.6 kg); ports: 12 XFP; power: 220w (typ), 310w (max)
OSLM-48: height 19.96 in (506.98 mm), width 1.35 in (34.29 mm), depth 18.1 in (459.74 mm);
14.6 lb (6.7 kg); ports: 48 SFP; power: 170w (typ), 225w (max)
SSLM-48: height 19.96 in (506.98 mm), width 1.35 in (34.29 mm), depth 18.1 in (459.74 mm);
14.6 lb (6.7 kg); ports: 48 SFP; power: 170w (typ), 225w (max)
TSLM-48: height 19.96 in (506.98 mm), width 1.35 in (34.29 mm), depth 18.1 in (459.74 mm);
14.6 lb (6.7 kg); ports: 48 SFP; power: 190w (typ), 275w (max)

CFPs
40G CFP Parallel, 40GBASE-LR4+/G.695 (4-wavelength WDM, 10 km; 40GbE, OTU3,
OC768/STM256) (PN NTTA12BAE6): power 8w (typ), 12w (max)
40G CFP Serial, G.693 (1550 nm serial, 2 km; 40GbE, OC768/STM256, OTU3)
(PN NTTA13EEE6): power 13w (typ), 16w (max)
SFP/XFP/CFP Specifications
The tables in this section list the 5430 Switch transceiver specifications:
• Table B-11, B-700-1036-00x OPT-SR1 Transceiver Specifications
• Table B-12, B-730-0001-00x OPT-IR1 Transceiver Specifications on Page 215
• Table B-13, 160-9011-90x OPT-LR-1 Transceiver Specifications on Page 215
• Table B-14, 160-9012-90x OPT-LR-2 Multirate Transceiver Specifications on Page 216
• Table B-15, B-700-1016-001 GigE 1000Base-SX Transceiver Specifications on Page 217
• Table B-16, B-700-1016-002 GigE 1000Base-LX Transceiver Specifications on Page 217
Parameter Value
Transmission mode 10/100/1000 BaseT
Connector type RJ-45
Required cable Shielded Cat5e
Transmission range 100m
Two types of cables are used to connect Ethernet devices together: straight-through and
crossover cables. In straight-through wiring, both ends are wired identically, so the signal
passes straight through. Crossover wiring reverses the wiring order.
A 10/100Base-T Ethernet cable uses two pairs of signals, whereas a 1000Base-T Ethernet
cable uses four pairs. For electrical 1000Base-T Ethernet (EE1) ports and electrical
10/100Base-T Ethernet (EEX) ports, the cable-type parameter may be provisioned to
straight, crossover, or auto-detect. The default value is auto-detect, which enables
automatic detection and correction of incorrect cabling with respect to crossover or
straight-through cables.
The cable-type parameter sets the 10/100/1000Base-T ports to swap the signal pairs to
accommodate the specified type of RJ-45 cable that is connected to the port.
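The pair swap that the cable-type parameter performs can be illustrated with the standard RJ-45 crossover mapping. This is a sketch of the well-known T568 pin assignments, not Ciena provisioning syntax:

```python
# RJ-45 pin swaps performed by a crossover cable (T568A <-> T568B).
# 10/100Base-T uses two pairs (pins 1-2 and 3-6); 1000Base-T uses all four.
CROSSOVER_10_100 = {1: 3, 2: 6, 3: 1, 6: 2}                    # TX pair <-> RX pair
CROSSOVER_1000 = {**CROSSOVER_10_100, 4: 7, 5: 8, 7: 4, 8: 5}  # plus pairs 4-5 and 7-8

def far_end_pin(pin, gigabit=False):
    """Pin a signal arrives on at the far end of a crossover cable."""
    mapping = CROSSOVER_1000 if gigabit else CROSSOVER_10_100
    return mapping.get(pin, pin)  # unswapped pins pass straight through
```

With auto-detect (Auto MDI/MDI-X), the port performs the equivalent swap internally when it detects that the attached cable type does not match, which is why incorrect cabling is corrected automatically.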
• CSA C22.2 No. 60950-1-07, Second Edition, dated March 27, 2007: Information Technology
Equipment - Safety - Part 1: General Requirements
• EN 60950-1: 2006: Information Technology Equipment - Safety - Part 1: General Requirements
• IEC 60950-1, Second Edition, 2005-12: Information Technology Equipment - Safety - Part 1:
General Requirements
Hardware Standards
• Telcordia GR-1089, Electromagnetic Compatibility and Electrical Safety - Generic Criteria for
Network Telecommunications Equipment, Issue 4, October 2002.
• 2.1 ESD, 2.2 EFT, 3.2 Emissions, 3.3 Immunity, 4.0 Lightning and AC Power Fault, 7.0
Electrical Safety, 9.0 Bonding and Grounding and 10.0 Criteria of DC power Port of Telecom
Load Equipment
• Telcordia GR-63-CORE, Network Equipment-Building System (NEBS) Requirements: Physical
Protection, Issue 3.
• 2.0 Spatial Requirements, 4.1 Temperature, Humidity and Altitude, 4.2 Fire Resistance, 4.3
Equipment Handling Criteria, 4.4 Earthquake, Office Vibration and Transportation Vibration,
4.5 Airborne Contaminants, 4.6 Acoustic Noise, 4.7 Illumination
• UL/CSA-60950-1, Product Safety Requirements for ITE Equipment (US, Canada).
• IEC 60950, Safety of Information Technology Equipment (All Countries).
• EN 60950, Safety of Information Technology Equipment (EU).
• FCC Title 47, Part 15, Subpart B, Class A, Limits for Radiated Emissions (US).
• ICES-003 Class A, Limits for Radiated Emissions (Canada)
• ETSI EN 300 386 Equipment Engineering; Public Telecommunications Equipment EMC
Requirements
Overview
This section contains definitions for acronyms, abbreviations, and technical terms used in this
manual. These have been selected so as to not conflict with definitions originated by Telcordia, ISO,
ITU-T or other standards bodies. Some acronyms have been defined in ways that are specific to the
5400 Reconfigurable Switching System; however, these definitions do not conflict with standard
definitions. In some cases, informal definitions have been included with the more formal, technical
definition. This is intended to explain the term in relation to 5400 Switch operation.
Numerics
1+1 The APS or MSP line protection scheme that uses one designated protect line
for one working line. If the working line fails, all associated connections are
automatically switched over to the protect line (both lines must conform to a
specific configuration within the optical module).
1:n The APS or MSP line protection scheme that uses one designated protect line
for one or more working lines. If a working line fails, all associated connections
are automatically switched over to the protect line, if it is available. See channel
#.
10G 10 Gigabit
40G 40 Gigabit
A
ACO Alarm Cutoff
Abbreviation/ Definition
Acronym
B
BBE Background Block Error
C
CAC Call Admission Control
CO Central Office
CP Connection Provisioner
D
DB Database
dB Decibel
DC Direct Current
E
EBS Excess Burst Size
EO Equipment Order
ER Equipment Request
ES Errored Seconds
F
FC Failure Code; Failure Count
G
GA General Availability
H
HD-SDI High Definition - Serial Digital Interface
I
I/O Input/Output
ID Identifier
IE Information Element
IP Internet Protocol
IR Intermediate Reach
IS In Service
J
J0 J-zero section trace performance monitoring
K
K1 SONET or SDH overhead byte for 1+1 switching, VLSR, or MSP/MS-SPRing
(Refer to K2.)
K2 Used with K1–SONET or SDH overhead byte for 1+1 switching, VLSR, or
MSP/MS-SPRing
L
LAN Local Area Network
LM Line Module
LR Long Reach
LT Low Threshold
M
MAC Media Access Control
MGMT Management
MJ Major
MN Minor
MR Medium Reach
MS Multiplex Section
MUX Multiplexer
N
NA Not applicable or not available
NC Normally Closed
O
OAM&P Operations, Administration, Maintenance, and Provisioning
OC-12 Optical Carrier level 12, SONET bit rate = 622.08 Mbps
OC-48 Optical Carrier level 48, SONET bit rate = 2.488 Gbps
OC-192 Optical Carrier level 192, SONET bit rate = 9.953 (~10) Gbps
OM Optical Module
OOS Out-of-Service
OS Operating System
P
P2P Point-to-Point (part of OSPF routing protocol)
PC Personal Computer
PJ Pointer Justification
PM Performance Monitoring
PO Purchase Order
POP Point-of-Presence
PWR Power
Q
Q3 Telecommunications Management Network (TMN) Interface
R
RAM Random Access Memory
RCVR Receiver
RF Radio Frequency
RS Regenerator Section
RTN Return
RTRV Retrieve
RU Rack Unit
RX or Rx Receive
S
SA Service Affecting
SD Signal Degradation
SEF Severely Errored Framing seconds at the section (-S) layer only; SNM - Source
Explicit Forwarding
SER Serializer/Deserializer
SM Switch Module
SR Short Reach
STM-640 Synchronous Transport Module 640 (SDH bit rate 100 Gb/s)
T
T1 Service or line operating at the DS-1 rate (1.544 Mb/s)
TM Timing Module
TP Termination Point
TU Transmission Unit
TX or Tx Transmit
U
UAS Unavailable Seconds or Universally Accepted
UEQ Unequipped
UL Underwriters Laboratories
V
VAC Volts Alternating Current
VC Virtual Container
VT Virtual Tributary
W
WAN Wide Area Network
X-Z
X.25 ITU-T standard governing the operation of packet-switching networks
XCVR Transceiver
Terminology
This section contains definitions of terms used in Ciena Corporation documentation. These have been
selected so as to not conflict with definitions originated by Telcordia, ISO, ITU-T or other standards bodies.
Some terms have been defined in ways specific to the 5400 Switch, but do not conflict with standard
definitions. In some cases, informal definitions have been included with the more formal, technical definition.
This is intended to explain the term in relation to 5430 operation.
Term Definition
A
Administrative weight or admin The numerical value or cost assigned to an Optical Signaling and
wt Routing Protocol (OSRP) link for routing purposes; the higher the
administrative weight, the higher the cost of routing over that link.
The alarm severity that the user can set to a fault, e.g., critical, major,
etc.
Asynchronous Transmission A method of data transmission that allows data bits to be sent at
irregular intervals by preceding each with a start bit and following it
with a stop bit.
Attenuation (1) Limited Operation. The condition in a fiber optic link when
operation is limited by the power of the received signal (rather than
by bandwidth or by distortion).
(2) The decrease in magnitude of power of a signal in transmission
between points. A term used for expressing the total losses on an
optical fiber consisting of the ratio of light output to light input.
Attenuation is usually measured in decibels per kilometer (dB/km) at
a specific wavelength. Typical multi-mode wavelengths are 850 and
1300 nanometers (nm); single mode, at 1300 and 1500 nm. NOTE:
When specifying attenuation, it is important to note if it is nominal or
average room temperature value, or maximum over operating range.
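The dB figures in this definition follow directly from the ratio of input to output optical power; a minimal sketch (the 1.0 mW launch power, 0.25 mW received power, and 20 km length are illustrative values, not from this manual):

```python
import math

def attenuation_db(p_in_mw, p_out_mw):
    """Total loss in dB between input and output optical power."""
    return 10 * math.log10(p_in_mw / p_out_mw)

def attenuation_db_per_km(p_in_mw, p_out_mw, length_km):
    """Loss normalized to fiber length, as this definition describes."""
    return attenuation_db(p_in_mw, p_out_mw) / length_km

# Example: 1.0 mW launched, 0.25 mW received over 20 km of fiber.
# Total loss = 10 * log10(4) ~= 6.02 dB, i.e. roughly 0.30 dB/km.
loss_db = attenuation_db(1.0, 0.25)
loss_db_per_km = attenuation_db_per_km(1.0, 0.25, 20)
```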
Automatic Protection This protocol involves using a designated protect line alongside one
Switching. (APS) or more working lines. If one of the working lines fails, all
connections are automatically switched over to the protect line. The
5430 by Ciena supports 1:N (one for N) APS, meaning one protect
line protects some number of N working lines. It also supports 1+1
(one plus one) APS, where a single protect line protects a single
working line, and the lines’ ports and optical modules need to be
configured in a certain way.
B
Bandwidth (1) Measure of the information capacity of a transmission channel.
(2) The difference between the highest and lowest frequencies of a
band that can be passed by a transmission medium without undue
distortion (e.g., the AM band - 535 to 1705 kilohertz). (3) Information
carrying capacity of a communication channel. Analog bandwidth is
the range of signal frequencies that can be transmitted by a
communication channel or network. (4) A term used to indicate the
amount of transmission or processing capacity possessed by a
system or a specific location in a system (usually a network system).
Battery Distribution Fuse Bay Distribution point for DC power at central office.
(BDFB)
Bit Error Rate (BER) (1) Percentage of bits in a transmittal received in error. (2) The
number of coding violations detected in a unit of time, usually one
second. (3) Specifies expected frequency of errors; compares the
ratio of incorrectly transmitted bits to correctly transmitted bits.
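Senses (1) and (3) of this definition reduce to a simple ratio; an illustrative sketch:

```python
def bit_error_rate(errored_bits, total_bits):
    """Ratio of bits received in error to total bits transmitted."""
    return errored_bits / total_bits

# Example: 2 errored bits out of 10**9 transmitted bits gives a BER of 2e-9.
ber = bit_error_rate(2, 10**9)
```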
Bundle Many individual fibers contained within a single jacket or buffer tube.
Also, a group of buffered fibers distinguished in some fashion from
another group in the same cable.
C
Capacity The information-carrying ability of a telecommunications facility.
The nature of the facility determines how capacity is measured:
line capacity might be measured in bits per second, while switch
capacity might be measured in the number of calls it can switch in
one hour, or the maximum number of calls it can keep in
conversation simultaneously.
channel # The number assigned to each line in an APS working group. The
protect line is always channel 0; the other lines are given a number
from 1 to 16. The lower the channel number, the higher the priority.
Thus, if two working lines fail, both having priority HIGH, and one has
channel number 2 and the other has channel number 4, the one with
channel number 2 uses the protect line. Also identifies a particular
wavelength of light in a Wavelength Division Multiplexing (WDM) system.
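The channel-number priority rule described above can be sketched as follows (an illustrative model, not 5400 software):

```python
def select_for_protect_line(failed_channels):
    """Among failed working lines of equal priority, the lowest channel
    number wins the single 1:N protect line. Channel 0 is the protect
    line itself and never competes for protection."""
    working = [ch for ch in failed_channels if ch != 0]
    return min(working) if working else None

# Two failed working lines, channels 2 and 4: channel 2 gets the protect line.
winner = select_for_protect_line([4, 2])
```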
Cleared When an alarm is cleared it may indicate that the problem condition
causing the alarm has been addressed and/or resolved by a
technician.
Coating A material put on a fiber during the drawing process to protect it from
the environment.
Coaxial Cable A type of cable used for high frequency transmission. It consists of a
central conductor surrounded by insulation, which in turn, is
surrounded by a circular outer conductor.
Concatenation When multiple STS-1 frames are linked together to form an envelope
capable of carrying super payloads.
Connector A mechanical device used to align and join two fibers together to
attach and decouple fiber to a transmitter, receiver, or another fiber.
(Common connectors include the FC, FCPC, Biconic, ST Connector,
SC, D4, SMA, 905, or 906.)
Connection Termination Point A transport entity that terminates a link connection and supports an
(CTP) optical carrier (OC-n). CTPs typically do not process or monitor the
characteristic information (with the exception of Synchronous Optical
Network (SONET) Path CTPs and intermediate monitoring).
5430 Designer Software Tool Software tool used to plan network configuration, analyze traffic, and
determine equipment requirements.
cross connect A type of connection whose route is not flexible; a series of OSRP
links from start node to end node must be specified and specific
Synchronous Optical Network (SONET) or Synchronous Digital
Hierarchy (SDH) lines, when chosen, cannot be changed. It
corresponds to a sequence of permanent cross connections set up
on the network. If a line fails, there is no mesh protection; the
connection waits until the line comes back up.
D
Data Communications In a data station, the equipment that provides the signal conversion
Equipment (DCE) and coding between the Data Terminal Equipment (DTE) and the
line. The DCE may be separate equipment or an integral part of the
DTE or of intermediate equipment. A DCE may perform other
functions usually performed at the network end of the line.
Data Rate The number of bits of information transmitted per second as in a data
transmission link. Typically expressed as megabits per second (Mb/s).
Data Terminating Equipment That part of a data station that serves as a data source (originates
(DTE) data for transmission), a data sink (accepts transmitted data), or
both.
Delay: a-b The delay from the local node to the remote node.
Delay: b-a The delay from the remote node to the local node.
Designated Transit List (DTL) A sequence of OSRP links or nodes that
define a path between a connection's start and end nodes. DTLs
may either be exclusive (i.e., the DTLs must be used or the
connection cannot be routed) or preferred (i.e., the DTLs are
checked first; only if no DTL is available will other routes be
considered). A Subnetwork Connection (SNC) may have zero or
more DTLs; a cross connect’s path must be specified using a single
exclusive DTL.
Diverse Route An alternate route that mesh protection uses if the current route fails
and usually implies a different physical fiber route for fiber cut
protection.
E
Electromagnetic Interference Any electrical or electromagnetic interference that causes
undesirable responses or degradation.
Electrostatic Discharge (ESD) The discharge of high voltage caused by static charging.
Electrostatic Discharge Strap ESD-guard wrist band worn by technicians that terminates at the
5430 ESD - ground jack to provide a fixed resistance to ground.
Protects the technician and system from electrostatic discharge.
F-H
Fiber A physical transmission facility containing one or more spans
(connecting possibly multiple Ciena 5430 pairs)
Fiber Optic Cable A transmission medium that uses (is composed of) glass or plastic
fibers, rather than copper wire, to transport data or voice signals. The
signal is imposed on the fiber via pulses (modulation) of light from a
laser or a Light-emitting Diode (LED). Because of its high bandwidth
and lack of susceptibility to interference, fiber optic cable is used in
long haul or noisy applications.
Fiber Optic Jumper A single or multiple fiber that is used to connect one unit of
equipment to another within an equipment frame.
Fiber Optics A method for the transmission of information (i.e., sound, pictures,
data). Light is modulated and transmitted over high purity, hair-thin
fibers of glass. The bandwidth capacity of fiber optic cable is much
greater than that of conventional cable or copper wire.
Fuse A unit that detects current flow and opens a circuit at a preset current
flow. These are used for the protection of a circuit.
I
Infrastructure The basic facilities, services and installations needed for the
functioning of a community or society such as transportation and
communications systems.
Interexchange Carrier (1) Any individual, partnership, association, joint stock company
trust, governmental entity, or corporation engaged for hire in
interstate or foreign communication by wire or radio between two or
more exchanges. (2) A long-distance telephone company offering
circuit-switched, leased line or packet-switched service or some
combination thereof. Interexchange Common Carrier (See
Interexchange Carrier).
J-K
Jitter Timing jitter is defined as short-term variations of the significant
instances of a digital signal from their ideal positions in time.
Jumper Fiber optic cable that has connectors installed on both ends.
L
Line Defines the internal SONET fiber route between line terminating
equipment. Lines consist of shorter fiber sections. The outside
edges of connected lines define the path.
Line Terminating Equipment Network elements that originate and/or terminate line (OC-N)
(LTE) signals. LTEs access, modify, originate, and/or terminate transport
overhead.
Logical connection A data path between two SONET end systems, traversing one or
more 5430s, that satisfies customer requirements for size, termination,
and protection strategies.
Long Reach (LR) Category for SONET and SDH transmitters and receivers. Typical
transmission distance up to 80 km.
M
Medium Reach (MR) Category for SONET and SDH transmitters and receivers. Typical
transmission distance of 2-10 km.
Multiplexer (MUX) (1) Equipment that enables several data streams to be sent over a
single physical line. It is also a function by which one connection
from an ISO layer is used to support more than one connection to
the next higher layer. (2) A device for combining several channels to
be carried by one line or fiber.
N
Name A customer-supplied identifier. Each network element is created with
a default name; these default names may be changed by an MPS
user.
Network Element (NE) (1) Any device that is part of a communications path and serves one
or more of the section, line, or path terminating functions.(2) Defined
in the TMN document (M.3010) as “telecommunication equipment
and support equipment or any item or groups of items… that perform
NEFs” (Network Element Functions). In the context of this manual, a
5430.
Network Element Function An entity “which communicates with the TMN for the purpose of
(NEF) being monitored and/or controlled. The NEF provides the
telecommunications and support functions which are required by the
telecommunications network being managed.”
Nodal Control Processor (NCP) Circuit pack that provides craft interface and monitoring of the
system. It is the agent within a node that responds to requests from
outside software and sends traps to call attention to changes in
condition. This circuit pack is similar in function to the CM.
Node A 5430.
O
Out-of-Service - Autonomous The cause of the incapability is an unsolicited event occurrence on the NE.
(OOS-AU)
Out-of-Service - Management- Operationally capable of performing only part of its functions, and at
and-Abnormal (OOS-MAANR) the same time is intentionally suspended from performing all
functions.
Open Systems Interconnection Referring to the OSI reference model, a logical structure for network
(OSI) operations standardized by the International Standards Organization
(ISO). The OSI model organizes the communications process into
seven different categories and places the categories in a layered
sequence based on their relationship to other users. Layers 7
through 4 deal with end to end communications between the
message source and the message destination, while layers 3
through 1 deal with network access.
Optical Service Channel (OSC) An optical maintenance channel linking the OTS Repeaters (OLAs)
to each other and to the OTS End Terminals. It is multiplexed onto
the same fiber as the OC-48 channels. All telemetry, data, and voice
traffic originating and/or terminating at OTS Repeater sites are
routed over the OSC.
Optical Signaling and Routing Ciena proprietary link state protocol for intelligent routing. An OSRP
Protocol (OSRP) link must be established between two nodes to allow them to
communicate with one another. OSRP handles routing, mesh
protection, and regrooming.
Optical Time Domain (1) A device for characterizing a fiber wherein an optical pulse is
Reflectometer (OTDR) transmitted through the fiber and the resulting backscatter and
reflections to the input are measured as a function of time. Useful in
estimating the attenuation coefficient as a function of distance and
identifying defects and other localized losses. (2) A test instrument,
working on the principle of continuous energy backscatter, that
provides a complete characterization of fiber loss along its length.
OSRP Link A logical connection between two 5430 nodes for dynamic routing
purposes; one OSRP link contains one or more lines.
OSRP Path A logical connection between two SONET end systems traversing
one or more 5430s; also called a logical connection.
OTS Branching Cross connect A type of cross connect site that has more than two OTS End
Site Terminals.
OTS Cross connect Site A node with two or more OTS end terminals, where some OC-48
signals are dropped/added (i.e., terminated in an equipment that
provides optical-to-electrical and electrical-to-optical conversion);
and some are express, passing through the node between two OTS
End Terminals.
OTS End Terminal OTS terminal equipment that performs the following functions (for
both working and protection paths): Optical Multiplexing (MUX) and
Optical Demultiplexing (DMUX), Optical Amplification, Optical signal
conversion (i.e., through a WDM), and the insertion/extraction of the
Optical Service Channel.
OTS Regenerator OTS regeneration equipment that passes through and reconstructs
all of the optical channels.
OTS Regenerator Site A node with two OTS End Terminals or one OTS Regenerator,
through which all of the OC-48s are passed.
OTS Repeater The bidirectional OTS repeater equipment that consists of four
optical amplifiers (two for the working paths and the other two for the
protection paths) and the corresponding OSC Equipment.
OTS Section The portion of the OTS Span between two adjacent OTS Repeaters,
between an OTS End Terminal and an OTS Repeater, or between
adjacent OTS End Terminals (for short spans where OTS Repeaters
are not needed).
OTS Span The fiber transmission facility between two OTS End Terminals.
OTS Subnetwork A collection of OTS NEs from the same supplier capable of
communicating with each other as a managed entity. An OTS
subnetwork may have more than one OTS from the same supplier.
Outside Plant (OSP) Loss The optical power loss (in dB) due to fiber, splices, and connectors.
P-Q
Pass Through A VLSR or MS-SPRing span state where the working line is live, but
a ring switch is using the protect line.
Payload The customer portion of the cargo transported across a span. The
payload consists of multiple WDM channels.
Payload Pointer H1, H2, and H3 bytes of the STS Line overhead that indicate the data's
position within the payload envelope, define the size of the virtual
tributary within the payload envelope, and help perform frequency
corrections.
Performance Monitoring (PM) Measures the quality of service and identifies degrading or
marginally operating systems (before an alarm would be generated).
Physical Termination Point Physical address that identifies a termination or cross over
connection point for switching optical transport signals. Refers to the
physical location of an Optical Module within a Ciena 5430.
Permanent Virtual Circuit (PVC) A type of connection whose route is not flexible; the series
of OSRP links from start node to end node must be specified, and
specific SONET or SDH lines, once chosen, cannot be changed. It
corresponds to a sequence of permanent cross connections set up
on the network. If a line fails, there is no mesh protection; the
connection simply waits until the line comes back up.
Protect Line A line designated to transport a working line’s traffic whenever the
working line fails. Protect lines can also be used to carry low priority
connections (extra traffic) when not in a failure scenario.
Protection Line A line allocated to transport the working traffic during a switch event.
Protection lines can also be used to carry low priority connections
(extra traffic).
Protection Bundle A logical grouping of fibers within a cable that are bundled together
for purposes of calculating the divergent route. Fibers within a
bundle are not used for both working and protect lines.
Protection requirements for a connection ANY - May use any protected line or unprotected
lines. May also use a protect line if it is a low priority preemptable
connection.
NONE - May use any unprotected line. May use any protect line
provided that it is a low priority preemptable connection.
APS/MSP - May use any linear APS-protected or MSP-protected
working line (not an option for low priority preemptable connections).
SmartRingVLSR™/MS-SPRing - May use any available
SmartRingVLSR™-protected or MS-SPRing-protected working line
(not an option for low-priority, preemptable connections).
Protection Status Current protection state for a line. Can be protecting or protected if
currently involved in an APS, MSP, SmartRingVLSR™, or
MS-SPRing switchover; otherwise, it will be NA.
R
Real Time Processor The processor in the Timing Module.
Receiver Sensitivity The optical power required by a receiver for low error signal
transmission. In the case of digital signal transmission, the mean
optical power is usually quoted in watts or dBm (decibels referenced
to 1 milliwatt).
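As an illustrative sketch (not part of the product documentation), the relationship between the dBm unit used here and absolute power in watts can be expressed as follows, with 0 dBm defined as 1 milliwatt:

```python
import math

def dbm_to_watts(dbm):
    """Convert optical power in dBm to watts (0 dBm = 1 mW)."""
    return 1e-3 * 10 ** (dbm / 10.0)

def watts_to_dbm(watts):
    """Convert optical power in watts to dBm."""
    return 10.0 * math.log10(watts / 1e-3)
```

For example, a receiver sensitivity of -30 dBm corresponds to one microwatt of optical power.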
Regroom The process of finding a better route for a connection and shifting the
home route of the connection to that better route.
Ring Switched A VLSR or MS-SPRing state where the ring’s protect lines are
handling the traffic for a failed span.
S
Section Defines individual physical portions of a SONET fiber route. The
outside edges of connected sections define a line. Section
Terminating equipment is replaced by line terminating equipment at
both outside ends.
Section Overhead The first three rows of an STS-1 overhead frame, which carry
synchronization and section overhead information.
Service Affecting (SRVEFF) The code indicating the effect of an alarm on service (SA/NSA);
in TL-1, the effect of a given alarm condition on service (i.e., traffic),
as either service affecting (SA) or non-service affecting (NSA).
Severely Errored Second (SES) A second in which a signal failure occurs or in which more
than a preset number of coding violations occur (depending on the
type of signal).
Short Reach (SR) Category for SONET transmitters and receivers. Typical
transmission distance of 2 km or less.
Signal-to-Noise Ratio (SNR) The ratio of signal power to noise power. Measured in dB.
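As a small illustrative example (not from the manual), the dB figure in the SNR definition is computed from the ratio of the two powers:

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in dB from signal and noise power
    expressed in the same unit (e.g., both in watts)."""
    return 10.0 * math.log10(signal_power / noise_power)
```

A signal 100 times stronger than the noise thus has an SNR of 20 dB.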
Simple Network Management Protocol (SNMP) A set of protocols for managing complex
networks. SNMP works by sending messages, called Protocol Data
Units, to different parts of a network. SNMP-compliant devices,
called agents, store data about themselves in Management
Information Bases (MIBs) and return this data to the SNMP
requesters.
Simulation Artificially exercising the routing, failover, and recovery of a network
using the MPS.
Single Mode Used to describe optical fiber that allows only one mode of light
signal transmission.
Single Mode Fiber Also called monomode. Single mode fiber has a narrow core. Such
fiber has a higher bandwidth than multimode fiber, but requires a
light source with a narrow spectral width (e.g., a laser).
Soft Permanent Virtual Circuit (SPVC) A type of connection whose route is flexible; it may be
regroomed, and it may be mesh-protected to another route if a line
fails. Specific paths may also be forced or encouraged by the use of
Designated Transit Lists. See SNC - Subnetwork Connection.
Subnetwork Connection (SNC) A type of connection whose route is dynamically determined; it may
be regroomed, it may be mesh-protected to another route if a line
fails, and so forth. Specific paths may also be forced or encouraged
by the use of DTLs.
Switch A device that filters, forwards, and directs frames or circuits based on
a destination address.
Switching Network Manager Manages individual optical switching elements and networks of
such elements.
Synchronous Digital Hierarchy (SDH) ITU-TSS international standard for transmission over
optical fiber.
Synchronous Optical Network (SONET) (1) A set of standards for transmitting digital information
over optical networks. Synchronous indicates that all pieces of the
SONET signal can be tied to a single clock. (2) An ANSI standard for
synchronous transmission up to multi-gigabit speeds. (3) A standard
for fiber optics.
Synchronous Transport Signal (STS, STS-n) (1) SONET standard for transmission over OC-1
optical fiber at 51.84 Mb/s. (2) A SONET frame including overhead
and payload capacity. The basic SONET frame is the STS-1. STS-1s
can be multiplexed or concatenated with no additional overhead.
STS-n denotes a multiple of 51.84 Mb/s.
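As a brief illustrative sketch (not vendor code), the line rate of an STS-n signal follows directly from the 51.84 Mb/s basic rate stated above:

```python
STS1_RATE_MBPS = 51.84  # basic SONET STS-1 rate in Mb/s

def sts_rate_mbps(n):
    """Line rate of an STS-n signal in Mb/s: n times the STS-1 rate,
    with no additional overhead when multiplexed or concatenated."""
    return n * STS1_RATE_MBPS
```

For example, STS-48 (carried on OC-48) runs at 48 x 51.84 = 2488.32 Mb/s.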
T
Telcordia Telcordia Technologies, Inc., formerly Bellcore (BELL
Communications Research, Inc.)
Time Division Multiplexing (TDM) A technique where information from multiple channels may
be allocated bandwidth on a single wire or fiber based on time slot
assignment.
time slot A single channel on a SONET line. The size of a line specifies the
number of time slots (3 for OC-3, 12 for OC-12, 48 for OC-48, or 192
for OC-192).
Transceiver An electronic device that has both transmit and receive capabilities.
Transmission Control Protocol/Internet Protocol (TCP/IP) A set of protocols developed to link
dissimilar computers across many kinds of networks.
Transparent Logical Connection (TLC) A line not terminated at the 5430; SONET line overhead
is monitored but otherwise forwarded unaffected (TLCs are
transparent to the SONET end system).
Transport Layer OSI layer that is responsible for reliable end-to-end data transfer
between end systems.
U
Unavailable Seconds (UAS) The count of seconds in which a signal is declared failed or in which
10 consecutive severely errored seconds (SES) have occurred, until
the time when 10 consecutive non-SES seconds occur.
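The counting rule in the UAS definition can be sketched as follows. This is a simplified illustration of the 10-consecutive-second entry/exit rule only (the triggering 10 SES count as unavailable; the terminating 10 non-SES count as available), not vendor code:

```python
def count_unavailable_seconds(ses_flags):
    """Count UAS from a per-second sequence of SES flags (True = SES).

    Unavailability begins at the onset of 10 consecutive SES (those
    10 seconds count as unavailable) and ends at the onset of 10
    consecutive non-SES (those 10 seconds count as available)."""
    uas = 0
    unavailable = False
    run = 0   # consecutive seconds that would flip the current state
    held = 0  # non-SES seconds tentatively counted while unavailable
    for ses in ses_flags:
        if not unavailable:
            if ses:
                run += 1
                if run == 10:           # unavailability begins
                    unavailable = True
                    uas += 10           # the triggering SES are unavailable
                    run = 0
            else:
                run = 0
        else:
            if not ses:
                run += 1
                held += 1
                if run == 10:           # unavailability ends
                    unavailable = False
                    held = 0            # the 10 good seconds stay available
                    run = 0
            else:
                uas += held + 1         # broken good-run counts as unavailable
                held = 0
                run = 0
    if unavailable:
        uas += held  # period never ended; pending seconds are unavailable
    return uas
```

For example, 12 SES followed by 10 clean seconds yields 12 UAS: the first 10 SES start the unavailable period, the next 2 extend it, and the 10 clean seconds end it without counting as unavailable.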
Unit Interval (UI) The shortest time interval between consecutive significant instants
in isochronous networks. The durations of significant time intervals
are whole multiples of the UI.
Universally Accepted (UAS) Special module type that is universally accepted, but not yet
configured.
Ultra Physical Contact (UPC) Highest quality, lowest cost optical connector standard.
V
Very High Bit Rate Digital Subscriber Line (VDSL) A scheme to boost transmission speeds up
to 52 Mbps for very short distances (up to 1000 feet) on copper wire,
or for longer distances in fiber-optic networks.
Virtual Container (VC) The SDH equivalent to the SONET Virtual Tributary (VT). Virtual
Containers are made up of Tributary Units (TU). See VT1.5, VT2,
VT6.
Virtual Tributary (VT) A SONET subset, or unit of transport that can be combined or
concatenated for transmission on the network. Designed for
transport and switching of sub-DS3 level payloads on higher
bandwidth SONET channels.
Virtual Line Switched Ring (VLSR) A four-fiber ring configuration consisting of 5430s. This is
Ciena's version of rings. Each span in the ring has one or more
working lines and the same configuration of protect lines. The rings
perform ring switches and span switches.
VT Superframe Four consecutive VTs from four consecutive STS-1 frames that
contain data originating from a single user.
W-Z
Wavelength The length of one complete wave of an alternating or vibrating
phenomenon, generally measured from crest to crest or from trough
to trough of successive waves. The distance between two crests of
an electromagnetic waveform.
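As a hypothetical illustration (the values below are common DWDM figures, not taken from this manual), the wavelength of an optical carrier relates to its frequency through the speed of light:

```python
C_M_PER_S = 299_792_458  # speed of light in vacuum, m/s

def wavelength_nm(frequency_thz):
    """Vacuum wavelength in nanometers for an optical carrier
    frequency given in terahertz (lambda = c / f)."""
    return C_M_PER_S / (frequency_thz * 1e12) * 1e9
```

For instance, a 193.1 THz carrier has a vacuum wavelength of roughly 1552.5 nm, in the C-band commonly used for WDM transmission.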
Wavelength Division Multiplexing (WDM) (1) A technique in fiber-optic transmission for using
multiple light wavelengths (colors) to send data over the same
medium. (2) Two or more colors of light on one fiber. (3)
Simultaneous transmission of several signals in an optical
waveguide at differing wavelengths.
WDM Transmitter A device that receives an optical signal, converts the signal to an
electrical signal that is amplified/retimed/reshaped, then converts the
electrical signal back to an optical signal.
Working Line A line carrying working traffic during normal conditions, when there
are no switch events; it is protected by protect lines in an APS or
SmartRingVLSR™ configuration.
Index
U
upgrade 73
UPSR/SNCP
automatic switching criteria 91
capabilities 89
requirements 90
simple hubbing application
add/drop port cross connect 92
APS 1+1 cross connect 94
different ring cross connect 94
overview 92
same ring cross connect 93
user access privileges 67
V
vacant module slots 159
version management 73
VLSR 48
capabilities 49
W
weight
CoreDirector 206
weight, 5410 Switch 201
weight, 5430 Switch 201, 206