Advanced OpenSees Algorithms, Volume 1: Probability Analysis Of High Pier Cable-Stayed Bridge Under Multiple-Support Excitations, And Liquefaction

Ebook, 406 pages

About this ebook

The Quincy Memorial Bridge is a truss bridge over the Mississippi River in Quincy, Illinois. It carries eastbound U.S. Highway 24 into the city of Quincy from Missouri. Built in 1930, initially as a toll bridge, it remains structurally sound. Construction began in 1928 under the Kelly-Atkinson Company and was completed in 1930, with the first car crossing the bridge on May 19 of that year during an official inspection trip. In this book, finite element simulations, interpretation of programming blocks, and explanations of seismic concepts for the Quincy Bridge under near-fault earthquakes and the destructive phenomenon of liquefaction are presented using the OpenSees software.
Language: English
Publisher: Lulu.com
Release date: Dec 3, 2022
ISBN: 9781387433407

    Book preview

    Advanced OpenSees Algorithms, Volume 1 - Mahdi Alborzi Verki

    Advanced OpenSees Algorithms, Volume 1: Probability Analysis Of High Pier Cable-Stayed Bridge Under Multiple-Support Excitations, And Liquefaction

    Third Edition

    Copyright © 2022 Mahdi Alborzi Verki

    All rights reserved.

    ISBN: 9781387433407

    To learn more about advanced algorithms, you can contact me through the following email address: alborzi1980@gmail.com

    Chapter 1: Quincy Bayview Cable-Stayed Bridge (Illinois) Specifications

    The Bayview Bridge, built for $32 million ($3 million over budget), is a cable-stayed bridge carrying westbound U.S. Route 24 (US 24) over the Mississippi River. It connects the cities of West Quincy, Missouri, and Quincy, Illinois. The Quincy Memorial Bridge serves eastbound US 24.

    Coordinates: 39°56′00″N 91°25′17″W
    Carries: 2 lanes of westbound US 24
    Crosses: Mississippi River
    Locale: West Quincy, Missouri, and Quincy, Illinois
    Other name(s): Quincy Bayview Bridge
    Maintained by: Illinois Department of Transportation

    Characteristics

    Design: Cable-stayed bridge
    Total length: 4,507 feet (1,374 m)
    Width: 27 feet (8 m)
    Longest span: 900 feet (274 m)
    Clearance below: 63 feet (19 m)

    [Figure: Quincy Bayview Bridge (Illinois) at night]

    [Figure: Quincy Bayview Bridge location]

    A river city along the Mississippi, Quincy was once the second-largest city in Illinois.

    Suspension bridge

    A suspension bridge is a type of bridge in which the deck is hung below suspension cables on vertical suspenders. The first modern examples of this type were built in the early 1800s. Simple suspension bridges, which lack vertical suspenders, have a long history in many mountainous parts of the world. Besides the form commonly called a suspension bridge, there are other suspension types. In the common form, main cables are suspended between the towers, and vertical suspender cables carry the live and dead loads of the deck below, over which traffic passes. This arrangement allows the deck to be level or arched upward for extra clearance. Like other suspension bridge types, this form is often constructed without the use of falsework. The cables must be anchored at each end of the bridge, since any load applied to the bridge is converted into tension in these main cables.

    The 1915 Çanakkale Bridge over the Dardanelles Strait in Turkey, connecting Europe and Asia, is the longest suspension bridge in the world.

    Cable-stayed bridge

    A cable-stayed bridge has one or more towers (or pylons), from which cables support the bridge deck. Distinctive features are the cables or stays, which run directly from the tower to the deck, typically forming a fan-like pattern or a series of parallel lines. This is in contrast to the modern suspension bridge, where the cables supporting the deck are suspended vertically from the main cable, which is anchored at both ends of the bridge and runs between the towers. The cable-stayed bridge is optimal for spans longer than those of cantilever bridges and shorter than those of suspension bridges: this is the range within which cantilever bridges would rapidly grow heavier and suspension bridge cabling would be more costly. Cable-stayed bridges were being designed and constructed by the late 16th century, and the form found wide use in the late 19th century. Early examples, including the Brooklyn Bridge, often combined features of both the cable-stayed and suspension designs. Cable-stayed designs fell from favor in the early 20th century as larger gaps were bridged using pure suspension designs and shorter ones using various systems built of reinforced concrete. The form returned to prominence in the later 20th century, when the combination of new materials, larger construction machinery, and the need to replace older bridges all lowered the relative price of these designs.

    The Russky Bridge in Vladivostok has a central span of 1104 metres. It is the world's longest cable-stayed bridge.

    Comparison of cable-stayed and suspension bridges

    Cable-stayed bridges may appear similar to suspension bridges, but they are quite different in principle and construction. In suspension bridges, large main cables (normally two) hang between the towers and are anchored at each end to the ground. This can be difficult to implement when ground conditions are poor. The main cables, which are free to move on bearings in the towers, bear the load of the bridge deck. Before the deck is installed, the cables are under tension only from their own weight. Along the main cables, smaller cables or rods connect to the bridge deck, which is lifted in sections. As this is done, the tension in the cables increases, as it does with the live load of traffic crossing the bridge. The tension in the main cables is transferred to the ground at the anchorages and by downward compression on the towers.

    Chapter 2: Some Concepts Related to Earthquake Engineering

    Incremental dynamic analysis

    Incremental dynamic analysis (IDA) is a computational analysis method of earthquake engineering for performing a comprehensive assessment of the behavior of structures under seismic loads. It was developed to build upon the results of probabilistic seismic hazard analysis in order to estimate the seismic risk faced by a given structure, and it can be considered the dynamic equivalent of static pushover analysis. IDA involves performing multiple nonlinear dynamic analyses of a structural model under a suite of ground motion records, each scaled to several levels of seismic intensity. The scaling levels are selected to force the structure through its entire range of behavior, from elastic to inelastic and finally to global dynamic instability, where the structure essentially collapses. Appropriate postprocessing presents the results as IDA curves, one for each ground motion record, of the seismic intensity, typically represented by a scalar intensity measure (IM), versus the structural response, as measured by an engineering demand parameter (EDP). Possible choices for the IM are scalar (or rarely vector) quantities that relate to the severity of the recorded ground motion and scale linearly or nonlinearly with its amplitude. The IM should be chosen so that appropriate hazard curves can be produced for it by probabilistic seismic hazard analysis, and it should be well correlated with the structural response of interest in order to decrease the number of required response-history analyses. Possible choices are the peak ground acceleration, peak ground velocity, or Arias intensity, but the most widely used is the 5%-damped spectral acceleration at the first-mode period of the structure; recent studies also show that spectrum intensity (SI) is an appropriate IM. The EDP can be any structural response quantity that relates to structural, non-structural, or contents damage. Typical choices are the maximum (over all stories and time) interstory drift, the individual peak story drifts, and the peak floor accelerations.
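
    The bookkeeping behind an IDA can be summarized in a few lines of code. The Python sketch below is illustrative only and is not taken from this book's models: it assumes a hypothetical helper run_history(record, scale) that performs one scaled nonlinear response-history analysis (for example in OpenSees) and returns the peak interstory drift used as the EDP, and it takes the IM to be the scale factor times the record's unscaled Sa(T1, 5%), which is valid because spectral acceleration scales linearly with the record.

        from typing import Callable, Dict, List, Tuple

        def incremental_dynamic_analysis(
            records: Dict[str, float],                   # record name -> unscaled Sa(T1, 5%)
            scale_factors: List[float],                  # increasing intensity levels
            run_history: Callable[[str, float], float],  # hypothetical: (record, scale) -> peak drift
        ) -> Dict[str, List[Tuple[float, float]]]:
            """Return one IDA curve per record as a list of (IM, EDP) points."""
            curves: Dict[str, List[Tuple[float, float]]] = {}
            for name, sa1 in records.items():
                curve: List[Tuple[float, float]] = []
                for sf in scale_factors:
                    edp = run_history(name, sf)          # one nonlinear response-history analysis
                    curve.append((sf * sa1, edp))        # IM = scaled Sa(T1); EDP = peak drift
                curves[name] = curve
            return curves

    Flattening of a curve, where the EDP grows rapidly for a small increase in IM, is the signal of approaching global dynamic instability mentioned above.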

    IDA grew out of the typical practice of scaling accelerograms by multiplying them by a constant factor to represent more or less severe ground motions than the ones recorded at a site. Since the natural recordings available are never enough to cover all possible needs, scaling is a simple, yet potentially problematic (if misused), way to fill in gaps in the current catalog of events. Still, in most cases, researchers would scale only a small set of three to seven records, and typically only once, just to get an estimate of response in the area of interest. In the wake of the damage wrought by the 1994 Northridge earthquake, the SAC/FEMA project was launched to resolve the issue of poor performance of steel moment-resisting frames due to fracturing beam-column connections. Within this creative environment of research cooperation, the idea of subjecting a structure to a wider range of scaling emerged. Initially, the method was called Dynamic Pushover, and it was conceived as a way to estimate a proxy for the global collapse capacity of the structure. It was later recognized that such a method would also enable checking multiple limit-states, e.g. life-safety, as is standard for most seismic design methods, but also lower and higher intensity levels that represent different threat levels, such as immediate-occupancy and collapse-prevention. Thus the idea of incremental dynamic analysis was born; it was adopted and later popularized mainly by researchers at the John A. Blume Earthquake Engineering Center of Stanford University. The method has since met with wider recognition in the earthquake research community and has spawned several different methods and concepts for estimating structural performance. A substantial debate has been raised regarding the potential bias in IDA results due to the use of scaled ground motion records that do not appropriately characterize the seismic hazard of the considered site over different earthquake intensity levels.

    Vulnerability assessment

    A vulnerability assessment is the process of identifying, quantifying, and prioritizing (or ranking) the vulnerabilities in a system. Examples of systems for which vulnerability assessments are performed include, but are not limited to, information technology systems, energy supply systems, water supply systems, transportation systems, and communication systems. Such assessments may be conducted on behalf of a range of different organizations, from small businesses up to large regional infrastructures. Vulnerability from the perspective of disaster management means assessing the threats from potential hazards to the population and to infrastructure. It may be conducted in the political, social, economic or environmental fields. Vulnerability assessment has many things in common with risk assessment. Assessments are typically performed according to the following steps:

    - Cataloging assets and capabilities (resources) in a system.

    - Assigning quantifiable value (or at least rank order) and importance to those resources.

    - Identifying the vulnerabilities or potential threats to each resource.

    - Mitigating or eliminating the most serious vulnerabilities for the most valuable resources.

    Classical risk analysis is principally concerned with investigating the risks surrounding a plant (or some other object), its design and operations. Such analysis tends to focus on causes and the direct consequences for the studied object. Vulnerability analysis, on the other hand, focuses both on consequences for the object itself and on primary and secondary consequences for the surrounding environment. It also concerns itself with the possibilities of reducing such consequences and of improving the capacity to manage future incidents. In general, a vulnerability analysis serves to categorize key assets and drive the risk management process. In the United States, guides providing valuable considerations and templates for completing a vulnerability assessment are available from numerous agencies including the Department of Energy, the Environmental Protection Agency, and the United States Department of Transportation, just to name a few.

    Several academic research papers, including those by Turner et al., Ford and Smith, Adger, and Fraser and Patt, among others, have provided a detailed review of the diverse epistemologies and methodologies in vulnerability research. Turner et al., for example, proposed a framework that illustrates the complexity and interactions involved in vulnerability analysis and draws attention to the array of factors and linkages that potentially affect the vulnerability of coupled human–environment systems. The framework makes use of nested flowcharts to show how social and environmental forces interact to create situations vulnerable to sudden changes. Ford and Smith propose an analytical framework based on research with Canadian Arctic communities. They suggest that the first stage is to assess current vulnerability by documenting exposures and current adaptive strategies. This should be followed by a second stage that estimates directional changes in those current risk factors and characterizes the community's future adaptive capacity. Ford and Smith's framework combines historic information, including how communities have experienced and addressed climatic hazards, with information on what conditions are likely to change and what constraints and opportunities there are for future adaptation.

    In information technology, vulnerability assessment is the process of defining, identifying, and classifying the security holes in a system. An attacker can exploit a vulnerability to violate the security of a system. Some known vulnerability classes are authentication vulnerabilities, authorization vulnerabilities, and input validation vulnerabilities. Before a system is deployed, it must first go through a series of vulnerability assessments to ensure that the built system is secure from all known security risks. When a new vulnerability is discovered, the system administrator can again perform an assessment, discover which modules are vulnerable, and start the patch process. After the fixes are in place, another assessment can be run to verify that the vulnerabilities were actually resolved. This cycle of assess, patch, and re-assess has become the standard method for many organizations to manage their security issues. The primary purpose of the assessment is to find the vulnerabilities in the system, but the assessment report also conveys to stakeholders that the system is secured against those vulnerabilities. If an intruder gains access to a network consisting of vulnerable web servers, it is safe to assume that those systems have been compromised as well. Using the assessment report, the security administrator can determine how the intrusion occurred, identify the compromised assets, and take appropriate security measures to prevent critical damage to the system.

    Assessment types

    Depending on the system, a vulnerability assessment can take many forms and be performed at many levels.

    Host assessment

    A host assessment looks for system-level vulnerabilities such as insecure file permissions, application-level bugs, and backdoor and Trojan horse installations. It requires specialized tools for the operating system and software packages being used, in addition to administrative access to each system that should be tested. Host assessment is often very costly in terms of time, and thus is only used in the assessment of critical systems. Tools like COPS and Tiger are popular for host assessment.

    Network assessment

    In a network assessment, one assesses the network for known vulnerabilities. It locates all systems on a network, determines what network services are in use, and then analyzes those services for potential vulnerabilities. This process does not require any configuration changes on the systems being assessed. Unlike host assessment, network assessment requires little computational cost and effort.

    Standardized Government Vulnerability Assessment Services

    The U.S. General Services Administration (GSA) has standardized the Risk and Vulnerability Assessments (RVA) service as a pre-vetted support service, used to rapidly conduct assessments of threats and vulnerabilities, determine deviations from acceptable configurations or from enterprise or local policy, assess the level of risk, and develop and/or recommend appropriate mitigation countermeasures in operational and non-operational situations. This standardized service offers the following pre-vetted support services:

    - Network Mapping

    - Vulnerability Scanning

    - Phishing Assessment

    - Wireless Assessment

    - Web Application Assessment

    - Operating System Security Assessment (OSSA)

    - Database Assessment

    - Penetration Testing

    These services are commonly referred to as Highly Adaptive Cybersecurity Services (HACS) and are listed at the US GSA Advantage website. This effort has identified key service providers that have been technically reviewed and vetted to provide these advanced services. The GSA service is intended to improve the rapid ordering and deployment of these services, reduce US government contract duplication, and protect and support the US infrastructure in a more timely and efficient manner. The 132-45D Risk and Vulnerability Assessment service identifies, quantifies, and prioritizes the risks and vulnerabilities in a system. A risk assessment identifies recognized threats and threat actors and the probability that these factors will result in exposure or loss.

    Chapter 3: Simulating Liquefaction with Finite Element Software

    Simulating liquefaction using finite element software involves modeling the behavior of the soil under dynamic loading conditions. The basic steps involved in simulating liquefaction with finite element software are as follows:

    Define the geometry of the soil layer: The soil layer should be modeled as a three-dimensional (3D) mesh, with each element representing a small portion of the soil.

    Define the soil properties: The soil properties, such as shear strength, bulk modulus, and damping ratio, should be defined based on the soil type and characteristics.

    Define the loading conditions: The loading conditions, such as the magnitude and frequency of the earthquake, should be defined to simulate the dynamic loading on the soil.

    Apply the dynamic loading: The dynamic loading should be applied to the soil layer through the base or the sides of the soil layer.

    Simulate the liquefaction: The simulation should be run to determine the response of the soil layer to the dynamic loading. In particular, the software should be able to track the development of excess pore water pressure, which is a key factor in the onset of liquefaction (a simple pore pressure ratio check is sketched after this list).

    Analyze the results: The results of the simulation should be analyzed to determine the extent of the liquefaction and the potential impact on any structures built on the soil.
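
    As a concrete illustration of the pore pressure tracking mentioned in the steps above, the short Python sketch below computes the excess pore pressure ratio r_u, a common liquefaction indicator. The function name and the numerical values are illustrative assumptions, not data from this book.

        def excess_pore_pressure_ratio(u_excess_kpa: float, sigma_v0_eff_kpa: float) -> float:
            """r_u = excess pore water pressure / initial vertical effective stress.

            Values of r_u approaching 1.0 mean the effective stress has dropped
            to nearly zero, i.e. the soil has essentially liquefied.
            """
            return u_excess_kpa / sigma_v0_eff_kpa

        # Example: a point about 3 m below the water table in loose sand, with an
        # initial vertical effective stress of roughly 27 kPa and a computed
        # excess pore pressure of 24 kPa (illustrative numbers only):
        r_u = excess_pore_pressure_ratio(24.0, 27.0)   # about 0.89, close to triggering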

    There are various finite element software packages that can be used to simulate liquefaction, such as OpenSees, Abaqus, Plaxis, and FLAC. These packages typically require specialized training to use effectively.

    OpenSees (Open System for Earthquake Engineering Simulation) is a free, open-source finite element software package that is widely used in earthquake engineering research and practice. OpenSees can be used to simulate the behavior of soils and structures under seismic loading, including the liquefaction of soils.

    To simulate liquefaction using OpenSees, the following steps are typically involved:

    Define the soil properties: The soil properties, such as shear strength, bulk modulus, and damping ratio, should be defined based on the soil type and characteristics.

    Define the soil layer geometry: The soil layer should be modeled as a two-dimensional (2D) or three-dimensional (3D) mesh, with each element representing a small portion of the soil. The soil layer can be modeled using a range of element types, such as the bbarQuad element, a four-node quadrilateral with a B-bar formulation that is commonly used for soil modeling.

    Define the earthquake excitation: The dynamic loading caused by the earthquake should be defined using a ground motion record or a time history function. The excitation can be applied to the base of the soil layer or at any point along the soil layer.

    Define the soil-water interaction: The interaction between the soil and water is a critical factor in simulating liquefaction. OpenSees has built-in constitutive models for modeling the soil-water interaction, such as the Pressure-Dependent Multi-Yield (PDMY) model.

    Define the liquefaction criteria: The onset of liquefaction can be determined using various criteria, such as the cyclic resistance ratio (CRR) or the pore pressure ratio (PPR). These criteria should be defined based on the soil properties and the intended analysis.

    Analyze the results: The results of the simulation should be analyzed to determine the extent of the liquefaction and the potential impact on any structures built on the soil.
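
    To make the workflow above concrete, the following OpenSeesPy sketch builds a single saturated-sand element with the PressureDependMultiYield material, applies a base acceleration record, and reads the pore pressure degree of freedom at a base node. It is a minimal illustration only, not the model used in this book: the material parameters, the file name 'motion.acc', and the time step are placeholder assumptions, a staged gravity analysis (normally run before shaking with updateMaterialStage) is omitted for brevity, and the exact argument lists should be checked against the OpenSees manual.

        import openseespy.opensees as ops

        ops.wipe()
        ops.model('basic', '-ndm', 2, '-ndf', 3)      # 2 displacements + pore pressure per node

        # One 1 m x 1 m quadUP element of saturated loose sand
        ops.node(1, 0.0, 0.0)
        ops.node(2, 1.0, 0.0)
        ops.node(3, 1.0, 1.0)
        ops.node(4, 0.0, 1.0)
        ops.fix(1, 1, 1, 0)                           # base nodes fixed, pore pressure free
        ops.fix(2, 1, 1, 0)
        ops.fix(3, 0, 0, 1)                           # surface nodes drained (zero pore pressure)
        ops.fix(4, 0, 0, 1)

        # PressureDependMultiYield material (illustrative loose-sand parameters)
        ops.nDMaterial('PressureDependMultiYield', 1, 2, 1.7,
                       5.5e4, 1.5e5, 29.0, 0.1, 80.0, 0.5, 29.0,
                       0.21, 0.0, 0.0, 10.0, 0.02, 1.0)

        # quadUP element: thickness, material tag, fluid bulk modulus,
        # fluid mass density, horizontal and vertical permeability
        ops.element('quadUP', 1, 1, 2, 3, 4, 1.0, 1, 2.2e6, 1.0, 1.0e-4, 1.0e-4)

        # Base excitation from an acceleration history (placeholder file and time step)
        ops.timeSeries('Path', 1, '-dt', 0.01, '-filePath', 'motion.acc', '-factor', 9.81)
        ops.pattern('UniformExcitation', 1, 1, '-accel', 1)

        # Transient analysis settings
        ops.constraints('Transformation')
        ops.numberer('RCM')
        ops.system('ProfileSPD')
        ops.test('NormDispIncr', 1.0e-5, 30)
        ops.algorithm('Newton')
        ops.integrator('Newmark', 0.5, 0.25)
        ops.analysis('Transient')

        for step in range(2000):
            ops.analyze(1, 0.01)
            pwp = ops.nodeDisp(1, 3)                  # 3rd DOF of a quadUP node is pore pressure

    Dividing the recorded pore pressure by the initial vertical effective stress at that depth gives the r_u indicator sketched in the previous chapter; values approaching 1.0 during shaking indicate that the element has liquefied.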

    Chapter 4: Liquefaction Criteria in Building Codes

    Building codes typically include criteria for assessing the potential for liquefaction in seismically active regions. These criteria are designed to help engineers and designers evaluate the risk of liquefaction and design buildings and infrastructure to withstand the associated hazards.

    The following are some examples of liquefaction criteria commonly used in building codes:

    Seismic Hazard Maps: Many building codes use seismic hazard maps to identify regions where liquefaction is likely to occur. These maps are based on geological and seismic data and provide information on the likelihood and intensity of earthquakes in a given area.

    Standard Penetration Test (SPT) and Cone Penetration Test (CPT): The SPT and CPT are commonly used in building codes as field tests to
