Information

Please explain this notation: "MW around 9 MDa"


I am a computational science student and was reading about the structure of Pyruvate dehydrogenase (PDH).

link to article

The article mentions "PDH is a large complex (MW around 9 MDa) consisting of multiple copies of three enzymes."

What does "MW around 9 MDa" denote?

I am not a regular student of biology but rather an engineering student so please try to provide simple explanations.


The molecular weight of the PDH enzyme complex is approximately 9 x 10^6 g/mol.

The dalton (or atomic mass unit, amu) is a unit of mass defined as 1/12 of the mass of a carbon-12 atom in its ground state: 1 Da = (1/12) m(12C).

The number of atoms in 1 mole is Avogadro's number (6.022 x 10^23), so the mass of one carbon-12 atom is (12 / 6.022 x 10^23) g.

This means 1 Da (1 amu) = 1/12 x (12 / 6.022 x 10^23) g = 1 / (6.022 x 10^23) g, so one mole of particles weighing 1 Da each has a total mass of 1 g.

(1 mole = 6.022 x 10^23 particles)

=> 1 Da = 1 g/mol
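
As a quick computational sanity check (a minimal sketch; only the 9 MDa figure and Avogadro's number are used as inputs), the notation can be turned into the mass of a single PDH complex:

```python
AVOGADRO = 6.022e23          # particles per mole

mw_da = 9e6                  # "MW around 9 MDa" = 9 x 10^6 Da
mw_g_per_mol = mw_da         # 1 Da corresponds numerically to 1 g/mol

# Mass of one PDH complex in grams
mass_one_complex_g = mw_g_per_mol / AVOGADRO

print(f"Molar mass: {mw_g_per_mol:.1e} g/mol")            # 9.0e+06 g/mol
print(f"Mass of a single complex: {mass_one_complex_g:.2e} g")  # ~1.49e-17 g
```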


Nucleotide

Nucleotides are organic molecules consisting of a nucleoside and a phosphate. They serve as monomeric units of the nucleic acid polymers - deoxyribonucleic acid (DNA) and ribonucleic acid (RNA), both of which are essential biomolecules within all life-forms on Earth. Nucleotides are obtained in the diet and are also synthesized from common nutrients by the liver. [1]

Nucleotides are composed of three subunit molecules: a nucleobase, a five-carbon sugar (ribose or deoxyribose), and a phosphate group consisting of one to three phosphates. The four nucleobases in DNA are guanine, adenine, cytosine and thymine; in RNA, uracil is used in place of thymine.

Nucleotides also play a central role in metabolism at a fundamental, cellular level. They provide chemical energy—in the form of the nucleoside triphosphates, adenosine triphosphate (ATP), guanosine triphosphate (GTP), cytidine triphosphate (CTP) and uridine triphosphate (UTP)—throughout the cell for the many cellular functions that demand energy, including: amino acid, protein and cell membrane synthesis, moving the cell and cell parts (both internally and intercellularly), cell division, etc. [2] In addition, nucleotides participate in cell signaling (cyclic guanosine monophosphate or cGMP and cyclic adenosine monophosphate or cAMP), and are incorporated into important cofactors of enzymatic reactions (e.g. coenzyme A, FAD, FMN, NAD, and NADP+).

In experimental biochemistry, nucleotides can be radiolabeled using radionuclides to yield radionucleotides.



Alkenes having four or more carbon atoms can form diverse structural isomers. Most alkenes are also isomers of cycloalkanes. Acyclic alkene structural isomers with only one double bond follow: [7]

  • C2: ethylene only
  • C3: propylene only
  • C4: 3 isomers: 1-butene, 2-butene, and isobutylene
  • C5: 5 isomers: 1-pentene, 2-pentene, 2-methyl-1-butene, 3-methyl-1-butene, 2-methyl-2-butene
  • C6: 13 isomers: 1-hexene, 2-hexene, 3-hexene, methylpentene (7 isomers), dimethylbutene (3 isomers)
  • C7: 27 isomers (calculated)
  • C12: 2,281 isomers (calculated)
  • C31: 193,706,542,776 isomers (calculated)

Many of these molecules exhibit cis–trans isomerism. There may also be chiral carbons particularly within the larger molecules (from C5). The number of potential isomers increases rapidly with additional carbon atoms.

Bonding

A carbon–carbon double bond consists of a sigma bond and a pi bond. This double bond is stronger than a single covalent bond (611 kJ/mol for C=C vs. 347 kJ/mol for C–C), [1] but not twice as strong. Double bonds are shorter than single bonds with an average bond length of 1.33 Å (133 pm) vs 1.53 Å for a typical C-C single bond. [8]

Each carbon of the double bond uses its three sp² hybrid orbitals to form sigma bonds to three atoms (the other carbon and two hydrogen atoms). The unhybridized 2p atomic orbitals, which lie perpendicular to the plane created by the axes of the three sp² hybrid orbitals, combine to form the pi bond. This bond lies outside the main C–C axis, with half of the bond on one side of the molecule and half on the other. With a strength of 65 kcal/mol (about 270 kJ/mol), the pi bond is significantly weaker than the sigma bond.

Rotation about the carbon–carbon double bond is restricted because it incurs an energetic cost to break the alignment of the p orbitals on the two carbon atoms. Consequently, cis and trans isomers interconvert so slowly that they can be freely handled at ambient conditions without isomerization. More complex alkenes may be named with the E–Z notation for molecules with three or four different substituents (side groups). For example, of the isomers of butene, the two methyl groups of (Z)-but-2-ene (a.k.a. cis-2-butene) appear on the same side of the double bond, and in (E)-but-2-ene (a.k.a. trans-2-butene) the methyl groups appear on opposite sides. These two isomers of butene have distinct properties.

Shape

As predicted by the VSEPR model of electron pair repulsion, the molecular geometry of alkenes includes bond angles about each carbon in a double bond of about 120°. The angle may vary because of steric strain introduced by nonbonded interactions between functional groups attached to the carbons of the double bond. For example, the C–C–C bond angle in propylene is 123.9°.

For bridged alkenes, Bredt's rule states that a double bond cannot occur at the bridgehead of a bridged ring system unless the rings are large enough. [9] Following Fawcett and defining S as the total number of non-bridgehead atoms in the rings, [10] bicyclic systems require S ≥ 7 for stability [9] and tricyclic systems require S ≥ 11. [11]
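
A minimal sketch of that stability criterion, using only the two thresholds quoted above; the function name and interface are illustrative rather than part of any standard cheminformatics library:

```python
def bridgehead_alkene_allowed(s: int, ring_class: str) -> bool:
    """Check Bredt's rule using the S-thresholds quoted above.

    s: total number of non-bridgehead atoms in the rings (Fawcett's S)
    ring_class: "bicyclic" or "tricyclic"
    """
    thresholds = {"bicyclic": 7, "tricyclic": 11}
    return s >= thresholds[ring_class]

# A hypothetical bridgehead double bond in a small bicyclic skeleton (S = 5)
# violates the rule, while S = 7 is the quoted stability threshold:
print(bridgehead_alkene_allowed(5, "bicyclic"))   # False
print(bridgehead_alkene_allowed(7, "bicyclic"))   # True
```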

Many of the physical properties of alkenes and alkanes are similar: they are colorless, nonpolar, and combustible. The physical state depends on molecular mass: like the corresponding saturated hydrocarbons, the simplest alkenes (ethylene, propylene, and butene) are gases at room temperature. Linear alkenes of approximately five to sixteen carbons are liquids, and higher alkenes are waxy solids. The melting point of the solids also increases with increase in molecular mass.

Alkenes generally have stronger smells than their corresponding alkanes. Ethylene has a sweet and musty odor. The binding of cupric ion to the olefin in the mammalian olfactory receptor MOR244-3 is implicated in the smell of alkenes (as well as thiols). Strained alkenes, in particular, like norbornene and trans-cyclooctene are known to have strong, unpleasant odors, a fact consistent with the stronger π complexes they form with metal ions including copper. [12]

Alkenes are relatively stable compounds, but are more reactive than alkanes. Most reactions of alkenes involve additions to this pi bond, forming new single bonds. Alkenes serve as a feedstock for the petrochemical industry because they can participate in a wide variety of reactions, prominently polymerization and alkylation.

Except for ethylene, alkenes have two sites of reactivity: the carbon–carbon pi-bond and the presence of allylic CH centers. The former dominates, but the allylic sites are important too.

Addition reactions

Hydrogenation and related hydroelementations

Hydrogenation of alkenes produces the corresponding alkanes. The reaction is sometimes carried out under pressure and at elevated temperature. Metallic catalysts are almost always required. Common industrial catalysts are based on platinum, nickel, and palladium. A large scale application is the production of margarine.

Aside from the addition of H-H across the double bond, many other H-X's can be added. These processes are often of great commercial significance. One example is the addition of H-SiR3, i.e., hydrosilylation. This reaction is used to generate organosilicon compounds. Another reaction is hydrocyanation, the addition of H-CN across the double bond.

Hydration

Hydration, the addition of water across the double bond of alkenes, yields alcohols. The reaction is catalyzed by phosphoric acid or sulfuric acid. This reaction is carried out on an industrial scale to produce synthetic ethanol.

Halogenation

In electrophilic halogenation the addition of elemental bromine or chlorine to alkenes yields vicinal dibromo- and dichloroalkanes (1,2-dihalides or ethylene dihalides), respectively. The decoloration of a solution of bromine in water is an analytical test for the presence of alkenes:

Related reactions are also used as quantitative measures of unsaturation, expressed as the bromine number and iodine number of a compound or mixture.
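
As an illustration of how such a measure is computed, here is a minimal sketch of an iodine-number calculation, assuming one I2 molecule adds per C=C bond; the oleic acid figures are supplied only as an example:

```python
M_I2 = 253.81  # g/mol, molar mass of I2

def iodine_number(molar_mass: float, double_bonds: int) -> float:
    """Grams of I2 consumed per 100 g of compound, one I2 per C=C bond."""
    return double_bonds * M_I2 * 100.0 / molar_mass

# Oleic acid: one C=C bond, molar mass about 282.47 g/mol
print(round(iodine_number(282.47, 1), 1))   # ~89.9, close to the tabulated value of ~90
```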

Hydrohalogenation

Hydrohalogenation is the addition of hydrogen halides, such as HCl or HI, to alkenes to yield the corresponding haloalkanes:

If the two carbon atoms at the double bond are linked to a different number of hydrogen atoms, the halogen is found preferentially at the carbon with fewer hydrogen substituents. This pattern is known as Markovnikov's rule. The use of radical initiators or other compounds can lead to the opposite product result. Hydrobromic acid in particular is prone to forming radicals in the presence of various impurities or even atmospheric oxygen, leading to the reversal of the Markovnikov result: [13]

Halohydrin formation

Alkenes react with water and halogens to form halohydrins by an addition reaction. Markovnikov regiochemistry and anti-stereochemistry occur.

Oxidation

Alkenes react with percarboxylic acids and even hydrogen peroxide to yield epoxides:

For ethylene, the epoxidation is conducted on a very large scale industrially using oxygen in the presence of catalysts:

Alkenes react with ozone, leading to the scission of the double bond. The process is called ozonolysis. Often the reaction procedure includes a mild reductant, such as dimethyl sulfide (SMe2):

When treated with a hot, concentrated, acidified solution of KMnO4, alkenes are cleaved to ketones and/or carboxylic acids. The stoichiometry of the reaction is sensitive to conditions. This reaction and ozonolysis can be used to determine the position of a double bond in an unknown alkene.

The oxidation can be stopped at the vicinal diol rather than full cleavage of the alkene by using osmium tetroxide or other oxidants:

In the presence of an appropriate photosensitiser, such as methylene blue and light, alkenes can undergo reaction with reactive oxygen species generated by the photosensitiser, such as hydroxyl radicals, singlet oxygen or superoxide ion. Reactions of the excited sensitizer can involve electron or hydrogen transfer, usually with a reducing substrate (Type I reaction) or interaction with oxygen (Type II reaction). [14] These various alternative processes and reactions can be controlled by choice of specific reaction conditions, leading to a wide range of products. A common example is the [4+2]-cycloaddition of singlet oxygen with a diene such as cyclopentadiene to yield an endoperoxide:

Another example is the Schenck ene reaction, in which singlet oxygen reacts with an allylic structure to give a transposed allyl peroxide:

Polymerization

Terminal alkenes are precursors to polymers via processes termed polymerization. Some polymerizations are of great economic significance, as they generate the plastics polyethylene and polypropylene. Polymers from alkenes are usually referred to as polyolefins although they contain no olefins. Polymerization can proceed via diverse mechanisms. Conjugated dienes such as buta-1,3-diene and isoprene (2-methylbuta-1,3-diene) also produce polymers, one example being natural rubber.

Metal complexation

Alkenes are ligands in transition metal alkene complexes. The two carbon centres bond to the metal using the C–C pi- and pi*-orbitals. Mono- and diolefins are often used as ligands in stable complexes. Cyclooctadiene and norbornadiene are popular chelating agents, and even ethylene itself is sometimes used as a ligand, for example, in Zeise's salt. In addition, metal–alkene complexes are intermediates in many metal-catalyzed reactions including hydrogenation, hydroformylation, and polymerization.

Reaction overview

Reaction name Product Comment
Hydrogenation alkanes addition of hydrogen
Hydroalkenylation alkenes hydrometalation / insertion / beta-elimination by metal catalyst
Halogen addition reaction 1,2-dihalide electrophilic addition of halogens
Hydrohalogenation (Markovnikov) haloalkanes addition of hydrohalic acids
Anti-Markovnikov hydrohalogenation haloalkanes free radicals mediated addition of hydrohalic acids
Hydroamination amines addition of N–H bond across C–C double bond
Hydroformylation aldehydes industrial process, addition of CO and H2
Hydrocarboxylation and Koch reaction carboxylic acid industrial process, addition of CO and H2O.
Carboalkoxylation ester industrial process, addition of CO and alcohol.
Alkylation ester industrial process: alkene alkylating a carboxylic acid, with silicotungstic acid as the catalyst.
Sharpless bishydroxylation diols oxidation, reagent: osmium tetroxide, chiral ligand
Woodward cis-hydroxylation diols oxidation, reagents: iodine, silver acetate
Ozonolysis aldehydes or ketones reagent: ozone
Olefin metathesis alkenes two alkenes rearrange to form two new alkenes
Diels–Alder reaction cyclohexenes cycloaddition with a diene
Pauson–Khand reaction cyclopentenones cycloaddition with an alkyne and CO
Hydroboration–oxidation alcohols reagents: borane, then a peroxide
Oxymercuration-reduction alcohols electrophilic addition of mercuric acetate, then reduction
Prins reaction 1,3-diols electrophilic addition with aldehyde or ketone
Paterno–Büchi reaction oxetanes photochemical reaction with aldehyde or ketone
Epoxidation epoxide electrophilic addition of a peroxide
Cyclopropanation cyclopropanes addition of carbenes or carbenoids
Hydroacylation ketones oxidative addition / reductive elimination by metal catalyst
Hydrophosphination phosphines

Industrial methods

Alkenes are produced by hydrocarbon cracking. Raw materials are mostly natural gas condensate components (principally ethane and propane) in the US and Mideast and naphtha in Europe and Asia. Alkanes are broken apart at high temperatures, often in the presence of a zeolite catalyst, to produce a mixture of primarily aliphatic alkenes and lower-molecular-weight alkanes. The mixture is feedstock- and temperature-dependent, and is separated by fractional distillation. This is mainly used for the manufacture of small alkenes (up to six carbons). [15]

Related to this is catalytic dehydrogenation, where an alkane loses hydrogen at high temperatures to produce a corresponding alkene. [1] This is the reverse of the catalytic hydrogenation of alkenes.

This process is also known as reforming. Both processes are endothermic and are driven towards the alkene at high temperatures by entropy.

Catalytic synthesis of higher α-alkenes (of the type RCH=CH2) can also be achieved by a reaction of ethylene with the organometallic compound triethylaluminium in the presence of nickel, cobalt, or platinum.

Elimination reactions

One of the principal methods for alkene synthesis in the laboratory is the elimination of alkyl halides, alcohols, and similar compounds. Most common is the β-elimination via the E2 or E1 mechanism, [16] but α-eliminations are also known.

The E2 mechanism provides a more reliable β-elimination method than E1 for most alkene syntheses. Most E2 eliminations start with an alkyl halide or alkyl sulfonate ester (such as a tosylate or triflate). When an alkyl halide is used, the reaction is called a dehydrohalogenation. For unsymmetrical products, the more substituted alkenes (those with fewer hydrogens attached to the C=C) tend to predominate (see Zaitsev's rule). Two common methods of elimination reactions are dehydrohalogenation of alkyl halides and dehydration of alcohols. A typical example is shown below; note that, if possible, the H is anti to the leaving group, even though this leads to the less stable Z-isomer. [17]

Alkenes can be synthesized from alcohols via dehydration, in which case water is lost via the E1 mechanism. For example, the dehydration of ethanol produces ethylene:

An alcohol may also be converted to a better leaving group (e.g., xanthate), so as to allow a milder syn-elimination such as the Chugaev elimination and the Grieco elimination. Related reactions include eliminations by β-haloethers (the Boord olefin synthesis) and esters (ester pyrolysis).

Alkenes can be prepared indirectly from alkyl amines. The amine or ammonia is not a suitable leaving group, so the amine is first either alkylated (as in the Hofmann elimination) or oxidized to an amine oxide (the Cope reaction) to render a smooth elimination possible. The Cope reaction is a syn-elimination that occurs at or below 150 °C, for example: [18]

The Hofmann elimination is unusual in that the less substituted (non-Zaitsev) alkene is usually the major product.

Alkenes are generated from α-halosulfones in the Ramberg–Bäcklund reaction, via a three-membered ring sulfone intermediate.

Synthesis from carbonyl compounds

Another important method for alkene synthesis involves construction of a new carbon–carbon double bond by coupling of a carbonyl compound (such as an aldehyde or ketone) to a carbanion equivalent. Such reactions are sometimes called olefinations. The most well-known of these methods is the Wittig reaction, but other related methods are known, including the Horner–Wadsworth–Emmons reaction.

The Wittig reaction involves reaction of an aldehyde or ketone with a Wittig reagent (or phosphorane) of the type Ph3P=CHR to produce an alkene and Ph3P=O. The Wittig reagent is itself prepared easily from triphenylphosphine and an alkyl halide. The reaction is quite general and many functional groups are tolerated, even esters, as in this example: [19]

Related to the Wittig reaction is the Peterson olefination, which uses silicon-based reagents in place of the phosphorane. This reaction allows for the selection of E- or Z-products. If an E-product is desired, another alternative is the Julia olefination, which uses the carbanion generated from a phenyl sulfone. The Takai olefination based on an organochromium intermediate also delivers E-products. A titanium compound, Tebbe's reagent, is useful for the synthesis of methylene compounds; in this case, even esters and amides react.

A pair of ketones or aldehydes can be deoxygenated to generate an alkene. Symmetrical alkenes can be prepared from a single aldehyde or ketone coupling with itself, using titanium metal reduction (the McMurry reaction). If different ketones are to be coupled, a more complicated method is required, such as the Barton–Kellogg reaction.

A single ketone can also be converted to the corresponding alkene via its tosylhydrazone, using sodium methoxide (the Bamford–Stevens reaction) or an alkyllithium (the Shapiro reaction).

Synthesis from alkenes

The formation of longer alkenes via the step-wise polymerisation of smaller ones is appealing, as ethylene (the smallest alkene) is both inexpensive and readily available, with hundreds of millions of tonnes produced annually. The Ziegler–Natta process allows for the formation of very long chains, for instance those used for polyethylene. Where shorter chains are wanted, as for the production of surfactants, processes incorporating an olefin metathesis step, such as the Shell higher olefin process, are important.

Olefin metathesis is also used commercially for the interconversion of ethylene and 2-butene to propylene. Rhenium- and molybdenum-containing heterogeneous catalysts are used in this process: [20]

Transition metal catalyzed hydrovinylation is another important alkene synthesis process starting from alkene itself. [21] It involves the addition of a hydrogen and a vinyl group (or an alkenyl group) across a double bond.

From alkynes

Reduction of alkynes is a useful method for the stereoselective synthesis of disubstituted alkenes. If the cis-alkene is desired, hydrogenation in the presence of Lindlar's catalyst (a heterogeneous catalyst that consists of palladium deposited on calcium carbonate and treated with various forms of lead) is commonly used, though hydroboration followed by hydrolysis provides an alternative approach. Reduction of the alkyne by sodium metal in liquid ammonia gives the trans-alkene. [22]

For the preparation of multisubstituted alkenes, carbometalation of alkynes can give rise to a large variety of alkene derivatives.

Rearrangements and related reactions

Alkenes can be synthesized from other alkenes via rearrangement reactions. Besides olefin metathesis (described above), many pericyclic reactions can be used such as the ene reaction and the Cope rearrangement.

In the Diels–Alder reaction, a cyclohexene derivative is prepared from a diene and a reactive or electron-deficient alkene.

Although the nomenclature is not followed widely, according to IUPAC, an alkene is an acyclic hydrocarbon with just one double bond between carbon atoms. [3] Olefins comprise a larger collection of cyclic and acyclic alkenes as well as dienes and polyenes. [4]

To form the root of the IUPAC names for straight-chain alkenes, change the -an- infix of the parent to -en-. For example, CH3-CH3 is the alkane ethANe. The name of CH2=CH2 is therefore ethENe.

For straight-chain alkenes with 4 or more carbon atoms, that name does not completely identify the compound. For those cases, and for branched acyclic alkenes, the following rules apply:

  1. Find the longest carbon chain in the molecule. If that chain does not contain the double bond, name the compound according to the alkane naming rules. Otherwise:
  2. Number the carbons in that chain starting from the end that is closest to the double bond.
  3. Define the location k of the double bond as being the number of its first carbon.
  4. Name the side groups (other than hydrogen) according to the appropriate rules.
  5. Define the position of each side group as the number of the chain carbon it is attached to.
  6. Write the position and name of each side group.
  7. Write the names of the alkane with the same chain, replacing the "-ane" suffix by "k-ene".

The position of the double bond is often inserted before the name of the chain (e.g. "2-pentene"), rather than before the suffix ("pent-2-ene").
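
A minimal sketch of rules 1–3 and 7 for the simplest case, an unbranched chain with a single double bond; the prefix table and function name are illustrative only:

```python
PREFIXES = {2: "eth", 3: "prop", 4: "but", 5: "pent",
            6: "hex", 7: "hept", 8: "oct", 9: "non", 10: "dec"}

def straight_chain_ene_name(n_carbons: int, double_bond_pos: int) -> str:
    """Name an unbranched alkene whose double bond starts at carbon `double_bond_pos`."""
    # Rule 2: number from the end closest to the double bond
    k = min(double_bond_pos, n_carbons - double_bond_pos)
    root = PREFIXES[n_carbons]
    if n_carbons <= 3:
        return f"{root}ene"          # position is unique, no locant needed
    return f"{root}-{k}-ene"         # suffix form, e.g. "but-2-ene"

print(straight_chain_ene_name(2, 1))   # ethene
print(straight_chain_ene_name(4, 3))   # but-1-ene (numbering flipped to give the lower locant)
print(straight_chain_ene_name(5, 2))   # pent-2-ene
```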

The positions need not be indicated if they are unique. Note that the double bond may imply a different chain numbering than that used for the corresponding alkane: (H3C)3C–CH2–CH3 is "2,2-dimethylbutane", whereas (H3C)3C–CH=CH2 is "3,3-dimethyl-1-butene".

More complex rules apply for polyenes and cycloalkenes. [5]

Cis–trans isomerism

If the double bond of an acyclic mono-ene is not the first bond of the chain, the name as constructed above still does not completely identify the compound, because of cis–trans isomerism. Then one must specify whether the two single C–C bonds adjacent to the double bond are on the same side of its plane, or on opposite sides. For monoalkenes, the configuration is often indicated by the prefixes cis- (from Latin, "on this side of") or trans- ("across", "on the other side of") before the name, respectively, as in cis-2-pentene or trans-2-butene.

More generally, cis–trans isomerism will exist if each of the two carbons of the double bond has two different atoms or groups attached to it. Accounting for these cases, the IUPAC recommends the more general E–Z notation, instead of the cis and trans prefixes. This notation considers the group with the highest CIP priority on each of the two carbons. If these two groups are on opposite sides of the double bond's plane, the configuration is labeled E (from the German entgegen, meaning "opposite"); if they are on the same side, it is labeled Z (from German zusammen, "together"). This labeling may be taught with the mnemonic "Z means 'on ze zame zide'". [23]

Groups containing C=C double bonds

IUPAC recognizes two names for hydrocarbon groups containing carbon–carbon double bonds, the vinyl group and the allyl group. [5]



Historically, simulations used in different fields developed largely independently, but 20th-century studies of systems theory and cybernetics combined with spreading use of computers across all those fields have led to some unification and a more systematic view of the concept.

Physical simulation refers to simulation in which physical objects are substituted for the real thing (some circles [5] use the term for computer simulations modelling selected laws of physics, but this article does not). These physical objects are often chosen because they are smaller or cheaper than the actual object or system.

Interactive simulation is a special kind of physical simulation, often referred to as a human in the loop simulation, in which physical simulations include human operators, such as in a flight simulator, sailing simulator, or driving simulator.

Continuous simulation is a simulation based on continuous time, rather than discrete time steps, using numerical integration of differential equations. [6]

Discrete-event simulation studies systems whose states change their values only at discrete times. [7] For example, a simulation of an epidemic could change the number of infected people at time instants when susceptible individuals get infected or when infected individuals recover.
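
A minimal sketch of that epidemic example as a discrete-event simulation; the population size, rates, and event structure are invented for illustration. State changes happen only at scheduled infection and recovery instants, which are processed in time order from a priority queue:

```python
import heapq
import random

def run_epidemic(n_people=100, infection_rate=0.3, recovery_rate=0.1, seed=1):
    """Toy SIR-style discrete-event simulation: state changes only at event times."""
    rng = random.Random(seed)
    susceptible, infected = n_people - 1, 1
    events = []  # priority queue of (time, kind)
    heapq.heappush(events, (rng.expovariate(recovery_rate), "recovery"))
    heapq.heappush(events, (rng.expovariate(infection_rate), "infection"))
    t = 0.0
    while events and infected > 0:
        t, kind = heapq.heappop(events)
        if kind == "infection" and susceptible > 0:
            susceptible -= 1
            infected += 1
            # the newly infected person will recover later and may infect someone else
            heapq.heappush(events, (t + rng.expovariate(recovery_rate), "recovery"))
            heapq.heappush(events, (t + rng.expovariate(infection_rate), "infection"))
        elif kind == "recovery":
            infected -= 1
    return t, n_people - susceptible  # end time, total number ever infected

print(run_epidemic())
```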

Stochastic simulation is a simulation where some variable or process is subject to random variations and is projected using Monte Carlo techniques with pseudo-random numbers. Thus replicated runs with the same boundary conditions will each produce different results within a specific confidence band. [6]

Deterministic simulation is a simulation which is not stochastic: thus the variables are regulated by deterministic algorithms. So replicated runs from the same boundary conditions always produce identical results.
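
A minimal sketch contrasting the two kinds of run on an invented toy queue model: the stochastic version draws pseudo-random arrivals, so replicated runs differ unless the seed is fixed, while the deterministic version reproduces the same trajectory every time:

```python
import random

def stochastic_run(n_steps=1000, arrival_prob=0.3, service_prob=0.25, seed=None):
    """Toy single queue; each step one customer may arrive and one may be served."""
    rng = random.Random(seed)
    queue_len, total = 0, 0
    for _ in range(n_steps):
        if rng.random() < arrival_prob:
            queue_len += 1
        if queue_len > 0 and rng.random() < service_prob:
            queue_len -= 1
        total += queue_len
    return total / n_steps                      # mean queue length

def deterministic_run(n_steps=1000, arrivals_per_step=0.3, services_per_step=0.25):
    """Same model with fixed fractional flows: identical output on every run."""
    queue_len, total = 0.0, 0.0
    for _ in range(n_steps):
        queue_len = max(0.0, queue_len + arrivals_per_step - services_per_step)
        total += queue_len
    return total / n_steps

print(stochastic_run(), stochastic_run())                  # replicated runs: different results
print(stochastic_run(seed=42), stochastic_run(seed=42))    # same seed: identical results
print(deterministic_run(), deterministic_run())            # always identical
```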

Hybrid simulation (sometimes combined simulation) corresponds to a mix between continuous and discrete-event simulation, numerically integrating the differential equations between two sequential events to reduce the number of discontinuities. [8]

A stand-alone simulation is a simulation running on a single workstation by itself.

A distributed simulation is one which uses more than one computer simultaneously, in order to guarantee access from/to different resources (e.g. multiple users operating different systems, or distributed data sets); a classical example is Distributed Interactive Simulation (DIS). [9]

Parallel Simulation speeds up a simulation's execution by concurrently distributing its workload over multiple processors, as in High-Performance Computing. [10]

Interoperable Simulation is where multiple models and simulators (often defined as federates) interoperate, locally or distributed over a network; a classical example is the High-Level Architecture. [11] [12]

Modeling & Simulation as a Service where simulation is accessed as a service over the web. [13]

Modeling, interoperable Simulation and Serious Games where Serious Games Approaches (e.g. Game Engines and Engagement Methods) are integrated with Interoperable Simulation. [14]

Simulation Fidelity is used to describe the accuracy of a simulation and how closely it imitates the real-life counterpart. Fidelity is broadly classified as one of three categories: low, medium, and high. Specific descriptions of fidelity levels are subject to interpretation, but the following generalizations can be made:

  • Low – the minimum simulation required for a system to accept inputs and provide outputs
  • Medium – responds automatically to stimuli, with limited accuracy
  • High – nearly indistinguishable or as close as possible to the real system

Human in the loop simulations can include a computer simulation as a so-called synthetic environment. [17]

Simulation in failure analysis refers to simulation in which the environment or conditions of a failure are recreated in order to identify its cause. It is often among the fastest ways to identify the cause of an equipment failure.

A computer simulation (or "sim") is an attempt to model a real-life or hypothetical situation on a computer so that it can be studied to see how the system works. By changing variables in the simulation, predictions may be made about the behaviour of the system. It is a tool to virtually investigate the behaviour of the system under study. [1]

Computer simulation has become a useful part of modeling many natural systems in physics, chemistry and biology, [18] and human systems in economics and social science (e.g., computational sociology) as well as in engineering to gain insight into the operation of those systems. A good example of the usefulness of using computers to simulate can be found in the field of network traffic simulation. In such simulations, the model behaviour will change in each simulation according to the set of initial parameters assumed for the environment.

Traditionally, the formal modeling of systems has been via a mathematical model, which attempts to find analytical solutions enabling the prediction of the behaviour of the system from a set of parameters and initial conditions. Computer simulation is often used as an adjunct to, or substitute for, modeling systems for which simple closed-form analytic solutions are not possible. There are many different types of computer simulation; the common feature they all share is the attempt to generate a sample of representative scenarios for a model in which a complete enumeration of all possible states would be prohibitive or impossible.

Several software packages exist for running computer-based simulation modeling (e.g. Monte Carlo simulation, stochastic modeling, multimethod modeling) that make the modeling process relatively straightforward.

Modern usage of the term "computer simulation" may encompass virtually any computer-based representation.

Computer science

In computer science, simulation has some specialized meanings: Alan Turing used the term "simulation" to refer to what happens when a universal machine executes a state transition table (in modern terminology, a computer runs a program) that describes the state transitions, inputs and outputs of a subject discrete-state machine. [19] The computer simulates the subject machine. Accordingly, in theoretical computer science the term simulation is a relation between state transition systems, useful in the study of operational semantics.
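
A minimal sketch of simulation in this sense; the toy machine (a parity checker) and its transition table are invented for illustration, with the host program stepping through the table and collecting the subject machine's outputs:

```python
# Transition table of a toy discrete-state machine (a parity checker):
# (current_state, input_symbol) -> (next_state, output_symbol)
TABLE = {
    ("even", "0"): ("even", "even"),
    ("even", "1"): ("odd",  "odd"),
    ("odd",  "0"): ("odd",  "odd"),
    ("odd",  "1"): ("even", "even"),
}

def simulate(table, start_state, inputs):
    """Step through the table, collecting the subject machine's outputs."""
    state, outputs = start_state, []
    for symbol in inputs:
        state, out = table[(state, symbol)]
        outputs.append(out)
    return state, outputs

final_state, outputs = simulate(TABLE, "even", "1101")
print(final_state)   # 'odd'  (three 1s seen so far)
print(outputs)       # ['odd', 'even', 'even', 'odd']
```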

Less theoretically, an interesting application of computer simulation is to simulate computers using computers. In computer architecture, a type of simulator, typically called an emulator, is often used to execute a program that has to run on some inconvenient type of computer (for example, a newly designed computer that has not yet been built or an obsolete computer that is no longer available), or in a tightly controlled testing environment (see Computer architecture simulator and Platform virtualization). For example, simulators have been used to debug a microprogram or sometimes commercial application programs, before the program is downloaded to the target machine. Since the operation of the computer is simulated, all of the information about the computer's operation is directly available to the programmer, and the speed and execution of the simulation can be varied at will.

Simulators may also be used to interpret fault trees, or test VLSI logic designs before they are constructed. Symbolic simulation uses variables to stand for unknown values.

In the field of optimization, simulations of physical processes are often used in conjunction with evolutionary computation to optimize control strategies.

Simulation is extensively used for educational purposes. [2] It is used for cases where it is prohibitively expensive or simply too dangerous to allow trainees to use the real equipment in the real world. In such situations they will spend time learning valuable lessons in a "safe" virtual environment yet living a lifelike experience (or at least that is the goal). A further benefit is that mistakes can be permitted during training for a safety-critical system.

Simulations in education are somewhat like training simulations. They focus on specific tasks. The term 'microworld' is used to refer to educational simulations which model some abstract concept rather than simulating a realistic object or environment, or in some cases model a real-world environment in a simplistic way so as to help a learner develop an understanding of the key concepts. Normally, a user can create some sort of construction within the microworld that will behave in a way consistent with the concepts being modeled. Seymour Papert was one of the first to advocate the value of microworlds, and the Logo programming environment developed by Papert is one of the most well-known microworlds.

Project Management Simulation is increasingly used to train students and professionals in the art and science of project management. Using simulation for project management training improves learning retention and enhances the learning process. [20] [21]

Social simulations may be used in social science classrooms to illustrate social and political processes in anthropology, economics, history, political science, or sociology courses, typically at the high school or university level. These may, for example, take the form of civics simulations, in which participants assume roles in a simulated society, or international relations simulations in which participants engage in negotiations, alliance formation, trade, diplomacy, and the use of force. Such simulations might be based on fictitious political systems, or be based on current or historical events. An example of the latter would be Barnard College's Reacting to the Past series of historical educational games. [22] The National Science Foundation has also supported the creation of reacting games that address science and math education. [23] In social media simulations, participants train communication with critics and other stakeholders in a private environment.

In recent years, there has been increasing use of social simulations for staff training in aid and development agencies. The Carana simulation, for example, was first developed by the United Nations Development Programme, and is now used in a heavily revised form by the World Bank for training staff to deal with fragile and conflict-affected countries. [24]

Military uses for simulation often involve aircraft or armoured fighting vehicles, but can also target small arms and other weapon systems training. Specifically, virtual firearms ranges have become the norm in most military training processes and there is a significant amount of data to suggest this is a useful tool for armed professionals. [25]

Virtual simulations represent a specific category of simulation that utilizes simulation equipment to create a simulated world for the user. Virtual simulations allow users to interact with a virtual world. Virtual worlds operate on platforms of integrated software and hardware components. In this manner, the system can accept input from the user (e.g., body tracking, voice/sound recognition, physical controllers) and produce output to the user (e.g., visual display, aural display, haptic display). [26] Virtual simulations use the aforementioned modes of interaction to produce a sense of immersion for the user.

Virtual simulation input hardware

There is a wide variety of input hardware available to accept user input for virtual simulations. The following list briefly describes several of them:

Body tracking: The motion capture method is often used to record the user's movements and translate the captured data into inputs for the virtual simulation. For example, if a user physically turns their head, the motion would be captured by the simulation hardware in some way and translated to a corresponding shift in view within the simulation.

  • Body-tracking suits and/or gloves may be used to capture movements of the user's body parts. The systems may have sensors incorporated inside them to sense movements of different body parts (e.g., fingers). Alternatively, these systems may have exterior tracking devices or markers that can be detected by external ultrasound, optical receivers or electromagnetic sensors. Internal inertial sensors are also available on some systems. The units may transmit data either wirelessly or through cables.
  • Eye trackers can also be used to detect eye movements so that the system can determine precisely where a user is looking at any given instant.

Physical controllers: Physical controllers provide input to the simulation only through direct manipulation by the user. In virtual simulations, tactile feedback from physical controllers is highly desirable in a number of simulation environments.

  • Omnidirectional treadmills can be used to capture the user's locomotion as they walk or run.
  • High fidelity instrumentation such as instrument panels in virtual aircraft cockpits provides users with actual controls to raise the level of immersion. For example, pilots can use the actual global positioning system controls from the real device in a simulated cockpit to help them practice procedures with the actual device in the context of the integrated cockpit system.

Voice/sound recognition: This form of interaction may be used either to interact with agents within the simulation (e.g., virtual people) or to manipulate objects in the simulation (e.g., information). Voice interaction presumably increases the level of immersion for the user.

  • Users may use headsets with boom microphones, lapel microphones or the room may be equipped with strategically located microphones.

Current research into user input systems

Research in future input systems holds a great deal of promise for virtual simulations. Systems such as brain–computer interfaces (BCIs) offer the ability to further increase the level of immersion for virtual simulation users. Lee, Keinrath, Scherer, Bischof, Pfurtscheller [27] proved that naïve subjects could be trained to use a BCI to navigate a virtual apartment with relative ease. Using the BCI, the authors found that subjects were able to freely navigate the virtual environment with relatively minimal effort. It is possible that these types of systems will become standard input modalities in future virtual simulation systems.

Virtual simulation output hardware

There is a wide variety of output hardware available to deliver a stimulus to users in virtual simulations. The following list briefly describes several of them:

Visual display: Visual displays provide the visual stimulus to the user.

  • Stationary displays can vary from a conventional desktop display to 360-degree wrap-around screens to stereo three-dimensional screens. Conventional desktop displays can vary in size from 15 to 60 inches (380 to 1,520 mm). Wrap-around screens are typically utilized in what is known as a cave automatic virtual environment (CAVE). Stereo three-dimensional screens produce three-dimensional images either with or without special glasses, depending on the design.
  • Head-mounted displays (HMDs) have small displays that are mounted on headgear worn by the user. These systems are connected directly into the virtual simulation to provide the user with a more immersive experience. Weight, update rate and field of view are some of the key variables that differentiate HMDs. Naturally, heavier HMDs are undesirable as they cause fatigue over time. If the update rate is too slow, the system is unable to update the displays fast enough to correspond with a quick head turn by the user. Slower update rates tend to cause simulation sickness and disrupt the sense of immersion. Field of view, or the angular extent of the world that is seen at a given moment, can vary from system to system and has been found to affect the user's sense of immersion.

Aural display: Several different types of audio systems exist to help the user hear and localize sounds spatially. Special 3D audio software can be used to create the illusion that sound sources are placed within a defined three-dimensional space around the user.

  • Stationary conventional speaker systems may be used to provide dual or multi-channel surround sound. However, external speakers are not as effective as headphones in producing 3D audio effects. [26]
  • Conventional headphones offer a portable alternative to stationary speakers. They also have the added advantages of masking real-world noise and facilitating more effective 3D audio sound effects. [26]

Haptic display: These displays provide a sense of touch to the user (haptic technology). This type of output is sometimes referred to as force feedback.

  • Tactile displays use different types of actuators such as inflatable bladders, vibrators, low-frequency sub-woofers, pin actuators and/or thermo-actuators to produce sensations for the user.
  • End effector displays can respond to users' inputs with resistance and force. [26] These systems are often used in medical applications for remote surgeries that employ robotic instruments. [28]

Vestibular display: These displays provide a sense of motion to the user (motion simulator). They often manifest as motion bases for virtual vehicle simulation such as driving simulators or flight simulators. Motion bases are fixed in place but use actuators to move the simulator in ways that can produce the sensations of pitching, yawing or rolling. The simulators can also move in such a way as to produce a sense of acceleration on all axes (e.g., the motion base can produce the sensation of falling).

Medical simulators are increasingly being developed and deployed to teach therapeutic and diagnostic procedures as well as medical concepts and decision making to personnel in the health professions. Simulators have been developed for training procedures ranging from the basics such as blood draw, to laparoscopic surgery [29] and trauma care. They are also important to help in prototyping new devices [30] for biomedical engineering problems. Currently, simulators are applied to research and develop tools for new therapies, [31] treatments [32] and early diagnosis [33] in medicine.

Many medical simulators involve a computer connected to a plastic simulation of the relevant anatomy. Sophisticated simulators of this type employ a life-size mannequin that responds to injected drugs and can be programmed to create simulations of life-threatening emergencies. In other simulations, visual components of the procedure are reproduced by computer graphics techniques, while touch-based components are reproduced by haptic feedback devices combined with physical simulation routines computed in response to the user's actions. Medical simulations of this sort will often use 3D CT or MRI scans of patient data to enhance realism. Some medical simulations are developed to be widely distributed (such as web-enabled simulations [34] and procedural simulations [35] that can be viewed via standard web browsers) and can be interacted with using standard computer interfaces, such as the keyboard and mouse.

Another important medical application of a simulator—although, perhaps, denoting a slightly different meaning of simulator—is the use of a placebo drug, a formulation that simulates the active drug in trials of drug efficacy (see Placebo (origins of technical term)).

Improving patient safety

Patient safety is a concern in the medical industry. Patients have been known to suffer injuries and even death due to management error and failure to follow best standards of care and training. According to Building a National Agenda for Simulation-Based Medical Education (Eder-Van Hook, Jackie, 2004), "a health care provider's ability to react prudently in an unexpected situation is one of the most critical factors in creating a positive outcome in medical emergency, regardless of whether it occurs on the battlefield, freeway, or hospital emergency room." Eder-Van Hook (2004) also noted that medical errors kill up to 98,000 people per year, at an estimated cost of $37 to $50 billion, of which $17 to $29 billion is attributable to preventable adverse events.

Simulation is being used to study patient safety, as well as train medical professionals. [36] Studying patient safety and safety interventions in healthcare is challenging, because there is a lack of experimental control (i.e., patient complexity, system/process variances) to see if an intervention made a meaningful difference (Groves & Manges, 2017). [37] An example of innovative simulation to study patient safety is from nursing research. Groves et al. (2016) used a high-fidelity simulation to examine nursing safety-oriented behaviors during times such as change-of-shift report. [36]

However, how well simulation interventions translate to clinical practice is still debatable. [38] As Nishisaki states, "there is good evidence that simulation training improves provider and team self-efficacy and competence on manikins. There is also good evidence that procedural simulation improves actual operational performance in clinical settings." [38] However, improved evidence is needed to show the benefit of crew resource management training through simulation. [38] One of the largest challenges is showing that team simulation improves team operational performance at the bedside. [39] Although evidence that simulation-based training actually improves patient outcome has been slow to accrue, today the ability of simulation to provide hands-on experience that translates to the operating room is no longer in doubt. [40] [41] [42]

One of the largest factors affecting whether training carries over to practitioners' work at the bedside is the ability to empower frontline staff (Stewart, Manges, Ward, 2015). [39] [43] Another example of an attempt to improve patient safety through the use of simulation training is just-in-time, just-in-place training for patient care. This training consists of 20 minutes of simulated training just before workers report to shift. One study found that just-in-time training improved the transition to the bedside. The conclusion, as reported in Nishisaki's (2008) work, was that the simulation training improved resident participation in real cases but did not sacrifice the quality of service. It can therefore be hypothesized that increasing the number of highly trained residents through simulation training increases patient safety.

History of simulation in healthcare

The first medical simulators were simple models of human patients. [44]

Since antiquity, these representations in clay and stone were used to demonstrate clinical features of disease states and their effects on humans. Models have been found in many cultures and continents. These models have been used in some cultures (e.g., Chinese culture) as a "diagnostic" instrument, allowing women to consult male physicians while maintaining social laws of modesty. Models are used today to help students learn the anatomy of the musculoskeletal system and organ systems. [44]

In 2002, the Society for Simulation in Healthcare (SSH) was formed to lead international, interprofessional efforts to advance the application of medical simulation in healthcare. [45]

The need for a "uniform mechanism to educate, evaluate, and certify simulation instructors for the health care profession" was recognized by McGaghie et al. in their critical review of simulation-based medical education research. [46] In 2012 the SSH piloted two new certifications to provide recognition to educators in an effort to meet this need. [47]

Type of models

Active models

Active models that attempt to reproduce living anatomy or physiology are recent developments. The famous "Harvey" mannequin was developed at the University of Miami and is able to recreate many of the physical findings of the cardiology examination, including palpation, auscultation, and electrocardiography. [48]

Interactive models

More recently, interactive models have been developed that respond to actions taken by a student or physician. [48] Until recently, these simulations were two dimensional computer programs that acted more like a textbook than a patient. Computer simulations have the advantage of allowing a student to make judgments, and also to make errors. The process of iterative learning through assessment, evaluation, decision making, and error correction creates a much stronger learning environment than passive instruction.

Computer simulators

Simulators have been proposed as an ideal tool for assessment of students for clinical skills. [49] For patients, "cybertherapy" can be used for sessions simulating traumatic experiences, from fear of heights to social anxiety. [50]

Programmed patients and simulated clinical situations, including mock disaster drills, have been used extensively for education and evaluation. These "lifelike" simulations are expensive, and lack reproducibility. A fully functional "3Di" simulator would be the most specific tool available for teaching and measurement of clinical skills. Gaming platforms have been applied to create these virtual medical environments to create an interactive method for learning and application of information in a clinical context. [51] [52]

Immersive disease state simulations allow a doctor or HCP to experience what a disease actually feels like. Using sensors and transducers, symptomatic effects can be delivered to a participant, allowing them to experience the patient's disease state.

Such a simulator meets the goals of an objective and standardized examination for clinical competence. [53] This system is superior to examinations that use "standard patients" because it permits the quantitative measurement of competence, as well as reproducing the same objective findings. [54]

Simulation in entertainment encompasses many large and popular industries such as film, television, video games (including serious games) and rides in theme parks. Although modern simulation is thought to have its roots in training and the military, in the 20th century it also became a conduit for enterprises which were more hedonistic in nature.

History of visual simulation in film and games

Early history (1940s and 1950s)

The first simulation game may have been created as early as 1947 by Thomas T. Goldsmith Jr. and Estle Ray Mann. This was a straightforward game that simulated a missile being fired at a target. The curve of the missile and its speed could be adjusted using several knobs. In 1958, a computer game called "Tennis for Two" was created by William Higinbotham; it simulated a tennis game between two players who could both play at the same time using hand controls and was displayed on an oscilloscope. [55] This was one of the first electronic video games to use a graphical display.

1970s and early 1980s

Computer-generated imagery was used in film to simulate objects as early as 1972 in A Computer Animated Hand, parts of which were shown on the big screen in the 1976 film Futureworld. Many will remember the "targeting computer" that young Skywalker turns off in the 1977 film Star Wars.

The film Tron (1982) was the first film to use computer-generated imagery for more than a couple of minutes. [56]

Advances in technology in the 1980s caused 3D simulation to become more widely used and it began to appear in movies and in computer-based games such as Atari's Battlezone (1980) and Acornsoft's Elite (1984), one of the first wire-frame 3D graphics games for home computers.

Pre-virtual cinematography era (early 1980s to 1990s)

Advances in technology in the 1980s made computers more affordable and more capable than they were in previous decades, [57] which facilitated the rise of computer gaming (and, later, consoles such as the Xbox). The first video game consoles released in the 1970s and early 1980s fell prey to the industry crash in 1983, but in 1985, Nintendo released the Nintendo Entertainment System (NES), which became one of the best-selling consoles in video game history. [58] In the 1990s, computer games became widely popular with the release of such games as The Sims and Command & Conquer and the still increasing power of desktop computers. Today, computer simulation games such as World of Warcraft are played by millions of people around the world.

In 1993, the film Jurassic Park became the first popular film to use computer-generated graphics extensively, integrating the simulated dinosaurs almost seamlessly into live action scenes.

This event transformed the film industry; in 1995, the film Toy Story was the first film to use only computer-generated images, and by the new millennium computer-generated graphics were the leading choice for special effects in films. [59]

Virtual cinematography (early 2000s–present)

The advent of virtual cinematography in the early 2000s has led to an explosion of movies that would have been impossible to shoot without it. Classic examples are the digital look-alikes of Neo, Smith and other characters in the Matrix sequels and the extensive use of physically impossible camera runs in The Lord of the Rings trilogy.

The terminal seen in the TV series Pan Am no longer existed during the filming of the 2011–2012 series, which was no problem: it was recreated with virtual cinematography, utilizing automated viewpoint finding and matching in conjunction with compositing of real and simulated footage, which has been the bread and butter of movie artists in and around film studios since the early 2000s.

Computer-generated imagery is "the application of the field of 3D computer graphics to special effects". This technology is used for visual effects because the results are high in quality, controllable, and can create effects that would not be feasible using any other technology, whether because of cost, resources or safety. [60] Computer-generated graphics can be seen in many live-action movies today, especially those of the action genre. Further, computer-generated imagery has almost completely supplanted hand-drawn animation in children's movies, which are increasingly computer-generated only. Examples of movies that use computer-generated imagery include Finding Nemo, 300 and Iron Man.

Examples of non-film entertainment simulation

Simulation games

Simulation games, as opposed to other genres of video and computer games, represent or simulate an environment accurately. Moreover, they represent the interactions between the playable characters and the environment realistically. These kinds of games are usually more complex in terms of gameplay. [61] Simulation games have become incredibly popular among people of all ages. [62] Popular simulation games include SimCity and Tiger Woods PGA Tour. There are also flight simulator and driving simulator games.

Theme park rides

Simulators have been used for entertainment since the Link Trainer in the 1930s. [63] The first modern simulator ride to open at a theme park was Disney's Star Tours in 1987 soon followed by Universal's The Funtastic World of Hanna-Barbera in 1990 which was the first ride to be done entirely with computer graphics. [64]

Simulator rides are the progeny of military training simulators and commercial simulators, but they are different in a fundamental way. While military training simulators react realistically to the input of the trainee in real time, ride simulators only feel like they move realistically and move according to prerecorded motion scripts. [64] One of the first simulator rides, Star Tours, which cost $32 million, used a hydraulic motion based cabin. The movement was programmed by a joystick. Today's simulator rides, such as The Amazing Adventures of Spider-Man include elements to increase the amount of immersion experienced by the riders such as: 3D imagery, physical effects (spraying water or producing scents), and movement through an environment. [65]

Manufacturing represents one of the most important applications of simulation. This technique represents a valuable tool used by engineers when evaluating the effect of capital investment in equipment and physical facilities like factory plants, warehouses, and distribution centers. Simulation can be used to predict the performance of an existing or planned system and to compare alternative solutions for a particular design problem. [66]

Another important goal of Simulation in Manufacturing Systems is to quantify system performance. Common measures of system performance include the following: [67]



In a well-known but probably apocryphal tale, Archimedes was given the task of determining whether King Hiero's goldsmith was embezzling gold during the manufacture of a golden wreath dedicated to the gods and replacing it with another, cheaper alloy. [3] Archimedes knew that the irregularly shaped wreath could be crushed into a cube whose volume could be calculated easily and compared with the mass, but the king did not approve of this. Baffled, Archimedes is said to have taken an immersion bath and observed from the rise of the water upon entering that he could calculate the volume of the gold wreath through the displacement of the water. Upon this discovery, he leapt from his bath and ran naked through the streets shouting, "Eureka! Eureka!" (Εύρηκα!, Greek for "I have found it"). As a result, the term "eureka" entered common parlance and is used today to indicate a moment of enlightenment.

The story first appeared in written form in Vitruvius' books of architecture, two centuries after it supposedly took place. [4] Some scholars have doubted the accuracy of this tale, saying among other things that the method would have required precise measurements that would have been difficult to make at the time. [5] [6]

From the equation for density (ρ = m/V), mass density has units of mass divided by volume. As there are many units of mass and volume covering many different magnitudes, there are a large number of units for mass density in use. The SI unit of kilogram per cubic metre (kg/m³) and the cgs unit of gram per cubic centimetre (g/cm³) are probably the most commonly used units for density. One g/cm³ is equal to 1000 kg/m³. One cubic centimetre (abbreviation cc) is equal to one millilitre. In industry, other larger or smaller units of mass and/or volume are often more practical, and US customary units may be used. See below for a list of some of the most common units of density.
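As a quick numerical check of the unit relationship above, here is a minimal Python sketch (the conversion factor is the standard one quoted in the text; the function name and example value are only illustrative):

```python
# Minimal check of the unit relationship quoted above: 1 g/cm^3 = 1000 kg/m^3.
# The function name and the example value are illustrative only.

def g_per_cm3_to_kg_per_m3(rho_g_cm3: float) -> float:
    """Convert a density from g/cm^3 (cgs) to kg/m^3 (SI)."""
    return rho_g_cm3 * 1000.0

# Water near 4 degrees C is about 1.000 g/cm^3:
print(g_per_cm3_to_kg_per_m3(1.000))   # -> 1000.0 kg/m^3
```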

A number of techniques as well as standards exist for the measurement of density of materials. Such techniques include the use of a hydrometer (a buoyancy method for liquids), hydrostatic balance (a buoyancy method for liquids and solids), immersed body method (a buoyancy method for liquids), pycnometer (liquids and solids), air comparison pycnometer (solids), oscillating densitometer (liquids), as well as pour and tap (solids). [7] However, each individual method or technique measures different types of density (e.g. bulk density, skeletal density, etc.), and therefore it is necessary to have an understanding of the type of density being measured as well as the type of material in question.

Homogeneous materials

The density at all points of a homogeneous object equals its total mass divided by its total volume. The mass is normally measured with a scale or balance; the volume may be measured directly (from the geometry of the object) or by the displacement of a fluid. To determine the density of a liquid or a gas, a hydrometer, a dasymeter or a Coriolis flow meter may be used, respectively. Similarly, hydrostatic weighing uses the displacement of water due to a submerged object to determine the density of the object.

Heterogeneous materials

If the body is not homogeneous, then its density varies between different regions of the object. In that case the density around any given location is determined by calculating the density of a small volume around that location. In the limit of an infinitesimal volume, the density of an inhomogeneous object at a point becomes ρ(r) = dm/dV, where dV is an elementary volume at position r. The mass of the body can then be expressed as m = ∫ ρ(r) dV, with the integral taken over the volume of the body.
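As an illustration of the relation m = ∫ ρ(r) dV, the following Python sketch approximates the mass of an inhomogeneous body numerically; the density field (a linear gradient inside a unit cube) is purely hypothetical:

```python
import numpy as np

# Approximate m = integral of rho(r) dV for an inhomogeneous body by summing
# rho over small volume elements (midpoint rule on a regular grid).
# The density field below is a made-up example, not a real material.

def rho(x, y, z):
    """Hypothetical position-dependent density in kg/m^3 (denser toward the top)."""
    return 1000.0 + 200.0 * z

n = 100                                  # cells per axis of a 1 m x 1 m x 1 m cube
centres = (np.arange(n) + 0.5) / n       # cell-centre coordinates
dV = (1.0 / n) ** 3                      # volume of one cell, m^3
X, Y, Z = np.meshgrid(centres, centres, centres, indexing="ij")
mass = np.sum(rho(X, Y, Z)) * dV

print(f"approximate mass: {mass:.1f} kg")   # exact value for this field is 1100 kg
```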

Non-compact materials

In practice, bulk materials such as sugar, sand, or snow contain voids. Many materials exist in nature as flakes, pellets, or granules.

Voids are regions which contain something other than the considered material. Commonly the void is air, but it could also be vacuum, liquid, solid, or a different gas or gaseous mixture.

The bulk volume of a material—inclusive of the void fraction—is often obtained by a simple measurement (e.g. with a calibrated measuring cup) or geometrically from known dimensions.

Mass divided by bulk volume determines bulk density. This is not the same thing as volumetric mass density.

To determine volumetric mass density, one must first discount the volume of the void fraction. Sometimes this can be determined by geometrical reasoning. For the close-packing of equal spheres the non-void fraction can be at most about 74%. It can also be determined empirically. Some bulk materials, however, such as sand, have a variable void fraction which depends on how the material is agitated or poured. It might be loose or compact, with more or less air space depending on handling.
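A short Python sketch of the distinction drawn above between bulk density and the density of the solid phase with the void fraction discounted; all numbers are illustrative, not measured values:

```python
# Bulk density uses the full bulk volume (solid plus voids); the density of the
# solid phase discounts the void fraction. Example numbers are illustrative only.

def bulk_density(mass_kg: float, bulk_volume_m3: float) -> float:
    return mass_kg / bulk_volume_m3

def solid_density(mass_kg: float, bulk_volume_m3: float, void_fraction: float) -> float:
    """Density of the solid material alone, given the fraction of voids (e.g. air)."""
    return mass_kg / (bulk_volume_m3 * (1.0 - void_fraction))

m, V = 1.5, 0.001                     # e.g. 1.5 kg of dry sand in a 1-litre cup
print(bulk_density(m, V))             # 1500 kg/m^3 (bulk)
print(solid_density(m, V, 0.43))      # ~2630 kg/m^3, roughly quartz-like
```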

In practice, the void fraction is not necessarily air, or even gaseous. In the case of sand, it could be water, which can be advantageous for measurement as the void fraction for sand saturated in water—once any air bubbles are thoroughly driven out—is potentially more consistent than dry sand measured with an air void.

In the case of non-compact materials, one must also take care in determining the mass of the material sample. If the material is under pressure (commonly ambient air pressure at the earth's surface) the determination of mass from a measured sample weight might need to account for buoyancy effects due to the density of the void constituent, depending on how the measurement was conducted. In the case of dry sand, sand is so much denser than air that the buoyancy effect is commonly neglected (less than one part in one thousand).

Mass change upon displacing one void material with another while maintaining constant volume can be used to estimate the void fraction, if the difference in density of the two voids materials is reliably known.

In general, density can be changed by changing either the pressure or the temperature. Increasing the pressure always increases the density of a material. Increasing the temperature generally decreases the density, but there are notable exceptions to this generalization. For example, the density of water increases between its melting point at 0 °C and 4 °C; similar behavior is observed in silicon at low temperatures.

The effect of pressure and temperature on the densities of liquids and solids is small. The compressibility for a typical liquid or solid is 10⁻⁶ bar⁻¹ (1 bar = 0.1 MPa) and a typical thermal expansivity is 10⁻⁵ K⁻¹. This roughly translates into needing around ten thousand times atmospheric pressure to reduce the volume of a substance by one percent. (Although the pressures needed may be around a thousand times smaller for sandy soil and some clays.) A one percent expansion of volume typically requires a temperature increase on the order of thousands of degrees Celsius.

In contrast, the density of gases is strongly affected by pressure. The density of an ideal gas is ρ = MP/(RT), where M is the molar mass, P is the pressure, R is the universal gas constant, and T is the absolute temperature. This means that the density of an ideal gas can be doubled by doubling the pressure, or by halving the absolute temperature.

In the case of volumic thermal expansion at constant pressure and small intervals of temperature, the temperature dependence of density is ρ = ρ₀ / (1 + α·ΔT), where ρ₀ is the density at a reference temperature T₀, α is the thermal expansion coefficient of the material near T₀, and ΔT is the temperature change from T₀.
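The two relations above (ideal-gas density and the small-interval thermal-expansion form) can be checked with a short Python sketch; the input values are illustrative only:

```python
R = 8.314  # universal gas constant, J/(mol K)

def ideal_gas_density(molar_mass_kg_per_mol: float, pressure_pa: float, temp_k: float) -> float:
    """rho = M*P/(R*T) for an ideal gas."""
    return molar_mass_kg_per_mol * pressure_pa / (R * temp_k)

def density_after_heating(rho0: float, alpha_per_k: float, delta_t_k: float) -> float:
    """rho = rho0 / (1 + alpha*dT) for small temperature intervals."""
    return rho0 / (1.0 + alpha_per_k * delta_t_k)

# Dry air (M ~ 0.029 kg/mol) at 101325 Pa and 293.15 K:
print(ideal_gas_density(0.029, 101325.0, 293.15))    # ~1.21 kg/m^3

# A liquid with alpha ~ 1e-5 per K, heated by 10 K, barely changes density:
print(density_after_heating(1000.0, 1.0e-5, 10.0))    # ~999.9 kg/m^3
```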

The density of a solution is the sum of the mass (massic) concentrations of its components: the mass concentration ρᵢ of each component i sums to the density of the solution, ρ = Σᵢ ρᵢ.

Expressed as a function of the densities of the pure components of the mixture and their volume participation, this relation allows the determination of excess molar volumes:

provided that there is no interaction between the components.

Knowing the relation between excess volumes and activity coefficients of the components, one can determine the activity coefficients.




How to Find the Number of Neutrons in an Atom


Although all atoms of the same element contain the same number of protons, their number of neutrons can vary. Knowing how many neutrons are in a particular atom can help you determine whether it's a regular atom of that element or an isotope, which will have extra or fewer neutrons. [1] Determining the number of neutrons in an atom is fairly simple and doesn't even require any experimentation. To calculate the number of neutrons in a regular atom or an isotope, all you need to do is follow these instructions with a periodic table in hand.
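The rule boils down to neutrons = mass number − atomic number, which the following minimal Python sketch applies to a few isotopes (the small lookup table is just for illustration):

```python
# neutrons = mass number - atomic number. The lookup table covers only a few
# elements, purely for illustration.

ATOMIC_NUMBERS = {"H": 1, "C": 6, "N": 7, "O": 8, "Fe": 26, "U": 92}

def neutron_count(symbol: str, mass_number: int) -> int:
    """Number of neutrons in a specific isotope, e.g. ("C", 14) -> 8."""
    return mass_number - ATOMIC_NUMBERS[symbol]

print(neutron_count("C", 12))    # 6, the common isotope of carbon
print(neutron_count("C", 14))    # 8, the radioactive isotope carbon-14
print(neutron_count("U", 238))   # 146
```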





Stockwell, Alan E. Cooper, Paul A.

The Integrated Multidisciplinary Analysis Tool (IMAT) consists of a menu driven executive system coupled with a relational database which links commercial structures, structural dynamics and control codes. The IMAT graphics system, a key element of the software, provides a common interface for storing, retrieving, and displaying graphical information. The IMAT Graphics Manual shows users of commercial analysis codes (MATRIXx, MSC/NASTRAN and I-DEAS) how to use the IMAT graphics system to obtain high quality graphical output using familiar plotting procedures. The manual explains the key features of the IMAT graphics system, illustrates their use with simple step-by-step examples, and provides a reference for users who wish to take advantage of the flexibility of the software to customize their own applications.

Previously a hybrid analytical-numerical solution for the general problem of computing transient well flow in vertically heterogeneous aquifers was proposed by the author. The radial component of flow was treated analytically, while the finite-difference technique was used for the vertical flow component only. In the present work the hybrid solution has been modified by replacing the previously assumed uniform well-face gradient (UWG) boundary condition in such a way that the drawdown remains uniform along the well screen. The resulting uniform well-face drawdown (UWD) solution also includes the effects of a finite diameter well, wellbore storage and a thin skin, while partial penetration and vertical heterogeneity are accommodated by the one-dimensional discretization. Solutions are proposed for well flow caused by constant, variable and slug discharges. The model was verified by comparing wellbore drawdowns and well-face flux distributions with published numerical solutions. Differences between UWG and UWD well flow will occur in all situations with vertical flow components near the well, which is demonstrated by considering: (1) partially penetrating wells in confined aquifers, (2) fully penetrating wells in unconfined aquifers with delayed response and (3) layered aquifers and leaky multiaquifer systems. The presented solution can be a powerful tool for solving many well-hydraulic problems, including well tests, flowmeter tests, slug tests and pumping tests. A computer program for the analysis of pumping tests, based on the hybrid analytical-numerical technique and UWG or UWD conditions, is available from the author.

Franks, Bernard J. Phelps, G.G.

Hydrologic investigations of the Floridan aquifer in Duval County, Florida, have shown that an appropriate simplified model of the aquifer system consists of a series of subaquifers separated by semipermeable beds. Data from more than 20 aquifer tests were reanalyzed by the Hantush modified method, which takes into account leakance from all confining units. Transmissivity values range from 20,000 to 240,000 square feet per day. Leakance was estimated to be 2.5×10⁻⁶ and 3.3×10⁻⁵ per day for the upper and lower confining units, respectively. Families of steady-state distance-drawdown curves were constructed for three representative transmissivity values based on hypothetical withdrawals from a point source ranging from 5 to 50 million gallons per day. Transient effects were not considered because the system reaches steady-state conditions within the time ranges considered. Drawdown at any point can be estimated by summing the effects of any hypothetical configuration of pumping centers. The accuracy of the parameters was checked by comparing calculated drawdowns in selected observation wells to measured water-level declines. (Woodard-USGS)
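For readers unfamiliar with the Hantush leaky-aquifer approach mentioned above, the sketch below evaluates the classical Hantush well function by numerical integration and uses it to compute a drawdown; it is a generic illustration with made-up parameter values, not the report's own calculation:

```python
import numpy as np
from scipy.integrate import quad

# Classical Hantush leaky-aquifer drawdown: s = Q/(4*pi*T) * W(u, r/B), with
# u = r^2*S/(4*T*t) and B = sqrt(T / leakance). All values below are
# illustrative only and are not taken from the Duval County study.

def hantush_w(u: float, r_over_b: float) -> float:
    """Hantush well function W(u, r/B), by numerical integration."""
    integrand = lambda y: np.exp(-y - (r_over_b ** 2) / (4.0 * y)) / y
    value, _ = quad(integrand, u, np.inf)
    return value

def leaky_drawdown(Q, T, S, leakance, r, t):
    """Drawdown (ft) at radius r (ft) after t (days) of pumping at Q (ft^3/day)."""
    B = np.sqrt(T / leakance)
    u = r ** 2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * hantush_w(u, r / B)

# Example: T = 100,000 ft^2/day, S = 1e-4, leakance = 3e-5 per day,
# Q ~ 10 Mgal/day (about 1.34e6 ft^3/day), r = 5,000 ft, t = 30 days.
print(leaky_drawdown(1.34e6, 1.0e5, 1.0e-4, 3.0e-5, 5000.0, 30.0))
```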

Rose, Brien P. Mesa, Matthew G.

Summer drawdown of Beulah Reservoir, Oregon, could adversely affect fish and invertebrate production, limit sport fishing opportunities, and hinder the recovery of threatened species. To assess the impacts of drawdown , we sampled fish and Chironomidae larvae in Beulah Reservoir in the springs of 2006 to 2008. The reservoir was reduced to 68% of full pool in 2006 and to run-of-river level in 2007. From spring 2006 to spring 2007, the catch per unit effort (CPUE) of fyke nets decreased significantly for dace [Rhinichthys spp.] and northern pikeminnow [Ptychocheilus oregonensis], increased significantly for suckers [Catastomus spp.] and white crappies [Pomoxis nigromaculatus], and was similar for redside shiners [Richardsonius balteatus]. CPUE of gillnets either increased significantly or remained similar depending on genera, and the size structure of redside shiners, suckers, and white crappies changed appreciably. From 2007 to 2008, the CPUE of northern pikeminnow, redside shiners, suckers, and white crappies decreased significantly depending on gear and the size structure of most fishes changed. Springtime densities of chironomid larvae in the water column were significantly higher in 2006 than in 2008, but other comparisons were similar. The densities of benthic chironomids were significantly lower in substrates that were frequently dewatered compared to areas that were partially or usually not dewatered. Individuals from frequently dewatered areas were significantly smaller than those from other areas and the densities of benthic chironomids in 2008 were significantly lower than other years. Summer drawdown can reduce the catch and alter the size structure of fishes and chironomid larvae in Beulah Reservoir.

Hauer, Christoph Haimann, Marlene Habersack, Helmut Haun, Stefan Hammer, Andreas Schletterer, Martin

Reservoirs are important in the context of an increased demand for renewable energy and for water for irrigation and drinking purposes, so reservoir management is an important task. Besides technical and economic feasibility, ecological factors are important issues. An integrative monitoring concept was therefore developed and applied during a controlled drawdown of the Gepatsch reservoir in the Austrian Alps. The controlled drawdown (December 2015 - March 2016) was done slowly, with the consequence of moderate suspended sediment concentrations (SSCs) in the downstream Inn River. The water was released through the penstock towards the turbines and directly into the Inn River. However, to limit the erosional impact on the turbines, only one twin Pelton turbine was operated during the controlled drawdown. The monitoring program itself was subdivided into monitoring of the sediments in the penstock to determine the amount and composition of sediments sluiced through the turbine, monitoring of the turbine itself to quantify the damage to the turbine, and monitoring of SSCs in the downstream river reach. In order to detect possible changes, measured discharge and turbidity values were examined. In addition, the flow velocity was modelled (1D). The goal was to monitor the observed peaks with respect to their temporal shift and to draw conclusions on the storage capacity for fine sediments in the river substrate. Moreover, fine sediment deposits on gravel bars along the Inn River were monitored in detail and the grain size distribution of the river bed was determined. The monitoring started in April / November 2015 with the aim of surveying and analysing the turbidity, suspended load and fine sediment deposits on gravel bars along the River Inn, as well as its biota (macroinvertebrates and fish), under "undisturbed" conditions. The SSCs were measured in a pre-analysis and during the drawdown itself in the penstock and in the outlet channel with

Taylor, Nancy L. Randall, Donald P. Bowen, John T. Johnson, Mary M. Roland, Vincent R. Matthews, Christine G. Gates, Raymond L. Skeens, Kristi M. Nolf, Scott R. Hammond, Dana P.

The computer graphics capabilities available at the Center are introduced and their use is explained. More specifically, the manual identifies and describes the various graphics software and hardware components, details the interfaces between these components, and provides information concerning the use of these components at LaRC.

The use of quantitative graphics in newspapers requires achieving a balance between being accurate and getting the attention of the reader. The statistical representations in newspapers are drawn by graphic designers whose key technique is fusion--the striking combination of two visual images. This technique often results in visual puns,…

This document presents the principles behind modern computer graphics without straying into the arcane languages of mathematics and computer science. Illustrations accompany the clear, step-by-step explanations that describe how computers draw pictures. The 22 chapters of the book are organized into 5 sections. "Part 1: Computer Graphics in…

North Dakota State Board for Vocational Education, Bismarck.

This guide provides the basic foundation to develop a one-semester course based on the cluster concept, graphic communications. One of a set of six guides for an industrial arts curriculum at the junior high school level, it suggests exploratory experiences designed to (1) develop an awareness and understanding of the drafting and graphic arts…

The field of computer graphics is rapidly opening up to the graphic artist. It is not necessary to be a programming expert to enter this fascinating world. The capabilities of the medium are astounding: neon and metallic effects, translucent plastic and clear glass effects, sensitive 3-D shadings, limitless textures, and above all color. As with any medium, computer graphics has its advantages, such as speed, ease of form manipulation, and a variety of type fonts and alphabets. It also has its limitations, such as data input time, final output turnaround time, and not necessarily being the right medium for the job at hand. And finally, it is the time- and cost-saving characteristics of computer-generated visuals, as opposed to original artwork, that make computer graphics a viable alternative. This paper focuses on parts of the computer graphics system in use at the Los Alamos National Laboratory to provide specific examples.

There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics , new software to create graphics , interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics . There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.

Booth, E. Steven, L. I. Bart, D.

Calcareous fens are unique and often isolated ecosystems of high conservation value worldwide because they provide habitat for many rare plant and animal species. Their identity is inextricably linked to an absolute dependence on a consistent discharge of groundwater that saturates the near surface for most of the growing season leading to the accumulation of carbon as peat or tufa and sequestration of nutrients. The stresses resulting from consistent saturation and low-nutrient availability result in high native plant diversity including very high rare species richness compared to other ecosystems. Decreases in the saturation stress by reduced groundwater inputs (e.g., from nearby pumping) can result in losses of native diversity, decreases in rare-species abundance, and increased invasion by non-native species. As such, fen ecosystems are particularly susceptible to changes in groundwater conditions including reduction in water levels due to nearby groundwater pumping. Trajectories of degradation are complex due to feedbacks between loss of soil organic carbon, changes in soil properties, and plant water use. We present a model of an archetype fen that couples a hydrological niche model with a variably-saturated groundwater flow model to predict changes in vegetation composition in response to different groundwater drawdown scenarios (step change, declining trend, and periodic drawdown during dry periods). The model also includes feedbacks among vegetation composition, plant water use, and soil properties. The hydrological niche models (using surface soil moisture as predictor) and relationships between vegetation composition, plant water use (via stomatal conductance and leaf-area index), and soil hydraulic properties (van Genuchten parameters) were determined based on data collected from six fens in Wisconsin under various states of degradation. Results reveal a complex response to drawdown and provide insight into other ecosystems with linkages between the

Cook, George E. Sztipanovits, Janos Biegl, Csaba Karsai, Gabor Springfield, James F.

The objective of this research was twofold. First, the basic capabilities of ROBOSIM ( graphical simulation system) were improved and extended by taking advantage of advanced graphic workstation technology and artificial intelligence programming techniques. Second, the scope of the graphic simulation testbed was extended to include general problems of Space Station automation. Hardware support for 3-D graphics and high processing performance make high resolution solid modeling, collision detection, and simulation of structural dynamics computationally feasible. The Space Station is a complex system with many interacting subsystems. Design and testing of automation concepts demand modeling of the affected processes, their interactions, and that of the proposed control systems. The automation testbed was designed to facilitate studies in Space Station automation concepts.


This presentation will include information about Graphics Processor Unit (GPU) technology, NASA Electronic Parts and Packaging (NEPP) tasks, the test setup, test parameter considerations, lessons learned, collaborations, a roadmap, NEPP partners, results to date, and future plans.

The objectives of this research include supporting the Aerothermodynamics Branch's research by developing graphical visualization tools for both the branch's adaptive grid code and flow field ray tracing code. The completed research for the reporting period includes development of a graphical user interface (GUI) and its implementation into the NAS Flowfield Analysis Software Tool kit (FAST), for both the adaptive grid code (SAGE) and the flow field ray tracing code (CISS).

Barlow, Paul M. Moench, Allen F.

The computer program WTAQ calculates hydraulic-head drawdowns in a confined or water-table aquifer that result from pumping at a well of finite or infinitesimal diameter. The program is based on an analytical model of axial-symmetric ground-water flow in a homogeneous and anisotropic aquifer. The program allows for well-bore storage and well-bore skin at the pumped well and for delayed drawdown response at an observation well; by including these factors, it is possible to accurately evaluate the specific storage of a water-table aquifer from early-time drawdown data in observation wells and piezometers. For water-table aquifers, the program allows for either delayed or instantaneous drainage from the unsaturated zone. WTAQ calculates dimensionless or dimensional theoretical drawdowns that can be used with measured drawdowns at observation points to estimate the hydraulic properties of confined and water-table aquifers. Three sample problems illustrate use of WTAQ for estimating horizontal and vertical hydraulic conductivity, specific storage, and specific yield of a water-table aquifer by type-curve methods and by an automatic parameter-estimation method.

Christ, John A. Goltz, Mark N.

Pump-and-treat systems that are installed to contain contaminated groundwater migration typically involve placement of extraction wells perpendicular to the regional groundwater flow direction at the down gradient edge of a contaminant plume. These wells capture contaminated water for above ground treatment and disposal, thereby preventing further migration of contaminated water down gradient. In this work, examining two-, three-, and four-well systems, we compare well configurations that are parallel and perpendicular to the regional groundwater flow direction. We show that orienting extraction wells co-linearly, parallel to regional flow, results in (1) a larger area of aquifer influenced by the wells at a given total well flow rate, (2) a center and ultimate capture zone width equal to the perpendicular configuration, and (3) more flexibility with regard to minimizing drawdown . Although not suited for some scenarios, we found orienting extraction wells parallel to regional flow along a plume centerline, when compared to a perpendicular configuration, reduces drawdown by up to 7% and minimizes the fraction of uncontaminated water captured.

Howard, Rebecca J. Wells, Christopher J.

Felsenthal Navigation Pool ("the pool") at Felsenthal National Wildlife Refuge near Crossett, Ark., was continuously flooded to a baseline elevation of 19.8 m (65.0 ft) mean sea level (m.s.l.) from late fall 1985, when the final in a series of locks and dams was constructed, until the summer of 1995. Water level within the pool was reduced by 0.3 m (1.0 ft) beginning July 5, 1995, exposing about 1,591 ha (3,931 acres) of sediment; the reduced water level was maintained until October 25 of that year. A total of 15 transects was established along the pool margin before the drawdown, extending perpendicular from the pool edge to 19.5 m (64.0 ft) in elevation. Plant species composition and cover were recorded at six to seven quadrats on each transect; 14 of the transects were also monitored three times during the drawdown and in June 1996. Soil near five of the original transects was disturbed two weeks into the drawdown by scraping the soil surface with a bulldozer. Soil cores were collected to characterize soil organic matter, texture class, carbon and nitrogen content, and plant nutrient concentrations; soil samples were also collected to identify species present in the seed bank prior to and during the drawdown. Plant species, several of which were high quality food sources for waterfowl, colonized the drawdown zone within four weeks. Vegetation response, measured by species richness, total cover, and cover of Cyperus species, was often greater at low compared to high elevations in the drawdown zone; this effect was probably intensified by low rainfall during the summer of 1995. Vegetation response on the disturbed transects was reduced compared to that on the undisturbed transects. This effect was attributed to two factors: (1) removal of the existing seed bank by the disturbance technique applied and (2) reduced incorporation of seeds recruited during the drawdown because of unusually low summer rainfall. Seed bank studies demonstrated that several species

Unstad, Kody M. Uden, Daniel R. Allen, Craig R. Chaine, Noelle M. Haak, Danielle M. Kill, Robert A. Pope, Kevin L. Stephen, Bruce J. Wong, Alec

Nonnative invasive mollusks degrade aquatic ecosystems and induce economic losses worldwide. Extended air exposure through water body drawdown is one management action used for control. In North America, the Chinese mystery snail (Bellamya chinensis) is an invasive aquatic snail with an expanding range, but eradication methods for this species are not well documented. We assessed the ability of B. chinensis to survive different durations of air exposure, and observed behavioral responses prior to, during, and following desiccation events. Individual B. chinensis specimens survived air exposure in a laboratory setting for > 9 weeks, and survivorship was greater among adults than juveniles. Several B. chinensis specimens responded to desiccation by sealing their opercula and/or burrowing in mud substrate. Our results indicate that drawdowns alone may not be an effective means of eliminating B. chinensis. This study lays the groundwork for future management research that may determine the effectiveness of drawdowns when combined with factors such as extreme temperatures, predation, or molluscicides.

A discussion of the modular program Mikado is presented. Mikado was developed with the goal of creating a flexible graphic tool to display and help analyze the results of finite element fluid flow computations. Mikado works on unstructured meshes, with elements of mixed geometric type, but also offers the possibility of using structured meshes. The program can be operated by both menu and mouse (interactive), or by command file (batch). Mikado is written in FORTRAN, except for a few system dependent subroutines which are in C. It runs presently on Silicon Graphics' workstations and could be easily ported to the IBM RISC System/6000 family of workstations.

Krawczyk, Wiesława Ewa Bartoszewski, Stefan A.

Summary: Solute fluxes and transient carbon dioxide drawdown in a small glacierized basin investigated on Svalbard in 2002 are presented. It was a sample year within a period of significant climate warming in the Arctic. Discharge was recorded in the Scottbreen Basin (10.1 km²), Bellsund Fjord, between July 8 and September 10, 2002. Specific runoff for this period was 0.784 m, 22% more than the mean for 1986-2001. The runoff for all of 2002 (i.e. the hydrologic year) was estimated by comparison with Bayelva, the only glacial river with longer records on Svalbard. The specific runoff for 2002 was ~1.228 m, yielding crustal solute fluxes of 69.4 t km⁻² yr⁻¹ (25.8 m³ km⁻² yr⁻¹). This rate is the highest chemical denudation rate reported from glacierized basins on Svalbard, and it may be underestimated because higher solute fluxes at the beginning of the melt season were not taken into account. Crustal fluxes in the fall may also have been higher because it is probable that crustal ion concentrations were increasing after recording stopped in September. The cation denudation rate was 1213 Σmeq⁺ m⁻² yr⁻¹ and the mean annual crustal ion concentration derived from it amounted to 981 μeq L⁻¹. Transient CO₂ drawdown in 2002 was 5242 kg C km⁻² yr⁻¹. Most of the carbon dioxide was removed in the summer ablation waters, estimated CO₂ drawdown in the fall being only 13% of the total. Comparison with crustal solute fluxes (CSF) computed from specific conductivity in the 1980s and 1990s suggests that earlier fluxes may have been overestimated by around 19%. Comparing earlier data with the 2002 rates may confirm the influence of climate warming on increasing chemical denudation rates. It was also found that a globally derived equation relating specific conductivity to concentrations of dissolved limestone in water gave estimates of the crustal solute fluxes that were only 1.1% less than those obtained via comprehensive chemical analyses of waters and ion

This document presents a discussion of the development of a set of software tools to assist in the construction of interfaces to simulations and real-time systems. Presuppositions to the approach to interface design that was used are surveyed, the tools are described, and the conclusions drawn from these experiences in graphical interface design…

Not so many years ago, comic books in school were considered the enemy. Students caught sneaking comics between the pages of bulky--and less engaging--textbooks were likely sent to the principal. Today, however, comics, including classics such as "Superman" but also their generally more complex, nuanced cousins, graphic novels, are not only…

Reviews graphic novels for young adults, including five titles from "The Adventures of Tintin," a French series that often uses ethnic and racial stereotypes which reflect the time in which they were published, and "Wolverine," a Marvel comic character adventure. (Contains six references.) (LRW)

Grimsrud, Anders Stephenson, Michael B.

The Raster Graphics Display Library (RGDL) is a high level subroutine package that give the advanced raster graphics display capabilities needed. The RGDL uses FORTRAN source code routines to build subroutines modular enough to use as stand-alone routines in a black box type of environment. Six examples are presented which will teach the use of RGDL in the fastest, most complete way possible. Routines within the display library that are used to produce raster graphics are presented in alphabetical order, each on a separate page. Each user-callable routine is described by function and calling parameters. All common blocks that are used in the display library are listed and the use of each variable within each common block is discussed. A reference on the include files that are necessary to compile the display library is contained. Each include file and its purpose are listed. The link map for MOVIE.BYU version 6, a general purpose computer graphics display system that uses RGDL software, is also contained.


As part of a math-science partnership, a university mathematics educator and ten elementary school teachers developed a novel approach to mathematical problem solving derived from research on reading and writing pedagogy. Specifically, research indicates that students who use graphic organizers to arrange their ideas improve their comprehension…

Minogue, James Wiebe, Eric Madden, Lauren Bedward, John Carter, Mike

A common mode of communication in the elementary classroom is the science notebook. In this article, the authors outline the ways in which " graphically enhanced science notebooks" can help engage students in complete and robust inquiry. Central to this approach is deliberate attention to the efficient and effective use of student-generated…

Well efficiency decreases with time after development, and the pumping rate is reduced sharply at a certain point. The rate of this decrease in efficiency depends upon the physical characteristics of the aquifer, the chemical properties of the groundwater, pore clogging by adsorptive/precipitable materials, and how the well is used. In general, adequate and ongoing maintenance of the well is expected to be effective in extending its operating period, because the interval between major maintenance requirements at municipal wells placed in crystalline rock aquifers is known to be relatively long. The proportion of agricultural wells (583,748) among all groundwater wells (1,380,715) in South Korea was 42.3% in 2011. Groundwater use accounts for 1.9 billion m³/year, which is 48.9% of the total available groundwater resources. Approximately 69% of the agricultural public wells placed in crystalline rock aquifers are more than 10 years past development. In this study, the increase in well efficiency before and after well disinfection/cleaning was evaluated for agricultural groundwater wells in mountain, plain, and coastal aquifers using step-drawdown test data. Using the concept of critical yield, the increase in the available amount of groundwater after treatment was quantitatively analyzed. The results show that well efficiency increased approximately 1.5 to 4 times, depending on pumping rate, when proper disinfection/cleaning methods were applied to the wells. In addition, the pumping rate at the critical yield from the step-drawdown test increased by approximately 4-8%, and these effects were greatest in wells more than 10 years old. It is therefore concluded that well disinfection/cleaning methods aimed at increasing efficiency are more effective for wells that are older than 10 years.
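As background on how a step-drawdown test is commonly interpreted (not necessarily the analysis used in this study), the classical Jacob relation s_w = B·Q + C·Q² separates laminar aquifer loss from turbulent well loss, and well efficiency is B·Q/s_w; the sketch below fits B and C to invented test data:

```python
import numpy as np

# Jacob step-drawdown relation: s_w = B*Q + C*Q^2. Since s/Q = B + C*Q, a
# straight-line fit of s/Q against Q yields C (slope) and B (intercept).
# Well efficiency at rate Q is B*Q / s_w. The data below are invented.

Q = np.array([500.0, 1000.0, 1500.0, 2000.0])   # pumping rates, m^3/day
s = np.array([1.10, 2.40, 3.90, 5.60])           # stabilized drawdowns, m

C, B = np.polyfit(Q, s / Q, 1)

def efficiency(q: float) -> float:
    return B * q / (B * q + C * q ** 2)

print(f"B = {B:.2e} d/m^2, C = {C:.2e} d^2/m^5")
print(f"well efficiency at 1500 m^3/day: {efficiency(1500.0):.0%}")   # ~77%
```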

Perkins, D. B. Min, J. Jawitz, J. W.

Restoration of ditched and drained wetlands in the Lake Okeechobee basin, Florida, USA is currently under study for possible amelioration of anthropogenic phosphorus enrichment of the lake. To date most research in this area has focused on the biogeochemical role of these wetlands. Here we focus on the dynamic hydrology of these systems and the resulting control on biogeochemical cycling. Four depressional wetlands in the basin were monitored for approximately three years to understand the interaction between wetland surface water and adjacent upland groundwater system. A coupled hydrologic-biogeochemical model was created to evaluate restoration scenarios. Determining wetland-scale hydraulic conductivity was an important aspect of the hydrologic model. Based on natural drawdown events observed at wetland-upland well pairs, hydraulic conductivities of top sandy soil layers surrounding the isolated wetlands were calculated using the Dupuit equation under a constrained water budget framework. The drawdown -based hydraulic conductivity estimates of 1.1 to 18.7 m/d (geometric mean of 4.8 m/d) were about three times greater than slug test- based values (1.5 ± 1.1 m/d), which is consistent with scale-dependent expectations. Model-based net groundwater recharge rate at each depressional wetland was predicted based on the estimated hydraulic conductivities, which corresponded to 50 to 72% of rainfall in the same period. These variances appeared to be due to the relative difference of ditch bottom elevation controlling the surface runoff as well as the spatial heterogeneity of the sandy aquifer. Results from this study have implications for nutrient loads to Lake Okeechobee via groundwater as well as water quality monitoring and management strategies aimed to reduce solute export (especially P) from the upstream catchment area to Lake Okeechobee.

Newton, Teresa J. Zigler, Steven J. Gray, Brian R.

Collectively, these data suggest that drawdowns can influence the mortality, movement and behaviour of mussels in the UMR. However, more information on spatial and temporal distributions of mussels is needed to better understand the magnitude of these effects. Results from this study are being used by resource managers to better evaluate the effects of this management tool on native mussel assemblages.

This study proposes a generalized Darcy's law that considers phase lags in both the water flux and the drawdown gradient, in order to develop a lagging flow model describing drawdown induced by constant-rate pumping (CRP) in a leaky confined aquifer. The present model has a mathematical formulation similar to the dual-porosity model. The Laplace-domain solution of the model with the effect of wellbore storage is derived by the Laplace transform method. The time-domain solution for the case in which wellbore storage and well radius are neglected is developed using the Laplace transform and the Weber transform. The results of a sensitivity analysis based on the solution indicate that the drawdown is very sensitive to changes in both the transmissivity and the storativity. A study of the lagging effect on the drawdown indicates that its influence is significant and depends on the lag times. The present solution is also employed to analyze a data set taken from a CRP test conducted in a fractured aquifer in South Dakota, USA. The results show that the prediction of this new solution, which considers the phase lags, fits the field data very well, especially at early pumping times. In addition, the phase lags seem to have a scale effect, as indicated in the results. In other words, the lagging behavior is positively correlated with the observation distance in the Madison aquifer.
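Laplace-domain drawdown solutions like the one described above are often evaluated in the time domain by numerical inversion; the sketch below shows one generic option, the Stehfest algorithm, applied to a test transform with a known inverse (this is not the authors' own inversion scheme):

```python
import math

# Stehfest numerical inversion of a Laplace transform F(p):
#   f(t) ~ (ln 2 / t) * sum_{k=1..n} V_k * F(k * ln 2 / t), n even.
# Checked here against F(p) = 1/(p + 1), whose exact inverse is exp(-t).

def stehfest_weights(n: int):
    half = n // 2
    V = []
    for k in range(1, n + 1):
        total = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            total += (j ** half * math.factorial(2 * j)
                      / (math.factorial(half - j) * math.factorial(j)
                         * math.factorial(j - 1) * math.factorial(k - j)
                         * math.factorial(2 * j - k)))
        V.append((-1) ** (k + half) * total)
    return V

def stehfest_invert(F, t: float, n: int = 12) -> float:
    a = math.log(2.0) / t
    V = stehfest_weights(n)
    return a * sum(V[k - 1] * F(k * a) for k in range(1, n + 1))

F = lambda p: 1.0 / (p + 1.0)
print(stehfest_invert(F, 1.0))   # ~0.3679
print(math.exp(-1.0))            # exact value for comparison
```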

Drawdown Pursuant to Section 552(c)(2) of the Foreign Assistance Act of 1961, as Amended, of up to $25 Million in Commodities and Services from any Agency of the United States Government for Libyan Groups, such as the Transitional National Council, To Support Efforts To Protect Civilians and.

The groundwater-flow system of the Virginia Coastal Plain consists of areally extensive and interconnected aquifers. Large, regionally coalescing cones of depression that are caused by large withdrawals of water are found in these aquifers. Local groundwater systems are affected by regional pumping, because of the interactions within the system of aquifers. Accordingly, these local systems are affected by regional groundwater flow and by spatial and temporal differences in withdrawals by various users. A geographic information system was used to refine a regional groundwater-flow model around selected withdrawal centers. A method was developed in which drawdown maps that were simulated by the regional groundwater-flow model and the principle of superposition could be used to estimate drawdown at local sites. The method was applied to create drawdown maps in the Brightseat/Upper Potomac Aquifer for periods of 3, 6, 9, and 12 months for Chesapeake, Newport News, Norfolk, Portsmouth, Suffolk, and Virginia Beach, Virginia. Withdrawal rates were supplied by the individual localities and remained constant for each simulation period. This provides an efficient method by which the individual local groundwater users can determine the amount of drawdown produced by their wells in a groundwater system that is a water source for multiple users and that is affected by regional-flow systems.
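The superposition idea used above can be illustrated with a toy calculation: in a linear aquifer model, the total drawdown at a point is the sum of the drawdowns produced by each pumping center individually. The sketch below sums classical Theis contributions from three hypothetical wells; it stands in for, and is much simpler than, the regional model described in the abstract:

```python
import numpy as np
from scipy.special import exp1

# Superposition of Theis drawdowns from several pumping centers in a confined
# aquifer: s_total = sum_i Q_i/(4*pi*T) * W(u_i), u_i = r_i^2*S/(4*T*t).
# All parameter values and well locations are invented for illustration.

T = 5000.0      # transmissivity, ft^2/day
S = 1.0e-4      # storativity
t = 365.0       # days of pumping

wells = [        # (x_ft, y_ft, Q_ft3_per_day)
    (0.0, 0.0, 2.0e5),
    (8000.0, 0.0, 1.5e5),
    (4000.0, 6000.0, 1.0e5),
]

def total_drawdown(x: float, y: float) -> float:
    s_total = 0.0
    for xw, yw, Q in wells:
        r = np.hypot(x - xw, y - yw)
        u = r ** 2 * S / (4.0 * T * t)
        s_total += Q / (4.0 * np.pi * T) * exp1(u)   # exp1(u) is the Theis well function W(u)
    return s_total

print(total_drawdown(2000.0, 1000.0))   # drawdown in feet at one observation point
```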

Morris, John H. Kuchinsky, Allan Ferrin, Thomas E. Pico, Alexander R.

enhancedGraphics (http://apps.cytoscape.org/apps/enhancedGraphics) is a Cytoscape app that implements a series of enhanced charts and graphics that may be added to Cytoscape nodes. It enables users and other app developers to create pie, line, bar, and circle plots that are driven by columns in the Cytoscape Node Table. Charts are drawn using vector graphics to allow full-resolution scaling. PMID:25285206


It's no secret that children and YAs are clued in to graphic novels (GNs) and that comics-loving adults are positively giddy that this format is getting the recognition it deserves. Still, there is a whole swath of library card-carrying grown-up readers out there with no idea where to start. Splashy movies such as "300" and "Spider-Man" and their…

Dicomed Corporation was asked by NASA in the early 1970s to develop processing capabilities for recording images sent from Mars by Viking spacecraft. The company produced a film recorder which increased the intensity levels and the capability for color recording. This development led to a strong technology base resulting in sophisticated computer graphics equipment. Dicomed systems are used to record CAD (computer aided design) and CAM (computer aided manufacturing) equipment, to update maps and produce computer generated animation.

In 1833 John Herschel published an account of his graphical method for determining the orbits of double stars. He had hoped to be the first to determine such orbits, but Felix Savary in France and Johann Franz Encke in Germany beat him to the punch using analytical methods. Herschel was convinced, however, that his graphical method was much superior to analytical methods, because it used the judgment of the hand and eye to correct the inevitable errors of observation. Line graphs of the kind used by Herschel became common only in the 1830s, so Herschel was introducing a new method. He also found computation fatiguing and devised a "wheeled machine" to help him out. Encke was skeptical of Herschel's methods. He said that he lived for calculation and that the English would be better astronomers if they calculated more. It is difficult to believe that the entire Scientific Revolution of the 17th century took place without graphs and that only a few examples appeared in the 18th century. Herschel promoted the use of graphs, not only in astronomy, but also in the study of meteorology and terrestrial magnetism. Because he was the most prominent scientist in England, Herschel's advocacy greatly advanced graphical methods.

Bon, Bruce Seraji, Homayoun

Rover Graphical Simulator (RGS) is a package of software that generates images of the motion of a wheeled robotic exploratory vehicle (rover) across terrain that includes obstacles and regions of varying traversability. The simulated rover moves autonomously, utilizing reasoning and decision-making capabilities of a fuzzy-logic navigation strategy to choose its path from an initial to a final state. RGS provides a graphical user interface for control and monitoring of simulations. The numerically simulated motion is represented as discrete steps with a constant time interval between updates. At each simulation step, a dot is placed at the old rover position and a graphical symbol representing the rover is redrawn at the new, updated position. The effect is to leave a trail of dots depicting the path traversed by the rover, the distances between dots being proportional to the local speed. Obstacles and regions of low traversability are depicted as filled circles, with buffer zones around them indicated by enclosing circles. The simulated robot is equipped with onboard sensors that can detect regional terrain traversability and local obstacles out to specified ranges. RGS won the NASA Group Achievement Award in 2002.

Rudiger, Hollis Margaret Schliesman, Megan

School libraries serving children and teenagers today should be committed to collecting graphic novels to the extent that their budgets allow. However, the term " graphic novel" is enough to make some librarians--not to mention administrators and parents--pause. Graphic novels are simply book-length comics. They can be works of fiction or…

This manual describes the CALM TV graphics interface, a low-cost means of producing quality graphics on an ordinary TV. The system permits the output of data in graphic as well as alphanumeric form and the input of data from the face of the TV using a light pen. The integrated circuits required in the interface can be obtained from standard…

Progress is reported concerning the use of computer-controlled graphical displays in the areas of radiation diffusion and hydrodynamics, and general ventricular dynamics. Progress is continuing on the use of computer graphics in architecture. Some progress in halftone graphics is reported, with no basic developments presented. Colored halftone perspective pictures are being used to represent multivariable situations. Nonlinear waveform processing is

The decision to add graphic novels, and particularly the Japanese style called manga, was one the author had debated for a long time. In this article, the author shares her experience of purchasing graphic novels and manga to add to her library collection. She describes how graphic novels and manga have revitalized the library.

EASI (Estimate of Adversary Sequence Interruption) is an analytical technique for measuring the effectiveness of physical protection systems. EASI Graphics is a computer graphics extension of EASI which provides a capability for performing sensitivity and trade-off analyses of the parameters of a physical protection system. This document reports on the implementation of EASI Graphics and illustrates its application with some examples.

Keith, M. K. Wallick, R. Bangs, B. L. Taylor, G. Gordon, G. W. White, J. S. Mangano, J.

Reservoir drawdowns at Fall Creek Lake, Oregon lower lake levels to facilitate downstream passage of juvenile spring Chinook salmon through the 55-m high dam. Since 2011, annual fall and winter drawdowns have improved fish passage, but temporarily lowering the lake nearly to streambed has increased downstream transport of predominantly fine ( drawdown and are colonized with vegetation, such as reed canary grass, thereby increasing the trapping efficiency for fine sediment during the following year's drawdown . Fine sediment accumulation in off-channel areas has reduced the available rearing area for some salmonid species but may provide alternative habitat suitable for other native aquatic species such as Pacific lamprey ammocoetes that live in fine substrates for several years. Changes in off-channel aquatic habitat and bare gravel bars related to the drawdowns are small relative to the historically dynamic conditions on the Middle Fork (presently stable). Fall Creek, historically and presently stable, has fewer off-channel areas than the Middle Fork, so filling those areas has greater reach-scale impacts on habitat. Locally, deposition measured following the 2015 drawdown showed most aggradation on high-elevation gravel bars and low-elevation floodplains occurred when flows were higher on Fall Creek ( 2,000 ft3/s) and the Middle Fork (near bankfull

Straková, Petra Anttila, Jani Spetz, Peter Kitunen, Veikko Tapanila, Tarja Laiho, Raija

There is increasing evidence that changes in the species composition and structure of plant communities induced by global change will have much more impact on plant-mediated carbon cycling than any phenotypic responses. These impacts are largely mediated by shifts in litter quality. There is little documentation of these changes so far, due to the relatively long time scale required for their direct observation. Here, we examine the changes in litter inputs induced by persistent water-level drawdown in boreal peatland sites. Peatlands contain a major proportion of the terrestrial carbon pool, and it is thus important to be able to predict their behaviour and role in the global C cycle under different global change factors. We studied the effects of short-term (ca. 4 years) and long-term (ca. 40 years) persistent water level (WL) drawdown on the quantity and chemical quality of above-ground plant litter inputs at three sites: bog, oligotrophic fen and mesotrophic fen. The parameters used to characterize litter quality included various extractable substances, cellulose, holocellulose, composition of hemicellulose (neutral sugars, uronic acids), lignin, CuO oxidation phenolic products, and concentrations of C, nitrogen (N), phosphorus (P), potassium, magnesium, manganese and calcium. Four different groups of litter were clearly distinct based on their chemical quality: foliar litters, graminoids, mosses and woody litters. The pristine conditions were characterized by Sphagnum moss and graminoid litter. Following short-term WL drawdown, changes in the quality and quantity of litter inputs were small. Following long-term WL drawdown, total litter inputs dramatically increased, due to increased tree litter inputs, and the litter type composition greatly changed. These changes resulted in annual inputs of 1901-2010 kg·ha⁻¹ C, 22-24 kg·ha⁻¹ N, 1.5-2.2 kg·ha⁻¹ P, 967-1235 kg·ha⁻¹ lignin and lignin-like compounds and 254-300 kg·ha⁻¹ water solubles after long-term WL

Johnson, Kenneth S. Plant, Joshua N. Dunne, John P. Talley, Lynne D. Sarmiento, Jorge L.

Annual nitrate cycles have been measured throughout the pelagic waters of the Southern Ocean, including regions with seasonal ice cover and southern hemisphere subtropical zones. Vertically resolved nitrate measurements were made using in situ ultraviolet spectrophotometer (ISUS) and submersible ultraviolet nitrate analyzer (SUNA) optical nitrate sensors deployed on profiling floats. Thirty-one floats returned 40 complete annual cycles. The mean nitrate profile from the month with the highest winter nitrate minus the mean profile from the month with the lowest nitrate yields the annual nitrate drawdown. This quantity was integrated to 200 m depth and converted to carbon using the Redfield ratio to estimate annual net community production (ANCP) throughout the Southern Ocean south of 30°S. A well-defined, zonal mean distribution is found with highest values (3-4 mol C m⁻² yr⁻¹) from 40 to 50°S. Lowest values are found in the subtropics and in the seasonal ice zone. The area-weighted mean was 2.9 mol C m⁻² yr⁻¹ for all regions south of 40°S. Cumulative ANCP south of 50°S is 1.3 Pg C yr⁻¹. This represents about 13% of global ANCP in about 14% of the global ocean area. Plain Language Summary: This manuscript reports on 40 annual cycles of nitrate observed by chemical sensors on SOCCOM profiling floats. The annual drawdown in nitrate concentration by phytoplankton is used to assess the spatial variability of annual net community production in the Southern Ocean. This ANCP is a key component of the global carbon cycle and it exerts an important control on atmospheric carbon dioxide. We show that the results are consistent with our prior understanding of Southern Ocean ANCP, which has required decades of observations to accumulate. The profiling floats now enable annual resolution of this key process. The results also highlight spatial variability in ANCP in the Southern Ocean.
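The conversion from nitrate drawdown to annual net community production described above can be sketched in a few lines of Python: integrate the drawdown over the top 200 m and multiply by the Redfield C:N ratio (106:16). The profile used here is invented for illustration and is not SOCCOM data:

```python
import numpy as np

# ANCP from seasonal nitrate drawdown: integrate the drawdown profile over the
# top 200 m (1 umol/L = 1 mmol/m^3) and convert N to C with the Redfield ratio.
# The drawdown profile below is made up for illustration.

depths = np.array([0.0, 25.0, 50.0, 75.0, 100.0, 150.0, 200.0])       # m
drawdown_umol_per_L = np.array([6.0, 5.5, 4.5, 3.0, 1.5, 0.5, 0.0])    # winter minus summer nitrate

nitrate_drawdown_mmol_m2 = np.trapz(drawdown_umol_per_L, depths)       # mmol N per m^2
REDFIELD_C_TO_N = 106.0 / 16.0
ancp_mol_C_m2_yr = nitrate_drawdown_mmol_m2 / 1000.0 * REDFIELD_C_TO_N

print(f"ANCP ~ {ancp_mol_C_m2_yr:.1f} mol C m^-2 yr^-1")               # ~3.2 for this profile
```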

Meyer, Patrick E. Jaap, John P.

NASA's Experiment Scheduling Program (ESP), which has been used for approximately 12 Spacelab missions, is being enhanced with the addition of a Graphical Timeline Editor. The GTE Clipboard, as it is called, was developed to demonstrate new technology that will guide the development of International Space Station Alpha's Payload Planning System and support the remaining Spacelab missions. ESP's GTE Clipboard is written in C using MIT's X Window System (X11R5) and follows the OSF/Motif Style Guide, Revision 1.2.

Blumenthal, Benjamin J. Zhan, Hongbin

We have derived a rapidly computed analytical solution for drawdown caused by a partially or fully penetrating directional wellbore (vertical, horizontal, or slant) via the Green's function method. The mathematical model assumes an anisotropic, homogeneous, confined, box-shaped aquifer. Any dimension of the box can have one of six possible boundary conditions: 1) both sides no-flux; 2) one side no-flux, one side constant-head; 3) both sides constant-head; 4) one side no-flux; 5) one side constant-head; 6) free boundary conditions. The solution has been optimized for rapid computation via Poisson Resummation, derivation of convergence rates, and numerical optimization of integration techniques. Upon application of the Poisson Resummation method, we were able to derive two sets of solutions with inverse convergence rates, namely an early-time rapidly convergent series (solution-A) and a late-time rapidly convergent series (solution-B). From this work we were able to link the Green's function method (solution-B) back to image well theory (solution-A). We then derived an equation defining when the convergence rates of solution-A and solution-B are equal, which we termed the switch time. Utilizing the more rapidly convergent solution at the appropriate time, we obtained rapid convergence at all times. We have also shown that one may simplify each of the three infinite series for the three-dimensional solution to 11 terms and still maintain a maximum relative error of less than 10⁻¹⁴.
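
The link back to image well theory can be illustrated with the classical Theis solution and a single mirror-image well across a no-flux boundary. This is only a schematic of the simplest case, not the authors' box-shaped, directional-well solution, and all parameter values are invented for illustration.

```python
import numpy as np
from scipy.special import exp1  # exponential integral E1, the Theis well function

def theis_drawdown(r, t, Q, T, S):
    """Theis drawdown (m) at radial distance r (m) and time t (s)
    for pumping rate Q (m^3/s), transmissivity T (m^2/s), storativity S (-)."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Illustrative parameters (not from the paper).
Q, T, S = 0.01, 1e-3, 1e-4          # m^3/s, m^2/s, dimensionless
x_well, y_well = 0.0, 0.0           # pumping well
x_bnd = 50.0                        # no-flux boundary at x = 50 m
x_obs, y_obs = 30.0, 0.0            # observation point
t = 3600.0 * 24                     # one day of pumping

# Image-well theory: a no-flux boundary is represented by an identical
# pumping well mirrored across the boundary, and the drawdowns superpose.
r_real = np.hypot(x_obs - x_well, y_obs - y_well)
r_image = np.hypot(x_obs - (2 * x_bnd - x_well), y_obs - y_well)
s_total = theis_drawdown(r_real, t, Q, T, S) + theis_drawdown(r_image, t, Q, T, S)

print(f"Drawdown with no-flux boundary: {s_total:.3f} m")
```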

Earles, J Mason Théroux-Rancourt, Guillaume Gilbert, Matthew E McElrone, Andrew J Brodersen, Craig R

In agricultural and natural systems, diffuse light can enhance plant primary productivity due to deeper penetration into and greater irradiance of the entire canopy. However, for individual sun-grown leaves from three species, photosynthesis is actually less efficient under diffuse compared with direct light. Despite its potential impact on canopy-level productivity, the mechanism for this leaf-level diffuse light photosynthetic depression effect is unknown. Here, we investigate whether the spatial distribution of light absorption relative to electron transport capacity in sun- and shade-grown sunflower (Helianthus annuus) leaves underlies its previously observed diffuse light photosynthetic depression. Using a new one-dimensional porous medium finite element gas-exchange model parameterized with light absorption profiles, we found that weaker penetration of diffuse versus direct light into the mesophyll of sun-grown sunflower leaves led to a more heterogeneous saturation of electron transport capacity and lowered its CO2 concentration drawdown capacity in the intercellular airspace and chloroplast stroma. This decoupling of light availability from photosynthetic capacity under diffuse light is sufficient to generate an 11% decline in photosynthesis in sun-grown but not shade-grown leaves, primarily because thin shade-grown leaves distribute diffuse and direct light similarly throughout the mesophyll. Finally, we illustrate how diffuse light photosynthetic depression could overcome enhancement in canopies with low light extinction coefficients and/or leaf area, pointing toward a novel direction for future research.

Fransner, Filippa Gustafsson, Erik Tedesco, Letizia Vichi, Marcello Hordoir, Robinson Roquet, Fabien Spilling, Kristian Kuznetsov, Ivan Eilola, Kari Mörth, Carl-Magnus Humborg, Christoph Nycander, Jonas

High inputs of nutrients and organic matter make coastal seas places of intense air-sea CO2 exchange. Due to their complexity, however, the role of coastal seas in the global air-sea CO2 exchange is still uncertain. Here, we investigate the role of phytoplankton stoichiometric flexibility and extracellular DOC production for the seasonal nutrient and CO2 partial pressure (pCO2) dynamics in the Gulf of Bothnia, Northern Baltic Sea. A 3-D ocean biogeochemical-physical model with variable phytoplankton stoichiometry is implemented in the area for the first time and validated against observations. By simulating non-Redfieldian internal phytoplankton stoichiometry and a relatively large production of extracellular dissolved organic carbon (DOC), the model adequately reproduces observed seasonal cycles in macronutrients and pCO2. The uptake of atmospheric CO2 is underestimated by 50% if the Redfield ratio is instead used to determine carbon assimilation, as in other Baltic Sea models currently in use. The model further suggests, based on the observed drawdown of pCO2, that observational estimates of organic carbon production in the Gulf of Bothnia, derived with the 14C method, may be heavily underestimated. We conclude that stoichiometric variability and the uncoupling of carbon and nutrient assimilation have to be considered in order to better understand the carbon cycle in coastal seas.

Ramos, Gustavo Carrera, Jesus Gómez, Susana Minutti, Carlos Camacho, Rodolfo

Pumping test interpretation is an art that involves dealing with noise coming from multiple sources and with conceptual model uncertainty. Interpretation is greatly helped by diagnostic plots, which include drawdown data and their derivative with respect to log-time, called the log-derivative. Log-derivatives are especially useful to complement geological understanding in helping to identify the underlying model of fluid flow because they are sensitive to subtle variations in the response to pumping of aquifers and oil reservoirs. The main problem with their use lies in the calculation of the log-derivatives themselves, which may display fluctuations when data are noisy. To overcome this difficulty, we propose a variational regularization approach based on the minimization of a functional consisting of two terms: one ensuring that the computed log-derivatives honor measurements and one that penalizes fluctuations. The minimization leads to a diffusion-like differential equation in the log-derivatives, with boundary conditions that are appropriate for well hydraulics (i.e., radial flow, wellbore storage, fractal behavior, etc.). We have solved this equation by finite differences. We tested the methodology on two synthetic examples, showing that a robust solution is obtained. We also report the resulting log-derivative for a real case.
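
For orientation, the following is a minimal sketch of the conventional finite-difference log-derivative that such regularization is designed to improve upon: a central-difference estimate of ds/d(ln t) on synthetic, noisy drawdown data. The data and the late-time slope are invented; this is not the paper's variational method.

```python
import numpy as np

# Synthetic Theis-like late-time drawdown with measurement noise (illustrative only).
t = np.logspace(1, 5, 60)                        # time (s)
s = 0.5 * np.log(t) + 1.0                        # drawdown with slope 0.5 per ln-cycle
s_noisy = s + np.random.normal(0.0, 0.02, t.size)

def log_derivative(t, s):
    """Central-difference derivative ds/d(ln t), the conventional
    log-derivative plotted on diagnostic plots."""
    lnt = np.log(t)
    ds = np.empty_like(s)
    ds[1:-1] = (s[2:] - s[:-2]) / (lnt[2:] - lnt[:-2])
    ds[0] = (s[1] - s[0]) / (lnt[1] - lnt[0])
    ds[-1] = (s[-1] - s[-2]) / (lnt[-1] - lnt[-2])
    return ds

deriv = log_derivative(t, s_noisy)
# For infinite-acting radial flow the log-derivative plateaus at Q/(4*pi*T);
# noise makes the raw estimate fluctuate, which is what regularization addresses.
print(f"Median log-derivative: {np.median(deriv):.3f} (true value 0.5)")
```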

Due to the significant and increasing demand for groundwater in Texas, evaluations of groundwater pumping impacts play an important role in water planning and management. Since 1951, Groundwater Conservation Districts (GCDs) have managed aquifers across much of the state. Among their functions, GCDs can regulate the spacing of new wells from existing wells, but they must balance a landowner's ability to drill a new well with the expected impacts on existing wells. We performed studies for three GCDs to provide, based on representative hydraulic properties, the expected impacts of different well spacing and production rate relationships. This was done with the analytic element groundwater modeling code TTIM. These evaluations account for drawdown caused by a single well and for cumulative drawdown caused by many wells. The results consist of a series of plots that allow decision-makers and GCD representatives to understand the impacts of potential well spacing rule options on existing and prospective well owners.

Howard, Rebecca J. Allain, Larry

Disturbance is an important natural process in the creation and maintenance of wetlands. Water depth manipulation and prescribed fire are two types of disturbance commonly used by humans to influence vegetation succession and composition in wetlands with the intention of improving wildlife habitat value. A 6,475-hectare (ha) impoundment was constructed in 1943 on Lacassine National Wildlife Refuge in southwest Louisiana to create freshwater wetlands as wintering waterfowl habitat. Ten years after construction of the impoundment, called Lacassine pool, was completed, refuge staff began expressing concerns about increasing emergent vegetation cover, organic matter accumulation, and decreasing area of open water within the pool. Because the presence of permanent standing water impedes actions that can address these concerns, a small impoundment in which water depth could be manipulated was created within the pool. The 283-ha subimpoundment, called Unit D, was constructed in 1989. Water was pumped from Unit D in 1990, and the unit was permanently reflooded about 3 years later. Four prescribed fires were applied during the drawdown. A study was initiated in 1990 to investigate the effect of the experimental drawdown on vegetation and soils in Unit D. Four plant community types were described, and cores were collected to measure the depth of the soil organic layer. A second study of Unit D was conducted in 1997, 4 years after the unit was reflooded, using the same plots and similar sampling methods. This report presents an analysis and synthesis of the data from the two studies and provides an evaluation of the impact of the management techniques applied. We found that plant community characteristics often differed among the four communities and varied with time. Species richness increased in two of the communities, and total aboveground biomass increased in all four during the drawdown. These changes, however, did not persist after Unit D was reflooded, as observed by 1997.

Berg, Steven J. Illman, Walter A.

Interpretation of pumping tests in unconfined aquifers has largely been based on analytical solutions that disregard aquifer heterogeneity. In this study, we investigate whether the prediction of drawdown responses in a heterogeneous unconfined aquifer and the unsaturated zone above it with a variably saturated groundwater flow model can be improved by including information on hydraulic conductivity (K) and specific storage (Ss) from transient hydraulic tomography (THT). We also investigate whether these predictions are affected by the use of unsaturated flow parameters estimated through laboratory hanging column experiments or through calibration of in situ drainage curves. To investigate these issues, we designed and conducted laboratory sandbox experiments to characterize the saturated and unsaturated properties of a heterogeneous unconfined aquifer. Specifically, we conducted pumping tests under fully saturated conditions and interpreted the drawdown responses by treating the medium as either homogeneous or heterogeneous. We then conducted another pumping test and allowed the water table to drop, similar to a pumping test in an unconfined aquifer. Simulations conducted using a variably saturated flow model revealed: (1) homogeneous parameters in the saturated and unsaturated zones have difficulty predicting the responses of the heterogeneous unconfined aquifer; (2) heterogeneous saturated hydraulic parameter distributions obtained via THT yielded significantly improved drawdown predictions in the saturated zone of the unconfined aquifer; and (3) considering heterogeneity of unsaturated zone parameters produced a minor improvement in predictions in the unsaturated zone, but not in the saturated zone. These results seem to support the finding by Mao et al. (2011) that spatial variability in the unsaturated zone plays a minor role in the formation of the S-shaped drawdown-time curve observed during pumping in an unconfined aquifer.

The vertical distribution of hydraulic conductivity in layered aquifer systems commonly is needed for model simulations of ground-water flow and transport. In previous studies, time-drawdown data or flowmeter data were used individually, but not in combination, to estimate hydraulic conductivity. In this study, flowmeter data and time-drawdown data collected from a long-screened production well and nearby monitoring wells are combined to estimate the vertical distribution of hydraulic conductivity in a complex multilayer coastal aquifer system. Flowmeter measurements recorded as a function of depth delineate nonuniform inflow to the wellbore, and this information is used to better discretize the vertical distribution of hydraulic conductivity using analytical and numerical methods. The time-drawdown data complement the flowmeter data by giving insight into the hydraulic response of aquitards when flow rates within the wellbore are below the detection limit of the flowmeter. The combination of these field data allows for the testing of alternative conceptual models of radial flow to the wellbore.

Stumpp, Christine Hose, Grant C.

The abstraction of groundwater is a global phenomenon that directly threatens groundwater ecosystems. Despite the global significance of this issue, the impact of groundwater abstraction and the lowering of groundwater tables on biota is poorly known. The aim of this study is to determine the impacts of groundwater drawdown in unconfined aquifers on the distribution of fauna close to the water table, and the tolerance of groundwater fauna to sediment drying once water levels have declined. A series of column experiments was conducted to investigate the depth distribution of different stygofauna (Syncarida and Copepoda) under saturated conditions and after fast and slow water table declines. Further, the survival of stygofauna under conditions of reduced sediment water content was tested. The distribution and response of stygofauna to water drawdown were taxon-specific, but a common response was that some fauna were stranded by the water level decline. Likewise, the survival of stygofauna under different levels of sediment saturation was variable. Syncarida were better able to tolerate drying conditions than the Copepoda, but mortality of all groups increased with decreasing sediment water content. The results of this work provide new understanding of the response of fauna to water table drawdown. Such improved understanding is necessary for the sustainable use of groundwater, and allows for targeted strategies to better manage groundwater abstraction and maintain groundwater biodiversity.

Li, Zhe Zhang, Zengyu Lin, Chuxue Chen, Yongbo Wen, Anbang Fang, Fang

The Three Gorges Reservoir (TGR) in China has large water level variations, creating about 393 km² of drawdown area seasonally. Farming in the drawdown area during the low water level period is common in the TGR. Field experiments on soil-air greenhouse gas (GHG) emissions in fallow grassland, a peanut field and a corn field in the reservoir drawdown area at Lijiaba Bay of the Pengxi River, a tributary of the Yangtze River in the TGR, were carried out from March through September 2011. The experimental fields in the drawdown area had the same land use history. They were adjacent to each other horizontally within a narrow range of elevation, i.e. 167-169 m, which ensured that they had the same duration of reservoir inundation. Unflooded grassland with the same land-use history was selected as a control. Results showed that the mean soil CO2 emission in the drawdown area was 10.38 ± 0.97 mmol m⁻² h⁻¹. The corresponding CH4 and N2O fluxes were -8.61 ± 2.15 μmol m⁻² h⁻¹ and 3.42 ± 0.80 μmol m⁻² h⁻¹, respectively. Significant differences and monthly variations among land uses in the drawdown area treatments and the unflooded grassland were evident. These were driven by changes in soil physiochemical properties, which were altered by reservoir operation and farming. In particular, N-fertilization in the corn field stimulated N2O emissions from March to May. In terms of global warming potential (GWP), the corn field in the drawdown area had the maximum GWP, mainly due to N-fertilization. Gross GWP in the peanut field in the drawdown area was about 7% lower than that in fallow grassland. Compared to unflooded grassland, reservoir operation created a positive net effect on GHG emissions and GWPs in the drawdown area. However, the selection of crop species, e.g. peanut, and best farming practices, e.g. prohibiting N-fertilization, could potentially mitigate GWPs in the drawdown area. In net GHG emission evaluations for the TGR, farming practices in the drawdown area shall be taken into account.
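
As a back-of-the-envelope illustration of how such fluxes are combined into a warming potential, the sketch below converts the reported mean CO2, CH4 and N2O fluxes into a single CO2-equivalent flux. The 100-year GWP factors (25 for CH4, 298 for N2O; IPCC AR4 values) and molar masses are standard, but this simple aggregation is only an assumption-laden sketch, not the paper's GWP accounting.

```python
# Mean fluxes reported for the drawdown area (per m^2 per hour).
co2_flux_mmol = 10.38      # mmol CO2 m^-2 h^-1
ch4_flux_umol = -8.61      # umol CH4 m^-2 h^-1 (negative = uptake)
n2o_flux_umol = 3.42       # umol N2O m^-2 h^-1

# Convert molar fluxes to mass fluxes (mg m^-2 h^-1) using molar masses.
co2_mg = co2_flux_mmol * 44.01          # g/mol CO2
ch4_mg = ch4_flux_umol * 1e-3 * 16.04   # g/mol CH4
n2o_mg = n2o_flux_umol * 1e-3 * 44.01   # g/mol N2O

# 100-year global warming potentials (IPCC AR4): CO2 = 1, CH4 = 25, N2O = 298.
gwp_co2_eq = co2_mg * 1.0 + ch4_mg * 25.0 + n2o_mg * 298.0
print(f"Combined flux: {gwp_co2_eq:.1f} mg CO2-eq m^-2 h^-1")
```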

In all personal computer applications, whether for private or professional use, the decision of which "brand" of computer to buy is of central importance. In the USA, Apple computers are mainly used in universities, while in Europe computers of the so-called "industry standard" by IBM (or clones thereof) have been increasingly used for many years. Independent of any brand-name considerations, the computer components purchased must meet the current (and projected) needs of the user. Graphics capabilities and standards, processor speed, the use of co-processors, as well as input and output devices such as the "mouse", printers and scanners are discussed. This overview is meant to serve as a decision aid. Potential users are given a short but detailed summary of current technical features.

This library is a set of subroutines designed for vector plotting to CRTs, plotters, dot matrix, and laser printers. LONGLIB subroutines are invoked by program calls similar to standard CALCOMP routines. In addition to the basic plotting routines, LONGLIB contains an extensive set of routines to allow viewport clipping, extended character sets, graphic input, shading, polar plots, and 3-D plotting with or without hidden line removal. LONGLIB capabilities include surface plots, contours, histograms, logarithmic axes, world maps, and seismic plots. LONGLIB includes master subroutines, which are self-contained series of commonly used individual subroutines. When invoked, the master routine will initialize the plotting package, plot multiple curves, scatter plots, log plots, 3-D plots, etc., and then close the plot package, all with a single call. Supported devices include VT100 terminals equipped with Selanar GR100 or GR100+ boards, VT125s, VT240s, VT220 terminals equipped with Selanar SG220, Tektronix 4010/4014 or 4107/4109 and compatibles, and Graphon GO-235 terminals. Dot matrix printer output is available by using the provided raster scan conversion routines for DEC LA50, Printronix printers, and high- or low-resolution Trilog printers. Other output devices include QMS laser printers, PostScript-compatible laser printers, and HPGL-compatible plotters. The LONGLIB package includes the graphics library source code, an on-line help library, scan converter and meta file conversion programs, and command files for installing, creating, and testing the library. The latest version, 5.0, is significantly enhanced and has been made more portable. Also, the new version's meta file format has been changed and is incompatible with previous versions. A conversion utility is included to port the old meta files to the new format. Color terminal plotting has been incorporated. LONGLIB is written in FORTRAN 77 for batch or interactive execution and has been implemented on a DEC VAX series

The BCS Interactive Graphics System (BIG System) approach to graphics was presented, along with several significant engineering applications. The BIG System precompiler, the graphics support library, and the function requirements of graphics applications are discussed. It was concluded that graphics standardization and a device independent code can be developed to assure maximum graphic terminal transferability.

Graly, Joseph A. Drever, James I. Humphrey, Neil F.

In order to constrain CO2 fluxes from biogeochemical processes in subglacial environments, we model the evolution of pH and alkalinity over a range of subglacial weathering conditions. We show that subglacial waters reach or exceed atmospheric pCO2 levels when atmospheric gases are able to partially access the subglacial environment. Subsequently, closed-system oxidation of sulfides is capable of producing pCO2 levels well in excess of atmospheric levels without any input from the decay of organic matter. We compared this model to published pH and alkalinity measurements from 21 glaciers and ice sheets. Most subglacial waters are near atmospheric pCO2 values. The assumption of an initial period of open-system weathering requires substantial organic carbon oxidation in only 4 of the 21 analyzed ice bodies. If the subglacial environment is assumed to be closed to any input of atmospheric gas, large organic carbon inputs are required in nearly all cases. These closed-system assumptions imply that on the order of 10 g m⁻² yr⁻¹ of organic carbon is removed from a typical subglacial environment, a rate too high to represent soil carbon built up over previous interglacial periods and far in excess of fluxes of surface-deposited organic carbon. Partial open-system input of atmospheric gases is therefore likely in most subglacial environments. The decay of organic carbon is still important to subglacial inorganic chemistry where substantial reserves of ancient organic carbon are found in bedrock. In glaciers and ice sheets on silicate bedrock, substantial long-term drawdown of atmospheric CO2 occurs.
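
To show how pCO2 can be back-calculated from the kind of field measurements mentioned above, here is a minimal sketch that assumes alkalinity is purely carbonate alkalinity and uses nominal freshwater equilibrium constants at 25 °C and 1 atm. The constants, the example water composition, and the simplifications are assumptions for illustration, not the values used in the paper's model.

```python
# Nominal freshwater equilibrium constants at 25 degC.
K1 = 10**-6.35    # H2CO3* <-> H+ + HCO3-        (mol/L)
K2 = 10**-10.33   # HCO3-  <-> H+ + CO3^2-       (mol/L)
KH = 10**-1.47    # Henry's law constant for CO2 (mol/(L*atm))

def pco2_from_alk_ph(alkalinity, ph):
    """Approximate pCO2 (atm) from carbonate alkalinity (eq/L) and pH,
    ignoring borate, organic acids, and other non-carbonate contributions."""
    h = 10.0**(-ph)
    # Carbonate alkalinity = [HCO3-] + 2[CO3^2-] = [HCO3-] * (1 + 2*K2/h)
    hco3 = alkalinity / (1.0 + 2.0 * K2 / h)
    co2_aq = hco3 * h / K1          # dissolved CO2 from the first dissociation
    return co2_aq / KH              # Henry's law

# Hypothetical dilute subglacial water: alkalinity 300 ueq/L at pH 7.5.
pco2 = pco2_from_alk_ph(300e-6, 7.5)
print(f"pCO2 ~ {pco2 * 1e6:.0f} uatm (modern atmosphere ~ 420 uatm)")
```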

Speelman, E N Van Kempen, M M L Barke, J Brinkhuis, H Reichart, G J Smolders, A J P Roelofs, J G M Sangiorgi, F de Leeuw, J W Lotter, A F Sinninghe Damsté, J S

Enormous quantities of the free-floating freshwater fern Azolla grew and reproduced in situ in the Arctic Ocean during the middle Eocene, as was demonstrated by microscopic analysis of microlaminated sediments recovered from the Lomonosov Ridge during Integrated Ocean Drilling Program (IODP) Expedition 302. The timing of the Azolla phase (approximately 48.5 Ma) coincides with the earliest signs of onset of the transition from a greenhouse towards the modern icehouse Earth. The sustained growth of Azolla, currently ranking among the fastest growing plants on Earth, in a major anoxic oceanic basin may have contributed to decreasing atmospheric pCO2 levels via burial of Azolla-derived organic matter. The consequences of these enormous Azolla blooms for regional and global nutrient and carbon cycles are still largely unknown. Cultivation experiments have been set up to investigate the influence of elevated pCO2 on Azolla growth, showing a marked increase in Azolla productivity under elevated (760 and 1910 ppm) pCO2 conditions. The combined results of organic carbon, sulphur, nitrogen content and ¹⁵N and ¹³C measurements of sediments from the Azolla interval illustrate the potential contribution of nitrogen fixation in a euxinic stratified Eocene Arctic. Flux calculations were used to quantitatively reconstruct the potential storage of carbon (0.9-3.5 × 10¹⁸ g C) in the Arctic during the Azolla interval. It is estimated that storing 0.9 × 10¹⁸ to 3.5 × 10¹⁸ g carbon would result in a 55 to 470 ppm drawdown of pCO2 under Eocene conditions, indicating that the Arctic Azolla blooms may have had a significant effect on global atmospheric pCO2 levels through enhanced burial of organic matter.

Woith, Heiko Chiodini, Giovanni Mangiacapra, Annarita Wang, Rongjiang

The hydrothermal system beneath Campi Flegrei is strongly affected by sub-surface processes, as manifested by a geothermal "plume" below Solfatara associated with the formation of mud pools (Fangaia), fumaroles (Bocca Grande, Pisciarelli), and thermal springs (Agnano). Within the frame of MED-SUV (the MED-SUV project has received funding from the European Union Seventh Framework Programme FP7 under Grant agreement no 308665), pressure transients in the hydrothermal system of Campi Flegrei are being continuously monitored at fumaroles, mud pools, hot springs, and geothermal wells. In total, water level and temperature are recorded at 8 sites across the hydrothermal plume along a profile aligned between Agnano Termal in the east and Fangaia in the west. Autonomous devices are used to record the water level and water temperature at 10-minute intervals. At the Fangaia mud pool, water level and water temperature are dominantly controlled by rain water; thus, the pool is refilled episodically. In contrast, the water level at a well producing hot water (82°C) for the Pisciarelli tennis club drops and recovers at nearly regular intervals. The induced water level changes are of the order of 1-2 m and 3-4 m in the case of the mud pool and the hot-water well, respectively. At first glance, both monitoring sites might seem to be of little use for assessing natural changes in the Campi Flegrei fluid system. On second thought, however, both time series provide a unique opportunity to monitor potential permeability changes in the aquifer system. A similar approach had been proposed to deduce earthquake-related permeability changes from Earth tide variations. Contrary to the indirect Earth tide approach, we have the chance to estimate the hydraulic aquifer properties directly from our monitoring data, since each time series contains a sequence of discrete hydraulic tests, namely drawdown tests and refill experiments. Although our Cooper-Jacob approach is really crude, we obtained reasonable permeability estimates.
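
The Cooper-Jacob approach referred to here is the classical straight-line analysis, in which transmissivity follows from the drawdown change per log cycle of time, T = 2.3 Q / (4 π Δs). The sketch below applies that formula to an invented pumping rate and drawdown series; none of the numbers are Campi Flegrei monitoring data.

```python
import numpy as np

def cooper_jacob_transmissivity(times, drawdowns, Q):
    """Estimate transmissivity (m^2/s) from late-time drawdown data using the
    Cooper-Jacob approximation s = (2.3*Q/(4*pi*T)) * log10(2.25*T*t/(r^2*S)),
    so the slope per log10 cycle gives T = 2.3*Q / (4*pi*slope)."""
    slope, _ = np.polyfit(np.log10(times), drawdowns, 1)   # m per log cycle
    return 2.3 * Q / (4.0 * np.pi * slope)

# Invented drawdown test: 2 L/s pumping, water level falling ~0.8 m per log cycle.
Q = 0.002                                        # m^3/s
t = np.array([600.0, 1800.0, 3600.0, 7200.0, 14400.0])   # s
s = np.array([1.10, 1.48, 1.72, 1.96, 2.20])              # m

T = cooper_jacob_transmissivity(t, s, Q)
print(f"Estimated transmissivity: {T:.2e} m^2/s")
```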

Torfstein, Adi Goldstein, Steven L. Kushnir, Yochanan Enzel, Yehouda Haug, Gerald Stein, Mordechai

Sediment cores recovered by the Dead Sea Deep Drilling Project (DSDDP) from the deepest basin of the hypersaline, terminal Dead Sea (lake floor at ∼725 m below mean sea level) reveal the detailed climate history of the lake's watershed during the last interglacial period (Marine Isotope Stage 5, MIS5). The results document both a more intense aridity during MIS5 than during the Holocene and the moderating impacts derived from the intense MIS5e African monsoon. Early MIS5e (∼133-128 ka) was dominated by hyperarid conditions in the Eastern Mediterranean-Levant, indicated by thick halite deposition triggered by a lake-level drop. Halite deposition was interrupted, however, during the MIS5e peak (∼128-122 ka) by sequences of flood deposits, which are coeval with the timing of the intense precession-forced African monsoon that generated Mediterranean sapropel S5. A subsequent weakening of this humidity source triggered extreme aridity in the Dead Sea watershed and resulted in the biggest known lake level drawdown in its history, reflected by the deposition of thick salt layers and a capping pebble layer corresponding to a hiatus at ∼116-110 ka. The DSDDP core provides the first evidence for a direct association of the African monsoon with mid-subtropical-latitude climate systems affecting the Dead Sea watershed. Combined with the coeval deposition of Arabian and southern Negev speleothems, Arava travertines, and the calcification of Red Sea corals, the evidence points to a climatically wet corridor that could have facilitated Homo sapiens migration "out of Africa" during the MIS5e peak. The hyperaridity documented during MIS5e may provide an important analogue for future warming of arid regions of the Eastern Mediterranean-Levant.

Roberts, Kathryn L. Brugar, Kristy A. Norman, Rebecca R.

In this article, we present the Graphical Rating Tool (GRT), which is designed to evaluate the graphical devices that are commonly found in content-area, non-fiction texts, in order to identify books that are well suited for teaching about those devices. We also present a "best of" list of science and social studies books, which includes…

Bork, Alfred M. Ballard, Richard

New, more versatile and inexpensive terminals will make computer graphics more feasible in science instruction than before. This paper describes the use of graphics in physics teaching at the University of California at Irvine. Commands and software are detailed in established programs, which include a lunar landing simulation and a program which…

An experiment was reported which demonstrates that graphics are more effective than symbols in acquiring algebra concepts. The second phase of the study demonstrated that graphics in high school textbooks were reliably classified in a matrix of 480 functional stimulus-response categories. Suggestions were made for extending the classification…

Discusses the growing importance of the use of Graphic User Interfaces (GUIs) with microcomputers and online services. Highlights include the development of graphics interfacing with microcomputers; CD-ROM databases; an evaluation of HyperCard as a potential interface to electronic mail and online commercial databases; and future possibilities.…

This Computer Graphics Laboratory houses an IBM 1130 computer, U.C.C. plotter, printer, card reader, two key punch machines, and seminar-type classroom furniture. A "General Drafting Graphics System" (GDGS) is used, based on repetitive use of basic coordinate and plot generating commands. The system is used by 12 institutions of higher education…

After a 1977 survey reflected the importance of graphics education for news students, a study was developed to investigate the state of graphics education in the whole field of journalism. A questionnaire was sent to professors and administrators in four print-oriented professional fields of education: magazine, advertising, public relations, and…

The purpose of this study was to determine whether graphic organizers serve as a better tool for comprehension assessment than traditional tests. Subjects, 16 seventh-grade learning disabled students, were given 8 weeks of instruction and assessments using both graphic organizer and linear note forms. Tests were graded, compared and contrasted to…

Research that has explored students' interpretations of graphical representations has not extended to include how students apply understanding of particular statistical concepts related to one graphical representation to interpret different representations. This paper reports on the way in which students' understanding of covariation, evidenced…

Wang, Qiang Yuan, Xingzhong Willison, J.H.Martin Zhang, Yuewei Liu, Hong

Hydrological alteration can dramatically influence riparian environments and shape riparian vegetation zonation. However, it is difficult to predict the vegetation status in the drawdown area of the Three Gorges Reservoir (TGR), because the hydrological regime created by the dam involves both short periods of summer flooding and long-term winter impoundment for half a year. In order to examine the effects of hydrological alteration on plant diversity and biomass in the drawdown area of the TGR, twelve sites distributed along the length of the drawdown area were chosen to explore the lateral pattern of plant diversity and above-ground biomass at the ends of the growing seasons in 2009 and 2010. We recorded 175 vascular plant species in 2009 and 127 in 2010, indicating that a significant loss of vascular flora in the drawdown area of the TGR resulted from the new hydrological regime. Cynodon dactylon and Cyperus rotundus had high tolerance to short periods of summer flooding and long-term winter flooding. Almost half of the remnant species were annuals. Species richness, the Shannon-Wiener index and above-ground biomass of vegetation exhibited an increasing pattern along the elevation gradient, being greater at higher elevations subjected to lower submergence stress. Plant diversity, above-ground biomass and species distribution were significantly influenced by the duration of submergence relative to elevation in both summer and the previous winter. Several million tonnes of vegetation would accumulate in the drawdown area of the TGR every summer, and adverse environmental problems may arise when it is submerged in winter. We conclude that vascular flora biodiversity in the drawdown area of the TGR has dramatically declined after the impoundment to full capacity. The new hydrological condition, characterized by long-term winter flooding and short periods of summer flooding, determined vegetation biodiversity and above-ground biomass patterns along the elevation gradient in the drawdown area.

Halford, Keith Garcia, C. Amanda Fenelon, Joe Mirus, Benjamin B.

Water-level modeling is used for multiple-well aquifer tests to reliably differentiate pumping responses from natural water-level changes in wells, or “environmental fluctuations.” Synthetic water levels are created during water-level modeling and represent the summation of multiple component fluctuations, including those caused by environmental forcing and pumping. Pumping signals are modeled by transforming step-wise pumping records into water-level changes by using superimposed Theis functions. Water levels can be modeled robustly with this Theis-transform approach because environmental fluctuations and pumping signals are simulated simultaneously. Water-level modeling with Theis transforms has been implemented in the program SeriesSEE, which is a Microsoft® Excel add-in. Moving average, Theis, pneumatic-lag, and gamma functions transform time series of measured values into water-level model components in SeriesSEE. Earth tides and step transforms are additional computed water-level model components. Water-level models are calibrated by minimizing a sum-of-squares objective function, where singular value decomposition and Tikhonov regularization stabilize results. Drawdown estimates from a water-level model are the summation of all Theis transforms minus residual differences between synthetic and measured water levels. The accuracy of drawdown estimates is limited primarily by noise in the data sets, not the Theis-transform approach. Drawdowns much smaller than environmental fluctuations have been detected across major fault structures, at distances of more than 1 mile from the pumping well, and with limited pre-pumping and recovery data at sites across the United States. In addition to water-level modeling, utilities exist in SeriesSEE for viewing, cleaning, manipulating, and analyzing time-series data.
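
The idea of transforming a step-wise pumping record into drawdown by superposing Theis responses can be sketched as follows. This is a generic illustration of superposition in time, not the SeriesSEE implementation; the pumping schedule, aquifer parameters, and observation distance are invented.

```python
import numpy as np
from scipy.special import exp1  # exponential integral E1, the Theis well function

def theis(r, t, Q, T, S):
    """Theis drawdown (m) for a constant rate Q (m^3/s) starting at t = 0 (t > 0)."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

def stepwise_drawdown(r, t, schedule, T, S):
    """Superpose Theis responses for a step-wise pumping record.
    schedule is a list of (start_time_s, rate_m3_per_s); each step adds a
    Theis response scaled by the change in rate at that time."""
    s = 0.0
    previous_rate = 0.0
    for start, rate in schedule:
        if t > start:
            s += theis(r, t - start, rate - previous_rate, T, S)
        previous_rate = rate
    return s

# Invented schedule: pump at 5 L/s for 12 h, increase to 8 L/s, stop after 24 h.
schedule = [(0.0, 0.005), (12 * 3600.0, 0.008), (24 * 3600.0, 0.0)]
T_aq, S_aq, r_obs = 5e-4, 2e-4, 100.0   # transmissivity, storativity, distance (m)

for hours in (6, 18, 30):
    s = stepwise_drawdown(r_obs, hours * 3600.0, schedule, T_aq, S_aq)
    print(f"t = {hours:2d} h: drawdown = {s:.3f} m")
```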