5.1: A very little thermodynamics - Biology

While the diversity of organisms and the unique properties of each individual organism are the products of evolutionary processes, initiated billions of years ago, it is equally important to recognize that all biological systems and processes, from growth and cell division to thoughts and feelings, obey the rules of chemistry and physics, and in particular the laws of thermodynamics. So, before we continue we have to be clear about what it means and implies when we say that a system is at equilibrium versus being in an obligate non-equilibrium state.

To understand the meaning of thermodynamic equilibrium we have to learn to see the world differently, and learn new meanings for a number of words. First we have to make clear the distinction between the macroscopic world that we directly perceive and the sub-microscopic, molecular world that we can understand based on scientific observations and conclusions - it is this molecular world that is particularly important in the context of biological systems. The macroscopic and the molecular worlds behave very differently. To illustrate this point we will use a simpler model that displays the basic behaviors that we want to consider but is not as complex as a biological system. In our case let us consider a small, well-insulated air-filled room in which there is a table with a bar of gold – we use gold since it is chemically rather inert, that is, un-reactive. Iron bars, for example, could rust, which would complicate things. In our model the room is initially at a cosy 70 ºF (~21 ºC) and the bar of gold is at 200 ºC. What will happen? Can you generate a graph that describes how the system will behave over time? Our first task is to define the system – that is, the part of the universe in which we are interested. We could define the system as the gold bar or the room with the gold bar in it. Notice, we are not really concerned about how the system came to be the way it is, its history. We could, if we wanted to, demonstrate quite convincingly that the system’s history will have no influence on its future behavior – this is a critical difference between biological systems and simple physicochemical ones. For now we will use the insulated room as the system, but it doesn't really matter as long as we clearly define what we consider the system to be.

Common sense tells us that energy will be transferred from the gold bar to the rest of the room and that the temperature of the gold bar will decrease over time; the behavior of the system has a temporal direction. Why do you think that is? Why doesn't the hot bar get hotter and the room get cooler? We will come back to this question shortly. What may not be quite as obvious is that the temperature of the room will increase slightly as well. Eventually the block of gold and the room will reach the same temperature and the system will be said to be at equilibrium.
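The graph asked for above can be sketched with a toy calculation: let heat flow from the bar to the room at a rate proportional to their temperature difference. The heat capacities and rate constant below are made-up illustrative values, not properties of real gold or air; the qualitative shape (bar cools quickly, room warms slightly, both converge) is the point.

```python
# Toy model of the insulated room: heat flows from the hot gold bar to the
# cooler room at a rate proportional to their temperature difference.
# All parameter values (heat capacities, rate constant) are illustrative only.

def equilibrate(t_bar=200.0, t_room=21.0, c_bar=1.0, c_room=50.0,
                k=0.5, dt=0.01, steps=5000):
    """Return the bar and room temperature histories (simple Euler steps)."""
    bars, rooms = [t_bar], [t_room]
    for _ in range(steps):
        q = k * (t_bar - t_room) * dt   # heat leaving the bar this step
        t_bar -= q / c_bar              # bar cools quickly (small heat capacity)
        t_room += q / c_room            # room warms only slightly (large capacity)
        bars.append(t_bar)
        rooms.append(t_room)
    return bars, rooms

bars, rooms = equilibrate()
print(f"final bar: {bars[-1]:.2f} C, final room: {rooms[-1]:.2f} C")
```

Plotting `bars` and `rooms` against time gives the two converging curves the text asks you to imagine; note that the total energy (capacity times temperature, summed) stays constant throughout, since the room is insulated.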

Remember we defined the system as isolated from the rest of the universe, but what does that mean? Basically, no matter or energy passes into or out of the room – such a system is said to be a closed system. Because it is a closed system, once the system reaches its final temperature, NºC, no further macroscopic change will occur. This does not mean, however, that nothing is going on. If we could look at the molecular level we would see that molecules of air are moving, constantly colliding with one another, and with molecules within the bar and the table. The molecules within the bar and the table are also vibrating. These collisions can change the velocities of the colliding molecules. (What happens if there were no air in the room? How would this change your graph of the behavior of the system?) The speed of these molecular movements is a function of temperature: the higher (or lower) the temperature, the faster (or slower) these motions would be. As we will consider further on, all of the molecules in the system have kinetic energy, which is the energy of motion. Through their interactions, the kinetic energy of any one particular molecule will be constantly changing. At the molecular level the system is dynamic, even though at the macroscopic level it is static. We will come back to this insight repeatedly in our considerations of biological systems.

And this is what is important about a system at equilibrium: it is macroscopically static. At the molecular level, while there is still movement, there is no net change. The energy of two colliding molecules is the same after a collision as before, even though the energy may be distributed differently between the colliding molecules. The system as a whole cannot really do anything. In physical terms, it cannot do work - no macroscopic changes are possible. This is a weird idea, since (at the molecular level) things are still moving. So, if we return to living systems, which are clearly able to do lots of things, including moving macroscopically, growing, thinking, and such, it is clear that they cannot be at equilibrium.
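The claim that collisions redistribute energy without changing the total can be checked with the textbook formula for a one-dimensional elastic collision. The masses and speeds below are arbitrary illustrative numbers, not data for any real gas.

```python
# A head-on elastic collision between two "molecules": energy is exchanged
# between the partners, but the total kinetic energy is unchanged.
# Masses and velocities are arbitrary illustrative values.

def elastic_collision(m1, v1, m2, v2):
    """1-D elastic collision: return the two post-collision velocities."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

def kinetic_energy(m, v):
    return 0.5 * m * v * v

v1, v2 = 400.0, -150.0                      # m/s, illustrative
u1, u2 = elastic_collision(1.0, v1, 1.0, v2)
before = kinetic_energy(1.0, v1) + kinetic_energy(1.0, v2)
after = kinetic_energy(1.0, u1) + kinetic_energy(1.0, u2)
print(u1, u2, before, after)
```

For equal masses the molecules simply swap velocities: the fast one slows down, the slow one speeds up, and the sum of the kinetic energies is identical before and after.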

We can ask, what is necessary to keep a system from reaching equilibrium? The most obvious answer (we believe) is that unlike our imaginary closed room system, a non-equilibrium system must be open, that is, energy and matter must be able to enter and leave it. An open system is no longer isolated from the rest of the universe, it is part of it. For example, we could imagine a system in which energy, in the form of radiation, can enter and leave our room. We could maintain a difference in the temperature between the bar and the room by illuminating the bar and removing heat from the room as a whole. A temperature difference between the bar and the room could then (in theory) drive what is known as a heat engine, which can do work (that is, produce macroscopic change). As long as we continue to heat the bar and remove heat from the rest of the system, we can continue to do work, that is, macroscopically observable changes can happen.

Cryptobiosis: At this point, we have characterized organisms as dynamic, open, non-equilibrium systems. An apparent exception to the dynamic aspect of life is found in organisms that display a rather special phenotypic adaptation, known generically as cryptobiosis. Organisms such as the tardigrade (or water bear) can be freeze-dried and persist in a state of suspended animation for decades. What is critical to note, however, is that when in this cryptobiotic state the organism is not at equilibrium, in much the same way that a piece of wood in air is not at equilibrium, but capable of reacting. The organism can be reanimated when returned to normal conditions. Cryptobiosis is a genetically based adaptation that takes energy to produce, and energy is used to emerge from stasis. While the behavior of tardigrades is extreme, many organisms display a range of adaptive behaviors that enable them to survive hostile environmental conditions.

5. Thermodynamics

Statistical mechanics grew out of an earlier field called thermodynamics, which was concerned with the thermal properties of liquids and gases. Statistical mechanics grew up around thermodynamics, and then subsumed it. What we now call “classical thermodynamics” was developed over a period of several hundred years, but much of the most important work was done in just a few decades from the 1820s through the 1850s. It is not at all a coincidence, of course, that this burst of activity coincided with the industrial revolution and the development of the locomotive. Classical thermodynamics was largely developed by people who wanted to learn how to make better steam engines.

Statistical mechanics has come a long way from these humble beginnings, but thermodynamics is still an important field in its own right. In this chapter, I will discuss some of the most important results of classical thermodynamics as seen from a modern statistical viewpoint.

4.1 Energy and Metabolism

Scientists use the term bioenergetics to describe the concept of energy flow (Figure 4.2) through living systems, such as cells. Cellular processes such as the building and breaking down of complex molecules occur through stepwise chemical reactions. Some of these chemical reactions are spontaneous and release energy, whereas others require energy to proceed. Just as living things must continually consume food to replenish their energy supplies, cells must continually obtain more energy to replenish that used by the many energy-requiring chemical reactions that constantly take place. Together, all of the chemical reactions that take place inside cells, including those that consume or generate energy, are referred to as the cell’s metabolism.

Metabolic Pathways

Consider the metabolism of sugar. This is a classic example of one of the many cellular processes that use and produce energy. Living things consume sugars as a major energy source, because sugar molecules have a great deal of energy stored within their bonds. For the most part, photosynthesizing organisms like plants produce these sugars. During photosynthesis, plants use energy (originally from sunlight) to convert carbon dioxide gas (CO2) into sugar molecules (like glucose: C6H12O6). They consume carbon dioxide and produce oxygen as a waste product. This reaction is summarized as:

6CO2 + 6H2O + energy (sunlight) → C6H12O6 + 6O2

Because this process involves synthesizing an energy-storing molecule, it requires energy input to proceed. During the light reactions of photosynthesis, energy is provided by a molecule called adenosine triphosphate (ATP), which is the primary energy currency of all cells. Just as the dollar is used as currency to buy goods, cells use molecules of ATP as energy currency to perform immediate work. In contrast, energy-storage molecules such as glucose are consumed only to be broken down to use their energy. The reaction that harvests the energy of a sugar molecule in cells requiring oxygen to survive can be summarized by the reverse reaction to photosynthesis. In this reaction, oxygen is consumed and carbon dioxide is released as a waste product. The reaction is summarized as:

C6H12O6 + 6O2 → 6CO2 + 6H2O + energy

Both of these reactions involve many steps.

The processes of making and breaking down sugar molecules illustrate two examples of metabolic pathways. A metabolic pathway is a series of chemical reactions that takes a starting molecule and modifies it, step-by-step, through a series of metabolic intermediates, eventually yielding a final product. In the example of sugar metabolism, the first metabolic pathway synthesized sugar from smaller molecules, and the other pathway broke sugar down into smaller molecules. These two opposite processes—the first requiring energy and the second producing energy—are referred to as anabolic pathways (building polymers) and catabolic pathways (breaking down polymers into their monomers), respectively. Consequently, metabolism is composed of synthesis (anabolism) and degradation (catabolism) (Figure 4.3).
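The idea of a pathway as an ordered chain of intermediates can be sketched in code. The pathway below is purely hypothetical; the step names and transformations do not correspond to real enzymes or metabolites, they only illustrate the step-by-step structure.

```python
# A metabolic pathway pictured as an ordered series of steps, each turning
# one intermediate into the next. Entirely hypothetical steps, for
# illustration of the structure only.

def run_pathway(start, steps):
    """Apply each (name, function) step in order; return all intermediates."""
    intermediates = [start]
    for name, step in steps:
        intermediates.append(step(intermediates[-1]))
    return intermediates

# Hypothetical steps: each just tags the molecule it received.
toy_steps = [
    ("step_1", lambda m: m + "-P"),        # e.g. a phosphorylation
    ("step_2", lambda m: m + "-isomer"),   # e.g. an isomerization
    ("step_3", lambda m: m + "-cleaved"),  # e.g. a cleavage
]
print(run_pathway("glucose", toy_steps))
```

The output lists the starting molecule, every metabolic intermediate, and the final product in order, mirroring the definition in the paragraph above.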

It is important to know that the chemical reactions of metabolic pathways do not take place on their own. Each reaction step is facilitated, or catalyzed, by a protein called an enzyme. Enzymes are important for catalyzing all types of biological reactions—those that require energy as well as those that release energy.


Thermodynamics refers to the study of energy and energy transfer involving physical matter. The matter relevant to a particular case of energy transfer is called a system, and everything outside of that matter is called the surroundings. For instance, when heating a pot of water on the stove, the system includes the stove, the pot, and the water. Energy is transferred within the system (between the stove, pot, and water). There are two types of systems: open and closed. In an open system, energy can be exchanged with its surroundings. The stovetop system is open because heat can be lost to the air. A closed system cannot exchange energy with its surroundings.

Biological organisms are open systems. Energy is exchanged between them and their surroundings as they use energy from the sun to perform photosynthesis or consume energy-storing molecules and release energy to the environment by doing work and releasing heat. Like all things in the physical world, energy is subject to physical laws. The laws of thermodynamics govern the transfer of energy in and among all systems in the universe.

In general, energy is defined as the ability to do work, or to create some kind of change. Energy exists in different forms. For example, electrical energy, light energy, and heat energy are all different types of energy. To appreciate the way energy flows into and out of biological systems, it is important to understand two of the physical laws that govern energy.


The first law of thermodynamics states that the total amount of energy in the universe is constant and conserved. In other words, there has always been, and always will be, exactly the same amount of energy in the universe. Energy exists in many different forms. According to the first law of thermodynamics, energy may be transferred from place to place or transformed into different forms, but it cannot be created or destroyed. The transfers and transformations of energy take place around us all the time. Light bulbs transform electrical energy into light and heat energy. Gas stoves transform chemical energy from natural gas into heat energy. Plants perform one of the most biologically useful energy transformations on earth: that of converting the energy of sunlight to chemical energy stored within organic molecules (Figure 4.2). Some examples of energy transformations are shown in Figure 4.4.

The challenge for all living organisms is to obtain energy from their surroundings in forms that they can transfer or transform into usable energy to do work. Living cells have evolved to meet this challenge. Chemical energy stored within organic molecules such as sugars and fats is transferred and transformed through a series of cellular chemical reactions into energy within molecules of ATP. Energy in ATP molecules is easily accessible to do work. Examples of the types of work that cells need to do include building complex molecules, transporting materials, powering the motion of cilia or flagella, and contracting muscle fibers to create movement.

A living cell’s primary tasks of obtaining, transforming, and using energy to do work may seem simple. However, the second law of thermodynamics explains why these tasks are harder than they appear. No energy transfer or transformation is completely efficient. In every energy transfer, some amount of energy is lost in a form that is unusable. In most cases, this form is heat energy. Thermodynamically, heat energy is defined as the energy transferred from one system to another that is not work. For example, when a light bulb is turned on, some of the energy being converted from electrical energy into light energy is lost as heat energy. Likewise, some energy is lost as heat energy during cellular metabolic reactions.

An important concept in physical systems is that of order and disorder. The more energy that is lost by a system to its surroundings, the less ordered and more random the system is. Scientists refer to the measure of randomness or disorder within a system as entropy. High entropy means high disorder and low usable energy. Molecules and chemical reactions have varying entropy as well. For example, entropy increases as molecules at a high concentration in one place diffuse and spread out. The second law of thermodynamics says that energy will always be lost as heat in energy transfers or transformations.
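The diffusion example can be made quantitative with a toy calculation. Here entropy is computed in the Shannon form, S = −Σ p·ln p, over the fraction of molecules found in each region of a box; the particle counts are invented for illustration.

```python
# Sketch of "entropy increases as molecules spread out": compare molecules
# crowded into one corner of a box with the same molecules spread evenly.
# Entropy is the Shannon form S = -sum(p * ln p) over regional fractions.
import math

def entropy(counts):
    """Entropy of a distribution of molecule counts over regions of a box."""
    total = sum(counts)
    s = 0.0
    for c in counts:
        if c > 0:
            p = c / total
            s -= p * math.log(p)
    return s

crowded = [100, 0, 0, 0]    # all molecules in one quarter of the box
spread = [25, 25, 25, 25]   # the same molecules diffused evenly
print(entropy(crowded), entropy(spread))
```

The crowded arrangement has zero entropy by this measure, while the evenly spread arrangement has the maximum possible for four regions (ln 4), matching the statement that diffusion increases entropy.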

Living things are highly ordered, requiring constant energy input to be maintained in a state of low entropy.

Potential and Kinetic Energy

When an object is in motion, there is energy associated with that object. Think of a wrecking ball. Even a slow-moving wrecking ball can do a great deal of damage to other objects. Energy associated with objects in motion is called kinetic energy (Figure 4.5). A speeding bullet, a walking person, and the rapid movement of molecules in the air (which produces heat) all have kinetic energy.

Now what if that same motionless wrecking ball is lifted two stories above ground with a crane? If the suspended wrecking ball is unmoving, is there energy associated with it? The answer is yes. The energy that was required to lift the wrecking ball did not disappear, but is now stored in the wrecking ball by virtue of its position and the force of gravity acting on it. This type of energy is called potential energy (Figure 4.5). If the ball were to fall, the potential energy would be transformed into kinetic energy until all of the potential energy was exhausted when the ball rested on the ground. Wrecking balls also swing like a pendulum; through the swing, there is a constant exchange between potential energy (highest at the top of the swing) and kinetic energy (highest at the bottom of the swing). Other examples of potential energy include the energy of water held behind a dam or a person about to skydive out of an airplane.
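The wrecking-ball numbers can be worked out directly. The mass and height below are illustrative guesses (a roughly 500 kg ball lifted about two stories); the point is that the potential energy at the top equals the kinetic energy at the bottom.

```python
# Potential energy at the top of the drop equals kinetic energy at the
# bottom (ignoring air resistance). Mass and height are illustrative.
import math

G = 9.81  # gravitational acceleration, m/s^2

def potential_energy(mass_kg, height_m):
    """PE = m * g * h, in joules."""
    return mass_kg * G * height_m

def speed_at_bottom(height_m):
    # All PE converted to KE: m*g*h = 0.5*m*v^2, so v = sqrt(2*g*h).
    return math.sqrt(2 * G * height_m)

mass, height = 500.0, 6.0        # ~500 kg ball, ~6 m (two stories)
pe_top = potential_energy(mass, height)
v = speed_at_bottom(height)
ke_bottom = 0.5 * mass * v ** 2
print(pe_top, ke_bottom, v)
```

The two energies come out equal (about 29 kJ here), which is exactly the pendulum exchange described above: what is lost as height is gained as speed, and vice versa.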

Potential energy is not only associated with the location of matter, but also with the structure of matter. Even a spring on the ground has potential energy if it is compressed; so does a rubber band that is pulled taut. On a molecular level, the bonds that hold the atoms of molecules together exist in a particular structure that has potential energy. Remember that anabolic cellular pathways require energy to synthesize complex molecules from simpler ones and catabolic pathways release energy when complex molecules are broken down. The fact that energy can be released by the breakdown of certain chemical bonds implies that those bonds have potential energy. In fact, there is potential energy stored within the bonds of all the food molecules we eat, which is eventually harnessed for use, because these bonds can release energy when broken. The type of potential energy that exists within chemical bonds, and is released when those bonds are broken, is called chemical energy. Chemical energy is responsible for providing living cells with energy from food.

Concepts in Action

Visit the site and select “Pendulum” from the “Work and Energy” menu to see the shifting kinetic and potential energy of a pendulum in motion.

Free and Activation Energy

After learning that chemical reactions release energy when energy-storing bonds are broken, an important next question is the following: How is the energy associated with these chemical reactions quantified and expressed? How can the energy released from one reaction be compared to that of another reaction? A measurement of free energy is used to quantify these energy transfers. Recall that according to the second law of thermodynamics, all energy transfers involve the loss of some amount of energy in an unusable form such as heat. Free energy specifically refers to the energy associated with a chemical reaction that is available after the losses are accounted for. In other words, free energy is usable energy, or energy that is available to do work.

If energy is released during a chemical reaction, then the change in free energy, signified as ∆G (delta G), will be a negative number. A negative change in free energy also means that the products of the reaction have less free energy than the reactants, because they release some free energy during the reaction. Reactions that have a negative change in free energy and consequently release free energy are called exergonic reactions. Think: exergonic means energy is exiting the system. These reactions are also referred to as spontaneous reactions, and their products have less stored energy than the reactants. An important distinction must be drawn between the term spontaneous and the idea of a chemical reaction occurring immediately. Contrary to the everyday use of the term, a spontaneous reaction is not one that suddenly or quickly occurs. The rusting of iron is an example of a spontaneous reaction that occurs slowly, little by little, over time.

If a chemical reaction absorbs energy rather than releases energy on balance, then the ∆G for that reaction will be a positive value. In this case, the products have more free energy than the reactants. Thus, the products of these reactions can be thought of as energy-storing molecules. These chemical reactions are called endergonic reactions and they are non-spontaneous. An endergonic reaction will not take place on its own without the addition of free energy.
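The sign convention above can be put into code using the standard relation ∆G = ∆H − T∆S. The enthalpy and entropy values below are invented for illustration, not measured data for any real reaction.

```python
# Classifying reactions by the sign of dG, using dG = dH - T*dS.
# The dH/dS values are illustrative, not measured data.

def delta_g(dh_kj, ds_kj_per_k, temp_k=298.0):
    """Gibbs free-energy change in kJ/mol: dG = dH - T*dS."""
    return dh_kj - temp_k * ds_kj_per_k

def classify(dg):
    return "exergonic (spontaneous)" if dg < 0 else "endergonic (non-spontaneous)"

for name, dh, ds in [("reaction A", -100.0, 0.05),   # releases heat, disorder up
                     ("reaction B", 50.0, -0.02)]:   # absorbs heat, disorder down
    dg = delta_g(dh, ds)
    print(f"{name}: dG = {dg:.1f} kJ/mol -> {classify(dg)}")
```

Reaction A comes out with a negative ∆G (exergonic, releases free energy); reaction B comes out positive (endergonic, needs an input of free energy), matching the two cases in the text.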

Visual Connection

Look at each of the processes shown and decide if it is endergonic or exergonic.

There is another important concept that must be considered regarding endergonic and exergonic reactions. Exergonic reactions require a small amount of energy input to get going, before they can proceed with their energy-releasing steps. These reactions have a net release of energy, but still require some energy input in the beginning. This small amount of energy input necessary for all chemical reactions to occur is called the activation energy.

Concepts in Action

Watch an animation of a reaction proceeding from its initial free energy state through the transition state.


A substance that helps a chemical reaction to occur is called a catalyst, and the molecules that catalyze biochemical reactions are called enzymes. Most enzymes are proteins and perform the critical task of lowering the activation energies of chemical reactions inside the cell. Most of the reactions critical to a living cell happen too slowly at normal temperatures to be of any use to the cell. Without enzymes to speed up these reactions, life could not persist. Enzymes do this by binding to the reactant molecules and holding them in such a way as to make the chemical bond-breaking and -forming processes take place more easily. It is important to remember that enzymes do not change whether a reaction is exergonic (spontaneous) or endergonic. This is because they do not change the free energy of the reactants or products. They only reduce the activation energy required for the reaction to go forward (Figure 4.7). In addition, an enzyme itself is unchanged by the reaction it catalyzes. Once one reaction has been catalyzed, the enzyme is able to participate in other reactions.
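How much a lower activation energy speeds a reaction can be estimated with the Arrhenius relation, k = A·exp(−Ea/RT). The barrier heights below are illustrative round numbers, not values for any particular enzyme.

```python
# Effect of lowering the activation energy, via the Arrhenius relation
# k = A * exp(-Ea / (R*T)). Barrier heights are illustrative only.
import math

R = 8.314  # gas constant, J/(mol*K)

def rate_constant(ea_j_per_mol, temp_k=310.0, prefactor=1.0):
    """Relative Arrhenius rate constant at body temperature (~310 K)."""
    return prefactor * math.exp(-ea_j_per_mol / (R * temp_k))

uncatalyzed = rate_constant(ea_j_per_mol=80_000)   # 80 kJ/mol barrier
catalyzed = rate_constant(ea_j_per_mol=50_000)     # barrier lowered to 50 kJ/mol
print(f"speed-up from catalysis: {catalyzed / uncatalyzed:.2e}")
```

Lowering the barrier by 30 kJ/mol multiplies the rate roughly a hundred-thousand-fold at body temperature, which is why reactions that are uselessly slow without an enzyme become fast with one, while the free energies of reactants and products stay untouched.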

The chemical reactants to which an enzyme binds are called the enzyme’s substrates. There may be one or more substrates, depending on the particular chemical reaction. In some reactions, a single reactant substrate is broken down into multiple products. In others, two substrates may come together to create one larger molecule. Two reactants might also enter a reaction and both become modified, but they leave the reaction as two products. The location within the enzyme where the substrate binds is called the enzyme’s active site. The active site is where the “action” happens. Since enzymes are proteins, there is a unique combination of amino acid side chains within the active site. Each side chain is characterized by different properties. They can be large or small, weakly acidic or basic, hydrophilic or hydrophobic, positively or negatively charged, or neutral. The unique combination of side chains creates a very specific chemical environment within the active site. This specific environment is suited to bind to one specific chemical substrate (or substrates).

Active sites are subject to influences of the local environment. Increasing the environmental temperature generally increases reaction rates, enzyme-catalyzed or otherwise. However, temperatures outside of an optimal range reduce the rate at which an enzyme catalyzes a reaction. High temperatures will eventually cause enzymes to denature, an irreversible change in the three-dimensional shape and therefore the function of the enzyme. Enzymes are also suited to function best within a certain pH and salt concentration range, and, as with temperature, extreme pH and salt concentrations can cause enzymes to denature.

For many years, scientists thought that enzyme-substrate binding took place in a simple “lock and key” fashion. This model asserted that the enzyme and substrate fit together perfectly in one instantaneous step. However, current research supports a model called induced fit (Figure 4.8). The induced-fit model expands on the lock-and-key model by describing a more dynamic binding between enzyme and substrate. As the enzyme and substrate come together, their interaction causes a mild shift in the enzyme’s structure that forms an ideal binding arrangement between enzyme and substrate.

When an enzyme binds its substrate, an enzyme-substrate complex is formed. This complex lowers the activation energy of the reaction and promotes its rapid progression in one of multiple possible ways. On a basic level, enzymes promote chemical reactions that involve more than one substrate by bringing the substrates together in an optimal orientation for reaction. Another way in which enzymes promote the reaction of their substrates is by creating an optimal environment within the active site for the reaction to occur. The chemical properties that emerge from the particular arrangement of amino acid R groups within an active site create the perfect environment for an enzyme’s specific substrates to react.

The enzyme-substrate complex can also lower activation energy by compromising the bond structure so that it is easier to break. Finally, enzymes can also lower activation energies by taking part in the chemical reaction itself. In these cases, it is important to remember that the enzyme will always return to its original state by the completion of the reaction. One of the hallmark properties of enzymes is that they remain ultimately unchanged by the reactions they catalyze. After an enzyme has catalyzed a reaction, it releases its product(s) and can catalyze a new reaction.

It would seem ideal to have a scenario in which all of an organism's enzymes existed in abundant supply and functioned optimally under all cellular conditions, in all cells, at all times. However, a variety of mechanisms ensures that this does not happen. Cellular needs and conditions constantly vary from cell to cell, and change within individual cells over time. The required enzymes of stomach cells differ from those of fat storage cells, skin cells, blood cells, and nerve cells. Furthermore, a digestive organ cell works much harder to process and break down nutrients during the time that closely follows a meal compared with many hours after a meal. As these cellular demands and conditions vary, so must the amounts and functionality of different enzymes.

Since the rates of biochemical reactions are controlled by activation energy, and enzymes lower and determine activation energies for chemical reactions, the relative amounts and functioning of the variety of enzymes within a cell ultimately determine which reactions will proceed and at what rates. This determination is tightly controlled in cells. In certain cellular environments, enzyme activity is partly controlled by environmental factors like pH, temperature, salt concentration, and, in some cases, cofactors or coenzymes.

Enzymes can also be regulated in ways that either promote or reduce enzyme activity. There are many kinds of molecules that inhibit or promote enzyme function, and various mechanisms by which they do so. In some cases of enzyme inhibition, an inhibitor molecule is similar enough to a substrate that it can bind to the active site and simply block the substrate from binding. When this happens, the enzyme is inhibited through competitive inhibition, because an inhibitor molecule competes with the substrate for binding to the active site.

On the other hand, in noncompetitive inhibition, an inhibitor molecule binds to the enzyme in a location other than the active site, called an allosteric site, but still manages to prevent substrate binding to the active site. Some inhibitor molecules bind to enzymes in a location where their binding induces a conformational change that reduces the enzyme activity as it no longer effectively catalyzes the conversion of the substrate to product. This type of inhibition is called allosteric inhibition (Figure 4.9). Most allosterically regulated enzymes are made up of more than one polypeptide, meaning that they have more than one protein subunit. When an allosteric inhibitor binds to a region on an enzyme, all active sites on the protein subunits are changed slightly such that they bind their substrates with less efficiency. There are allosteric activators as well as inhibitors. Allosteric activators bind to locations on an enzyme away from the active site, inducing a conformational change that increases the affinity of the enzyme’s active site(s) for its substrate(s) (Figure 4.9).
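The two inhibition modes have classic kinetic signatures, sketched here with the standard Michaelis-Menten rate law (the Vmax, Km, and inhibitor values are illustrative, not measurements): a competitive inhibitor raises the apparent Km but can be outcompeted by enough substrate, while a noncompetitive inhibitor lowers Vmax and cannot be outcompeted.

```python
# Michaelis-Menten rates with a competitive vs. a noncompetitive inhibitor.
# All constants are illustrative values, not data for a real enzyme.

def rate_competitive(s, vmax, km, i, ki):
    """Competitive inhibitor: apparent Km rises by a factor (1 + I/Ki)."""
    return vmax * s / (km * (1 + i / ki) + s)

def rate_noncompetitive(s, vmax, km, i, ki):
    """Noncompetitive inhibitor: Vmax falls by a factor (1 + I/Ki)."""
    return (vmax / (1 + i / ki)) * s / (km + s)

VMAX, KM, I, KI = 10.0, 2.0, 4.0, 2.0
high_s = 1e6  # saturating substrate concentration
print(rate_competitive(high_s, VMAX, KM, I, KI))     # approaches full Vmax
print(rate_noncompetitive(high_s, VMAX, KM, I, KI))  # stays capped below Vmax
```

At saturating substrate the competitively inhibited rate climbs back to essentially the full Vmax of 10, while the noncompetitively inhibited rate stays stuck at about a third of it, mirroring the active-site versus allosteric-site mechanisms described above.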

Career Connection

Pharmaceutical Drug Developer

Enzymes are key components of metabolic pathways. Understanding how enzymes work and how they can be regulated are key principles behind the development of many of the pharmaceutical drugs on the market today. Biologists working in this field collaborate with other scientists to design drugs (Figure 4.10).

Consider statins, for example: statin is the name given to one class of drugs that can reduce cholesterol levels. These compounds are inhibitors of the enzyme HMG-CoA reductase, which catalyzes a key step in the synthesis of cholesterol in the body. By inhibiting this enzyme, the level of cholesterol synthesized in the body can be reduced. Similarly, acetaminophen, popularly marketed under the brand name Tylenol, is an inhibitor of the enzyme cyclooxygenase. While it is used to provide relief from fever and pain, its mechanism of action is still not completely understood.

How are drugs discovered? One of the biggest challenges in drug discovery is identifying a drug target. A drug target is a molecule that is literally the target of the drug. In the case of statins, HMG-CoA reductase is the drug target. Drug targets are identified through painstaking research in the laboratory. Identifying the target alone is not enough; scientists also need to know how the target acts inside the cell and which reactions go awry in the case of disease. Once the target and the pathway are identified, then the actual process of drug design begins. In this stage, chemists and biologists work together to design and synthesize molecules that can block or activate a particular reaction. However, this is only the beginning: If and when a drug prototype is successful in performing its function, then it is subjected to many tests from in vitro experiments to clinical trials before it can get approval from the U.S. Food and Drug Administration to be on the market.

Many enzymes do not work optimally, or even at all, unless bound to other specific non-protein helper molecules. They may bind either temporarily through ionic or hydrogen bonds, or permanently through stronger covalent bonds. Binding to these molecules promotes optimal shape and function of their respective enzymes. Two examples of these types of helper molecules are cofactors and coenzymes. Cofactors are inorganic ions such as ions of iron and magnesium. Coenzymes are organic helper molecules, those with a basic atomic structure made up of carbon and hydrogen. Like enzymes, these molecules participate in reactions without being changed themselves and are ultimately recycled and reused. Vitamins are the source of coenzymes. Some vitamins are the precursors of coenzymes and others act directly as coenzymes. Vitamin C is a direct coenzyme for multiple enzymes that take part in building the important connective tissue, collagen. Therefore, enzyme function is, in part, regulated by the abundance of various cofactors and coenzymes, which may be supplied by an organism’s diet or, in some cases, produced by the organism.

Feedback Inhibition in Metabolic Pathways

Molecules can regulate enzyme function in many ways. The major question remains, however: What are these molecules and where do they come from? Some are cofactors and coenzymes, as you have learned. What other molecules in the cell provide enzymatic regulation such as allosteric modulation, and competitive and non-competitive inhibition? Perhaps the most relevant sources of regulatory molecules, with respect to enzymatic cellular metabolism, are the products of the cellular metabolic reactions themselves. In a most efficient and elegant way, cells have evolved to use the products of their own reactions for feedback inhibition of enzyme activity. Feedback inhibition involves the use of a reaction product to regulate its own further production (Figure 4.11). The cell responds to an abundance of the products by slowing down production during anabolic or catabolic reactions. Such reaction products may inhibit the enzymes that catalyzed their production through the mechanisms described above.

The production of both amino acids and nucleotides is controlled through feedback inhibition. Additionally, ATP is an allosteric regulator of some of the enzymes involved in the catabolic breakdown of sugar, the process that creates ATP. In this way, when ATP is in abundant supply, the cell can prevent the production of ATP. On the other hand, ADP serves as a positive allosteric regulator (an allosteric activator) for some of the same enzymes that are inhibited by ATP. Thus, when relative levels of ADP are high compared to ATP, the cell is triggered to produce more ATP through sugar catabolism.
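The kinetic consequence of feedback inhibition can be sketched with a toy simulation. The rate law and all parameter values below are hypothetical, chosen only to illustrate the qualitative behavior: as the product accumulates, the rate of its own synthesis falls off and production levels out.

```python
# Toy model of feedback (product) inhibition: a hypothetical enzyme whose
# rate falls as its own product P accumulates: rate = vmax / (1 + (P/Ki)**n).
# All parameter values are illustrative, not measured.
def simulate(vmax=1.0, Ki=2.0, n=2, dt=0.01, steps=2000):
    P = 0.0            # product concentration (arbitrary units)
    rates = []
    for _ in range(steps):
        rate = vmax / (1.0 + (P / Ki) ** n)   # inhibited rate law
        rates.append(rate)
        P += rate * dt                        # product accumulates
    return P, rates

P_final, rates = simulate()
print(f"initial rate {rates[0]:.2f}, final rate {rates[-1]:.2f}")
```

With no product present the enzyme runs at full speed; as P builds up, the same product throttles the enzyme that made it, which is the self-limiting behavior described above.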

NEET Paper Analysis 2019

According to test takers, the NEET 2019 question paper was moderately difficult, and some students found it lengthy. All three sections were somewhat challenging: the Physics section was a little tricky, the Chemistry section was moderately difficult, and the Biology section was easier than the others, though some of its questions were tough.

NEET 2019 Physics Paper Analysis

  • Physics was the toughest of the three sections in the NEET 2019 exam.
  • Aspirants needed to crack conceptual and tricky questions in the exam.
  • Physics questions were generally demanding, and aspirants needed to do heavy calculations.

NEET 2019 Chemistry Paper Analysis

  • The Chemistry section was moderately difficult, though aspirants got some direct questions from the NCERT textbook.
  • This was the scoring section in the NEET 2019 exam.
  • As per the experts’ in-depth NEET analysis, NEET 2019 Chemistry was comparatively easier than in previous years.

NEET 2019 Biology Paper Analysis

  • Usually, Biology is considered the easiest section; however, last year the Biology section was not so easy.
  • The NEET 2019 Biology questions tended to be time-consuming and descriptive.
  • The high-weightage chapters of the NEET 2019 Biology section were Microbes in Human Welfare and Cell, chapters of Botany and Zoology respectively.
  • To get a high score in the Biology section, the aspirants are required to have in-depth knowledge of concepts.

NEET 2019 Topic Wise Analysis (Physics & Chemistry)

Topics Easy Average Difficult % Portion
Mechanics 3 4 2 22%
Fluids 1 0 0 1%
Thermal Physics 1 3 2 15%
SHM & Waves 1 0 0 5%
Electrodynamics 2 3 5 22%
Optics 4 2 1 12%
Modern & Electronics 4 2 1 16%
Inorganic 4 5 3 50%
Physical 5 3 2 30%
Organic 7 5 1 35%

NEET 2019 Topic Wise Analysis (Biology)


Topics Easy Medium Difficult Total
Plant Diversity 2 5 1 8
Ecology 3 8 0 11
Cell Biology and Cell division 1 4 0 5
Plant Morphology 2 0 0 2
Biomolecule 0 1 0 1
Plant Physiology 3 3 1 7
Plant reproduction 3 2 0 5
Genetics & Biotechnology 9 7 0 16
Plant Anatomy 3 1 0 4
Total 26 31 2 59


Topics Easy Medium Difficult Total
Animal Diversity 2 2 0 4
Structural Organisation in Animal 1 0 0 1
Human Physiology 8 6 1 15
Human Reproduction and Reproductive health 3 0 2 5
Origin and Evolution 3 0 0 3
Human Health and Diseases 2 0 1 3
Total 19 8 4 31

NEET 2019 Paper Analysis by Allen Kota

According to Allen Kota’s experts, the NEET 2019 paper ranked as moderate to easy. Physics and Chemistry were easy according to the Allen Kota analysis, whereas the Biology paper of the NEET 2019 UG exam was somewhat lengthy. The table below gives Allen Kota’s subject-wise difficulty-level analysis:

Subject No. of Easy Questions No. of Medium Questions No. of Difficult Questions
Physics 21 21 3
Chemistry 27 13 5
Botany 32 17 1
Zoology 24 11 5

NEET 2019 Paper Analysis by Resonance

As per analysis by Resonance, the difficulty of Physics in NEET 2019 was higher than Chemistry and Biology. Among the total 45 multiple choice questions in Physics, 9 were difficult and tricky whereas Chemistry had only one very difficult question. The Biology section had a total of 17 difficult questions out of 90 multiple choice questions.

Overall, NEET 2019 was found to be considerably easier than NEET 2018. The subject matter experts of Resonance said that questions in all subjects were easier than in the exams conducted over the last three years; therefore the cutoff should increase and is expected to be around 565 to 570.

Laws of Thermodynamics in Bioenergetics (With Diagram)

Thermodynamics is the study of energy changes, that is, the conversion of energy from one form into another. Such changes obey the first two laws of thermodynamics.

The First Law of Thermodynamics:

The first law is concerned with the conversion of energy within a “system,” where a system is defined as a body (e.g., a cell or an organism) and its surroundings.

This law, which applies to both biological and non-biological systems, states the following: Energy cannot be created or destroyed but can be converted from one form into another; during such a conversion, the total amount of the energy of the system remains constant.

This law applies to all levels of organization in the living world: it applies to organisms, cells, organelles, and to the individual chemical reactions that characterize metabolism. In practice, it is difficult to measure the energy possessed by cells (i.e., to limit the “system” to an individual cell), because energy may escape into the environment surrounding the cell during the measurement.

Similarly, energy may be acquired by the cell from its environment; for example, a photosynthesizing cell absorbs energy from its environment in the form of light. A cell’s acquisition of energy from its environment (or its loss to the environment) should not be confused with the destruction or creation of energy, which according to the first law of thermodynamics does not occur.

From a biological viewpoint, the first law of thermodynamics indicates that at any given moment a cell possesses a specific quantity of energy.

This energy takes several forms; it includes:

(1) Potential energy (e.g., the energy of the bonds that link atoms together in a molecule or the pressure-volume relationships within the cell as a whole or within membrane-enclosed intracellular components);

(2) Electrical energy (e.g., the distribution of different amounts of electrical charge across cellular membranes); and

(3) Thermal energy (e.g., the temperature-dependent constant and random motions of molecules and atoms).

According to the first law, these forms of energy may be inter-converted; for example, some of the cell’s potential energy can be converted into electrical or thermal energy, but the cell cannot create or destroy energy. When a cell breaks down a polysaccharide to ultimately form CO2 and H2O, some of the potential energy present in the carbohydrate is conserved as potential energy by phosphorylating ADP, thereby forming ATP.

The ATP so produced represents a new energy source (and also one that is of greater immediate utility for the cell). However, not all of the energy of the original carbohydrate is conserved as potential energy; some of it becomes thermal energy and is transferred to the surroundings as heat. It is important to recognize that none of the energy is destroyed, and it should be possible to account for all of the energy originally present in the polysaccharide in other forms within the system (i.e., in the ATP that is produced and in the heat that is released).
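The bookkeeping described above can be illustrated with rough, textbook-standard numbers (approximately 2870 kJ/mole released by complete oxidation of glucose, roughly 30.5 kJ/mole conserved per ATP, and a typical aerobic yield of about 32 ATP per glucose; all three figures are approximate): every joule not captured in ATP appears as heat, and none is destroyed.

```python
# Energy bookkeeping for glucose catabolism (approximate textbook values):
# all the chemical potential energy ends up either conserved in ATP or as heat.
glucose_energy = 2870.0   # kJ/mole released by complete oxidation of glucose (approx.)
atp_energy = 30.5         # kJ/mole conserved per ATP formed (approx.)
atp_per_glucose = 32      # typical aerobic yield (approx.)

conserved = atp_per_glucose * atp_energy   # energy captured in ATP
heat = glucose_energy - conserved          # remainder transferred as heat
print(f"captured in ATP: {conserved:.0f} kJ ({100 * conserved / glucose_energy:.0f}%)")
print(f"released as heat: {heat:.0f} kJ")
```

The two lines of output always sum to the energy of the original glucose, which is just the first law restated as arithmetic.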

The Second Law of Thermodynamics:

The first law of thermodynamics tells us that the total energy of an isolated system consisting of a cell (or organism) and its surroundings is the same before and after a series of events or chemical reactions has taken place. What the first law does not tell us is the direction in which the reactions proceed.

This problem can be illustrated using a simple example. Suppose we place a small cube of ice in a liter of hot water, seal the combination in an insulated container (e.g., a vacuum bottle), and allow the system (i.e., the ice and the water) to reach an equilibrium.

In such a system, we would not be surprised to find that the ice melts and that this is accompanied by a decrease in the temperature of the water. When we later examine the system, we find that we are left only with water (no ice) and that the water is at a reduced temperature.

The flow of heat, which is thermal energy, from the hot water to the ice, thereby causing the ice to melt, is spontaneous, and the energy that is “lost” by the water is “gained” by the melting ice, so that the total energy of the system remains the same.

We certainly would not expect ice to form spontaneously in a sealed system that contains warm water, even though such an eventuality is not prohibited by the first law. Consequently, the important lessons to be learned from this illustration are that energy changes have direction and may be spontaneous.

To anticipate the spontaneity of a reaction and predict its direction, one must take into account a function called entropy. Entropy is a measure of the degree of randomness or disorder of a system, the entropy increasing with increasing disorder. Accordingly, the second law of thermodynamics states: In all processes involving energy changes within a system, the entropy of the system increases until an equilibrium is attained.

In the illustration presented above, the highly ordered distribution of energy (i.e., large amounts of energy in the hot water and smaller amounts of energy in the ice) was lost as the ice melted to form water. In the resulting warm water, the energy was more randomly and uniformly distributed among the water molecules.

The units of entropy are J/(mole·K) (or cal/(mole·K)), indicating that entropy is measured in terms of the amount of substance present. When equal numbers of moles of a solid, liquid, and gas are compared at the same temperature, the solid has less entropy than the liquid and the liquid has less entropy than the gas (the gaseous state is the state of greatest disorder).

Entropy can be thought of as the energy of a system that is of no value for performing work (i.e., it is not “useful” energy). For example, the catabolism of sucrose or other sugars by a cell is accompanied by the formation of energy-rich ATP.

Although superficially it may appear as though useful energy has increased in the form of the ATP gained by the cell, the total amount of useful energy has actually decreased and the amount of unavailable energy has increased. It is true that some of the potential energy of the sugar has been converted to potential energy in the form of ATP, but some has also been converted to thermal energy, which tends to raise the temperature of the cell and therefore its entropy.

Suggestions that cells can decrease entropy by carrying out photosynthesis are misleading. Although it is true that during photosynthesis cells convert molecules with very little potential energy (CO2 and H2O) into larger molecules with considerably more potential energy (sugars) and that there is an accompanying decrease in the entropy of the cell, energy in the form of light was absorbed from the cell’s environment.

Because the light energy consumed during photosynthesis is a part of the whole system (i.e., the cell and its surroundings), it is clear that there has actually been an overall decrease in useful energy and an increase in entropy (see Fig. 9-4).

The entropy change during a reaction may be quite small. For example, when sucrose undergoes hydrolysis to form the sugars glucose and fructose, much of the potential energy of the original sucrose is present in the resulting glucose and fructose molecules. Changes in entropy are extremely difficult to calculate, but the difficulty can be circumvented by employing two other thermodynamic functions: enthalpy or heat content (denoted H) and free energy (denoted G).

The change in a system’s enthalpy (∆H) is a measure of the total change in energy that has taken place, whereas the change in free energy (∆G) is the change in the amount of energy available to do work. Changes in entropy (∆S), enthalpy, and free energy are related by the equation

∆G = ∆H – T∆S …(9-1)

in which T is the absolute temperature of the system.

The change in free energy can also be defined as the total amount of free energy in the products of a reaction minus the total amount of free energy in the reactants, that is,

∆G = G(products) – G(reactants) …(9-2)

A reaction that has a negative ∆G value (i.e., the sum of the free energy of the products is less than that of the reactants) will occur spontaneously; a reaction for which the ∆G is zero is at equilibrium; and a reaction that has a positive ∆G value will not occur spontaneously and proceeds only when energy is supplied from some outside source.
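These sign rules follow directly from ∆G = ∆H − T∆S and can be checked with a short calculation. Using standard literature values for the melting of ice (∆H ≈ +6.01 kJ/mole, ∆S ≈ +22.0 J/(mole·K)), ∆G changes sign near 0°C, which is why ice melts spontaneously in warm water, as in the sealed-container example above, but not below the freezing point:

```python
def delta_G(dH_J, dS_J_per_K, T_K):
    """Free energy change (J/mole) from enthalpy, entropy, and absolute temperature."""
    return dH_J - T_K * dS_J_per_K

# Melting of ice: dH ~ +6010 J/mole, dS ~ +22.0 J/(mole K) (literature values)
for T in (263.0, 298.0):   # -10 C and +25 C
    dG = delta_G(6010.0, 22.0, T)
    verdict = "spontaneous" if dG < 0 else "not spontaneous"
    print(f"T = {T:.0f} K: dG = {dG:+.0f} J/mole -> {verdict}")
```

At 263 K the positive ∆G means melting does not proceed; at 298 K the negative ∆G means it does, exactly as the sign rules predict.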

The hydrolysis of sucrose

Sucrose + H2O → glucose + fructose

has a negative ∆G value, and therefore when sucrose is added to water, there is the spontaneous conversion of some of the sucrose molecules to glucose and fructose. However, the reverse reaction

glucose + fructose → sucrose + H2O

has an equal but positive ∆G value and therefore does not occur without an input of energy. Hence, special attention must be paid to the direction in which the reaction is written (i.e., the direction of the arrow) and the sign of the ∆G value. If 5 moles of sucrose are mixed with water, the formation of glucose and fructose will take place spontaneously and the ∆G may be determined; this value is, of course, greater than if 4 or 2 moles of sucrose are used.

Thus, ∆G values are dependent on the amounts and concentrations of reactants and products. More uniform standards of reference that have been established by convention are the standard free energy changes, the ∆G° and ∆G°′ values. ∆G° represents the change in free energy that takes place when the reactants and products are maintained at 1.0 molar concentrations (strictly speaking, at unit activity) during the course of the reaction and the reaction proceeds under standard conditions of temperature (25°C) and pressure (1 atmosphere) and at pH 0.0.

The ∆G°′ value is a much more practical term for use with biological systems, in which reactions take place in an aqueous environment and at a pH that usually is equal or close to 7.0. The ∆G°′ value is defined as the standard free energy change that takes place at pH 7.0 when the reactants and products are maintained at 1.0 molar concentration (Table 9-2).

The changes in standard free energy are independent of the route that leads from the initial reactants to the final products. For example, glucose can be converted to carbon dioxide and water either by combustion in the presence of oxygen or through the actions of cellular enzymes.

Changes in standard free energy are the same regardless of the method that is used; thus, the value of the standard free energy change provides no information about the reaction sequence by which the change has taken place. By the same token, the values obtained for changes in standard free energy tell us nothing about the rate at which the changes have taken place.

The ∆G°′ can be calculated from the equilibrium constant, K′eq, of a reaction using the relationship

∆G°′ = –RT ln K′eq = –2.303 RT log10 K′eq …(9-3)

where R is the gas constant (8.314 J/mole/degree), T is the absolute temperature (in degrees Kelvin), and K′eq is the equilibrium constant. Table 9-3 lists a number of ∆G°′ values for common reactions.

For a reaction A + B ⇌ C + D, the equilibrium constant is defined as

K′eq = [C][D] / [A][B]

where [A] and [B] are the concentrations of the reactants and [C] and [D] are the concentrations of the products. If the equilibrium constant is 1.0, then the ∆G°′ value equals zero. If the equilibrium constant is greater than 1.0, then the ∆G°′ value is negative (e.g., –11.41 kJ/mole for a K′eq value of 100), and the reaction is said to be exergonic (i.e., “energy releasing”) because it proceeds spontaneously in the direction written when starting with unimolar concentrations of reactants and products.

When the K′eq value is less than 1.0, the ∆G°′ value is positive (e.g., +5.71 kJ/mole for a K′eq of 0.1), and the reaction is said to be endergonic (i.e., “energy consuming”) because it does not proceed spontaneously in the direction written when starting with unimolar concentrations of reactants and products.

Calculations of ∆G°′ values are usually based on experimental measurements of isolated reactions, that is, reactions that take place independently of other reactions and that are not associated with cells. ∆G° and ∆G°′ values do not provide information about the free energy changes of reactions as they might take place in cells or under conditions in which the concentrations of reactants and products, pH, etc., may change. This may be dramatically illustrated by considering the following example. At pH 7.0 and 25°C, the equilibrium constant for the reaction dihydroxyacetone phosphate → glyceraldehyde-3-phosphate is 0.0475. Therefore, using equation 9-3,

∆G°′ = –2.303 (8.314 J/mole/degree) (298) log10 (0.0475) = +7.55 kJ/mole

The positive value indicates that this reaction does not proceed spontaneously in the direction written. However, in cells, this reaction is but one of a series of reactions in a metabolic pathway called glycolysis. Other reactions of glycolysis that occur prior to this one and that have negative ∆G°′ values produce additional substrate (i.e., dihydroxyacetone phosphate), and reactions with negative ∆G°′ values that occur after this step remove the product glyceraldehyde-3-phosphate.

As a result, the reaction proceeds in the direction written under the conditions specified above, even though the ∆G°′ value is positive. This example illustrates the important point that the ∆G°′ value for a specific biological reaction cannot be used to predict reliably whether or not that particular reaction is actually taking place within the cell.
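The numbers quoted in this section (–11.41 kJ/mole for K′eq = 100, +5.71 kJ/mole for K′eq = 0.1, and the dihydroxyacetone phosphate example) can all be reproduced from equation 9-3 with a few lines of code:

```python
import math

R = 8.314   # gas constant, J/mole/degree
T = 298.0   # 25 C expressed in kelvin

def dG0_prime(K_eq):
    """Standard free energy change (J/mole) from the equilibrium constant, eq. 9-3."""
    return -R * T * math.log(K_eq)

# K'eq = 0.0475 is the dihydroxyacetone phosphate -> glyceraldehyde-3-phosphate
# equilibrium constant at pH 7.0 and 25 C quoted in the text.
for K in (100.0, 1.0, 0.1, 0.0475):
    print(f"K'eq = {K:<7} dG0' = {dG0_prime(K) / 1000:+.2f} kJ/mole")
```

Note that a K′eq of 1.0 gives exactly zero, the boundary between exergonic and endergonic, and the 0.0475 case reproduces the +7.55 kJ/mole worked out above.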


Theoretical Foundations of Molecular Magnetism

2.2.2 Heat capacities

In classical thermodynamics dealing with the volume work, two types of heat capacity are distinguished: CV and Cp. The specific heat cV [J g⁻¹ K⁻¹] and the molar heat capacity CV [J mol⁻¹ K⁻¹] are interrelated through

CV = Mr cV

where Mr is the molar mass.

The difference between CV and Cp for solids is

Cp − CV = T (∂p/∂T)_V (∂V/∂T)_p

Making use of the relationship among state variables p = p(V, T) in the differential form

dp = (∂p/∂V)_T dV + (∂p/∂T)_V dT

at constant pressure, dp = 0, and thus we get

(∂V/∂T)_p = −(∂p/∂T)_V / (∂p/∂V)_T

Then the difference between the molar heat capacities becomes

Cp − CV = T Vm β² / KT

where Vm is the molar volume. The isothermal compressibility, measured at constant temperature, is introduced through the formula

KT = −(1/Vm)(∂Vm/∂p)_T

and the coefficient of volumetric expansion is

β = (1/Vm)(∂Vm/∂T)_p

For isotropic solids, β is related to the coefficient of linear thermal expansion α by

β = 3α, with α = (1/l)(∂l/∂T)_p

where l is the dimension of a cube of the solid. The mechanical stability of a solid requires KT > 0 and consequently Cp ≥ CV. Although data on linear thermal expansion are available for many solids over a broad interval of temperature, experimental values of compressibility are often known around room temperature only.
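The relation Cp − CV = T·Vm·β²/KT can be checked numerically. The inputs below are approximate room-temperature figures for copper (order-of-magnitude literature estimates, quoted here only as plausible illustrative values); they give a difference well under 1 J mol⁻¹ K⁻¹, small compared with Cp ≈ 24 J mol⁻¹ K⁻¹ for copper:

```python
def cp_minus_cv(T, Vm, beta, KT):
    """Cp - CV = T * Vm * beta**2 / KT, with volumetric expansion beta,
    isothermal compressibility KT, and molar volume Vm (SI units)."""
    return T * Vm * beta**2 / KT

# Approximate room-temperature values for copper (literature-order estimates):
alpha = 16.5e-6        # linear thermal expansion coefficient, 1/K
beta = 3 * alpha       # volumetric expansion for an isotropic solid
KT = 7.3e-12           # isothermal compressibility, 1/Pa
Vm = 7.11e-6           # molar volume, m^3/mole
diff = cp_minus_cv(298.0, Vm, beta, KT)
print(f"Cp - CV = {diff:.2f} J/(mole K)")
```

This smallness is why CV and Cp are often used interchangeably for solids, while the distinction is essential for gases.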

By introducing the adiabatic compressibility, measured at constant entropy,

KS = −(1/Vm)(∂Vm/∂p)_S

one can derive an expression for the ratio of the heat capacities:

Cp/CV = KT/KS

In thermodynamics dealing with the magnetic work, the following types of heat capacity are defined:

CM = (∂U/∂T)_M, CH = (∂E/∂T)_H

It is of interest to express CH − CM and CH/CM in terms of the observables (H, M, T). Making use of the expressions for entropy

dS = (CM/T) dT − μ0 (∂H/∂T)_M dM

dS = (CH/T) dT + μ0 (∂M/∂T)_H dH

and taking into account the relationships listed in Table 2.3, we get

This equation can be rewritten as

and compared with the identity

The same coefficients of dM require the validity of

The boundary between observables allows us to eliminate

and finally we arrive at the formula

CH − CM = μ0 T (∂H/∂M)_T [(∂M/∂T)_H]²

The second expression for entropy can also be written in the form of

and compared with the identity

yielding the same coefficient at dH, i.e.

The first expression for entropy can be rearranged as

and again compared with the identity

giving the same coefficient at dM, viz.

The ratio of the magnetic heat capacities stands as follows. We introduce the isothermal susceptibility (measured at constant temperature)

χT = (∂M/∂H)_T

and the adiabatic susceptibility (measured at constant entropy)

χS = (∂M/∂H)_S

The ratio of the magnetic heat capacities is then just the ratio of the magnetic susceptibilities:

CH/CM = χT/χS

Important relationships among heat capacities are collected in Table 2.4. It is worth noting that the relations between heat capacities and compressibilities in classical thermodynamics have an analogy in the relations between magnetic heat capacities and magnetic susceptibilities.

Table 2.4. Relationships for heat capacities

(a) Volume work
U(S, V):  CV = (∂U/∂T)_V = NA (∂/∂T)[kT² (∂ln Z/∂T)_V]_V
E(S, p):  Cp = (∂E/∂T)_p = NA (∂/∂T)[kT² (∂ln Z/∂T)_V + kT (∂ln Z/∂ln V)_T]_p
Cp − CV = T (∂p/∂T)_V (∂V/∂T)_p = −T (∂p/∂V)_T [(∂V/∂T)_p]²
Cp/CV = (∂V/∂p)_T / (∂V/∂p)_S = KT/KS
S(T, V):  dS = (CV/T) dT + (∂p/∂T)_V dV
S(T, p):  dS = (Cp/T) dT − (∂V/∂T)_p dp

(b) Magnetic work
U(S, M):  CM = (∂U/∂T)_M = NA (∂/∂T)[kT² (∂ln Z/∂T)_M]_M
E(S, H):  CH = (∂E/∂T)_H = NA (∂/∂T)[kT² (∂ln Z/∂T)_M + kT (∂ln Z/∂ln M)_T]_H
CH − CM = −μ0 T (∂H/∂T)_M (∂M/∂T)_H = μ0 T (∂H/∂M)_T [(∂M/∂T)_H]²
CH/CM = (∂M/∂H)_T / (∂M/∂H)_S = χT/χS
S(T, M):  dS = (CM/T) dT − μ0 (∂H/∂T)_M dM
S(T, H):  dS = (CH/T) dT + μ0 (∂M/∂T)_H dH

Physics in Biology and Medicine

Physics in Biology and Medicine, Fourth Edition, covers topics in physics as they apply to the life sciences, specifically medicine, physiology, nursing and other applied health fields. This is a concise introductory paperback that provides practical techniques for applying knowledge of physics to the study of living systems and presents material in a straightforward manner requiring very little background in physics or biology. Applicable courses are Biophysics and Applied Physics.

This new edition discusses biological systems that can be analyzed quantitatively, and how advances in the life sciences have been aided by the knowledge of physical or engineering analysis techniques. The volume is organized into 18 chapters encompassing thermodynamics, electricity, optics, sound, solid mechanics, fluid mechanics, and atomic and nuclear physics. Each chapter provides a brief review of the background physics before focusing on the applications of physics to biology and medicine. Topics range from the role of diffusion in the functioning of cells to the effect of surface tension on the growth of plants in soil and the conduction of impulses along the nervous system. Each section contains problems that explore and expand some of the concepts. The text includes many figures, examples, and illustrative problems, as well as appendices that provide convenient access to the most important concepts of mechanics, electricity, and optics used in the body of the text.

Physics in Biology and Medicine will be a valuable resource for students and professors of physics, biology, and medicine, as well as for applied health workers.

Babies with very low or very high birth weight are less likely to survive. A graph entitled “Percentage of Babies Born at Different Weights” shows weight in pounds (in half-pound categories from roughly 3.5–4.0 lb up to 10.0–10.5 lb) on the horizontal axis and the percentage of babies born in each category on the vertical axis. Which statement is a valid claim that could be made using the data in the graph?

Stabilizing selection is occurring because the average is favored.

There are different types of natural selection: sexual selection, stabilizing selection, directional selection, frequency-dependent selection, and disruptive selection.

Balancing selection, also called stabilizing selection, eliminates individuals with extreme traits and favors individuals that exhibit medium-range characteristics, which are the ones that survive.

In the example given, babies with very low or very high birth weight are less likely to survive. Stabilizing selection eliminates individuals with extreme weights and favors individuals of intermediate weight, which are the ones that survive.

The information provided clearly shows that the average is favoured: babies of average weight are the ones selected for.


Concluding Part of the question

A graph entitled Percentage of Babies born at Different Weights has weight in pounds on the horizontal axis, and percentage on the vertical axis. A small percentage of babies are born at the low and higher birth weights, and a greater amount are around 7 to 8 pounds.

Which statement is a valid claim that could be made using the data in the graph?

A) Directional selection is occurring because the graph favors an extreme.

B) Disruptive selection is occurring because the two extremes are favored.

C) Stabilizing selection is occurring because the average is favored.

D) Biodiversity variation is occurring because there is an increase in trait variation.

Stabilizing selection represents a form of natural selection in which the intermediate variants of a trait (that is, the non-extreme trait values) are the ones that ensure survival and the subsequent formation of a stable population with these average traits.

In this question, it is given through the data from the graph that babies with very low or very high weights are less likely to survive.

And this is logical: very-low-weight babies are likely to be weak, fragile, and susceptible to ending up on the wrong side of natural selection, while very-high-weight babies face a range of complications, including the fact that they must be squeezed through a very narrow birth canal, pushing their oversized bodies to their limits very early and likewise leaving them on the wrong side of natural selection.

Average-weight babies, on the other hand, experience none of these issues; they are usually healthy and thus have a high survival rate. So it is evident that the population of babies stabilizes around a particular non-extreme trait value (birth weight) through stabilizing selection.

Maximum entropy production principle in physics, chemistry and biology

The tendency of the entropy to a maximum as an isolated system is relaxed to the equilibrium (the second law of thermodynamics) has been known since the mid-19th century. However, independent theoretical and applied studies, which suggested the maximization of the entropy production during nonequilibrium processes (the so-called maximum entropy production principle, MEPP), appeared in the 20th century. Publications on this topic were fragmented and different research teams, which were concerned with this principle, were unaware of studies performed by other scientists. As a result, the recognition and the use of MEPP by a wider circle of researchers were considerably delayed. The objectives of the present review consist in summation and analysis of studies dealing with MEPP. The first part of the review is concerned with the thermodynamic and statistical basis of the principle (including the relationship of MEPP with the second law of thermodynamics and Prigogine's principle). Various existing applications of the principle to analysis of nonequilibrium systems will be discussed in the second part.
