LESSON 1
THE BASIC CONCEPTS OF THERMODYNAMICS. THE FIRST LAW OF THERMODYNAMICS. THERMOCHEMISTRY. DETERMINATION OF THE HEAT EFFECT OF NEUTRALIZATION REACTION.
INTRODUCTION: It is a well-known fact that most physical and chemical changes are accompanied by energy changes. These energy changes may take place in the form of heat, light, work, electrical energy, etc. All these forms of energy are convertible into one another and hence are related to each other quantitatively.
The branch of science, which deals with the study of different forms of energy and the quantitative relationships between them is known as thermodynamics.
When we confine our study to chemical changes and chemical substances only, the restricted branch of thermodynamics is known as Chemical Thermodynamics.
The complete study of thermodynamics is based upon three generalizations called the First, Second and Third laws of thermodynamics. These laws have been arrived at purely on the basis of human experience and there is no theoretical proof for any of them. However, their validity is supported by the fact that nothing contrary to these laws has been found so far and nothing contrary is expected.
The importance of thermodynamics lies in the following two facts:
(i) It helps us to predict whether any given chemical reaction can occur under the given set of conditions.
(ii) It helps in predicting the extent of reaction before the equilibrium is attained.
The limitations of thermodynamics, i.e. where it fails to give any information, are as follows:
(i) It helps to predict the feasibility of a process but does not tell anything about the rate at which the process takes place.
(ii) It deals only with the initial and final states of a system but does not tell anything about the mechanism of the process (i.e. the path followed by the process).
(iii) It deals with properties like temperature, pressure, etc. of matter in bulk but does not tell anything about the individual atoms and molecules.
Some basic terms and concepts commonly used in thermodynamics are briefly explained below:
1. System and Surroundings. The part of the universe chosen for thermodynamic consideration (i.e. to study the effect of temperature, pressure, etc.) is called a system.
The remaining portion of the universe, excluding the system, is called the surroundings.
A system usually consists of a definite amount of one or more substances and is separated from the surroundings by a real or imaginary boundary through which matter and energy can flow from the system to the surroundings or vice versa.
2. Open, closed and isolated systems.
(a) Open system. A system is said to be an open system if it can exchange both matter and energy with the surroundings. For example, if some water is kept in an open vessel or if some reaction is allowed to take place in an open vessel, exchange of both matter and energy takes place between the system and the surroundings.
Animals and plants are open systems from the thermodynamic point of view.
(b) Closed system. If a system can exchange only energy with the surroundings but not matter, it is called a closed system. For example, if some water is placed in a closed metallic vessel or if some reaction is allowed to take place in a cylinder enclosed by a piston, then as the vessel is closed, no exchange of matter between the system and the surroundings can take place. However, as the vessel has conducting walls, exchange of energy can take place between the system and the surroundings.
In nonrelativistic classical mechanics, a closed system is a physical system which does not exchange any matter with its surroundings and is not subject to any force whose source is external to the system. A closed system in classical mechanics would be considered an isolated system in thermodynamics.
In thermodynamics, a closed system can exchange energy (as heat or work), but not matter, with its surroundings. An isolated system cannot exchange any heat, work, or matter with the surroundings, while an open system can exchange all of heat, work and matter. For a simple system, with only one type of particle (atom or molecule), a closed system amounts to a constant number of particles. However, for systems which are undergoing a chemical reaction, there may be all sorts of molecules being generated and destroyed by the reaction process. In this case, the fact that the system is closed is expressed by stating that the total number of each elemental atom is conserved, no matter what kind of molecule it may be a part of. Mathematically:

∑j aij Nj = bi

where Nj is the number of j-type molecules, aij is the number of atoms of element i in molecule j, and bi is the total number of atoms of element i in the system, which remains constant, since the system is closed. There will be one such equation for each different element in the system.
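The element-balance condition above can be checked mechanically. The following Python sketch verifies that each bi stays constant across a reaction in a closed system; the species, composition matrix and molecule counts are illustrative assumptions, not taken from the text:

```python
def element_totals(counts, composition):
    """Total atoms of each element: b_i = sum_j a_ij * N_j."""
    totals = {}
    for molecule, n in counts.items():          # N_j for each molecule j
        for element, a in composition[molecule].items():  # a_ij entries
            totals[element] = totals.get(element, 0) + a * n
    return totals

# Composition matrix a_ij for the species in 2 H2 + O2 -> 2 H2O (assumed example)
composition = {"H2": {"H": 2}, "O2": {"O": 2}, "H2O": {"H": 2, "O": 1}}

before = {"H2": 2, "O2": 1, "H2O": 0}   # molecule counts before the reaction
after  = {"H2": 0, "O2": 0, "H2O": 2}   # counts after the reaction runs

# Molecules changed, but every elemental atom count b_i is conserved
assert element_totals(before, composition) == element_totals(after, composition)
print(element_totals(after, composition))  # {'H': 4, 'O': 2}
```

Molecule counts change freely during the reaction; only the per-element totals are constrained, which is exactly what "closed" means here.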
If the reaction is exothermic, heat is given by the system to the surroundings. If the reaction is endothermic, heat is given by the surroundings to the system. Further, if the reaction is accompanied by a decrease in volume, mechanical work is done by the surroundings on the system, and if the reaction is accompanied by an increase in volume, mechanical work is done by the system on the surroundings. As mechanical work is also a type of energy, the movement of the piston in or out also amounts to an exchange of energy between the system and the surroundings.
(c) Isolated system. If a system can exchange neither matter nor energy with the surroundings, it is called an isolated system. For example, if water is placed in a vessel which is closed as well as insulated, no exchange of matter or energy can take place between the system and the surroundings. This constitutes an isolated system.
In the natural sciences, an isolated system is a physical system without any external exchange – neither matter nor energy can enter or exit, but can only move around inside. Truly isolated systems cannot exist in nature, other than possibly the universe itself, and they are thus hypothetical concepts only. An isolated system obeys, in particular, the conservation laws: its total energy and mass stay constant.
This can be contrasted with a closed system, which can exchange energy with its surroundings but not matter, and with an open system, which can exchange both matter and energy. The only truly isolated system is the universe as a whole because, for example, there is always gravity between a system with mass and masses elsewhere. Real systems may behave nearly as an isolated system for finite (possibly very long) times.
The concept of an isolated system can serve as a useful model approximating many real-world situations. It is an acceptable idealization used in constructing mathematical models of certain natural phenomena; e.g., the planets in our solar system, and the proton and electron in a hydrogen atom, are often treated as isolated systems. But from time to time, a hydrogen atom will interact with electromagnetic radiation and go to an excited state.
In the attempt to justify the postulate of entropy increase in the second law of thermodynamics, Boltzmann's H-theorem used equations which assumed a system (for example, a gas) was isolated: all the mechanical degrees of freedom could be specified, treating the walls simply as mirror boundary conditions. This inevitably led to Loschmidt's paradox. However, if the stochastic behavior of the molecules in actual walls is considered, along with the randomizing effect of the ambient, background thermal radiation, Boltzmann's assumption of molecular chaos can be justified.
Tea placed in a thermos flask is an example of an isolated system, whereas tea placed in a closed steel tea-pot is an example of a closed system and tea placed in an open cup is an example of an open system.
A dynamical system is a concept in mathematics where a fixed rule describes the time dependence of a point in a geometrical space. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, and the number of fish each springtime in a lake.
At any given time a dynamical system has a state given by a set of real numbers (a vector) that can be represented by a point in an appropriate state space (a geometrical manifold). Small changes in the state of the system create small changes in the numbers. The evolution rule of the dynamical system is a fixed rule that describes what future states follow from the current state. The rule is deterministic; in other words, for a given time interval only one future state follows from the current state.
The concept of a dynamical system has its origins in Newtonian mechanics. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is an implicit relation that gives the state of the system for only a short time into the future. (The relation is either a differential equation, difference equation or other time scale.) To determine the state for all future times requires iterating the relation many times—each advancing time a small step. The iteration procedure is referred to as solving the system or integrating the system. Once the system can be solved, given an initial point it is possible to determine all its future positions, a collection of points known as a trajectory or orbit.
Before the advent of fast computing machines, finding an orbit required sophisticated mathematical techniques and could be accomplished only for a small class of dynamical systems. Numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system.
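The iteration procedure described above can be sketched in a few lines of Python. A simple harmonic oscillator stands in for the clock pendulum mentioned earlier; the evolution rule, step size and duration are illustrative assumptions:

```python
import math

def step(state, dt):
    """One Euler step of the rule x' = v, v' = -x (unit-frequency oscillator)."""
    x, v = state
    return (x + v * dt, v - x * dt)

dt = 0.001
state = (1.0, 0.0)              # initial point in the state space
orbit = [state]                 # the trajectory: the collection of visited states
for _ in range(int(2 * math.pi / dt)):   # iterate for roughly one period
    state = step(state, dt)
    orbit.append(state)

print(round(state[0], 2))       # back near the starting x of 1.0
```

Each iteration advances time a small step; repeating it many times "integrates the system" and traces out the orbit, exactly as the text describes.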
For simple dynamical systems, knowing the trajectory is often sufficient, but most dynamical systems are too complicated to be understood in terms of individual trajectories. The difficulties arise because:
· The systems studied may only be known approximately—the parameters of the system may not be known precisely or terms may be missing from the equations. The approximations used bring into question the validity or relevance of numerical solutions. To address these questions several notions of stability have been introduced in the study of dynamical systems, such as Lyapunov stability or structural stability. The stability of the dynamical system implies that there is a class of models or initial conditions for which the trajectories would be equivalent. The operation for comparing orbits to establish their equivalence changes with the different notions of stability.
· The type of trajectory may be more important than one particular trajectory. Some trajectories may be periodic, whereas others may wander through many different states of the system. Applications often require enumerating these classes or maintaining the system within one class. Classifying all possible trajectories has led to the qualitative study of dynamical systems, that is, properties that do not change under coordinate changes. Linear dynamical systems and systems that have two numbers describing a state are examples of dynamical systems where the possible classes of orbits are understood.
· The behavior of trajectories as a function of a parameter may be what is needed for an application. As a parameter is varied, the dynamical systems may have bifurcation points where the qualitative behavior of the dynamical system changes. For example, it may go from having only periodic motions to apparently erratic behavior, as in the transition to turbulence of a fluid.
· The trajectories of the system may appear erratic, as if random. In these cases it may be necessary to compute averages using one very long trajectory or many different trajectories. The averages are well defined for ergodic systems and a more detailed understanding has been worked out for hyperbolic systems. Understanding the probabilistic aspects of dynamical systems has helped establish the foundations of statistical mechanics and of chaos.
It was in the work of Poincaré that these dynamical systems themes developed.
Basic definitions
A dynamical system is a manifold M, called the phase (or state) space, endowed with a family of smooth evolution functions Φt that, for any element t ∈ T, the time, map a point of the phase space back into the phase space. The notion of smoothness changes with applications and the type of manifold. There are several choices for the set T. When T is taken to be the reals, the dynamical system is called a flow; and if T is restricted to the non-negative reals, then the dynamical system is a semi-flow. When T is taken to be the integers, it is a cascade or a map; and the restriction to the non-negative integers is a semi-cascade.
Examples
The evolution function Φt is often the solution of a differential equation of motion

ẋ = v(x).

The equation gives the time derivative, represented by the dot, of a trajectory x(t) on the phase space starting at some point x0. The vector field v(x) is a smooth function that at every point of the phase space M provides the velocity vector of the dynamical system at that point. (These vectors are not vectors in the phase space M, but in the tangent space TxM of the point x.) Given a smooth Φt, an autonomous vector field can be derived from it.
There is no need for higher order derivatives in the equation, nor for time dependence in v(x), because these can be eliminated by considering systems of higher dimensions. Other types of differential equations can be used to define the evolution rule:

G(x, ẋ) = 0

is an example of an equation that arises from the modeling of mechanical systems with complicated constraints.
The differential equations determining the evolution function Φt are often ordinary differential equations: in this case the phase space M is a finite-dimensional manifold. Many of the concepts in dynamical systems can be extended to infinite-dimensional manifolds—those that are locally Banach spaces—in which case the differential equations are partial differential equations. In the late 20th century the dynamical system perspective on partial differential equations started gaining popularity.
3. State of a system and State variables. The state of a system means the condition of the system, which is described in terms of certain observable (measurable) properties such as temperature (T), pressure (P), volume (V), etc. of the system. If any of these properties of the system changes, the system is said to be in a different state, i.e. the state of the system changes. That is why these properties of a system are called state variables.
A process is said to occur when the state of the system changes. The first and the last state of a system are called the initial state and the final state respectively.
4. State function. A physical quantity is said to be a state function if its value depends only upon the state of the system and does not depend upon the path by which this state has been attained. For example, a person standing on the roof of a five-storeyed building (i.e. at a particular height) has a fixed value of potential energy, irrespective of whether he reached there by stairs or by a lift. Thus the potential energy of the person is a state function.
On the other hand, the work done by the legs of the person to reach the same height is not the same in the two cases, i.e. whether he went by lift or by stairs. Hence work is not a state function. Instead, it is sometimes called a "path function". Alternatively, a physical quantity is said to be a state function if the change in its value during a process depends only upon the initial state and the final state of the system and does not depend upon the path by which this change has been brought about.
5. Extensive and Intensive properties. The various physical properties of a system may be divided into two main types:
(i) Extensive properties. These are the properties which depend upon the quantity of matter contained in the system. Common examples are mass, volume and heat capacity. Besides these, some other properties discussed later in this unit include internal energy, enthalpy, entropy, Gibbs free energy, etc. The total value of an extensive property is equal to the sum of the values for the separate parts into which the system may be divided for the sake of convenience.
(ii) Intensive properties. These are the properties which depend only upon the nature of the substance and are independent of the amount of the substance present in the system. Common examples are temperature, pressure, refractive index, viscosity, density, surface tension, specific heat, freezing point, boiling point, etc. It is because pressure and temperature are intensive properties, independent of the quantity of matter present in the system, that they are frequently used as variables to describe the state of a system.
It is of interest to note that an extensive property may become an intensive property on specifying a unit amount of the substance concerned. Thus mass and volume are extensive properties, but density and specific volume (i.e. mass per unit volume and volume per unit mass respectively) are intensive properties of the substance or the system. Similarly, heat capacity is an extensive property but specific heat is intensive (as will be discussed later).
6. Thermodynamic processes. A thermodynamic process is said to occur when the system changes from one state (initial state) to another (final state). The different processes commonly met with in the study of chemical thermodynamics are as follows:
(i) Isothermal process. When a process is carried out in such a manner that the temperature remains constant throughout the process, it is called an isothermal process. Obviously, when such a process occurs, heat can flow from the system to the surroundings and vice versa in order to keep the temperature of the system constant.
(ii) Adiabatic process. When a process is carried out in such a manner that no heat can flow from the system to the surroundings or vice versa, i.e. the system is completely insulated from the surroundings, it is called an adiabatic process.
(iii) Isochoric process. It is a process during which the volume of the system is kept constant.
(iv) Isobaric process. It is a process during which the pressure of the system is kept constant.
7. Reversible and Irreversible Processes. The various types of processes mentioned above may be carried out reversibly or irreversibly. These terms may be understood as follows:
In order to understand a reversible process, imagine a gas confined within a cylinder provided with a frictionless piston, upon which is piled some very fine sand. Suppose the pressure exerted by the gas on the piston is equal to the combined pressure exerted by the weight of the piston, the pile of sand and the atmosphere. Under these conditions, the piston does not move at all and a state of equilibrium is said to exist. Now if one particle of sand is removed, the gas will expand very slightly but the equilibrium will be restored almost immediately. Such a change is called an infinitesimal change. If the particle of sand is replaced, the gas will return to its original volume. By the continued removal of the particles of sand, the gas can be allowed to undergo a finite expansion, but each step in this expansion is an infinitesimal one and can be reversed by an infinitesimal change in the external conditions. At all times, the equilibrium is restored immediately.
A process carried out in the above manner is called a reversible process and may be defined as follows:
A reversible process is a process which is carried out infinitesimally slowly so that all changes occurring in the direct process can be exactly reversed and the system remains almost in a state of equilibrium with the surroundings at every stage of the process.
On the other hand, a process which does not meet the above requirements is called an irreversible process. In other words, an irreversible process is defined as a process which is not carried out infinitesimally slowly (instead, it is carried out rapidly) so that the successive steps of the direct process cannot be retraced and any change in the external conditions disturbs the equilibrium.
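The contrast between the two kinds of process can be made quantitative for an isothermal ideal-gas expansion: removing the sand grain by grain recovers the maximum (reversible) work, nRT ln(V2/V1), while dumping all the sand at once and letting the gas expand against the final pressure in one step gives only Pext(V2 – V1). These are the standard ideal-gas results; the numerical values below are illustrative:

```python
import math

n, R, T = 1.0, 8.314, 298.0     # amount (mol), gas constant (J/(mol*K)), temperature (K)
V1, V2 = 0.010, 0.020           # initial and final volumes, m^3

# Reversible path: infinitely many infinitesimal steps (sand removed grain by grain)
w_rev = n * R * T * math.log(V2 / V1)

# Irreversible path: external pressure dropped at once to the final value
P_ext = n * R * T / V2
w_irr = P_ext * (V2 - V1)

print(round(w_rev), round(w_irr))   # reversible work exceeds irreversible work
```

The reversible path delivers more work precisely because the system stays in (near) equilibrium at every stage, so the gas always pushes against the largest possible opposing pressure.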
8. Some thermodynamic quantities. A number of thermodynamic quantities appear during the study of thermodynamics, e.g. internal energy, heat, work, enthalpy, entropy, free energy, etc. The first three terms are briefly described below. The remaining terms will be discussed at appropriate places.
(a) Internal energy. It has already been mentioned that whenever some process (physical or chemical) occurs, it is usually accompanied by some energy change. The energy may appear in different forms such as heat, light, work, etc.
The evolution or absorption of energy in different processes clearly shows that every substance (or the system containing one or more substances) must be associated with some definite amount of energy, the actual value of which depends upon the nature of the substance and the conditions of temperature, pressure, volume and composition. It is the sum of different types of energies associated with atoms and molecules, such as electronic energy (Ee), nuclear energy (En), chemical bond energy (Ec), potential energy (Ep) and kinetic energy (Ek), which is further the sum of translational energy (Et), vibrational energy (Ev) and rotational energy (Er). Thus:
E = Ee + En + Ec + Ep + Ek
The energy thus stored within a substance (or a system) is called its internal energy and is usually denoted by the symbol "E".
No doubt every substance (or system) is associated with a definite amount of internal energy, but it is not possible to find its absolute value because it involves certain quantities which cannot be measured.
However, fortunately, it is not required to know the absolute value of the internal energy of a substance or a system. What is actually required in different processes is simply the change of internal energy when the reactants change into products or when a system changes from the initial state to the final state. This is easily measurable and is represented by ΔE.
Further, internal energy is a state function, i.e. it depends only upon the state of the system (i.e. conditions of temperature, pressure, etc.) and is independent of the method by which this state has been attained. For example, one mole of CO2 at 300 K and 1 atmosphere pressure will always have the same internal energy, irrespective of whether it has been brought to these conditions from 500 K and 5 atmospheres or from 1000 K and 10 atmospheres. Thus if the internal energy of a system in the initial state is E1 and in the final state it is E2, then the change of internal energy (ΔE) may be given by:
ΔE = E2 – E1
Similarly, in a chemical reaction, if ER is the internal energy of the reactants and EP is the internal energy of the products, then the energy change accompanying the process would be ΔE = EP – ER
Two more important points about the internal energy are as follows:
(i) The internal energy depends upon the quantity of the substance contained in the system. Hence it is an extensive property.
(ii) The internal energy of ideal gases is a function of temperature only. Hence in isothermal processes, as the temperature remains constant, there is no change in internal energy, i.e.
ΔE = 0
Sign of ΔE. Obviously, if E1 > E2 (or ER > EP), the extra energy possessed by the system in the initial state (or by the reactants) would be given out and ΔE will be negative according to the above equations.
Similarly, if E1 < E2 (or ER < EP), energy will be absorbed in the process and ΔE will be positive. Hence ΔE is negative if energy is evolved and ΔE is positive if energy is absorbed.
Units of E. The units of energy are ergs (in CGS units) or joules (in SI units):
1 joule = 10^7 ergs.
(b) Work. As learnt from lessons in Physics, work is said to have been done whenever the point of application of a force is displaced in the direction of the force. If F is the magnitude of the force and dl is the displacement of the point of application in the direction in which the force acts, then the work done is given by
w = F x dl
The above type of work is called mechanical work. However, there are many other forms of work, but in each of these forms:
Work done = [A generalized force] x [A generalized displacement]
Two main types of work used in thermodynamics are briefly described below:
(i) Electrical work. The generalised force is the E.M.F. and the generalised displacement is the quantity of electricity flowing through the circuit. Hence:
Electrical work done = E.M.F. x Quantity of electricity.
This type of work is involved in case of reactions involving ions.
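For an ionic (electrochemical) reaction, the quantity of electricity is the moles of electrons transferred times the Faraday constant. A minimal sketch; the cell EMF and electron count below are assumed illustrative values, not from the text:

```python
F = 96485.0            # Faraday constant, C/mol: charge of one mole of electrons
emf = 1.10             # volts; an assumed value, typical of a Daniell-type cell
z = 2                  # moles of electrons transferred per mole of reaction (assumed)

charge = z * F                   # quantity of electricity, in coulombs
w_electrical = emf * charge      # work = E.M.F. x quantity of electricity, in joules
print(round(w_electrical))       # 212267
```

Volts times coulombs gives joules, so the generalized force times generalized displacement rule lands directly in energy units.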
(ii) Work of expansion or pressure-volume work. This type of work is involved in systems consisting of gases. It is the most important form of work used in the study of thermodynamics. It is the work done when the gas expands or contracts against the external pressure (usually, the atmospheric pressure). It is a kind of mechanical work. The expression for such work may be derived as follows:
Consider a gas enclosed in a cylinder fitted with a frictionless piston.
Suppose:
Area of cross-section of the cylinder = a sq. cm
Pressure on the piston = P
(which is slightly less than the internal pressure of the gas, so that the gas can expand)
Distance through which the gas expands = dl cm
Then, as pressure is force per unit area, the force (f) acting on the piston will be
f = P x a
Work done by the gas = Force x Distance = f x dl = P x a x dl
But a x dl = dV, a small increase in the volume of the gas. Hence the small amount of work (dw) done by the gas can be written as
dw = P x dV
If the external pressure P against which the gas expands remains almost constant throughout the process, the above result may be written as
w = P (V2 – V1) = P ΔV
where ΔV = V2 – V1 is the total change in volume of the gas (or the system).
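In SI units (pressure in pascals, volume in cubic metres) the product P ΔV comes out directly in joules. A small illustrative calculation; the numbers are assumptions for the example, not data from the text:

```python
P = 101325.0              # external pressure, Pa (1 atm)
V1, V2 = 0.0010, 0.0035   # initial and final volumes, m^3

w = P * (V2 - V1)         # work done by the gas on the surroundings
print(round(w, 1))        # 253.3 (joules)
```

The same formula gives the work of compression when V2 < V1; the sign of ΔV then flips, matching the sign discussion that follows.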
If the external pressure (P) is slightly more than the pressure of the gas, the gas will contract, i.e. the work will be done by the surroundings on the system. However, the same formula applies for the work done.
It may be mentioned here that P is the external pressure and hence is sometimes written as Pext, so that
w = Pext x ΔV
Sign of w. According to the latest S.I. convention, w is taken as negative if work is done by the system, whereas it is taken as positive if work is done on the system. Thus for expansion, we write w = – P ΔV.
(c) Heat. Just as work is a mode of energy exchanged between the system and the surroundings as a result of the difference between the internal pressure of the gas and the external pressure, similarly heat is another mode of energy exchanged between the system and the surroundings as a result of the difference of temperature between them. It is usually represented by the letter "q".
Sign of "q". When heat is given by the system to the surroundings, it is given a negative sign.
When heat is absorbed by the system from the surroundings, it is given a positive sign.
Units of "q". Heat is usually measured in terms of "calories". A calorie is defined as the quantity of heat required to raise the temperature of one gram of water through 1°C (in the vicinity of 15°C).
In the SI system, heat is expressed in terms of joules. The two types of units are related to each other as under: 1 calorie = 4.184 joules
which means the same thing as:
1 joule = 0.2390 calorie
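The two conversions can be wrapped in a pair of helper functions (the function names are arbitrary):

```python
CAL_TO_J = 4.184   # 1 calorie = 4.184 joules

def cal_to_joule(cal):
    """Convert calories to joules."""
    return cal * CAL_TO_J

def joule_to_cal(joule):
    """Convert joules to calories."""
    return joule / CAL_TO_J

print(cal_to_joule(1))            # 4.184
print(round(joule_to_cal(1), 4))  # 0.239
```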
It may be noted that whereas internal energy is a state function, work and heat are not state functions because their values do not depend merely on the initial and final states but depend upon the path followed.
Difference between heat and work. When heat is supplied to a gas in a system, the molecules start moving faster with greater randomness in different directions. However, when work is done on the system, then initially the molecules start moving in the direction of the piston. Thus whereas heat is a random form of energy, work is an organized form of energy.
The first law of thermodynamics.
The first law of thermodynamics is simply the law of conservation of energy which states that:
"Energy can neither be created nor destroyed, although it may be converted from one form to another" or "The total energy of the universe (i.e. the system and the surroundings) remains constant, although it may undergo transformation from one form to the other."
Justification for the First Law of Thermodynamics. This law is purely a result of experience. There is no theoretical proof for it. However, the following observations support its validity:
(i) Whenever a certain quantity of some form of energy disappears, an exactly equivalent amount of some other form of energy must be produced. For example,
(а) In the operation of an electric fan, the electrical energy which is consumed is converted into mechanical work which moves the blades.
(b) The electrical energy supplied to а heater is converted into heat whereas electrical energy passing through the filament of а bulb is converted into light.
(c) Water can be decomposed by an electric current into gaseous hydrogen and oxygen. It is found that 286.2 kJ of electrical energy is used to decompose 1 mole of water.
H2O(l) + 286.2 kJ (electrical energy) = H2(g) + ½ O2(g)
This energy must have been stored in hydrogen and oxygen, since the same amount of energy in the form of heat is released when 1 mole of liquid water is obtained from gaseous hydrogen and oxygen.
H2(g) + ½ O2(g) = H2O(l) + 286.2 kJ (heat energy)
Thus 286.2 kJ of electrical energy which was supplied to the system (substance under observation) has been recovered later as heat energy i.e.
Electrical energy supplied = Heat energy produced
Thus energy is conserved in one form or the other though one form of energy may change into the other form.
(ii) It is impossible to construct a perpetual motion machine, i.e. a machine which would produce work continuously without consuming energy.
(iii) There is an exact equivalence between heat and mechanical work i.е. for every 4.184 joules of work done, 1 calorie of heat is produced and vice versa.
The above three observations are also sometimes taken as alternate statements of the first law of thermodynamics.
Mathematical formulation of the first law of thermodynamics (i.e. relationship between internal energy, work and heat).
The internal energy of a system can be increased in two ways:
(i) by supplying heat to the system
(ii) by doing work on the system.
Suppose the initial internal energy of the system = E1
If it absorbs heat q, its internal energy will become = E1 + q
If further work w is done on the system, the internal energy will further increase and become = E1 + q + w. Let us call this final internal energy E2. Then
E2 = E1 + q + w
or E2 – E1 = q + w
or ΔE = q + w
This equation is the mathematical formulation of the first law of thermodynamics.
If the work done is the work of expansion, then w = – P ΔV, where ΔV is the change in volume and P is the external pressure. The above equation can then be written as
ΔE = q – P ΔV
or q = ΔE + P ΔV
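The bookkeeping in ΔE = q + w is easy to mis-sign. A short sketch using the SI convention stated above (q positive when heat is absorbed by the system, w positive when work is done on the system, w = –P ΔV for expansion work); the numerical values are illustrative:

```python
def delta_E(q, P, dV):
    """Internal energy change when the only work is pressure-volume work."""
    w = -P * dV          # expansion (dV > 0) means the system does work: w < 0
    return q + w         # first law: delta E = q + w

# System absorbs 500 J of heat while expanding 0.002 m^3 against 101325 Pa
dE = delta_E(500.0, 101325.0, 0.002)
print(round(dE, 2))      # about 297.35 J
```

Part of the absorbed heat leaves again as expansion work, so the internal energy rises by less than the 500 J supplied.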
Two interesting results follow from the mathematical formulation of the first law of thermodynamics, as under:
(i) Neither q nor w is a state function, yet the quantity q + w (= ΔE) is a state function (because ΔE is a state function).
(ii) For an ideal gas undergoing an isothermal change, ΔE = 0. Hence q = – w, i.e. the heat absorbed by the system is equal to the work done by the system.
Internal energy is a state function — a deduction from the First Law of Thermodynamics. Suppose the internal energy of a system under some conditions of temperature, pressure and volume is EA (state A). Now suppose the conditions are changed so that the internal energy is EB (state B). Then if internal energy is a state function, the difference ΔE = EB – EA must be the same irrespective of the path from A to B. If not, then suppose in going from A to B by path I, the internal energy increases by ΔE, but on returning from B to A by path II, the internal energy decreases by ΔE′. If ΔE > ΔE′, some energy has been created, and if ΔE < ΔE′, some energy has been destroyed, though we have returned to the same conditions. This is against the first law of thermodynamics. Hence ΔE must be equal to ΔE′, i.e. internal energy is a state function.
INTERNAL ENERGY CHANGE
The term “internal energy” has already been explained. Another important aspect (definition) of internal energy change follows from the first law of thermodynamics, according to which
q = DЕ + P DV
If the process is carried out at constant volume, DV = 0. The above equation then reduces to the form
DЕ = qv
(v indicating constant volume).
Hence internal energy change is the heat absorbed or evolved at constant volume.
It may be mentioned further that as DE is а state function, therefore qv is also а state function.
The first law of thermodynamics is seen by many as the foundation of the concept of conservation of energy. It basically says that the energy that goes into a system cannot be lost along the way, but has to be used to do something: in this case, either to change the internal energy or to perform work.
Taken in this view, the first law of thermodynamics is one of the most far-reaching scientific concepts ever discovered.
Measurement of Internal energy change. The internal energy change is measured experimentally using an apparatus called а bomb calorimeter. It consists of а strong vessel (called the ‘bomb’) which can stand high pressures. It is surrounded by а bigger vessel which contains water and is insulated. А thermometer and а stirrer are suspended in it. The procedure consists of the following two steps:
(i) Combustion of known weight of а compound whose heat of combustion is known. А known wt. of the compound is taken in the platinum cup. Oxygen under high pressure is introduced into the bomb. А current is passed through the filament immersed in the compound. Combustion of the compound takes place. The increase in the temperature of water is noted. From this the heat capacity of the apparatus (i.е. heat absorbed per degree rise of temperature) can be calculated.
(ii) Combustion of а known weight of the experimental compound. The experiment is repeated as in step (i) with the experimental compound, and the heat evolved is found from the observed rise in temperature and the heat capacity determined in step (i).
In the above case, as the reaction is carried out in а closed vessel, therefore heat evolved is the heat of combustion at constant volume and hence is equal to the internal energy change.
The value of DЕ can be calculated using the formula
DЕ = Q × Dt × M/m
where Q = heat capacity of the calorimeter;
Dt = rise in temperature;
m = mass of the substance taken;
М = molecular mass of the substance
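The formula above can be sketched in Python; the run shown is hypothetical, chosen only to illustrate the arithmetic.

```python
def delta_E(Q, dt, m, M):
    """DE = Q * Dt * M / m  (heat of combustion per mole at constant volume).

    Q  : heat capacity of the calorimeter, kJ/K
    dt : observed rise in temperature, K
    m  : mass of substance burnt, g
    M  : molecular mass of the substance, g/mol
    """
    return Q * dt * M / m

# Hypothetical run: a calorimeter of heat capacity 10.0 kJ/K, burning
# 0.500 g of a compound of molecular mass 60.0 g/mol, warms by 2.50 K.
print(delta_E(10.0, 2.50, 0.500, 60.0))  # 3000.0 kJ/mol evolved
```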
ENTHALPY OR HEAT CONTENT
If а process is carried out at constant pressure (as is usually the case, because most of the reactions are studied in vessels open to the atmosphere or if а system consists of а gas confined in а cylinder fitted with а piston, the external pressure acting on the piston is the atmospheric pressure), the work of expansion is given by:
w = – PDV
where DV is the increase in volume and P is the constant pressure.
According to the first law of thermodynamics, we know that: q = DE – w
where q is the heat absorbed by the system, DE is the increase in internal energy of the system and w is the work done on the system.
Under conditions of constant pressure, putting w = – PDV and representing the heat absorbed by qp, we get:
qp = DE + PDV
Suppose that when the system absorbs qp joules of heat, its internal energy increases from E1 to E2 and its volume from V1 to V2.
Then we have:
DE = E2 – E1
and DV = V2 – V1
Putting these values in equation above, we get:
qp = (Е2 – E1) + Р(V2 – V1)
Or qp = (Е2 + РV2) – (E1 + РV1)
Now as Е, Р and V are the functions of state, therefore the quantity Е + PV must also be а state function. The thermodynamic quantity Е + PV is called the heat content or enthalpy of the system and is represented by the symbol “Н” i.е. the enthalpy may be defined mathematically by the equation:
Н=Е + PV
Thus if H2 is the enthalpy of the system in the final state and Н1 is the value in the initial state, then
Н2 = Е2 + PV2
and H1 = E1 + PV1
Putting these values in equation, we get:
qp = Н2 – H1
or qp = DН
where DН = Н2 – H1 is the enthalpy change of the system.
Hence enthalpy change of а system is equal to the heat absorbed or evolved by the system at constant pressure.
It may be remembered that as most of the reactions are carried out at constant pressure (i.е. in the open vessels), the measured value of the heat evolved or absorbed is the enthalpy change.
Further, putting the value of qp from equation, we get:
DН = DE + PDV
Hence the enthalpy change accompanying а process may also be defined as the sum of the increase in internal energy of the system and the pressure-volume work done, i.е. the work of expansion.
Physical concept of enthalpy or heat content. In the above discussion, the enthalpy has been defined by the mathematical expression, H=E+PV. Let us try to understand what this quantity really is.
It has been described earlier that every substance or system has some definite energy stored in it, called the internal energy. This energy may be of many kinds.
The energy stored within the substance or the system that is available for conversion into heat is called the heat content or enthalpy of the substance or the system.
Like internal energy, the absolute value of the heat content or enthalpy of а substance or а system cannot be measured, and fortunately this is not required either. In thermodynamic processes, we are concerned only with the changes in enthalpy (DН), which can be easily measured experimentally. Further, it may be mentioned here that as Е and V are extensive properties, the enthalpy is also an extensive property.
Relationship between the heat of reaction at constant pressure and that at constant volume.
It has already been discussed that
qp = DН and qv = DЕ
It has also been derived already that at constant pressure
DН = DE + PDV
where DV is the change in volume, so the equation can be rewritten as:
DН = DE + P(V2 – V1) = DE + (PV2 – PV1)
where V1 is the initial volume and V2 is the final volume of the system.
But for ideal gases, РV = nRT, so that we have
РV1 = n1RT
And РV2 = n2RT
where n1 is the number of moles of the gaseous reactants and n2 is the number of moles of the gaseous products.
Substituting these values in the equation above, we get:
DН = DE + (n2RT – n1RT) = DE + (n2 – n1) RT
or DН = DE + Dng RT
where Dng = n2 – n1 is the difference between the number of moles of the gaseous products and those of the gaseous reactants.
Putting qp = DН and qv = DЕ, this becomes
qp = qv + Dng RT
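The relation qp = qv + Dng RT can be sketched in Python; the DE value used below is illustrative, not a measured datum.

```python
R = 8.314e-3  # gas constant in kJ/(K·mol)

def dH_from_dE(dE, dn_gas, T=298.0):
    """DH = DE + Dng * R * T, with DE and DH in kJ/mol and T in K."""
    return dE + dn_gas * R * T

# Example: N2(g) + 3 H2(g) = 2 NH3(g), so Dng = 2 - 4 = -2.
# The DE below is an illustrative number, not a measured value.
print(round(dH_from_dE(-82.6, -2), 1))  # -87.6
```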
Conditions under which qp = qv or DН = DE
(i) When the reaction is carried out in а closed vessel so that the volume remains constant, i.e. DV = 0.
(ii) When reaction involves only solids or liquids or solutions but no gaseous reactant or product. This is because the volume changes of the solids and liquids during а chemical reaction are negligible.
(iii) When the reaction involves gaseous reactants and products but their numbers of moles are equal (i.е. np = nr), е.g. in the reactions
H2(g) + Cl2(g) = 2НС1(g)
C(s) + O2(g) = CO2 (g)
Thus qp is different from qv only in those reactions which involve gaseous reactants and products and (np)gaseous ≠ (nr)gaseous.
Applications of the first law of thermodynamics.
In the calculation of the enthalpies of reactions. It has already been mentioned that enthalpy (Н) is а state function. Hence, the enthalpy change (DН) of а reaction is also а state function i.е. it depends only upon the nature of the initial reactants and that of the final products. This forms the basis of Hess’s law which states as follows:
The total amount of heat evolved or absorbed in a reaction depends only upon the nature of the initial reactants and that of the final products and does not depend upon the path by which this change is brought about. In other words, the total amount of heat evolved or absorbed in a reaction is same whether the reaction takes place in one step or a number of steps.
Before we discuss how Hess’s law can be applied in the calculation of enthalpies of reactions, let us first define а few terms to be used therein.
(i) Enthalpy of reaction. Enthalpy of reaction is defined as the amount of heat evolved or absorbed when the number of moles of the reactants as represented by the balanced equation have completely reacted.
Since its value depends upon the conditions of temperature and pressure, therefore the values are reported under standard conditions which are 1 atm pressure and 298 К. The enthalpy change under these conditions is called the standard enthalpy change and is usually represented by DН0.
(ii) Enthalpy of formation. The standard enthalpy of formation (usually represented by DН0f) is defined as the enthalpy change that takes place when one mole of the substance under standard conditions is formed from its constituent elements in their most stable form and in their standard state.
The standard state of an element is the pure element in its stable form or more common form under standard conditions of 1 atm and 298 К. For example, the standard states of oxygen, carbon, mercury and sulphur are oxygen gas, graphite, liquid mercury and rhombic sulphur respectively at 1 atm pressure and 298 К.
The enthalpy of formation of any element in the standard state is taken as ‘zero’. Thus the standard heat of formation of graphite is 0.0 whereas that of diamond is not zero but equal to 1.896 kJ/mol.
(iii) Enthalpy of combustion. The enthalpy of combustion of a substance is defined as the amount of heat evolved when 1 mole of the substance is completely burnt or oxidized.
Calculation of enthalpies of reactions. The enthalpies of reactions are usually calculated from the enthalpies of formation using the following relationship:
DНreaction = Σ DН0(Products) – Σ DН0(Reactants);
For elementary substances: DН0(formation) = 0
In using this formula, the standard enthalpies of formation of elements are taken as zero, as already mentioned.
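The Hess's-law calculation above can be sketched as follows; the standard enthalpies of formation used are commonly tabulated values in kJ/mol.

```python
# Standard enthalpies of formation (kJ/mol); elements in their standard
# states have DHf0 = 0, as stated above.
dHf0 = {"CH4(g)": -74.8, "O2(g)": 0.0, "CO2(g)": -393.5, "H2O(l)": -285.8}

def dH_reaction(reactants, products):
    """DHreaction = sum over products - sum over reactants.

    reactants/products map a species to its stoichiometric coefficient.
    """
    side = lambda d: sum(n * dHf0[sp] for sp, n in d.items())
    return side(products) - side(reactants)

# CH4(g) + 2 O2(g) = CO2(g) + 2 H2O(l)
print(round(dH_reaction({"CH4(g)": 1, "O2(g)": 2},
                        {"CO2(g)": 1, "H2O(l)": 2}), 1))  # -890.3 kJ/mol
```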
(2) In the calculation of bond energies. We know that energy is evolved when а bond is formed and energy is required for the dissociation of а bond. Hence bond energy is defined as follows:
Bond energy is the amount of energy released when one mole of bonds are formed from the isolated atoms in the gaseous state or the amount of energy required to dissociate one mole of bonds present between the atoms in the gaseous molecules.
For diatomic molecules like Н2, О2, N2, Cl2, HCl, HF etc., the bond energies are equal to their dissociation energies. For polyatomic molecules, the bond energy of а particular bond is not the same when present in different types of compounds (е.g. the bond energy of С – Cl is not the same in СН3Сl, СН2Сl2, СНСl3 and ССl4). In fact, the bond energy of а particular type of bond is not the same even in the same compound (е.g. in СН4, the bond energies for the first, second, third and fourth С – Н bonds are not equal, their values being + 425, + 470, + 416 and + 335 kJ/mol respectively). Hence in such cases, an average value is taken.
Thus average С – Н bond energy = (425 + 470 + 416 + 335) / 4 = 1646/4 = 411.5 kJ/mol
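The averaging above, done in Python with the four values from the text:

```python
# Successive C–H bond energies in CH4 (kJ/mol), taken from the text above.
bond_energies = [425, 470, 416, 335]
average = sum(bond_energies) / len(bond_energies)
print(average)  # 411.5
```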
Bond energy usually means bond dissociation energy.
Limitations of the first law of thermodynamics – introduction of the second law of thermodynamics.
А major limitation of the first law of thermodynamics is that it merely indicates that in any process there is an exact equivalence between the various forms of energy involved, but it provides no information concerning the spontaneity or feasibility of the process, i.e. whether the process is possible or not. For example, the first law does not indicate whether heat can flow from а cold end to а hot end or not. All that it tells is that if this process occurred, the heat energy gained by one end would be exactly equal to that lost by the other end. Similarly, the first law does not tell whether а gas can diffuse from low pressure to high pressure, or whether water can itself run uphill, etc.
The answers to the above questions are provided by the second law of thermodynamics. However, before we take up the different statements of the second law of thermodynamics, it is important to know what we understand by а “spontaneous process” and also to introduce two more thermodynamic quantities, namely “Entropy” and “Gibbs free energy”.
Spontaneous and non-spontaneous processes. To understand what we mean by а spontaneous process, let us consider the following two processes:
1. Dissolution of sugar in water at room temperature.
2. Burning of coal in air or oxygen.
The first process takes place by itself, although it may be slow. The second process cannot take place by itself. It needs initiation i.е. we have to bring а flame near the coal to start its burning. But once it starts burning, it goes on by itself without the help of any external agency. Both the above processes are spontaneous processes. Hence а spontaneous process may be defined as follows:
А process which, under the given conditions, may take place by itself or on initiation, irrespective of its rate, is called а spontaneous process. In other words, а process which can take place by itself, or has an urge or tendency to take place, is called а spontaneous process; to sum up, а spontaneous process is simply а process which is feasible.
It may be noted carefully that а spontaneous process does not mean that the process is instantaneous. In fact, the rate of the process may vary from extremely slow to extremely fast.
For а clearer understanding, а few more examples of spontaneous processes are as follows:
А. Examples of processes which take place by themselves:
(i) Dissolution of common salt in water.
(ii) Evaporation of water in an open vessel.
(iii) Flow of heat from hot end to cold end or from а hot body to а cold body, е.g. cooling down of а cup of tea.
(iv) Flow of water down а hill.
(v) Combination of nitric oxide and oxygen to form nitrogen dioxide.
2 NO (g) + O2 (g) = 2NO2 (g)
(vi) Combination of H2 and I2 to form HI
H2(g) + I2 (g) = 2HI(g)
В. Examples of processes which take place on initiation:
(i) Lighting of а candle involving burning of wax (initiated by ignition),
(ii) Heating of calcium carbonate to give calcium oxide and carbon dioxide (initiated by heat).
СаСО3 (s) = СаО (s) + СО2 (g)
(iii) Combination of hydrogen and oxygen to form water when initiated by passing an electric spark.
H2(g) + ½ O2 (g) = H2O(l)
(iv) Reaction between methane and oxygen to form carbon dioxide and water (initiated by ignition)
СН4 (g) + 2О2(g) = СО2(g) + 2Н2О(l)
On the other hand, а process which can neither take place by itself nor on initiation is called а non-spontaneous process.
А few examples of the non-spontaneous processes in everyday life are as follows:
(i) Flow of water up а hill.
(ii) Flow of heat from а cold body to а hot body.
(iii) Diffusion of gas from low pressure to а high pressure.
(iv) Dissolution of sand in water.
The driving force for а spontaneous process. The force which is responsible for the spontaneity of а process is called the driving force.
(1) Tendency for minimum energy. It is а common observation that in order to acquire maximum stability, every system tends to have minimum energy. For example,
(i) А stone lying at а height has а tendency to fall down so as to have minimum potential energy.
(ii) Water flows down а hill to have minimum energy.
(iii) А wound watch spring has tendency to unwind itself to decrease its energy to minimum.
(iv) Heat flows from hot body to cold body so that heat content of the hot body becomes minimum.
Thus all the above processes are spontaneous because they have а tendency to acquire minimum energy.
Again, let us consider the following exothermic reactions, all of which are spontaneous:
(i) H2(g) + ½ O2 (g) = H2O(l); DН0f = – 286.2 kJ/mol
(ii) N2(g) + 3H2 (g) = 2NH3(g); DН0 = – 92.4 kJ
(iii) C(s) + O2 (g) = CO2(g); DН0f = – 394 kJ/mol
All these reactions are accompanied by evolution of heat. In other words, the heat content of the products is less than that of the reactants. Thus, again we may conclude that these reactions are spontaneous because they are accompanied by а decrease of energy (i.е. they have а negative value of the enthalpy change DН).
Hence one may conclude that а tendency to attain minimum energy, i.е. а negative value of the enthalpy change DН, might be responsible for а process or а reaction being spontaneous or feasible.
Limitations of the criterion for minimum energy. The above criterion fails to explain the following:
1. A number of reactions are known which are endothermic, i.e. for which DН is positive, but which are still spontaneous, е.g.
(i) Evaporation of water or melting of ice. It takes place by absorption of heat from the surroundings.
H2O(l) = Н2О(g); DН = + 44 kJ/mol
Н2О (s) = Н2О (l); DН = + 5.86 kJ/mol
(ii) Dissolution of salts like NH4Cl, КСl etc.
NH4Cl(s) + aq = NH4+ (aq) + Сl– (aq); DН = + 15.1 kJ/mol
(iii) Decomposition of calcium carbonate on heating
CaCO3 (s) = СаО (s) + СО2 (g); DН = + 177.8 kJ/mol
(iv) Decomposition of mercuric oxide on heating
2HgO(s) = 2Hg (l) + O2(g); DН = + 90.8 kJ/mol
2. A number of reactions are known for which DН is zero but which are still spontaneous, е.g.
СН3СООН(l) + С2Н5ОН(l) = CH3COOC2H5 (l) + Н2О (l)
3. Even those reactions for which DН is negative rarely proceed to completion, even though DН remains negative throughout.
4. Reversible reactions also occur. For example, the reaction
H2(g) + I2(g) = 2HI (g), for which DН is positive, and the reverse reaction,
2HI (g) = H2(g) + I2(g), for which DН is negative, both occur, i.е. both are spontaneous.
Hence it may be concluded that the energy factor or enthalpy factor (i.е. DН) cannot be the sole criterion for predicting the spontaneity or the feasibility of а process. Thus some other factor must also be involved. This factor is the tendency for maximum randomness, as explained below:
(2) Tendency for maximum randomness. Let us consider а process which is spontaneous but for which DН = 0. Since for such а process the energy factor has no role to play, we shall be able to find out the other factor which makes the process spontaneous. А simple case of such а process is the mixing of two gases which do not react chemically. Suppose the two gases are enclosed in bulbs А and В connected to each other by а tube and kept separated by а stop-cock. Now if the stopcock is opened, the two gases mix completely.
The gases which were confined to bulbs А and В separately are no longer in order. Thus, а disorder has come in or in other words, the randomness of the system has increased.
Another excellent example of а spontaneous process for which DН = 0 is the spreading of а drop of ink in а beaker filled with water.
Thus it may be concluded that the second factor, which is responsible for the spontaneity of а process is the tendency to acquire maximum randomness.
This factor helps to explain the spontaneity of the endothermic processes as follows:
(i) Evaporation of water takes place because the gaseous water molecules are more random than the liquid water molecules. In other words, the process is spontaneous because it is accompanied by increase of randomness. Similarly, melting of ice is а spontaneous process because liquid state is more random than the solid state.
(ii) Dissolution of ammonium chloride is spontaneous because in the solid, the ions are fixed but when they go into the aqueous solution, they are free to move about. In other words, the process is accompanied by an increase of randomness.
(iii) Decomposition of solid calcium carbonate is spontaneous because the gaseous CO2 produced is more random than the solid CaCO3.
(iv) Decomposition of solid mercuric oxide is spontaneous because the liquid mercury and the gaseous oxygen formed are more random than the solid HgO.
Limitations of the criterion for maximum randomness. It is important to mention here that just as the energy factor (DН) cannot be the sole criterion for determining the spontaneity of а process, similarly the randomness factor also cannot be the sole criterion for the spontaneity of а process. This is obvious from the fact that if the randomness factor were the only criterion, then the processes like liquefaction of а gas or solidification of а liquid would not have been feasible, since these were accompanied by decrease of randomness.
Thus it may be concluded that the overall tendency for а process to be spontaneous will depend upon both the factors.
Overall tendency as the driving force for а process. As mentioned above, the overall tendency for а process to occur depends upon the resultant of the following two tendencies:
(i) Tendency for minimum energy
(ii) Tendency for maximum randomness.
The resultant of the above two tendencies, which gives the overall tendency for а process to occur, is called the driving force of the process.
To understand the conditions under which the process will be spontaneous or non-spontaneous, let us consider а hypothetical process:
А = В
Suppose ‘Е’ represents the tendency for minimum energy.
‘R’ represents the tendency for maximum randomness.
‘D’ represents the overall tendency (i.e. the driving force) which is the resultant of Е and R.
Then the following different possibilities arise:
Type I. When the net driving force is in the forward direction.
(i) Both E and R favour the forward process. The net driving force is very large and favours the forward process.
(ii) E favours and R opposes but Е > R so that the net driving force, though small, favours the forward process.
(iii) E opposes and R favours but R >E, so that the net driving force is again in the forward direction.
In all the above three cases, since the driving force is in the forward direction, therefore, under any of the above conditions, the process will be spontaneous. Further, it may be noted that whereas processes (i) and (ii) are exothermic, the process (iii) is endothermic.
Type II. When the net driving force is in the backward direction.
(i) Both E and R oppose the forward process.
(ii) E favours and R opposes, but R > E.
(iii) E opposes and R favours, but E > R.
Type III. When the net driving force is zero.
(iv) E favours and R opposes, but E = R.
(v) E opposes and R favours, but E = R.
In all the above five cases, (Type II and Type III), the process is non-spontaneous. Further whereas processes (i), (iii) and (v) are endothermic, the processes (ii) and (iv) are exothermic.
In the light of the above discussion, let us now explain the spontaneity of а few processes, е.g.
(i) Evaporation of water.
H2O (l) = H2O(g); DН= + 44.0 kJ/mol
In this process, Е opposes (process being endothermic), R favours (because gas is more random than liquid). Since the process is known to be spontaneous, hence R must be greater than Е (R > Е).
(ii) Dissolution of NН4Cl in water.
NH4Cl(s) + aq = NH4+ (aq) + Cl– (aq); DН = + 15.1 kJ/mol
In this process, Е opposes and R favours. Here again, the spontaneity of the process is explained by suggesting that R > Е.
(iii) Reaction between H2 and O2 to form H2О.
H2(g) + ½ O2(g) = H2О(l); DН = – 286.2 kJ/mol
Here, Е favours (the reaction being exothermic) but R opposes (because liquid is less random than the gases). However, since the process is experimentally known to be spontaneous, we must have E > R.
(iv) Decomposition of CaCO3 on heating: CaCO3 (s) = CaO(s) + CO2(g), DН = + 177.8 kJ/mol
Here E opposes but R favours. Thus to explain the spontaneity of the process, we must have R>E.
Entropy is a measure of randomness or disorder of the system.
The greater the randomness, the higher is the entropy. Evidently, for а given substance, the crystalline solid state has the lowest entropy, the gaseous state has the highest entropy and the liquid state has an entropy between the two. It is usually represented by “S”. Like internal energy and enthalpy, it is а state function. The change in its value during а process, called the entropy change (represented by DS), is given by
DS = qrev / T
i.е. the entropy change during а process is defined as the amount of heat (q) absorbed isothermally and reversibly (infinitesimally slowly) divided by the absolute temperature (Т) at which the heat is absorbed.
Units of Entropy Change. As DS = q/T and it is an extensive property, the units of entropy change are calories/(К·mol) (cal/К·mol) in the С.G.S. system and joules/(К·mol) (J/K·mol) in SI units.
The physical significance of the entropy (as given in the first definition above) may be arrived at on the basis of the fact that many processes which are accompanied by an increase of entropy are also accompanied by an increase of randomness or disorder. For example, melting of ice is accompanied by an increase of entropy. At the same time we know that molecules of H2O in ice have fixed positions, but as soon as it changes into liquid, the molecules of Н2О begin to move about freely, i.е. the disorder sets in, or in other words, the randomness increases. Similar results are observed for the vaporisation of а liquid and many such similar processes. Hence it may be concluded that
‘Entropy is a measure of randomness or disorder of the system ‘.
This concept may be further understood with the help of the following examples:
(i) In the college, when all classes are being held, all the students are sitting in their respective class rooms and the disorder is minimum. As soon as the bell goes, the students of different classes come out to go to other rooms and thus get mixed up. In other words, the disorder or the randomness increases.
(ii) In the game of hockey or football, to start with all the players take up some definite positions and are thus said to be in order. As soon as the game starts, the players start running and thus they are said to have an increased randomness which increases further as the game catches more and more momentum.
Entropy changes during phase transformations.
(1) Entropy of fusion. When а solid melts, there is an equilibrium between the solid and the liquid at the melting point. The heat absorbed (qrev) is equal to the latent heat of fusion (DHfus).
The entropy of fusion is the change in entropy when 1 mole of а solid substance changes into liquid form at the melting temperature.
Mathematically,
DSfus = Sliq – Ssolid = DHfus / Tm
where: DSfus – entropy of fusion
Sliq – molar entropy of the liquid
Ssolid – molar entropy of the solid
Тm – melting temperature in kelvin
DHfus – enthalpy of fusion per mole.
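As a worked instance of DSfus = DHfus / Tm, here is a short Python sketch for ice at its normal melting point (DHfus ≈ 6010 J/mol, Tm = 273.15 K).

```python
def entropy_of_fusion(dH_fus, T_m):
    """DSfus = DHfus / Tm, in J/(K·mol) if DHfus is in J/mol and Tm in K."""
    return dH_fus / T_m

# Ice at its normal melting point (DHfus ~ 6010 J/mol, Tm = 273.15 K):
print(round(entropy_of_fusion(6010.0, 273.15), 1))  # 22.0 J/(K·mol)
```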
(2) Entropy of vaporisation. When а liquid evaporates at the boiling point, there is an equilibrium between the liquid and the vapour. The heat absorbed (qrev) is equal to the latent heat of vaporisation (DHvap).
Entropy of vaporisation is the entropy change when 1 mole of а liquid changes into vapour at its boiling temperature. Mathematically,
DSvap = Svap – Sliq = DHvap / Tb
where: DSvap – entropy of vaporisation
Svap – molar entropy of the vapour
Sliq – molar entropy of the liquid
Тb – boiling temperature in kelvin
DHvap – enthalpy of vaporisation per mole.
(3) Entropy of sublimation. Sublimation involves an equilibrium between the solid and the vapour.
The entropy of sublimation is the entropy change when 1 mole of the solid changes into vapour at а particular temperature. Mathematically,
DSsub = Svap – Ssolid = DHsub / T
where: DSsub – entropy of sublimation
Svap – molar entropy of the vapour
Ssolid – molar entropy of the solid
DHsub – heat of sublimation at the temperature Т in kelvin.
Entropy changes in processes not involving any phase transformation. It may be noted that entropy increases (i.е. DS is positive) not only when а solid melts or sublimes or decomposes to give one or more gases, or а liquid evaporates; it also increases when the number of molecules of products is greater than the number of molecules of reactants, е.g. in the reactions:
2SO3(g) = 2SO2(g) + O2 (g)
or PCl5(g) = PCl3 (g) + Cl2 (g)
or N2O4(g) = 2NO2 (g)
Spontaneity in terms of entropy change. Consider the following spontaneous processes:
(i) Mixing of the two gases on opening the stopcock.
(ii) Spreading of а drop of ink in а beaker filled with water.
These processes do not involve any exchange of matter and energy with the surroundings. Hence these are isolated systems. Further these processes are accompanied by increase of randomness and hence increase of entropy i.e. for these processes, entropy change (DS) is positive. Hence it may be concluded that ‘for spontaneous processes in the isolated systems, the entropy change is positive’.
Now let us consider the following spontaneous processes:
(i) Cooling down of а cup of tea
(ii) Reaction taking place between а piece of marble (CaCO3) or sodium hydroxide (NaOH) and hydrochloric acid in an open vessel.
These are not isolated systems because they involve exchange of matter and energy with the surroundings. Hence for these processes we have to consider the total entropy change of the system and the surroundings i.e.
DS (total) = DS (system) + DS (surroundings)
However, it can be shown that even in these cases, for the process to be spontaneous, DS (total) must be positive. Hence it can be generalized that
For all spontaneous processes, the total entropy change DS (total) must be positive.
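This criterion can be illustrated numerically for heat q flowing from a hot body at Th to a cold body at Tc, for which DS (total) = – q/Th + q/Tc; this is positive whenever Th > Tc. A minimal sketch with hypothetical temperatures:

```python
def total_entropy_change(q, T_hot, T_cold):
    """DS(total) for heat q (J) leaving a body at T_hot and entering one at T_cold (K)."""
    return -q / T_hot + q / T_cold

# Hypothetical values: 1000 J flowing from 373 K to 298 K.
dS = total_entropy_change(q=1000.0, T_hot=373.0, T_cold=298.0)
print(dS > 0)  # True: the flow hot -> cold is spontaneous

# The reverse direction (cold -> hot) gives a negative DS(total):
print(total_entropy_change(1000.0, 298.0, 373.0) < 0)  # True
```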
Further, in all the spontaneous processes considered above, it may be noted that randomness, and hence the entropy, keeps on increasing till ultimately an equilibrium is reached, e.g. uniform distribution of gases after mixing or uniform distribution of ink in water is а stage of equilibrium. Thus the entropy of the system at equilibrium is maximum and there is no further change of entropy, i.е. DS = 0. Hence it may be concluded that “for a process in equilibrium, DS = 0”. When the cup of tea cools down to room temperature, there is an equilibrium between the tea and the surrounding air. Arguing in а similar manner, it can be shown that “if DS (total) is negative, the direct process is non-spontaneous whereas the reverse process may be spontaneous”. Combining all the results discussed above, it may be concluded that the criterion for spontaneity in terms of entropy change is as follows:
(i) If DS is positive, the process is spontaneous.
(ii) If DS is negative, the direct process is non-spontaneous; the reverse process may be spontaneous.
(iii) If DS is zero, the process is in equilibrium.
In our two preceding chapters, we have seen The Definitions of Entropy and The Second Law of Thermodynamics. In this chapter, we will enlarge the discussion to open, or nonequilibrium, systems. You should read both of the prior chapters before trying to cover this one.
As we saw in the previous chapter, the 2nd law of thermodynamics applies only to isolated systems in thermodynamic equilibrium. There are ways to use the 2nd law, in systems that don’t meet these fundamental criteria, and we will look at those here. But it must be emphasized that you cannot take the 2nd law off the shelf, and apply it “as is”, without regard to the isolated or equilibrium state of the system.
You all have a pretty good idea of what temperature is. You read it from a thermometer, and it tells you whether things are “cold” or “hot”. You recognize what the common numbers mean, and you know that you would be much happier running a marathon if the temperature were 65°F, as opposed to 105°F.
But the temperature you are used to is an average quantity, not in the sense of being “mediocre”, but in the sense of “average” from simple arithmetic. You stick a thermometer into something, and a zillion atoms or molecules run into it, some really fast, some really slow, and most at more or less the same speed. Each one has some kinetic energy (“energy of motion”), and leaves some of that energy behind when it hits the thermometer. The average of all those kinetic energies in collision with the thermometer is the temperature that you measure with a thermometer, and that you can feel on your skin.
If the temperature is the same, everywhere in a system, then we say that the system is in an equilibrium state. Most real systems are always just a little bit out of equilibrium, but we don’t worry about it, and we pretend that they are true equilibrium systems. But if the temperature is remarkably different, from one part of a system to another, then we can’t ignore it, and we have to admit that the system is in a nonequilibrium state.
Entropy is rigorously defined only for systems that are in equilibrium. Just look at the defining equation from classical thermodynamics, S = Q/T. There can’t be a T unless there is equilibrium.
If anything can pass into, or out of, a system, we say it is an open system. If only energy can pass into, or out of, a system, but not matter, then we call it a closed system. If neither matter nor energy can pass into, or out of, a system, then we call it an isolated system.
We have a definition of the 2nd law from our previous chapter, a standard definition from standard thermodynamics.
Processes in which the entropy of an isolated system would decrease do not occur; or, in every process taking place in an isolated system, the entropy of the system either increases or remains constant.
The definition explicitly requires the system in question to be isolated. This is a non-trivial observation. If the system were not isolated, then entropy could pour out over the boundary, and the entropy of the system could decrease instead of increase.
The 2nd Law in Nonequilibrium Systems
So, with all the stress on equilibrium and isolated, how does one use the 2nd law in systems that don’t measure up? There’s really only one answer: Fake it. In this case, the “fake” is to take your nonequilibrium system, and carve it up (mathematically, not physically) into smaller subdomains, each of which has a fairly constant temperature throughout. They don’t all have to have the same temperature; each one only needs to have its own temperature. You treat each subdomain like an “isolated” system, computing all the internal changes in entropy and energy, and then add in any energy and/or entropy that comes across the boundary from any other subdomain that the subdomain in question is in contact with. In practice, this requires one to solve all of the relevant equations, for all of the subdomains, simultaneously (so you don’t lose track of anything important).
The only real trick is to notice that if your system is not isolated, then you have to keep track of all the entropy and energy that goes in or out, along with the strictly internal sources & sinks, for both entropy and energy. Of course, it’s not just the subdomains that count; you also have to handle the outer boundary of the whole system as well. If you can create circumstances where the outer boundary is impassable, and the system as a whole is isolated, so much the better, but you don’t really need to.
If the outer boundary is impassable, and the system isolated, then you know that the aggregate change in entropy must be 0. If not, just replace 0 with the net entropy change across the system outer boundary, and you know the system as a whole can’t go beyond those limits.
In this way, you can apply the essential spirit of the 2nd law, even in the case of a system that is neither in equilibrium, nor isolated.
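The bookkeeping described above can be sketched in a few lines of Python. This is a minimal illustration with made-up numbers, not a general solver: two subdomains, each treated as internally equilibrated, exchange a parcel of heat, and we simply total the entropy changes.

```python
def net_entropy_change(q, t_hot, t_cold):
    """Net entropy change when heat q (joules) flows from a subdomain
    at t_hot to a subdomain at t_cold (temperatures in kelvin).
    Each subdomain is internally equilibrated, so S = Q/T applies to each."""
    ds_hot = -q / t_hot    # hot subdomain loses entropy q/T_hot
    ds_cold = q / t_cold   # cold subdomain gains entropy q/T_cold
    return ds_hot + ds_cold

# 100 J passing from a 400 K subdomain to a 300 K subdomain:
ds_total = net_entropy_change(100.0, 400.0, 300.0)
# ds_total = 100/300 - 100/400, a positive number, as the 2nd law demands
```

Heat flowing the other way (cold to hot) would make the total negative, which is exactly the kind of process the 2nd law forbids.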
Gibbs free energy. It is that thermodynamic quantity of a system the decrease in whose value during a process is equal to the useful work done by the system.
It is usually denoted by G and is defined mathematically by the equation:
G = H – TS
where H is the heat content (enthalpy), T is the absolute temperature and S is the entropy of the system.
As before, for isothermal processes, we have
G1 = H1 – TS1 for the initial state
G2 = H2 – TS2 for the final state
Subtracting the two,
ΔG = ΔH – TΔS (the Gibbs-Helmholtz equation)
where ΔG = G2 – G1 is the change in Gibbs free energy of the system,
ΔH = H2 – H1 is the enthalpy change of the system
and ΔS = S2 – S1 is the entropy change of the system.
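As a quick numerical sketch of this two-state bookkeeping (the H and S values below are invented for illustration; H in kJ, S in kJ/K):

```python
def gibbs(h, t, s):
    """G = H - T*S for a state with enthalpy h, absolute temperature t, entropy s."""
    return h - t * s

# Illustrative (made-up) initial and final states at the same T = 298 K:
T = 298.0
G1 = gibbs(-100.0, T, 0.10)   # initial state: H1 = -100 kJ, S1 = 0.10 kJ/K
G2 = gibbs(-120.0, T, 0.05)   # final state:   H2 = -120 kJ, S2 = 0.05 kJ/K
dG = G2 - G1                  # identical to dH - T*dS = -20 - 298*(-0.05)
```

Computing G2 – G1 directly and computing ΔH – TΔS give the same number, which is all the Gibbs-Helmholtz relation at constant T says.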
Physical significance of Gibbs free energy: energy available for useful work. The relationship between the heat absorbed by a system q, the change in its internal energy ΔE, and the work done by the system w is given by the equation of the first law of thermodynamics, i.e.
q = ΔE + w
Under constant pressure conditions, the work done by the system consists of the expansion work PΔV plus any non-expansion (useful) work wuseful, so that
qp = ΔE + PΔV + wuseful = ΔH + wuseful (since ΔH = ΔE + PΔV)
For a reversible change taking place at a constant temperature,
ΔS = qrev/T, or qrev = TΔS
so for a change taking place under conditions of constant temperature and pressure,
TΔS = ΔH + wuseful
Substituting this value into ΔG = ΔH – TΔS, we get
ΔG = ΔH – (ΔH + wuseful) = –wuseful, i.e. –ΔG = wuseful
Thus the free energy change can be taken as a measure of the work other than the work of expansion. For most changes the work of expansion cannot be converted to other useful work, whereas the non-expansion work is convertible to useful work.
Hence, the decrease in free energy of the system during any change, –ΔG, is a measure of the useful or net work derived during the change. It may, therefore, be generalised that the free energy, G, of a system is a measure of its capacity to do useful work. It is the part of the energy of the system which is free for conversion to useful work and is, therefore, called free energy.
It can be shown that the decrease in free energy gives the maximum work that can be obtained from a process. Hence we can write:
–ΔG = wmax
If the work involved is electrical work, as in the case of galvanic cells, then, as electrical work = nFE, the above relationship is written as:
–ΔG = nFE
where n = number of electrons involved in the cell reaction,
E = EMF of the cell,
F = the Faraday constant.
If all the reactants and products of the cell reaction are in their standard states, i.e. 298 K and 1 atm pressure, the above relationship is written as:
–ΔG° = nFE°
where ΔG° = standard free energy change
and E° = standard EMF of the cell.
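A numerical sketch in Python (the Daniell-cell figures n = 2 and E° = 1.10 V are the usual textbook values, used here purely for illustration):

```python
F = 96485.0  # Faraday constant, C/mol

def standard_free_energy(n, e0):
    """dG0 = -n * F * E0, in joules per mole of reaction."""
    return -n * F * e0

# Daniell cell Zn|Zn2+ || Cu2+|Cu: n = 2 electrons, E0 = 1.10 V
dG0 = standard_free_energy(2, 1.10)
# dG0 is about -212 kJ/mol; negative, so the cell reaction is spontaneous
```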
Spontaneity in terms of free energy change.
(a) Deriving the criteria from entropy considerations. It has already been explained that the total entropy change for a system which is not isolated from the surroundings is given by
ΔStotal = ΔSsystem + ΔSsurroundings
Consider a process (or a reaction) being carried out at constant temperature and pressure. Suppose the heat is lost by the surroundings and gained by the system. If the heat lost by the surroundings is represented by qp (p indicating that the process is being carried out at constant pressure), then by the definition of entropy change
ΔSsurroundings = – qp/T
(the minus sign before qp indicates that the heat is lost by the surroundings).
Further, we know that at constant pressure,
qp = ΔH
Using the symbol ΔS in place of ΔSsystem (it being implied that ΔS stands for the ΔS of the system), we can write:
ΔStotal = ΔS – ΔH/T
Multiplying throughout by T, we get
TΔStotal = TΔS – ΔH
But for a change taking place at constant temperature and pressure,
ΔG = ΔH – TΔS
Comparing the two equations, TΔStotal = –ΔG, or ΔG = –TΔStotal.
But in terms of total entropy change, it has already been explained that:
(i) If ΔStotal is positive, the process is spontaneous.
(ii) If ΔStotal is zero, the process is in equilibrium.
(iii) If ΔStotal is negative, the direct process is non-spontaneous; the reverse process may be spontaneous.
Putting these results into the above equation, it can be concluded that the criteria in terms of free energy change for the spontaneity of the process are as follows:
(iv) If ΔG is negative, the process will be spontaneous.
(v) If ΔG is zero, the process is in equilibrium.
(vi) If ΔG is positive, the direct process is non-spontaneous; the reverse process may be spontaneous.
An important advantage of the free energy criteria over the entropy criteria lies in the fact that the former requires the free energy change of the system only, whereas the latter requires the total entropy change for the system and the surroundings.
In thermodynamics, entropy is commonly associated with the amount of order, disorder, and/or chaos in a thermodynamic system. This stems from Rudolf Clausius’ 1862 assertion that any thermodynamic process always “admits to being reduced to the alteration in some way or another of the arrangement of the constituent parts of the working body” and that the internal work associated with these alterations is quantified energetically by a measure of “entropy” change, according to the differential expression dS = δQ/T.
In the years to follow, Ludwig Boltzmann translated these “alterations” into that of a probabilistic view of order and disorder in gas phase molecular systems.
In recent years, in chemistry textbooks there has been a shift away from using the terms “order” and “disorder” to that of the concept of energy dispersion to describe entropy, among other theories. In the 2002 encyclopedia Encarta, for example, entropy is defined as a thermodynamic property which serves as a measure of how close a system is to equilibrium, as well as a measure of the disorder in the system. In the context of entropy, “perfect internal disorder” is synonymous with “equilibrium”, but since that definition is so far different from the usual definition implied in normal speech, the use of the term in science has caused a great deal of confusion and misunderstanding.
Locally, the entropy can be lowered by external action. This applies to machines, such as a refrigerator, where the entropy in the cold chamber is being reduced, and to living organisms. This local decrease in entropy is, however, only possible at the expense of an entropy increase in the surroundings.
Overview
To highlight the fact that order and disorder are commonly understood to be measured in terms of entropy, below are current science encyclopedia and science dictionary definitions of entropy:
· Entropy – a measure of the unavailability of a system’s energy to do work; also a measure of disorder; the higher the entropy the greater the disorder.
· Entropy – a measure of disorder; the higher the entropy the greater the disorder.
· Entropy – in thermodynamics, a parameter representing the state of disorder of a system at the atomic, ionic, or molecular level; the greater the disorder the higher the entropy.
· Entropy – a measure of disorder in the universe or of the availability of the energy in a system to do work.
Entropy and disorder also have associations with equilibrium. Technically, entropy, from this perspective, is defined as a thermodynamic property which serves as a measure of how close a system is to equilibrium — that is, to perfect internal disorder. Likewise, the value of the entropy of a distribution of atoms and molecules in a thermodynamic system is a measure of the disorder in the arrangements of its particles. In a stretched out piece of rubber, for example, the arrangement of the molecules of its structure has an “ordered” distribution and has zero entropy, while the “disordered” kinky distribution of the atoms and molecules in the rubber in the non-stretched state has positive entropy. Similarly, in a gas, the order is perfect and the measure of entropy of the system has its lowest value when all the molecules are in one place, whereas when more points are occupied the gas is all the more disorderly and the measure of the entropy of the system has its largest value.
In systems ecology, as another example, the entropy of a collection of items comprising a system is defined as a measure of their disorder or equivalently the relative likelihood of the instantaneous configuration of the items. Moreover, according to theoretical ecologist and chemical engineer Robert Ulanowicz, “that entropy might provide a quantification of the heretofore subjective notion of disorder has spawned innumerable scientific and philosophical narratives.” In particular, many biologists have taken to speaking in terms of the entropy of an organism, or about its antonym negentropy, as a measure of the structural order within an organism.
The mathematical basis with respect to the association entropy has with order and disorder began, essentially, with the famous Boltzmann formula, S = k ln W, which relates entropy S to the number of possible states W in which a system can be found. As an example, consider a box that is divided into two sections. What is the probability that a certain number, or all of the particles, will be found in one section versus the other when the particles are randomly allocated to different places within the box? If you only have one particle, then that system of one particle can subsist in two states, one side of the box versus the other. If you have more than one particle, or define states as being further locational subdivisions of the box, the entropy is larger because the number of states is greater. The relationship between entropy, order, and disorder in the Boltzmann equation is so clear among physicists that according to the views of thermodynamic ecologists Sven Jorgensen and Yuri Svirezhev, “it is obvious that entropy is a measure of order or, most likely, disorder in the system.” In this direction, the second law of thermodynamics, as famously enunciated by Rudolf Clausius in 1865, states that:
“The entropy of the universe tends to a maximum.”
Thus, if entropy is associated with disorder and if the entropy of the universe is headed towards maximal entropy, then many are often puzzled as to the nature of the “ordering” process and operation of evolution in relation to Clausius’ most famous version of the second law, which states that the universe is headed towards maximal “disorder”. In the recent 2003 book SYNC – the Emerging Science of Spontaneous Order by Steven Strogatz, for example, we find “Scientists have often been baffled by the existence of spontaneous order in the universe. The laws of thermodynamics seem to dictate the opposite, that nature should inexorably degenerate toward a state of greater disorder, greater entropy. Yet all around us we see magnificent structures—galaxies, cells, ecosystems, human beings—that have all somehow managed to assemble themselves.”
The common argument used to explain this is that, locally, entropy can be lowered by external action, e.g. solar heating action, and that this applies to machines, such as a refrigerator, where the entropy in the cold chamber is being reduced, to growing crystals, and to living organisms. This local increase in order is, however, only possible at the expense of an entropy increase in the surroundings; here more disorder must be created. This statement is qualified by the fact that living systems are open systems, in which heat, mass, and/or work may transfer into or out of the system. Unlike temperature, the putative entropy of a living system would drastically change if the organism were thermodynamically isolated. If an organism were in this type of “isolated” situation, its entropy would increase markedly as the once-living components of the organism decayed to an unrecognizable mass.
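The box-counting argument sketched earlier (S = k ln W, with W = 2^N for N distinguishable particles that may each occupy either half of a box) can be made quantitative in a few lines of Python; the setup is illustrative only:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(w):
    """S = k * ln(W) for a system with W accessible microstates."""
    return k_B * math.log(w)

# N distinguishable particles, each free to sit in either half of the box,
# give W = 2**N microstates, so S = N * k_B * ln 2 grows linearly with N:
s1 = boltzmann_entropy(2 ** 1)
s10 = boltzmann_entropy(2 ** 10)
s100 = boltzmann_entropy(2 ** 100)
```

Doubling the number of ways the particles can be arranged always adds the same increment k_B ln 2, which is why entropy is logarithmic in W.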
Phase change
Owing to these early developments, the typical example of entropy change ΔS is that associated with phase change. Solids, for example, which are typically ordered on the molecular scale, usually have smaller entropy than liquids; liquids have smaller entropy than gases; and colder gases have smaller entropy than hotter gases. Moreover, according to the third law of thermodynamics, at absolute zero temperature, crystalline structures are approximated to have perfect “order” and zero entropy. This correlation occurs because the number of different microscopic quantum energy states available to an ordered system is usually much smaller than the number of states available to a system that appears to be disordered.
In his famous 1896 Lectures on Gas Theory, Boltzmann diagrams the structure of a solid body by postulating that each molecule in the body has a “rest position”. According to Boltzmann, if it approaches a neighbor molecule it is repelled by it, but if it moves farther away there is an attraction. This, of course, was a revolutionary perspective in its time; many, during these years, did not believe in the existence of either atoms or molecules. According to these early views, and others such as those developed by William Thomson, if energy in the form of heat is added to a solid, so as to make it into a liquid or a gas, a common depiction is that the ordering of the atoms and molecules becomes more random and chaotic with an increase in temperature:
Thus, according to Boltzmann, owing to increases in thermal motion, whenever heat is added to a working substance, the rest position of molecules will be pushed apart, the body will expand, and this will create more molar-disordered distributions and arrangements of molecules. These disordered arrangements, subsequently, correlate, via probability arguments, to an increase in the measure of entropy.
Adiabatic demagnetization
In the quest for ultra-cold temperatures, a temperature-lowering technique called adiabatic demagnetization is used, in which atomic entropy considerations are utilized that can be described in order-disorder terms. In this process, a sample of a solid such as chrome-alum salt, whose molecules are equivalent to tiny magnets, is placed inside an insulated enclosure cooled to a low temperature, typically 2 or 4 kelvin, with a strong magnetic field applied to the container using a powerful external magnet, so that the tiny molecular magnets are aligned, forming a well-ordered “initial” state at that low temperature. This magnetic alignment means that the magnetic energy of each molecule is minimal. The external magnetic field is then reduced, a removal that is considered to be closely reversible. Following this reduction, the atomic magnets assume random, less-ordered orientations, owing to thermal agitation, in the “final” state:
Entropy “order”/”disorder” considerations in the process of adiabatic demagnetization
The “disorder” and hence the entropy associated with the change in the atomic alignments has clearly increased. In terms of energy flow, the movement from a magnetically aligned state requires energy from the thermal motion of the molecules, converting thermal energy into magnetic energy. Yet, according to the second law of thermodynamics, because no heat can enter or leave the container, due to its adiabatic insulation, the system should exhibit no change in entropy, i.e. ΔS = 0. The increase in disorder associated with the randomizing directions of the atomic magnets, however, represents an entropy increase. To compensate for this, the disorder (entropy) associated with the temperature of the specimen must decrease by the same amount. The temperature thus falls as a result of this process of thermal energy being converted into magnetic energy. If the magnetic field is then increased, the temperature rises and the magnetic salt has to be cooled again using a cold material such as liquid helium.
Difficulties with the term “disorder”
In recent years the long-standing use of the term “disorder” to discuss entropy has met with some criticism.
When considered at a microscopic level, the term disorder may quite correctly suggest an increased range of accessible possibilities; but this may result in confusion because, at the macroscopic level of everyday perception, a higher entropy state may appear more homogenous (more even, or more smoothly mixed) apparently in diametric opposition to its description as being “more disordered”. Thus for example there may be dissonance at equilibrium being equated with “perfect internal disorder”; or the mixing of milk in coffee being described as a transition from an ordered state to a disordered state.
It has to be stressed, therefore, that “disorder”, as used in a thermodynamic sense, relates to a full microscopic description of the system, rather than its apparent macroscopic properties. Many popular chemistry textbooks in recent editions increasingly have tended to instead present entropy through the idea of energy dispersal, which is a dominant contribution to entropy in most everyday situations.
(b) Deriving the criteria from the Gibbs-Helmholtz equation. According to the Gibbs-Helmholtz equation,
ΔG = ΔH – TΔS
This equation combines in itself both of the factors which decide the spontaneity of a process, namely
(i) the energy factor, ΔH
(ii) the entropy factor, TΔS
Thus ΔG is the resultant of the energy factor (i.e. the tendency for minimum energy) and the entropy factor (i.e. the tendency for maximum randomness).
Depending upon the signs of ΔH and TΔS and their relative magnitudes, the following different possibilities arise:
I. When both ΔH and TΔS are negative, i.e. the energy factor favours the process but the randomness factor opposes it. Then:
(i) If |ΔH| > |TΔS|, the process is spontaneous and ΔG is negative.
(ii) If |ΔH| < |TΔS|, the process is non-spontaneous and ΔG is positive.
(iii) If |ΔH| = |TΔS|, the process is in equilibrium and ΔG is zero.
II. When both ΔH and TΔS are positive, i.e. the energy factor opposes the process but the randomness factor favours it. Then:
(i) If ΔH > TΔS, the process is non-spontaneous and ΔG is positive.
(ii) If ΔH < TΔS, the process is spontaneous and ΔG is negative.
(iii) If ΔH = TΔS, the process is in equilibrium and ΔG is zero.
III. When ΔH is negative but TΔS is positive, i.e. the energy factor as well as the randomness factor favour the process. The process will be highly spontaneous and ΔG will be highly negative.
IV. When ΔH is positive but TΔS is negative, i.e. the energy factor as well as the randomness factor oppose the process. The process will be highly non-spontaneous and ΔG will be highly positive.
To sum up, the criteria for spontaneity of a process in terms of ΔG are as follows:
(i) If ΔG is negative, the process is spontaneous.
(ii) If ΔG is zero, the process does not occur, or the system is in equilibrium.
(iii) If ΔG is positive, the process does not occur in the forward direction. It may occur in the backward direction.
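These criteria reduce to a few lines of code. The sketch below simply evaluates ΔG = ΔH – TΔS and classifies the result; the numbers in the usage lines are invented for illustration:

```python
def spontaneity(dh, t, ds):
    """Classify a constant-T, constant-P process from dG = dH - T*dS.
    dh in kJ/mol, t in K, ds in kJ/(K*mol)."""
    dg = dh - t * ds
    if dg < 0:
        return "spontaneous"
    if dg > 0:
        return "non-spontaneous"
    return "equilibrium"

# Case I,  |dH| > |T*dS|: dH = -50, T*dS = -14.9, dG = -35.1 -> spontaneous
case_I = spontaneity(-50.0, 298.0, -0.05)
# Case IV, dH > 0 and T*dS < 0: dG = +64.9 -> non-spontaneous
case_IV = spontaneity(+50.0, 298.0, -0.05)
```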
THE THIRD LAW OF THERMODYNAMICS
It is а well known observation that the entropy of а pure substance increases with increase of temperature and decreases with decrease of temperature. Nernst in 1906 made an important observation about the entropies of perfectly crystalline substances at absolute zero and put forward the following generalization known as the third law of thermodynamics:
The entropy of all perfectly crystalline solids may be taken as zero at the absolute zero of temperature.
Since entropy is а measure of disorder, the above definition may be given molecular interpretation as follows:
At absolute zero, а perfectly crystalline solid has а perfect order of its constituent particles i.e. there is no disorder at all. Hence the absolute entropy is taken as zero.
Application of the third law of thermodynamics. The most important application of the third law of thermodynamics is that it helps in the calculation of the absolute entropies of the substances at room temperature (or at any temperature Т). These determinations are based upon the heat capacity measurements.
Thus whereas absolute values of internal energy and enthalpy cannot be determined, the absolute entropies of the substances can be measured.
Knowing the standard entropies of the different reactants and products involved, the entropy change (ΔS°) of a reaction can be calculated using the equation:
ΔS° = Sum of the standard absolute entropies of products – Sum of the standard absolute entropies of reactants, i.e. ΔS° = ΣS°(products) – ΣS°(reactants).
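For example, the equation can be applied to the ammonia synthesis N2(g) + 3H2(g) → 2NH3(g); the S° values below are common textbook figures and should be checked against your own data table:

```python
def reaction_entropy(products, reactants):
    """dS0 = sum(coeff * S0) over products minus the same sum over reactants.
    Each argument is a list of (stoichiometric coefficient, S0 in J/(K*mol))."""
    total = lambda side: sum(coeff * s0 for coeff, s0 in side)
    return total(products) - total(reactants)

# N2(g) + 3 H2(g) -> 2 NH3(g); S0 values in J/(K*mol)
dS0 = reaction_entropy(
    products=[(2, 192.5)],               # NH3
    reactants=[(1, 191.6), (3, 130.7)],  # N2, H2
)
# dS0 is about -198.7 J/(K*mol): entropy falls as 4 mol of gas become 2 mol
```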
Examples
1. When 0.7022 g of oxalic acid (C2O4H2) is burnt in the calorimeter under the same conditions as Example 6, the temperature increased by 1.602°C. The heat capacity of the calorimeter is 1.238 kJ/K. Calculate ΔH°comb.
Solution:
The balanced equation and the various quantities calculated are given in logical order below:
C2O4H2(s) + 0.5 O2(g) → 2 CO2(g) + H2O(l)
Δn(gas) = 2 – 0.5 = 1.5
q = C ΔT = 1.238 × 1.602 = 1.984 kJ
n of oxalic acid = 0.7022/90 = 0.00780 mol
ΔE = –1.984/0.00780 = –254.4 kJ/mol
ΔH = ΔE + Δn RT
= –254.4 kJ/mol + 1.5 × 0.008314 kJ/(K·mol) × 298 K
= –250.6 kJ/mol
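The arithmetic of this example can be verified with a short script (units kJ, K, mol; the molar mass of oxalic acid is taken as 90 g/mol, as in the worked solution):

```python
R = 0.008314          # gas constant, kJ/(K*mol)

C_cal = 1.238         # calorimeter heat capacity, kJ/K
dT = 1.602            # observed temperature rise, K
mass = 0.7022         # g of oxalic acid burnt
M = 90.0              # g/mol, molar mass of oxalic acid
dn_gas = 1.5          # (2 - 0.5) mol of gas produced per mole burnt

q = C_cal * dT                    # heat released into the calorimeter, kJ
n = mass / M                      # moles of acid burnt
dE = -q / n                       # constant-volume heat, kJ/mol
dH = dE + dn_gas * R * 298.0      # kJ/mol
# dH comes out near -250.5 kJ/mol, matching the worked answer to rounding
```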
2. Calculate the enthalpy of formation of oxalic acid, for which the enthalpy of combustion is -251 kJ/mol.
Solution:
The following data were looked up in thermodynamic tables:
ΔH°f(CO2(g)) = –393.5 kJ/mol
ΔH°f(H2O(l)) = –285.8 kJ/mol
Since
C2O4H2(s) + 0.5 O2(g) → 2 CO2(g) + H2O(l)
ΔH°comb = 2 ΔH°f(CO2(g)) + ΔH°f(H2O(l)) – ΔH°f(oxalic acid)
–250.6 = 2 × (–393.5) + (–285.8) – ΔH°f(oxalic acid)
Therefore
ΔH°f(oxalic acid) = 2 × (–393.5) + (–285.8) – (–250.6)
= –822.2 kJ/mol
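The same Hess's-law bookkeeping in Python (values in kJ/mol, as quoted above):

```python
dHf_CO2 = -393.5      # kJ/mol, formation enthalpy of CO2(g)
dHf_H2O = -285.8      # kJ/mol, formation enthalpy of H2O(l)
dH_comb = -250.6      # kJ/mol, combustion enthalpy from Example 1

# dH_comb = 2*dHf(CO2) + dHf(H2O) - dHf(oxalic acid), rearranged:
dHf_oxalic = 2 * dHf_CO2 + dHf_H2O - dH_comb
# dHf_oxalic = -822.2 kJ/mol
```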
Question 1: A gas is contained in a vertical, frictionless piston-cylinder device. The piston has a mass of 20 kg with a cross-sectional area of 20 cm² and is pulled with a force of 100 N. If the atmospheric pressure is 100 kPa, determine the pressure inside.
Solution 1:
Total forces on the piston:
Weight of the piston acting downward = mg = 20 × 10 = 200 N (taking g = 10 m/s²)
Force due to atmospheric pressure acting downward = 100 × 10³ × 20 × 10⁻⁴ N = 200 N
Let P be the pressure of the gas.
Then the force acting on the piston due to the gas pressure, upward = PA
Force with which the piston is being pulled upward = 100 N
Total upward force = total downward force
PA + 100 = 200 + 200
20 × 10⁻⁴ × P = 300
P = 150 kPa
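The force balance can be checked numerically (g is taken as 10 m/s², as the solution's round numbers imply):

```python
g = 10.0          # m/s^2, as assumed in the worked solution
m = 20.0          # kg, piston mass
A = 20e-4         # m^2, piston cross-sectional area
P_atm = 100e3     # Pa, atmospheric pressure
F_pull = 100.0    # N, upward pull on the piston

# Upward forces = downward forces:  P*A + F_pull = m*g + P_atm*A
P = (m * g + P_atm * A - F_pull) / A
# P = 150000 Pa = 150 kPa
```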
Question 2: A piston-cylinder device has a ring to limit the expansion stroke. Initially the mass of oxygen is 2 kg at 500 kPa, 30°C. Heat is now transferred until the piston touches the stop, at which point the volume is twice the original volume. More heat is transferred until the pressure inside also doubles. Determine the amount of heat transfer and the final temperature.
Solution 2:
Since O2 is a diatomic gas,
CP = 7R/2, CV = 5R/2
Molar mass = 32 g/mol
Initial state
Initial volume of gas:
V = nRT/P
n = 2 × 10³/32 = 62.5 mol
T = 303 K
R = 8.3 J/(K·mol)
P = 500 × 10³ N/m²
so V = 0.314 m³
Second state, when the volume doubles at the stop:
Volume = 2 × 0.314 = 0.628 m³
Pressure = 500 × 10³ N/m² remains the same
So, by the ideal gas equation,
T = 606 K
Work done by the gas = PΔV = 500 × 10³ × 0.314 = 157 × 10³ J
Heat transferred = nCP(T2 – T1)
= 62.5 × (7 × 8.3/2) × 303
= 550 × 10³ J
Third state, when the pressure doubles:
P = 1000 × 10³ N/m²
V = 0.628 m³
By the ideal gas equation,
T = 1212 K
Heat transferred = nCV(T3 – T2)
= 62.5 × (5 × 8.3/2) × 606
= 785 kJ
So total heat transferred = 550 + 785 = 1335 kJ
Final temperature = 1212 K
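The two stages can be scripted as follows (R = 8.3 J/(K·mol) as in the solution; small differences from the hand totals are rounding):

```python
R = 8.3                      # J/(K*mol), as used in the solution
n = 2000.0 / 32.0            # mol of O2 (62.5 mol)
Cp, Cv = 7 * R / 2, 5 * R / 2

T1, P1 = 303.0, 500e3
V1 = n * R * T1 / P1         # initial volume, about 0.314 m^3

# Stage 1: isobaric, volume doubles, so temperature doubles
T2 = 2 * T1                  # 606 K
Q1 = n * Cp * (T2 - T1)      # about 550 kJ
W1 = P1 * V1                 # work lifting the piston = P * dV, about 157 kJ

# Stage 2: isochoric, pressure doubles, so temperature doubles again
T3 = 2 * T2                  # 1212 K, the final temperature
Q2 = n * Cv * (T3 - T2)      # about 786 kJ

Q_total = Q1 + Q2            # about 1336 kJ (hand rounding gives 1335 kJ)
```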
Question 3: A piston-cylinder device contains 1 kg of oxygen at 500 kPa, 30°C. The cross-sectional area of the piston is 0.1 m². Heat is now added, causing the gas to expand. When the volume reaches 0.2 m³, the piston reaches a linear spring with a spring constant of 120 kN/m. More heat is added until the piston rises another 25 cm. Determine the final pressure, the temperature and the energy transfers.
Solution 3:
Since O2 is a diatomic gas,
CP = 7R/2, CV = 5R/2
Molar mass = 32 g/mol
Initial state
Initial volume of gas:
V = nRT/P
n = 1 × 10³/32 = 31.25 mol
T = 303 K
R = 8.3 J/(K·mol)
P0 = 500 × 10³ N/m²
so V = 0.157 m³
Second state (when the piston reaches the spring):
V = 0.2 m³
P = 500 × 10³ N/m²
By the ideal gas equation,
T = 385.5 K
So the heat transferred up to that point = nCP(T2 – T1)
= 31.25 × (7 × 8.3/2) × 82.5
= 74.8 kJ
Third state (when the piston compresses the spring):
V = 0.2 + 0.1 × 0.25 = 0.225 m³
P = 500 × 10³ + kx/A
= 500 × 10³ + 120 × 10³ × 25 × 10⁻²/0.1
= 800 × 10³ N/m²
By the ideal gas equation,
T = 694 K
Change in internal energy = nCV(T3 – T2) = 5nR(T3 – T2)/2
= 31.25 × (5 × 8.3/2) × 308.5
= 200 kJ
Work done by the gas = P0Ax + kx²/2
= 500 × 10³ × 0.1 × 0.25 + 120 × 10³ × 0.25 × 0.25/2
= 16.25 kJ
Total heat supplied in this process = 200 + 16.25 = 216.25 kJ
So net heat transfer = 74.8 + 216.25 = 291.05 kJ
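A script for the spring-loaded piston, under the same assumptions as the solution (R = 8.3 J/(K·mol), initial pressure 500 kPa; tiny differences from the hand numbers are rounding):

```python
R = 8.3                       # J/(K*mol), as used in the solution
n = 1000.0 / 32.0             # mol of O2 (31.25 mol)
Cp, Cv = 7 * R / 2, 5 * R / 2

T1, P0, A = 303.0, 500e3, 0.1
V1 = n * R * T1 / P0          # initial volume, about 0.157 m^3

# Stage 1: isobaric expansion until the piston meets the spring at V = 0.2 m^3
V2 = 0.2
T2 = P0 * V2 / (n * R)        # about 385.5 K
Q1 = n * Cp * (T2 - T1)       # about 74.9 kJ

# Stage 2: piston rises x = 0.25 m against the spring (k = 120 kN/m)
k, x = 120e3, 0.25
V3 = V2 + A * x               # 0.225 m^3
P3 = P0 + k * x / A           # 800 kPa, the final pressure
T3 = P3 * V3 / (n * R)        # about 694 K, the final temperature
dU = n * Cv * (T3 - T2)       # about 200 kJ
W2 = P0 * A * x + 0.5 * k * x**2   # 16.25 kJ of boundary work
Q2 = dU + W2                  # about 216.3 kJ

Q_total = Q1 + Q2             # about 291 kJ net heat transfer
```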
Question 4: Is the energy of the system U an intensive or extensive variable?
Solution 4:
Doubling the system should double the energy, so U is an extensive variable.
Question 5: Suppose we have M+N systems prepared, and the first is in thermal equilibrium with the second, the second in thermal equilibrium with the third, etc., until system M+N–1 is in thermal equilibrium with the M+Nth system. Is the first system in thermal equilibrium with the M+Nth?
Solution 5:
By repeated application of the Zeroth Law, we can state that all M+N systems are in thermal equilibrium with each other.
Question 6: A gas is contained in a cylinder with a moveable piston on which a heavy block is placed. Suppose the region outside the chamber is evacuated and the total mass of the block and the movable piston is 102 kg. When 2140 J of heat flows into the gas, the internal energy of the gas increases by 1580 J. What is the distance s through which the piston rises?
Solution 6:
Total heat supplied = work done + change in internal energy
So work done = 2140 – 1580 = 560 J
Let s be the distance moved; then
the work done is given by W = Fs, where F = mg = 102 × 10 = 1020 N (taking g = 10 m/s²)
Fs = 560
s = 560/1020
s ≈ 0.55 m
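In code (g = 10 m/s², as the solution assumes):

```python
g = 10.0                  # m/s^2, as assumed in the solution
m = 102.0                 # kg, piston + block
Q = 2140.0                # J, heat flowing into the gas
dU = 1580.0               # J, rise in internal energy

W = Q - dU                # 560 J of work done by the gas (first law)
F = m * g                 # 1020 N supported by the gas (vacuum outside)
s = W / F                 # distance risen, roughly 0.55 m
```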
Question 7: A resistance thermometer is such that its resistance varies with temperature as
RT = R0(1 + aT + bT⁵)
where T represents the temperature on the Celsius scale and a, b, R0 are constants. The unit of R0 is the ohm.
Based on the above data, find the units of a and b.
Solution 7:
By dimensional analysis, the units on both sides should be equal.
Now, since RT and R0 have the same unit,
the quantity 1 + aT + bT⁵ should be dimensionless,
so aT should be dimensionless,
and hence the unit of a is °C⁻¹;
similarly, bT⁵ should be dimensionless,
so the unit of b is °C⁻⁵.
LITERATURE:
1. The abstract of the lecture.
2. intranet.tdmu.edu.ua/auth.php
3. Atkins P.W. Physical Chemistry. – New York, 1994. – P. 299-307.
4. Cotton F.A. Chemical Applications of Group Theory. – John Wiley & Sons: New York, 1990.
5. Girolami G.S., Rauchfuss T.B., Angelici R.J. Synthesis and Technique in Inorganic Chemistry. – University Science Books: Mill Valley, CA, 1999.
6. John B. Russell. General Chemistry. – New York, 1992. – P. 550-599.
7. Lawrence D. Didona. Analytical Chemistry. – New York, 1992. – P. 218-224.
Prepared by PhD Falfushynska H.