Wednesday, April 28, 2010

Nanotechnology

As we face uncertainty in the supply of crude oil, as well as high prices, alternative fuel sources are a hot topic. An interesting option is ethanol, currently made from plants such as corn and sugar cane. Companies and universities are working eagerly to extend this process of making ethanol to many other kinds of plant matter, which could considerably increase the amount of ethanol available as fuel. Nanotechnology may be able to assist this important effort.
Presently, the ethanol used in fuel in the United States is made mostly from corn. The starch in the corn kernels is converted to sugar using enzymes, and this sugar is then fermented to produce ethanol. However, in order to make a meaningful reduction in United States consumption of crude oil, we must increase that production by a long way. The goal set recently by the United States government is to produce 35 billion gallons of ethanol a year within the next ten years.

Researchers at Michigan State University are applying nanotechnology in a neat trick. They are genetically engineering corn to contain the required enzyme. The plan is to keep the enzyme inactive until it is triggered by high temperatures. When the cellulose part of the corn, such as the stalk, is processed, the high processing temperatures would activate the enzyme and convert the cellulose to sugar. This would avoid the added cost of producing the enzyme separately.

Researchers at the University of Rochester are also studying how bacteria select a particular enzyme, or enzymes, to break down a specific kind of plant or other biomass. They hope to make enzymes that could convert cellulose to ethanol in one step, rather than the two steps used by existing processes. The viability of cars that can be filled up with either gasoline or ethanol has been demonstrated in Brazil, which uses much of its sugar cane crop to make ethanol. Using nanotechnology and genetic engineering to make ethanol from cellulose has the potential to make a serious dent in our use of crude oil. However, we do need to keep an eye on some safety issues.

sharmkan@gmail.com


The Axiomatic Approach to Design

The creative process of mapping the FRs in the functional domain to DPs in the physical domain is not unique; the solution varies with a designer’s knowledge base and creative capacity. As a consequence, solution alternatives may vary in their effectiveness to meet the customer’s needs. The axiomatic approach to design is based on the premise that there are generalizable principles that form the basis for distinguishing between good and bad designs.
Suh (1990) identified two design axioms by abstracting common elements from a body of good designs, including products, processes, and systems. The first axiom is called the Independence Axiom.
It states that the independence of functional requirements (FRs) must always be maintained, where FRs are defined as the minimum set of independent functional requirements that characterize the design goals. The second axiom is called the Information Axiom, which states that among those designs that satisfy the Independence Axiom, the design that has the highest probability of success is the best design. During the mapping process (for example, mapping from FRs in the functional domain to DPs in the physical domain), the designer should make correct design decisions using the Independence Axiom. When several designs that satisfy the Independence Axiom are available, the Information Axiom can be used to select the best design.
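As a worked illustration (mine, not Suh's): writing the mapping as {FR} = [A]{DP}, a design is uncoupled when the design matrix [A] is diagonal, decoupled when it is triangular, and coupled otherwise. A minimal Python sketch of that classification, with invented example matrices:

# Classify a design matrix A in {FR} = [A]{DP} per the Independence Axiom.
def classify(A):
    n = len(A)
    off_diagonal = [(i, j) for i in range(n) for j in range(n)
                    if i != j and A[i][j] != 0]
    if not off_diagonal:
        return "uncoupled"   # diagonal: each FR depends on exactly one DP
    if all(i > j for i, j in off_diagonal) or all(i < j for i, j in off_diagonal):
        return "decoupled"   # triangular: FRs can be satisfied in sequence
    return "coupled"         # interdependence violates the Independence Axiom

print(classify([[1, 0], [0, 1]]))   # uncoupled
print(classify([[1, 0], [1, 1]]))   # decoupled
print(classify([[1, 1], [1, 1]]))   # coupled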
Axioms are general principles or self-evident truths that cannot be derived or proven to be true;
however, they can be refuted by counterexamples or exceptions. Through axioms such as Newton's laws and the laws of thermodynamics, the concepts of force, energy, and entropy have been defined. One of the main reasons for pursuing an axiomatic approach to design is the generalizability of axioms, which leads to the derivation of corollaries and theorems. These theorems and corollaries can be used as design rules that precisely prescribe the bounds of their validity because they are based on axioms. The following corollaries are presented in Suh (1990).
Corollary 1:
(Decoupling of Coupled Designs)
Decouple or separate parts or aspects of a solution if FRs are coupled or become interdependent in
the designs proposed.
Corollary 2:
(Minimization of FRs)
Minimize the number of FRs and constraints.
Corollary 3:
(Integration of Physical Parts)
Integrate design features in a single physical part if FRs can be independently satisfied in the proposed
solution.
Corollary 4:
(Use of Standardization)
Use standardized or interchangeable parts if the use of these parts is consistent with FRs and
constraints.
Corollary 5:
(Use of Symmetry)
Use symmetrical shapes and/or components if they are consistent with the FRs and constraints.
Corollary 6:
(Largest Tolerance)
Specify the largest allowable tolerance in stating FRs.
Corollary 7:
(Uncoupled Design with Less Information)
Seek an uncoupled design that requires less information than coupled designs in satisfying a set of FRs.
The ultimate goal of axiomatic design is to establish a science base for design and improve design
activities by providing the designer with a theoretical foundation based on logical and rational thought processes and tools.

Nam P. Suh


The Energy Control Center

The following criteria govern the operation of an electric power
system:
• Safety
• Quality
• Reliability
• Economy
The first criterion is the most important consideration and aims to ensure the safety of personnel, environment, and property in every aspect of system operations. Quality is defined in terms of variables, such as frequency and voltage, that must conform to certain standards to accommodate the requirements for proper operation of all loads connected to the system.
Reliability of supply does not have to mean a constant supply of power; it means that any break in the supply of power is one that is agreed to and tolerated by both supplier and consumer of electric power. Keeping generation cost and losses to a minimum motivates the economy criterion, while mitigating the adverse impact of power system operation on the environment.
Within an operating power system, the following tasks are performed in order to meet the preceding criteria:
• Maintain the balance between load and generation.
• Maintain the reactive power balance in order to control the voltage
profile.
• Maintain an optimum generation schedule to control the cost and
environmental impact of the power generation.
• Ensure the security of the network against credible contingencies.
This requires protecting the network against reasonable failures of equipment or outages. The fact that the state of the power network is ever-changing, because loads and network configuration change, makes operating the system difficult. Moreover, the response of much power network apparatus is not instantaneous. For example, the startup of a thermal generating unit takes a few hours. This essentially makes it impossible to implement normal feed-forward control. Decisions will have to be made on the basis of predicted future states of the system.
Several trends have increased the need for computer-based operator support in interconnected power systems. Economy energy transactions, reliance on external sources of capacity, and competition for transmission resources have all resulted in higher loading of the transmission system. Transmission lines bring large quantities of bulk power. But increasingly, these same circuits are being used for other purposes as well: to permit sharing surplus generating capacity between adjacent utility systems, to ship large blocks of power from low-energy-cost areas to high-energy-cost areas, and to provide emergency reserves in the event of weather-related outages. Although such transfers have helped to keep electricity rates lower, they have also added greatly to the burden on transmission facilities and increased the reliance on control. Heavier loading of tie-lines, which were originally built to improve reliability and were not intended for normal use at heavy loading levels, has increased interdependence among neighboring utilities. With greater emphasis on economy, there has been an increased use of large economic generating units. This has also affected reliability. As a result of these trends, systems are now operated much closer to security limits (thermal, voltage and stability). On some systems, transmission links are being operated at or near limits 24 hours a day. The implications are:
• The trends have adversely affected system dynamic performance. A power network stressed by heavy loading has a substantially different response to disturbances from that of a non-stressed system.
• The potential size and effect of contingencies has increased dramatically. When a power system is operated closer to the limit, a relatively small disturbance may cause a system upset. The situation is further complicated by the fact that the largest-size contingency is increasing. Thus, to support operating functions, many more scenarios must be anticipated and analyzed. In addition, bigger areas of the interconnected system may be affected by a disturbance.
• Where adequate bulk power system facilities are not available, special controls are employed to maintain system integrity. Overall, systems are more complex to analyze to ensure reliability and security.
Some scenarios encountered cannot be anticipated ahead of time. Since they cannot be analyzed off-line, operating guidelines for these conditions may not be available, and the system operator may have to "improvise" to deal with them (and often does). As a result, there is an ever-increasing need for mechanisms to support dispatchers in the decision-making process. Indeed, there is a risk of human operators being unable to manage certain functions unless their awareness and understanding of the network state are enhanced.
To automate the operation of an electric power system, electric utilities rely on a highly sophisticated integrated system for monitoring and control.

Such a system has a multi-tier structure with many levels of elements. The bottom tier (level 0) is the high-reliability switchgear, which includes facilities for remote monitoring and control. This level also includes automatic equipment such as protective relays and automatic transformer tap-changers. Tier 1 consists of telecontrol cabinets mounted locally to the switchgear, and provides facilities for actuator control, interlocking, and voltage and current
measurement. Tier 2 consists of the data concentrators/master remote terminal units, which typically include a man/machine interface giving the operator access to data produced by the lower-tier equipment. The top tier (level 3) is the supervisory control and data acquisition (SCADA) system. The SCADA system accepts telemetered values and displays them in a meaningful way to operators, usually via a one-line mimic diagram. The other main component of a SCADA
system is an alarm management subsystem that automatically monitors all the inputs and informs the operators of abnormal conditions. Two control centers are normally implemented in an electric utility, one for the operation of the generation-transmission system, and the other for
the operation of the distribution system. We refer to the former as the energy management system (EMS), while the latter is referred to as the distribution management system (DMS). The two systems are intended to help the dispatchers in better monitoring and control of the power system. The simplest of such systems perform data acquisition and supervisory control, but many also have sophisticated power application functions available to assist the operator.
Since the early sixties, electric utilities have been monitoring and controlling their power networks via SCADA, EMS, and DMS. These systems provide the “smarts” needed for optimization, security, and accounting, and indeed are really formidable entities. Today’s EMS software captures and archives live data and records information especially during emergencies and system disturbances. An energy control center represents a large investment by the power
system ownership. Major benefits flowing from the introduction of this system include more reliable system operation and improved efficiency of usage of generation resources. In addition, power system operators are offered more in-depth information quickly. It has been suggested that at Houston Lighting & Power Co., system dispatchers' use of network application functions (such as Power Flow, Optimal Power Flow, and Security Analysis) has resulted in considerable economic and intangible benefits. A specific example of $70,000 in savings, achieved by avoiding field crew overtime cost and by leaving equipment out of service overnight, is reported for 1993. This is part of a total of $340,000 in savings that, in addition to increased system safety, security, and reliability, has been achieved through regular and extensive use of just some of the network analysis functions.
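The economic dispatch and Optimal Power Flow functions mentioned above are, at their core, cost-minimization computations. As a rough sketch of the idea (a toy equal-incremental-cost calculation with invented quadratic cost coefficients, ignoring unit limits and network losses; not the vendor software referred to above):

# Equal-incremental-cost dispatch for units with cost C_i(P) = a_i*P + b_i*P^2.
# Each unit runs where its incremental cost a_i + 2*b_i*P_i equals a common
# system lambda, and the outputs sum to the demand.
def dispatch(units, demand):
    lam = (demand + sum(a / (2 * b) for a, b in units)) \
          / sum(1.0 / (2 * b) for a, b in units)
    return lam, [(lam - a) / (2 * b) for a, b in units]

units = [(8.0, 0.004), (9.0, 0.006)]    # invented cost coefficients
lam, outputs = dispatch(units, 1000.0)  # meet 1000 MW of demand
print(lam, outputs)                     # 13.2 $/MWh; [650.0, 350.0] MW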



Induction and Fractional-Horsepower Motors

We will then discuss motors of the fractional-horsepower class used for applications requiring low power output, small size, and reliability. Standard ratings for this class range from 1/20 to 1 hp. Motors rated less than 1/20 hp are called subfractional-horsepower motors; these are rated in millihorsepower and range from 1 to 35 mhp. These small motors provide power for all types of equipment in the home, office, and commercial installations. The majority are of the induction-motor type and operate from a single-phase supply.
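For a sense of scale, these ratings can be converted to watts (1 hp is approximately 746 W); a quick sketch:

HP_TO_W = 746.0   # watts per horsepower
for hp in (1 / 20, 1.0, 0.001, 0.035):
    print(f"{hp:g} hp = {hp * HP_TO_W:.1f} W")
# 1/20 hp is about 37 W; a 1 mhp subfractional motor develops under 1 W.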

The induction motor is characterized by simplicity, reliability, and low cost, combined with reasonable overload capacity, minimal service requirements, and good efficiency. An induction motor utilizes alternating current supplied to the stator directly. The rotor receives power by induction effects. The stator windings of an induction motor are similar to those of the synchronous machine. The rotor may be one of two types. In the wound rotor motor, windings similar to those of the stator are employed with terminals connected to insulated slip rings mounted on the shaft. The rotor terminals are made available through carbon brushes bearing on the slip rings. The second type is called the squirrel-cage rotor, where the windings are simply conducting bars embedded in the rotor and short-circuited at each end by conducting end
rings.



Heat Radiation

In the previous sections, we have discussed the transfer of heat through conduction and convection, two processes requiring the presence of a medium. The means by which energy is transmitted between bodies without contact, and in the absence of an intervening medium, is known as radiation. Transmission of energy through radio waves, visible light, X-rays, cosmic rays, etc., all belong to this category, having different frequencies in the spectrum of electromagnetic radiation.
Here we are concerned with the type of radiation which is principally dependent on the temperature of the body, known as thermal radiation, belonging mostly to the infrared and, to a small extent, to the visible portion of the electromagnetic radiation spectrum. The heat transferred into or out of an object by thermal radiation is a function of several components. These include its surface reflectivity, emissivity, surface area, temperature and geometric orientation with respect to other thermally participating objects.
In turn, an object's surface reflectivity and emissivity are a function of its surface conditions (roughness, finish, etc.) and composition.
To account for a body's outgoing radiation (or its emissive power, defined as the heat flux per unit time), one makes a comparison to a perfect body, which absorbs the entire amount of heat radiation falling on its surface as well as emits the maximum possible thermal radiation at any given temperature. Such an object is known as a black body. The concept of a black body is important in understanding the radiation of heat. According to the Stefan–Boltzmann law, the heat emitted by a black body at any given temperature, q_b (W m^-2), is expressed as follows for a unit area in a unit time:

q_b = σ T^4

where q_b is the heat flow through radiation from the surface of a black body, T the absolute temperature, and σ a constant known as the Stefan–Boltzmann constant, with a theoretical value of 5.67 × 10^-8 W m^-2 K^-4. Because no material ideally fulfills the properties of absorption and emission of the theoretically defined black body, for practical purposes a new constant, the emissivity e, is defined for real surfaces as

e = q / q_b

q being the radiant heat from a real surface.
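A quick numeric check of both relations (the temperature and the emissivity of 0.9 below are assumed example values):

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_flux(T, emissivity=1.0):
    # Heat flux radiated by a surface at absolute temperature T (kelvin).
    return emissivity * SIGMA * T ** 4

print(radiant_flux(300.0))        # black body at 300 K: about 459 W/m^2
print(radiant_flux(300.0, 0.9))   # real surface with assumed e = 0.9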




Heat Convection

Within fluids, the heat transfer takes place through a combination of molecular conduction and energy transportation created by the motion of fluid particles. This mode of heat transfer is known as convection. The heat exchange rate in fluids by convection is much higher than the heat exchange rate in solids through conduction. This difference becomes more prominent in geothermics because rocks have very low thermal conductivities compared to metals and other solids. Convection processes inside the Earth can be of two broad types: free and forced.

Free or natural convection refers to the free motion of a fluid and is solely due to differences in the densities of the heated and cold particles of a fluid. The origin and intensity of free convection are solely determined by the thermal conditions of the process and depend on the kind of fluid, temperature, potential and volume of the space in which the process takes place. Forced convection occurs under the influence of some external force. Flow of water in hot springs and heat transport due to volcanic eruptions are examples of forced convection (advection). Forced convection depends on the physical properties of the fluid, its temperature, flow velocity, and the shape and size of the passage in which forced convection of the fluid occurs. Generally speaking, forced convection may be accompanied by free convection, and the relative influence of the latter increases with the difference in the temperatures of individual particles of the fluid and decreases with the velocity of the forced flow. The influence of natural convection is negligible at high flow velocity.
In problems dealing with the transmission of heat through the process of convection, the fluid under consideration is usually bounded on one or more sides by a solid. Let, at any given time, T_s be the temperature of the solid at its boundary with the fluid and T_∞ the fluid temperature at a far-off yet unspecified point. In accordance with Newton's law of cooling, the amount of heat flowing is proportional to the temperature difference and can be expressed as

q = h (T_s − T_∞)

where h is the heat transfer coefficient. The heat is transferred by convection and consequently the heat transfer coefficient depends, in general, upon the thermal boundary condition at the solid–fluid boundary. However, under many situations, h can be estimated satisfactorily when the fluid dynamics of the flow system is known.
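A minimal sketch of this relation (the heat transfer coefficient and temperatures below are assumed example values):

def convective_flux(h, T_s, T_inf):
    # Newton's law of cooling: q = h * (T_s - T_inf), in W/m^2.
    return h * (T_s - T_inf)

# Assumed values: h = 25 W m^-2 K^-1, solid surface at 350 K, fluid at 300 K.
print(convective_flux(25.0, 350.0, 300.0))   # 1250 W/m^2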


Heat Conduction

Thermal conduction takes place by the transfer of kinetic energy of molecules or atoms of a warmer body to those of a colder body. The transfer of kinetic energy takes place through movement of the valence electrons (also called conduction electrons) in an atom, a process analogous to electrical conduction. This type of conduction can take place in both solids and fluids.
Inside the Earth, however, conduction of heat takes place mainly through poorly conducting solid rocks constituting the crust and the mantle, which are comprised of minerals having very few conduction electrons. Another type of conduction, called lattice or phonon conduction, caused by lattice vibrations in the rocks, is primarily responsible for heat transfer in such cases. Detailed treatment of heat conduction is provided in several textbooks (e.g., Carslaw and Jaeger, 1959; Jacob, 1964); applications of heat conduction to problems in geothermics have been dealt with by Kappelmeyer and Haenel (1974), Lachenbruch and Sass (1977), Haenel et al. (1988) and others. In this section we shall discuss some basic concepts which are useful in understanding the heat flow and temperature distribution inside the Earth.
Fourier’s Equation of Heat Conduction
When a temperature gradient exists within a body, heat energy will flow from the region of high temperature to the region of low temperature. This phenomenon is known as conductive heat transfer, and is described by Fourier's equation,

q = −k ∇T        (3.6)

where q is the flow of heat per unit area per unit time (called the heat flow), k the thermal conductivity of the body (assumed isotropic) and ∇T is the temperature gradient. The negative sign appears because heat flows in the direction of decreasing temperature.
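A one-dimensional sketch (the conductivity and gradient below are assumed values, roughly representative of crustal rocks):

def conductive_flux(k, dT, dx):
    # 1-D Fourier's law: q = -k * dT/dx, in W/m^2.
    return -k * dT / dx

# Assumed values: rock with k = 3 W m^-1 K^-1 and temperature increasing
# 30 K per kilometer of depth; the sign says heat flows toward decreasing
# temperature, i.e. upward.
print(conductive_flux(3.0, 30.0, 1000.0))    # -0.09 W/m^2 (90 mW/m^2)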


Gupta


Temperature, Heat, and Its Storage

The temperature of an object can be described as the property which determines the sensation of hotness or coldness felt from contact with it. More rigorously, using the Zeroth law of thermodynamics, the temperature of a system is defined as the property that determines whether or not that system is in thermal equilibrium with any other system with which it is put in thermal contact (Finn, 1993). When two or more systems are in thermal equilibrium, they are said to have the same temperature.
Temperature is most commonly measured in the Celsius (°C), Fahrenheit (°F) and Kelvin (K) scales. The first two scales are based on the melting point of ice and the boiling point of water. In the Kelvin scale, the limiting low temperature, called absolute zero, is taken as the zero of the scale, and the triple point of water—where the ice, water and water vapor phases can co-exist in equilibrium—is equal to 273.16 K.
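The scales are related by fixed conversions; a quick sketch:

def c_to_f(c):
    return 9.0 / 5.0 * c + 32.0

def c_to_k(c):
    return c + 273.15

print(c_to_f(100.0))   # 212.0 F, boiling point of water at 1 atm
print(c_to_k(0.01))    # 273.16 K, the triple point of water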


Gupta


Melting Points

Melting points are important for determining the purity of solid products. A small amount of sample is packed into the closed end of a capillary tube with a wire or small glass rod. The tube is then attached to a thermometer, keeping the sample next to the bulb. Next, submerge the assembly into an oil-filled tube, keeping the setup in the middle of the tube (do not touch the sides or bottom). Watch for the temperature at which the solid sample melts.


Drying

Remember the ethyl alcohol — water azeotrope? You might be thinking: If I cannot distill the water out and I want my alcohol anhydrous (dry), because the water will kill my yield, what should I do? You need to dry. Sometimes you will have to dry reagents, sometimes solvents, and
sometimes the products themselves.

Baths. Baths can dry many solid substances that do not decompose under heat. Some substances
can take more heat than others so a thermometer must be used along with the knowledge of how much heat can be safely used without destroying the product, or changing it into a different
substance. The types of baths are many: water, air, toluene, sand, oil, and graphite, but they all
have the same general rules. Hot plates and heating mantles must follow these rules also.
1. Always protect the substance you are drying from the water in the atmosphere by fitting a drying tube into the glassware that is holding your substance. The drying tube should be filled with a suitable drying agent.
2. If using a liquid, never allow it to boil.
3. Never use excessive heat for drying. I have heard of nitropropene burning faster than gunpowder due to excessive heat. Personally, I feel this could have been caused by a nearby pilot light that was left burning.
Solids can also be dried at room temperature on filter paper or porous tile. You should protect
the substance from dirt and dust by covering with filter paper or a funnel. A vacuum desiccator
will greatly speed up the drying process, and should be used on products that are destroyed by
the small amount of water in the atmosphere.

Drying of Liquids. Liquids are usually dried by filtering through or mixing with a solid dehydrating agent. The most common solid drying agents are: calcium chloride, sodium hydroxide, caustic potash, anhydrous sodium sulphate, anhydrous potassium carbonate, anhydrous cupric sulphate, phosphorus pentoxide, and metallic sodium. Now for the bad news: it is essential that the drying agent have no action on the liquid or any substance that may be in the liquid. Great care should be used in the choice of a drying agent, and much research may be required. If you do not find the necessary information, call a chemist or someone who knows. I will mention a few rules.
1. Never use calcium chloride to dry alcohols or amines.
2. Never use caustic potash or caustic soda to dry acids, phenols, esters, certain halides, etc.
3. Always use a very small amount of drying agent, otherwise you will lose product by excessive absorption. It is better to use several small amounts than one large excessive amount. A useful agent called Blue Drierite can be mixed with the cheaper White Drierite and visually inspected to determine if its absorbing powers are used up. Blue Drierite turns pink when it has no more absorbent power. If you use Blue Drierite directly, you take a chance of contaminating your product with cobalt, as it was made for use in drying tubes.
4. To dry a moist solid it is often convenient to dissolve it in ether and dry this ethereal solution
with the proper drying agent. Evaporate to retrieve the solid.

Vogel


Extracting and Washing

Some people find these two important operations complex and confusing, when they are actually quite simple. You extract good substance from impure mixtures. You wash impurities from good material.

Solid — Liquid Extracting. This is not done too often, but if you have ever made tea or coffee you should be able to do this, as it is basically the same thing.
Liquid — Liquid Extracting. This requires a separatory funnel and two liquids (solutions) that must be insoluble in each other. The liquids must form two layers in the funnel or washing or extracting cannot be performed. Solids (crystals, etc.) need to be dissolved in a solvent, and that
solvent must be insoluble in the extracting or washing liquid. Never throw away any layer until you are sure that it does not contain product.

Using the Funnel. Add the liquid to be extracted or washed to your funnel; if you forgot to close the valve, your liquid is now on your shoes. Add the extractor or washer carefully to the mixture. Install the funnel stopper and invert so that the stem points to the roof; make sure one of your hands is holding the stopper securely inward. Most of these liquids fizz when mixed with the extractor, creating pressure that must be bled off through the valve as follows. Swirl or shake once very gently while still pointing the stem at the roof, then open the valve to bleed or "burp" the pressure. Close the valve and shake twice, then burp the funnel. Keep increasing the shaking between burps until you can shake the living hell out of the mixture for long periods, as this is the type of agitation necessary to extract or wash.


The Soxhlet Extraction

This apparatus is not totally necessary when called for in a formula, but for the modest price of the apparatus, or the little bit of work with which a homemade unit can be constructed, it is worth carrying out the formula with such a device. Also, yields are improved considerably, sometimes paying for the apparatus with the first formula completed. The principle is basically the same as any coffee pot: a paper thimble is filled with the substance to be extracted (F) and a loose plug of cotton (E) is placed over the top. The Soxhlet apparatus is attached to a flask containing the proper solvent (if the solvent is not given in the formula, you must usually find a solvent in which either the desired substance or the impurities are insoluble). Attach a condenser to the Soxhlet tube (B). The solvent is boiled, causing vapor to rise and pass through the holes (C) into the condenser, where it is turned back into liquid. The liquid drops down into the thimble, and when the solvent level exceeds the top of the riser tube (D), the solvent overflows back into the boiling flask (G) and the process is recycled, or continuous. You should use a minimum amount of solvent, adding more through the condenser if necessary (do not use too much, and do not let the flask (G) become dry at any time). When the extraction is complete, dismantle the apparatus and crystallize the substance from the solution in the flask, or separate the resulting oil, etc. This is the most efficient way to get myristicin from nutmeg.

Vogel


Distillation

There are four types of distillation processes; find the one that suits your needs and record or memorize the operation.

Class 1: Simple distillation. Separating liquids that boil below 150°C at one atmosphere (1 atm) from non-volatile impurities or from another liquid boiling at least 25°C higher than the first liquid. Note: the liquids to be distilled must be mixable with each other. If they are not, they will form separable layers, which you can separate much more easily with a separatory funnel.

Class 2: Vacuum distillation. Separating liquids that boil above 150°C at 1 atm from non-volatile impurities or another volatile liquid that boils at least 25°C higher than the first liquid. Boiling points can be found in the Merck Index.
Class 3: Fractional distillation. Separating mixable liquid mixtures whose boiling points are less than 25°C apart at 1 atm.
Class 4: Steam distillation. Separating or isolating tars, oils, and other liquid compounds that are insoluble, or slightly soluble, in water at all temperatures. These compounds do not have to be liquids at room temperature.
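Since the class boundaries above are simple numeric rules, a small helper can pick among them (a sketch paraphrasing the four classes; class 4 is left as its own case because it is not decided by boiling points alone):

def distillation_class(bp_low_c, bp_high_c, miscible=True):
    # Rough helper for classes 1-3 (boiling points in deg C at 1 atm).
    # Class 4 (steam distillation) is a separate case for water-insoluble
    # compounds and is not decided by boiling points alone.
    if not miscible:
        return "separable layers: use a separatory funnel, not distillation"
    if bp_high_c - bp_low_c < 25:
        return "class 3: fractional distillation"
    if bp_low_c <= 150:
        return "class 1: simple distillation"
    return "class 2: vacuum distillation"

print(distillation_class(78, 118))    # 40 C apart, low-boiling: class 1
print(distillation_class(110, 125))   # only 15 C apart: class 3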



The Reflux

This common procedure consists of mixing your reagents with a solvent, boiling the solvent, condensing the vapors, and returning them to the flask. Observe these rules.



1. The flask should be big enough to hold both the reagents and the solvent without being more than half full.
2. Place the condenser upright on the flask and clamp it.
3. Adjust your heat source so that the vapors travel no further than halfway up the condenser. Add another condenser if your formula requires a specific temperature and you experience vapor travel higher than halfway at that temperature. Also use a drying tube with anhydrous reagents.


Elements of the Design Process

All design activities must do the following:
1. Know the “customers’ needs”
2. Define the essential problems that must be solved to satisfy the needs.
3. Conceptualize the solution through synthesis, which involves the task of satisfying several different functional requirements using a set of inputs such as product design parameters within given constraints.
4. Analyze the proposed solution to establish its optimum conditions and parameter settings.
5. Check the resulting design solution to see if it meets the original customer needs.

Design proceeds from abstract and qualitative ideas to quantitative descriptions. It is an iterative process by nature: new information is generated with each step, and it is necessary to evaluate the results in terms of the preceding step. Thus, design involves a continuous interplay between the requirements the designer wants to achieve and how the designer wants to achieve these requirements.
Designers often find that a clear description of the design requirements is a difficult task. Therefore, some designers deliberately leave them implicit rather than explicit. Then they spend a great deal of time trying to improve and iterate the design, which is time consuming at best. To be efficient and generate the design that meets the perceived needs, the designer must specifically state the users’ requirements before the synthesis of solution concepts can begin.
Solution alternatives are generated after the requirements are established. Many problems in mechanical engineering can be solved by applying practical knowledge of engineering, manufacturing, and economics. Other problems require far more imaginative ideas and inventions for their solution. The word “creativity” has been used to describe the human activity that results in ingenious or unpredictable or unforeseen results (e.g., new products, processes, and systems). In this context, creative solutions are discovered or derived by inspiration and/or perspiration, without ever defining specifically what one sets out to create. This creative “spark” or “revelation” may occur, since our brain is a huge information storage and processing device that can store data and synthesize solutions through the use of associative memory, pattern recognition, digestion and recombination of diverse facts, and permutations of events.
Design will always benefit when “inspiration,” “creativity,” and/or “imagination” plays a role, but this process must be augmented by amplifying human capability systematically through fundamental understanding of cognitive behavior and by the development of scientific foundations for design methods.


Basics of Electric Energy System Theory

CONCEPTS OF POWER IN ALTERNATING CURRENT
SYSTEMS


The electric power systems specialist is in many instances more concerned with the electric power in the circuit than with the currents. As the power into an element is basically the product of the voltage across it and the current through it, it seems reasonable to swap the current for power without losing any information in describing the phenomenon. In treating the sinusoidal steady-state behavior of circuits, some further definitions are necessary. To illustrate the concepts, we will use a cosine representation of the waveforms.
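As a hedged preview of the standard definitions that follow from the cosine representation (these are textbook results, not quoted from this excerpt): with v(t) = Vm cos(ωt) and i(t) = Im cos(ωt − θ), the active power is P = (Vm·Im/2) cos θ and the reactive power is Q = (Vm·Im/2) sin θ. A small Python check with invented values:

import math

def active_reactive(v_m, i_m, theta_rad):
    # P and Q for v = Vm cos(wt), i = Im cos(wt - theta), peak amplitudes.
    s = v_m * i_m / 2.0
    return s * math.cos(theta_rad), s * math.sin(theta_rad)

P, Q = active_reactive(170.0, 14.14, math.radians(30.0))  # invented values
print(P, Q)   # P in watts, Q in vars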


Radiant Energy

Radiant Energy is emitted as charged particles from all bodies at a constant rate. The mystery surrounding it will unlock untold amounts of energy. Under the proper conditions, it can be rendered susceptible to the most incomprehensible changes caused by the oscillations, pulsation and surging throughout the Universe.
It was discovered by the Curies that all matter is radioactive.
It can be shown that radium, barium, uranium, thorium, polonium, vanadium, cerium, molybdenum, zinc, aluminum, etc., will imprint radiographs on sensitive photographic paper. The Sun's explosions hurl intense surges of Radiant Energy at the Earth. The atmosphere absorbs these particles, shielding all life on the planet from deadly harm. Electrical charge is induced into the clouds that form in the lower atmosphere. This is due to the fact that clouds are in constant motion and travel parallel to the highly charged upper atmosphere. Once enough charge is built up in a cloud, it discharges to the ground as lightning. Static charge is thus transformed into kinetic oscillations. The Earth's magnetic field quivers in response to these surgings. Is it too far-fetched to reason that we can duplicate this natural process to obtain electrical power? If an oscillating tank circuit has the correct impedance, reactance and inductance, it will absorb energy from an external oscillating electrical source. Energy is captured. The tank oscillations can be kept alive by establishing resonance with the external source. Therefore, energy is not drawn from the transformer that powers the tank circuit. In the case of the Radiant Energy Receiver, it becomes possible to harness the atomic electrons that are generated within specially constructed energized plasma tubes. What I have just revealed here is the "Holy Grail of Energy." The implications of this
discovery are far and wide. Is humankind ready for such a revelation? What will be explained
in the pages to follow is nothing more than a glorified radio receiver, one that is designed to oscillate with the oscillations of the Universe. It locks onto the very wheelwork of nature. This Radiant Energy receiver should last for many years with very little maintenance, no more than for a good radio.



Tesla


"RADIANT ENERGY," by Edgar Lucien Larkin (1903)

Radiant here means proceeding from a center in straight lines in every direction. Energy is internal and inherent. Professor Barker, "Physics," page 4, says: "Energy is defined as a condition of matter in which any definite portion may effect changes in any other definite portion." This was written in 1892 and discoveries since confirm it. Energy then, is a state of matter. Or rather, is it the result of a particular state in which matter may be when any observed phase of energy appears?
These two notions, matter and energy, or possibly one, are the sum total of all that has been found during three centuries of incessant research. This search has been in that portion of the Universe visible in a forty-inch telescope, armed with the most powerful spectroscope ever made. It is the belief of the writer that all this space is saturated with inconceivably minute corpuscles. J. J. Thomson recently discovered these. These are doubtless either electricity in its ultimate refinement, or very closely allied to it, or its immediate carriers. The smallest particle of hydrogen has long been thought to be the smallest mass of any known particle of matter. But the corpuscles detected by Thomson have only one-thousandth the mass of the hydrogen atom. The Earth and Sun, all suns and dark bodies in space, all granular matter, move through the primordial cosmic mass of electrical corpuscles as would a wire screen through water. The wide spaces in diamond, glass, steel, flint, or anything else, allow these "bodies smaller than atoms," as Thomson calls them, to pass through.



The Structure of The Power System

An interconnected power system is a complex enterprise that may be subdivided into the following major subsystems:
• Generation Subsystem
• Transmission and Subtransmission Subsystem
• Distribution Subsystem
• Utilization Subsystem

Generation Subsystem


This includes generators and transformers.
Generators – An essential component of power systems is the three-phase ac generator, known as the synchronous generator or alternator. Synchronous generators have two synchronously rotating fields: one field is produced by the rotor driven at synchronous speed and excited by dc current. The other field is produced in the stator windings by the three-phase armature currents. The dc current for the rotor windings is provided by excitation systems. In the older units, the exciters are dc generators mounted on the same shaft, providing excitation through slip rings. Current systems use ac generators with rotating rectifiers, known as brushless excitation systems. The excitation system maintains generator voltage and controls the reactive power flow. Because they lack the commutator, ac generators can generate high power at high voltage, typically 30 kV.
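Although the excerpt does not state it, the synchronous speed referred to here follows from the standard relation n_s = 120 f / p, for frequency f in hertz and p poles; a quick sketch:

def synchronous_speed_rpm(freq_hz, poles):
    return 120.0 * freq_hz / poles

print(synchronous_speed_rpm(60, 2))    # 3600 r/min, e.g. a steam-turbine unit
print(synchronous_speed_rpm(60, 24))   # 300 r/min, e.g. a slow hydro unit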


Transmission and Subtransmission Subsystem


An overhead transmission network transfers electric power from generating units to the distribution system which ultimately supplies the load. Transmission lines also interconnect neighboring utilities which allow the economic dispatch of power within regions during normal conditions, and the transfer of power between regions during emergencies. Standard transmission voltages are established in the United States by the American National Standards Institute (ANSI). Transmission voltage lines operating at more than 60 kV are standardized at 69 kV, 115 kV, 138 kV, 161 kV, 230 kV, 345 kV, 500 kV, and 765 kV line-to-line. Transmission voltages above 230 kV are usually referred to as extra-high voltage (EHV).


Distribution Subsystem


The distribution system connects the distribution substations to the consumers' service-entrance equipment. The primary distribution lines range from 4 to 34.5 kV and supply the load in a well-defined geographical area. Some small industrial customers are served directly by the primary feeders. The secondary distribution network reduces the voltage for utilization
by commercial and residential consumers. Lines and cables not exceeding a few hundred feet in length then deliver power to the individual consumers. The secondary distribution serves most of the customers at levels of 240/120 V, single-phase, three-wire; 208Y/120 V, three-phase, four-wire; or 480Y/277 V, three-phase, four-wire. The power for a typical home is derived from a transformer that reduces the primary feeder voltage to 240/120 V using a three wire line. Distribution systems are both overhead and underground. The growth of underground distribution has been extremely rapid and as much as 70 percent of new residential construction is via underground systems.

Load Subsystems


Power system loads are divided into industrial, commercial, and residential. Industrial loads are composite loads, and induction motors form a high proportion of these loads. These composite loads are functions of voltage and frequency and form a major part of the system load. Commercial and residential loads consist largely of lighting, heating, and cooking. These loads are independent of frequency and consume negligibly small reactive power.


El-Hawary


Dictionary of Energy

Active Hot Water Solar Systems


Active solar hot water systems are available with air, liquid or liquid-vapor collector fluids. Liquid is the most common. The production of hot water with an air-heating collector is usually done in large space-heating systems. It requires the installation of an air-to-water heat exchanger in an air duct or inside a rock storage bed. Losses during the heat transfer process are high, and air systems do not perform as well as liquid units for heating water. Air systems are generally capable only of preheating water from 70 to 95°F.


Active Solar Systems
In active solar systems, there are a number of factors to be considered. Solar heat may handle space heating or domestic hot water or both. The site must be suitable and the system must be compatible with climate conditions. A back-up system may need to be integrated with the solar system. Sizing the system and storage is critical.




Adjustable Capacitor Banks


Capacitor banks can be designed to correct low PF at different kVA load levels to match the facility's electrical load. The capacitor bank can be split into several steps of PF correction. Automatic controls are used to change the switching devices. Resonant conditions should be checked using harmonic analysis software before the filter bank is employed. Different combinations of filters are needed to dissipate specific harmonics. The normal procedure is to switch in the lower-order filters first, and then add the higher-order filters. The procedure is reversed when filters are removed from service. This is done to prevent parallel resonant conditions that can amplify lower-frequency harmonics. These conditions can be caused by the higher-frequency filters.
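A sketch of how one step of such a bank might be sized, using the standard power-factor relation Q = P(tan φ1 − tan φ2); the load and power factors below are invented:

import math

def correction_kvar(p_kw, pf_from, pf_to):
    # kvar needed to raise a p_kw load from pf_from to pf_to:
    # Q = P * (tan(acos(pf_from)) - tan(acos(pf_to)))
    return p_kw * (math.tan(math.acos(pf_from)) - math.tan(math.acos(pf_to)))

print(correction_kvar(500.0, 0.75, 0.95))   # about 277 kvar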


Air Monitoring


Increasingly, legislation is targeted at the monitoring of volatile organic compounds (VOCs) in the atmosphere, many of which are suspected carcinogens or are acutely toxic. Monitoring requirements include rapid, multipoint, multicomponent analysis with a minimum of operator intervention. Laboratory analysis can only give a snapshot of pollutants at the time of analysis. It does not analyze exposure to pollutants during a normal working period, nor can it detect sudden chemical leaks.
Thermal desorption involves the adsorption of VOCs on materials over a certain time period followed by desorption and analysis by a gas chromatograph. Thermal desorption only reports the average concentration recorded over several hours. It does not detect short-term exposure to high levels of VOCs.

Air Quality


A BAS can be used to monitor a building's health and make decisions to prevent the building's health from degrading. Indoor air quality (IAQ) is important and has received more attention in recent years. The American Society of Heating, Refrigeration and Air Conditioning Engineers (ASHRAE) has a standard for indoor air quality (ASHRAE Standard 62-1989, Ventilation for Acceptable Indoor Air Quality). One of the key parts of the standard is the amount of fresh air required per person for various types of building environments. For office environments, the minimum amount of fresh air is specified to be 20 cubic feet per minute (CFM) per person. This can be used to determine if the quantity of fresh air being introduced is adequate.
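That determination is simple arithmetic; a minimal sketch (the occupant count is invented):

CFM_PER_PERSON = 20.0   # ASHRAE 62-1989 minimum for office environments

def required_fresh_air_cfm(occupants):
    return CFM_PER_PERSON * occupants

print(required_fresh_air_cfm(50))   # a 50-person office needs 1000 CFM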


Alkaline Fuel Cells


Alkaline fuel cells (AFCs) use hydrogen and oxygen as fuel. The alkaline fuel cell is one of the oldest and simplest types of fuel cell. This is the type of fuel cell that has been used in space missions for some time. Hydrogen and oxygen are normally used as the fuel and oxidant. The electrodes are made of porous carbon plates laced with a catalyst to accelerate the chemical reactions. The electrolyte is potassium hydroxide. At the anode, the hydrogen gas combines with hydroxide ions to produce water vapor. This reaction leaves electrons over. These electrons are forced out of the anode and produce the electric current. At the cathode, oxygen and water plus returning electrons from the circuit form hydroxide ions, which are again recycled back to the anode. The basic core of the fuel cell, consisting of the manifolds, anode, cathode and electrolyte, is called the stack.
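Written out explicitly, the electrode reactions just described are the standard alkaline fuel cell half-reactions:

Anode:    2 H2 + 4 OH-  ->  4 H2O + 4 e-
Cathode:  O2 + 2 H2O + 4 e-  ->  4 OH-
Overall:  2 H2 + O2  ->  2 H2O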


Reductions

Since this is the most important step in the production of amphetamines, I have created a special section describing the preliminaries and techniques in great detail. After spending a lot of time and money to synthesize your nitropropene, you will be greatly disappointed to find that following the directions given in a journal is not enough to create an active compound.
There are some minor pitfalls that many scientists figure all their readers already know about, but if they don't know about them, their reduction will fail miserably, wasting their time and the chemicals involved.
Although many formulas listed are designed specifically for reducing to amphetamine-type compounds, they should work well on other drug syntheses calling for reductions. After finding
a suitable or compatible reduction formula, replace the nitrostyrene or nitropropene, etc., with
an equimolar ratio of the compound you wish to reduce. If I explained everything that I would
like you to know about reductions, this chapter would be about 200 pages longer than it already
is; obviously I cannot say everything, so I will stick with the basics.
Reductions in organic chemistry utilizing zinc, iron, and hydrogen sulfide have been performed since the 1840s. Catalytic hydrogenation came about in 1897, and reduction with metal hydrides came into usage in 1947.


REDUCTIONS WITH METALS



Zinc. Next to sodium, zinc is the most used reductant. It is available in powder, dust, and granular (mossy) forms. Zinc gets coated by a layer of zinc oxide, which must be removed to activate it before it can reduce effectively. It can easily be activated by shaking 3 to 4 min in a 1% to 2% hydrochloric acid solution. This means for every 98 ml of water volume, add 2 ml of concentrated hydrochloric acid. Then wash this solution with water, ethanol, acetone, and ether. Or activation can be accomplished by washing zinc in a solution of anhydrous zinc chloride (a very small amount) in ether, alcohol, or tetrahydrofuran. Another way is to stir 180 g of zinc in a solution of 1 g copper sulfate pentahydrate. Personally, I like the HCl acid method.
Mossy zinc is activated by converting it to zinc amalgam by brief immersion in amalgam solution. (Use 40 g mossy zinc immersed in 4 g mercuric chloride, 4 ml concentrated HCl acid, and 40 ml of water.) This type of amalgam can be used with powdered zinc also.
Reductions with zinc are very effective on aromatic nitro compounds using organic solvents and an acid medium at around 50–70°C. No matter what kind of metal is used, good stirring is a must. After the reaction is over, the zinc is filtered off, care being taken not to let it become dry, as it is pyrophoric. Also, be careful while disposing of zinc for the very same reasons.


Vogel




Tuesday, April 27, 2010

Chromatography Definition

Vapor Phase Chromatography. This is accomplished by constructing or buying complicated and expensive equipment. Although this method is very effective, it is superseded by the simple, inexpensive and effective column chromatography.

Thin Layer Chromatography. Thin layer chromatography is primarily a tool for small qualitative analysis (deciding which solvents elute which substances, etc.). A microscopic amount of sample is applied at one end of a small plate covered on one side with a thin absorbent coating. The plate is then dipped into a shallow pool of solvent, which rises on the coated layer, permitting the compounds of the sample to move with the solvent to differing heights. The individual components can then be detected as separate spots along the plate. Unfortunately, this process can only be scaled up to do several grams at a time, again making column chromatography the champion of chromatography. If, however, you wish to use thin layer, consult your local library on methods. I chose not to go into depth on thin layer because it is so inferior to column style.

Column Chromatography.

The main idea here is to dissolve your mixture and put it on the adsorbent at the top of the column. Then you wash the mixture down the column using at least one eluent (solvent), perhaps more. The compounds of your mixture are carried along by the solvents and washed out of the column at different rates and collected into separate flasks. Why do you want to do this? Let us say you have a substance that needs to be purified, but it cannot be distilled because it decomposes at a low temperature, or you wish to extract one of many mixable liquid substances that have been mixed together, etc. A column chromatography can separate, purify and extract. (The column itself is packed with an adsorbent such as alumina, topped with a layer of sand.)
Now you may open the valve until there is a little over 1 cm of solvent above the top layer
of sand. If there are any cracks or air bubbles in the adsorbent, dump everything and start over.
Dissolve the mixture (your substance) in the same solvent you are going to put through the column, keeping the amount as small as possible (this is called the analyte). You should be using the least polar solvent that will dissolve your substance. Now you may add the analyte very carefully; do not disturb the sand. Open the valve until the level of the column is the same as it was before you added the analyte (1 cm above the sand). At no time let the solvent level
drop below the sand! Add the required eluent (solvent) to the column, not disturbing the sand.
Open the valve to slowly let the eluent run through the column until the first compound comes
out. Collect the different compounds in different flasks. At no time let the solvent drop below
the top of the sand! If necessary, stop the flow, add more eluent, and start the flow again.
Should the compounds be colored, you can watch them travel down the column and separate,
changing collection flasks as the colors change. If your compound is clear then you will have
to use one of the following steps:
1. Occasionally let one or two drops of eluent fall onto a microscope slide. Evaporate the solvent and see if there are any properties of the compound that should be coming through, such as crystal shapes, tastes, smells, viscosity if an oil, etc.
2. Occasionally use several drops to spot, develop, and visualize a thin layer chromatography
plate. Although thin layer is very similar to column, you should read up on it as I do not
have time to go into the complete operation.
If you find the eluents are taking an excessive amount of time to wash down the compounds,
then switch to the next most polar solvent. If you had two compounds and one of them is already
collected, then go ahead and get some really polar solvent and get that last compound pronto.
List of solvents arranged in order of increasing polarity.


Filtration Definition

Filtration by means of suction is employed, when possible, as this gives a more rapid and complete separation of mother liquid from substance. Almost any funnel can be made to work if equipped with a platform on which the filter paper can lie. Such a platform can be made from a small ceramic plate with many small holes drilled through it, or from wire mesh. As long as the platform does not react with your substance, it should be acceptable.


Some things to remember during vacuum (suction) filtration: the funnel tip should be below the vacuum source outlet, and your filter paper should be cut to fit the funnel platform exactly; in other words, do not let the paper rest on the sides of your funnel.


Vogel


Crystallization Definition

The solid product is seldom pure when obtained from a chemical reaction, being contaminated with various impurities, reagents and byproducts. For purification, the process of crystallization, sometimes called recrystallization, is generally employed. When dealing with large-quantity formulas, the utmost care should be taken to obtain the maximum yield of a pure crystallized compound.
Crystallization by Cooling. The ideal solvent is one in which the compound to be obtained in pure crystalline form is insoluble at cold temperatures but readily soluble at hot temperatures. Also, the impurities should either be insoluble or else very soluble, and filtered accordingly to remove them. In real-life operations, this perfect solvent cannot always be found, so the nearest approach to it should be selected.
The solvents most commonly employed are: water, ethyl and methyl alcohol, ether, benzene,
petroleum ether, acetone, glacial acetic acid; also two or three solvents may be mixed to get the
desired effect as described later. If you still cannot dissolve the compound, try some of these:
chloroform, carbon disulfide, carbon tetrachloride, ethyl acetate, pyridine, hydrochloric acid,
sulfuric acid (acids are usually diluted first), nitrobenzene, aniline, phenol, dioxan, ethylene
dichloride, di, tri, tetrachloroethylene, tetrachloroethane, dichloroethyl ether, cyclohexane,
cyclohexanol, tetralin, decalin, triacetin, ethylene glycol and its esters and ethers, butyl alcohol,
diacetone alcohol, ethyl lactate, isopropyl ether, etc.
If unsure of what solvent to use, look in the Merck Index or in a chemistry handbook. This
may save you the time and expense of testing for the best solvent.
Choosing a Solvent. In order to select a suitable solvent, place small quantities (50 to 100 mg) of product into several test tubes and treat with a few drops of single solvents of the above class.
If the product dissolves easily in the cold upon shaking or if it does not dissolve appreciably on
boiling, the solvent in question may be regarded as unsuitable. Where the product or substance
dissolves on heating or boiling, and separates out again on cooling, the solvent used is suitable;
make sure that you choose the solvent that gives good crystals in the greatest abundance. At times, crystallization will not take place despite cooling or even supercooling; in such a case, the side of the glass container should be rubbed with a glass rod, and/or the solution "seeded" by the addition of a very small amount of crude product, since such operations often induce crystallization. With
substances which are sparingly soluble in the common solvents, solvents of high boiling points
such as toluene, nitrobenzene, etc., should be used.
Where no single solvent is found suitable, a mixture of two mixable solvents, one in which the product is soluble and one in which it is insoluble, may be used. The substance is dissolved in a small quantity of the solvent that has the strongest dissolving power, then the solvent that does not dissolve the product is added until complete crystallization occurs. This process can be carried out with or without heat. Let me use an example. You just dissolved a few grams of nitrostyrene
in a small (always use a small amount of solvent if possible) quantity of boiling ethanol and upon
cooling in a freezer no crystals appear. Next, you try "seeding" and another hour in the freezer,
but still no luck. By testing small amounts of the styrene with different solvents you find something that will not dissolve it, so you add this solvent slowly to the hot or cold styrene solution and the product crystallizes; if not, you must now take much time to evaporate both solvents. Needless to say, this does little purification and may take days. Evaporation is greatly speeded up if done under vacuum conditions.
To Prepare Solutions. If considerable heating is necessary, a reflux condenser should be employed to avoid loss of solvent. Where the resulting solution does not require filtration, a conical flask should always be used. During any heating, the contents of the vessel need to be frequently shaken or stirred, since the crystals may melt to a heavy oil that settles on the bottom of the vessel, making the vessel liable to crack.
In preparing the solution, an excessive amount need not be employed at first; successive small quantities should be added to the boiling or near-boiling solution until the substance just completely dissolves, or until nothing but impurities remain undissolved. With substances of low melting point, care should be taken not to use concentrated solutions from which the substance commences to separate at temperatures above its melting point.
Crystallization by Evaporation. This method is employed when the substance is so easily soluble in all solvents (hot or cold) that it will only crystallize after partial or complete evaporation. If complete evaporation must be employed, impurities will remain. So, if possible, filter off the mother liquor (solvent), as this is where the dissolved impurities will be. If you should need to heat the product with an effective solvent until thoroughly dissolved, pour it through filter paper to remove solid impurities.
The type of vessel employed depends on volatility of the solvent; obviously the conical flask
already recommended for "crystallization by cooling" is not suitable for spontaneous evaporation,
while a beaker or shallow dish is. When the latter type of vessel is used, "crusts" often form on
the sides above the surface of the liquid. Such crusts seldom consist of pure substance so they
should be removed carefully with a spatula or spoon before attempting to filter off the crystals.
Another method that can be used, if the above methods fail, is to dissolve the substance in
some solvent, then add a second solvent mixable with the first solvent, but in which the substance
is not soluble or sparingly soluble. The first solvent is then gradually removed and the substance
crystallizes back out. If the first solvent is more volatile than the second, it can be evaporated
out of the solution, leaving the non-soluble solvent behind to crystallize the substance. If the first (dissolving) solvent is not as volatile as the second, place the solution in a desiccator over some substance which absorbs the first solvent but not the second; in this way water may be removed from a water-alcohol solution by caustic potash or quicklime.
If a substance can only be crystallized by total evaporation, it can usually be purified by distillation first.

Vogel


International Policies for Renewable Energy

The relevant key issues and conditions which influence any individual country and its specific policy for promoting energy conservation and deployment of renewable energy technologies are determined by resources, targets, and constraints.
The resources of renewable energies and their technical and economic exploitable potentials are one important issue. These conditions and the sources of renewable energies differ over a wide range between continents, countries and even on a regional scale inside individual countries. Even if the resources for a certain region or country are favorable, there often exist diverse constraints which can limit the use and the level of exploitation of the resources of renewable energies.
The limitations can result from constraints such as missing infrastructure or lack of financial resources for projects. Policy targets are the most relevant issues and always reflect the general attitude of governments or policy makers towards the promotion of renewable or conventional energy systems. In general these key factors are identical in each regional, national or international context, but their magnitude may vary significantly on a regional, national and international scale. There are many ways to support renewable energies with special policies depending on technology, resources and policy targets. After all, there exist too many to discuss all of them within the scope of this handbook. For a specific and detailed research on policies, the newly released "Global Renewable Energy Policies and Measures Database" [1] provides information for more than 100 countries worldwide with respect to policy types, technologies, and renewable energy targets. Examples of policy targets are defined in the Kyoto protocol or in the "White Book" of the European Union as multinational agreements, as well as several national goals and action plans to increase the level of energy supply using renewable energies or to reach a certain level of renewable energy supply within a certain time scale.


Global Energy System

Global energy consumption in the last half century has increased very rapidly and is expected to continue to grow over the next 50 years. However, we expect to see significant differences between the last 50 years and the next.


The past increase was stimulated by relatively “cheap” fossil fuels and increased rates of industrialization in North America, Europe, and Japan; yet while energy consumption in these countries continues to increase, additional factors are making the picture for the next 50 years more complex.
These additional complicating factors include the very rapid increase in energy use in China and India (countries representing about a third of the world's population); the expected depletion of oil resources in the not-too-distant future; and the effect of human activities on global climate change. On the positive side, the renewable energy (RE) technologies of wind, biofuels, solar thermal, and photovoltaics (PV) are finally showing maturity and the ultimate promise of cost competitiveness.

Yogi Goswami
