Friday, May 28, 2010

Vacuum Pump


Pumps can be broadly categorized according to three techniques:

- Positive displacement pumps use a mechanism to repeatedly expand a cavity, allow gases to flow in from the chamber, seal off the cavity, and exhaust it to the atmosphere.

- Momentum transfer pumps, also called molecular pumps, use high-speed jets of dense fluid or high-speed rotating blades to knock gas molecules out of the chamber.

- Entrapment pumps capture gases in a solid or adsorbed state. This includes cryopumps, getters, and ion pumps.


Positive displacement pumps are the most effective for low vacuums. Momentum transfer pumps in conjunction with one or two positive displacement pumps are the most common configuration used to achieve high vacuums. In this configuration the positive displacement pump serves two purposes. First, it obtains a rough vacuum in the vessel being evacuated before the momentum transfer pump can be used to obtain the high vacuum, as momentum transfer pumps cannot start pumping at atmospheric pressures. Second, the positive displacement pump backs up the momentum transfer pump by evacuating to low vacuum the accumulation of displaced molecules in the high vacuum pump. Entrapment pumps can be added to reach ultrahigh vacuums, but they require periodic regeneration of the surfaces that trap air molecules or ions. Due to this requirement, their available operational time can be unacceptably short in low and high vacuums, thus limiting their use to ultrahigh vacuums. Pumps also differ in details like manufacturing tolerances, sealing material, pressure, flow, admission or no admission of oil vapor, service intervals, reliability, tolerance to dust, tolerance to chemicals, tolerance to liquids, and vibration.

Fluids cannot be pulled, so it is technically impossible to create a vacuum by suction. Suction is the movement of fluids into a vacuum under the effect of a higher external pressure, but the vacuum has to be created first. The easiest way to create an artificial vacuum is to expand the volume of a container. For example, the diaphragm muscle expands the chest cavity, which causes the volume of the lungs to increase. This expansion reduces the pressure and creates a partial vacuum, which is soon filled by air pushed in by atmospheric pressure.

To continue evacuating a chamber indefinitely without requiring infinite growth, a compartment of the vacuum can be repeatedly closed off, exhausted, and expanded again. This is the principle behind positive displacement pumps, like the manual water pump for example. Inside the pump, a mechanism expands a small sealed cavity to create a deep vacuum. Because of the pressure differential, some fluid from the chamber (or the well, in our example) is pushed into the pump's small cavity. The pump's cavity is then sealed from the chamber, opened to the atmosphere, and squeezed back to a minute size.
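To make the cycle concrete, here is a minimal sketch (in Python, with made-up chamber and cavity volumes) of how each expand-seal-exhaust cycle lowers the chamber pressure, treating the gas as ideal and isothermal so Boyle's law applies:

```python
# Illustrative sketch: pressure drop in a chamber evacuated by a
# positive displacement pump, one cycle at a time (isothermal,
# ideal gas). Chamber and cavity volumes are assumed numbers.

V_chamber = 10.0   # liters, chamber being evacuated (assumed)
V_cavity = 0.5     # liters, pump cavity volume (assumed)
p = 1013.0         # millibars, starting at atmospheric pressure

for cycle in range(1, 11):
    # Cavity expands: the chamber gas spreads into chamber + cavity
    # (Boyle's law: p1*V1 = p2*V2); the cavity is then sealed off
    # and exhausted, removing that fraction of the gas.
    p = p * V_chamber / (V_chamber + V_cavity)
    print(f"cycle {cycle:2d}: {p:8.2f} mb")
```

Each cycle removes only the fraction of gas that expands into the cavity, so the pressure falls geometrically rather than linearly; in practice, leakage and outgassing set a floor on the vacuum a real pump of this kind can reach.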

Wiki


Detail...

Boyle's Law


Boyle’s law states that at constant temperature for a fixed mass, the absolute pressure and the volume of a gas are inversely proportional. The law can also be stated in a slightly different manner, that the product of absolute pressure and volume is always constant.
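As a quick worked example (with assumed starting values), compressing a fixed amount of gas to a quarter of its volume at constant temperature quadruples its absolute pressure, since p1·V1 = p2·V2:

```python
# Minimal worked example of Boyle's law, p1*V1 = p2*V2,
# for a fixed amount of gas at constant temperature.

p1, V1 = 1013.0, 2.0   # millibars, liters (assumed values)
V2 = 0.5               # liters, after compression

p2 = p1 * V1 / V2      # pressure rises as volume shrinks
print(f"p2 = {p2:.1f} mb")   # 4052.0 mb: quarter the volume, four times the pressure
```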


Most gases behave like ideal gases at moderate pressures and temperatures. The technology of the 1600s could not produce high pressures or low temperatures, so deviations from the law were unlikely to be noticed at the time of publication. As improvements in technology permitted higher pressures and lower temperatures, deviations from ideal gas behavior became noticeable, and the relationship between pressure and volume can only be accurately described using real gas theory.[7] The deviation is expressed as the compressibility factor.

Robert Boyle (and Edme Mariotte) derived the law solely on experimental grounds. The law can also be derived theoretically based on the presumed existence of atoms and molecules and assumptions about motion and perfectly elastic collisions (see kinetic theory of gases). These assumptions were met with enormous resistance in the positivist scientific community at the time however, as they were seen as purely theoretical constructs for which there was not the slightest observational evidence.

Daniel Bernoulli in 1738 derived Boyle's law using Newton's laws of motion with application on a molecular level. It remained ignored until around 1845, when John Waterston published a paper building the main precepts of kinetic theory; this was rejected by the Royal Society of England. Later works of James Prescott Joule, Rudolf Clausius and in particular Ludwig Boltzmann firmly established the kinetic theory of gases and brought attention to both the theories of Bernoulli and Waterston.[8]

The debate between proponents of Energetics and Atomism led Boltzmann to write a book in 1898, which endured criticism up to his suicide in 1906.[8] Albert Einstein in 1905 showed how kinetic theory applies to the Brownian motion of a fluid-suspended particle, which was confirmed in 1908 by Jean Perrin.

Wiki

Detail...

Conservation of Energy

The law of conservation of energy is an empirical law of physics. It states that the total amount of energy in an isolated system remains constant over time (is said to be conserved over time). A consequence of this law is that energy can neither be created nor destroyed; it can only be transformed from one form to another. The only thing that can happen to energy in a closed system is that it can change form: for instance, chemical energy can become kinetic energy.



Albert Einstein's theory of relativity shows that energy and mass are the same thing, and that neither one appears without the other. Thus in closed systems, both mass and energy are conserved separately, just as was understood in pre-relativistic physics. The new feature of relativistic physics is that "matter" particles (such as those constituting atoms) could be converted to non-matter forms of energy, such as light; or kinetic and potential energy (example: heat). However, this conversion does not affect the total mass of systems, since the latter forms of non-matter energy still retain their mass through any such conversion.[1]

Today, conservation of “energy” refers to the conservation of the total system energy over time. This energy includes the energy associated with the rest mass of particles and all other forms of energy in the system. In addition the invariant mass of systems of particles (the mass of the system as seen in its center of mass inertial frame, such as the frame in which it would need to be weighed), is also conserved over time for any single observer, and (unlike the total energy) is the same value for all observers. Therefore, in an isolated system, although matter (particles with rest mass) and "pure energy" (heat and light) can be converted to one another, both the total amount of energy and the total amount of mass of such systems remain constant over time, as seen by any single observer. If energy in any form is allowed to escape such systems (see binding energy) the mass of the system will decrease in correspondence with the loss.
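A short sketch of the bookkeeping implied here, using the relation Δm = ΔE / c²; the energy figure is an arbitrary assumption chosen for illustration:

```python
# Sketch of the mass-energy bookkeeping described above: the mass
# carried away when a system loses energy, delta_m = delta_E / c**2.

c = 2.998e8            # speed of light, m/s
delta_E = 9.0e13       # joules released, an assumed round figure

delta_m = delta_E / c**2
print(f"mass decrease: {delta_m:.4f} kg")  # roughly a gram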

A consequence of the law of energy conservation is that perpetual motion machines can only work perpetually if they deliver no energy to their surroundings. If such machines produce more energy than is put into them, they must lose mass and thus eventually disappear over perpetual time, and are therefore not possible.

Wiki

Detail...

Wednesday, May 26, 2010

Fluid Statics


Fluid statics (also called hydrostatics) is the science of fluids at rest, and is a sub-field within fluid mechanics. The term usually refers to the mathematical treatment of the subject. It embraces the study of the conditions under which fluids are at rest in stable equilibrium. The use of fluid to do work is called hydraulics, and the science of fluids in motion is fluid dynamics.


Due to the fundamental nature of fluids, a fluid cannot remain at rest in the presence of a shear stress. However, fluids can exert pressure normal to any contacting surface. If a point in the fluid is thought of as an infinitesimally small cube, then it follows from the principles of equilibrium that the pressure on every side of this unit of fluid must be equal. If this were not the case, the fluid would move in the direction of the resulting force. Thus, the pressure on a fluid at rest is isotropic; i.e., it acts with equal magnitude in all directions. This characteristic allows fluids to transmit force through the length of pipes or tubes; i.e., a force applied to a fluid in a pipe is transmitted, via the fluid, to the other end of the pipe.

This concept was first formulated, in a slightly extended form, by the French mathematician and philosopher Blaise Pascal in 1647 and would later be known as Pascal's law. This law has many important applications in hydraulics.
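A minimal sketch of Pascal's law at work in a hydraulic press, with made-up piston sizes: the pressure is the same at both pistons, so the output force scales with the ratio of the piston areas.

```python
# Hedged sketch of Pascal's law in a hydraulic press: pressure
# applied at one piston is transmitted undiminished through the
# fluid, so force scales with piston area. All numbers are assumed.

import math

F1 = 100.0                 # newtons applied to the small piston
d1, d2 = 0.02, 0.20        # piston diameters in meters

A1 = math.pi * (d1 / 2)**2
A2 = math.pi * (d2 / 2)**2

F2 = F1 * A2 / A1          # same pressure, larger area, larger force
print(f"output force: {F2:.0f} N")  # 100x the input force here
```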

Any body of arbitrary shape which is immersed, partly or fully, in a fluid will experience the action of a net force in the opposite direction of the local pressure gradient. If this pressure gradient arises from gravity, the net force is in the vertical direction opposite that of the gravitational force. This vertical force is termed buoyancy or buoyant force and is equal in magnitude, but opposite in direction, to the weight of the displaced fluid.

In the case of a ship, for instance, its weight is balanced by the buoyant force from the displaced water, allowing it to float. If more cargo is loaded onto the ship, it sinks deeper into the water, displacing more water and thus receiving a greater buoyant force to balance the increased weight.

Discovery of the principle of buoyancy is attributed to Archimedes.
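A small illustrative calculation of the buoyant force, with an assumed displaced volume: by Archimedes' principle, the force equals the weight of the displaced fluid.

```python
# Sketch of Archimedes' principle: buoyant force equals the weight
# of the displaced fluid, F_b = rho * V_displaced * g. Values assumed.

rho_water = 1000.0     # kg/m^3, fresh water
g = 9.81               # m/s^2
V_displaced = 2.5      # m^3 of hull below the waterline (assumed)

F_buoyant = rho_water * V_displaced * g
print(f"buoyant force: {F_buoyant:.0f} N")  # supports ~2500 kg of boat
```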

Liquids can have free surfaces at which they interface with gases, or with a vacuum. In general, the lack of the ability to sustain a shear stress entails that free surfaces rapidly adjust towards an equilibrium. However, on small length scales, there is an important balancing force from surface tension.

When liquids are constrained in vessels whose dimensions are small, compared to the relevant length scales, surface tension effects become important leading to the formation of a meniscus through capillary action. This capillary action has profound consequences for biological systems as it is part of one of the two driving mechanisms of the flow of water in plant xylem, the transpirational pull.
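As a rough illustration of the scales involved, here is an estimate of capillary rise in a narrow glass tube using Jurin's law, h = 2γ cos θ / (ρ g r); the water/glass property values are standard, and the tube radius is an assumption.

```python
# Illustrative estimate of capillary rise in a narrow tube
# (Jurin's law): h = 2*gamma*cos(theta) / (rho*g*r).

import math

gamma = 0.0728         # N/m, surface tension of water at ~20 C
theta = 0.0            # contact angle, radians (fully wetting glass)
rho = 1000.0           # kg/m^3, density of water
g = 9.81               # m/s^2
r = 0.5e-3             # tube radius in meters (assumed)

h = 2 * gamma * math.cos(theta) / (rho * g * r)
print(f"capillary rise: {h*1000:.1f} mm")  # about 30 mm for a 0.5 mm tube
```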

Wiki

Detail...

Fluid Dynamics

In physics, fluid dynamics is a sub-discipline of fluid mechanics that deals with fluid flow—the natural science of fluids (liquids and gases) in motion. It has several subdisciplines itself, including aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of liquids in motion). Fluid dynamics has a wide range of applications, including calculating forces and moments on aircraft, determining the mass flow rate of petroleum through pipelines, predicting weather patterns, understanding nebulae in interstellar space and reportedly modeling fission weapon detonation. Some of its principles are even used in traffic engineering, where traffic is treated as a continuous fluid.




Fluid dynamics offers a systematic structure that underlies these practical disciplines, one that embraces empirical and semi-empirical laws derived from flow measurement and used to solve practical problems. The solution to a fluid dynamics problem typically involves calculating various properties of the fluid, such as velocity, pressure, density, and temperature, as functions of space and time.

Historically, hydrodynamics meant something different than it does today. Before the twentieth century, hydrodynamics was synonymous with fluid dynamics. This is still reflected in names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability—both also applicable in, as well as being applied to, gases.

The foundational axioms of fluid dynamics are the conservation laws, specifically, conservation of mass, conservation of linear momentum (also known as Newton's Second Law of Motion), and conservation of energy (also known as the First Law of Thermodynamics). These are based on classical mechanics and are modified in quantum mechanics and general relativity. They are expressed using the Reynolds Transport Theorem.

In addition to the above, fluids are assumed to obey the continuum assumption. Fluids are composed of molecules that collide with one another and solid objects. However, the continuum assumption considers fluids to be continuous, rather than discrete. Consequently, properties such as density, pressure, temperature, and velocity are taken to be well-defined at infinitesimally small points, and are assumed to vary continuously from one point to another. The fact that the fluid is made up of discrete molecules is ignored.
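One standard way to sanity-check the continuum assumption is the Knudsen number, the ratio of the molecular mean free path to a characteristic length of the flow; a small sketch with assumed values:

```python
# Rough check of the continuum assumption via the Knudsen number,
# Kn = lambda / L (mean free path over characteristic length).
# Kn << 1 is the usual continuum regime. Numbers are assumptions.

mean_free_path = 68e-9   # m, air at sea level (~68 nm)
L = 0.01                 # m, characteristic length of the flow (assumed)

Kn = mean_free_path / L
print(f"Kn = {Kn:.2e}")
print("continuum assumption reasonable" if Kn < 0.01
      else "continuum assumption questionable")
```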

For fluids which are sufficiently dense to be a continuum, do not contain ionized species, and have velocities small in relation to the speed of light, the momentum equations for Newtonian fluids are the Navier-Stokes equations, a non-linear set of differential equations that describes the flow of a fluid whose stress depends linearly on velocity gradients and pressure. The unsimplified equations do not have a general closed-form solution, so they are primarily of use in computational fluid dynamics. The equations can be simplified in a number of ways, all of which make them easier to solve. Some of them allow appropriate fluid dynamics problems to be solved in closed form, as in the example below.
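One such closed-form solution is steady, pressure-driven flow between two parallel plates (plane Poiseuille flow), with the parabolic profile u(y) = (G / 2μ) · y · (h − y); a sketch with assumed parameters:

```python
# Closed-form Navier-Stokes example: steady plane Poiseuille flow
# between parallel plates, u(y) = (G / (2*mu)) * y * (h - y).
# The pressure gradient and geometry are assumed values.

G = 10.0       # -dp/dx, pressure gradient in Pa/m (assumed)
mu = 1.0e-3    # Pa*s, dynamic viscosity (water-like)
h = 0.01       # m, gap between the plates

for i in range(5):
    y = h * i / 4
    u = G / (2 * mu) * y * (h - y)      # parabolic velocity profile
    print(f"y = {y:.4f} m  u = {u:.3f} m/s")
```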

Wiki

Detail...

Metrology

Metrology is defined by the International Bureau of Weights and Measures (BIPM) as "the science of measurement, embracing both experimental and theoretical determinations at any level of uncertainty in any field of science and technology."[1] The ontology and international vocabulary of metrology (VIM) are maintained by the International Organisation for Standardisation.

A core concept in metrology is (metrological) traceability, defined as "the property of the result of a measurement or the value of a standard whereby it can be related to stated references, usually national or international standards, through an unbroken chain of comparisons, all having stated uncertainties." The level of traceability establishes the level of comparability of the measurement: whether the result of a measurement can be compared to the previous one, a measurement result a year ago, or to the result of a measurement performed anywhere else in the world.

Traceability is most often obtained by calibration, establishing the relation between the indication of a measuring instrument and the value of a measurement standard. These standards are usually coordinated by national laboratories: National Institute of Standards and Technology (USA), National Physical Laboratory, UK, etc.

Traceability, accuracy, precision, systematic bias, and evaluation of measurement uncertainty are critical parts of a quality management system.


Mistakes can make measurements and counts incorrect. If there are no mistakes, all counts will be exactly correct. Even if there are no mistakes, nearly all measurements are still inexact. The term 'error' is reserved for that inexactness, also called measurement uncertainty. Among the few exact measurements are:

The absence of the quantity being measured, such as a voltmeter with its leads shorted together: the meter should read zero exactly.
Measurement of an accepted constant under qualifying conditions, such as the triple point of pure water: the thermometer should read 273.16 kelvin (0.01 degrees Celsius, 32.018 degrees Fahrenheit) when qualified equipment is used correctly.
Self-checking ratiometric measurements, such as a potentiometer: the ratio between steps is independently adjusted and verified to be beyond influential inexactness.
All other measurements either have to be checked to be sufficiently correct or left to chance. Metrology is the science that establishes the correctness of specific measurement situations. This is done by anticipating and allowing for both mistakes and error. The precise distinction between measurement error and mistakes is not settled and varies by country. Repeatability and reproducibility studies help quantify the precision: one common method is an ANOVA Gauge R&R study.
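To make the idea concrete, here is a toy sketch in the spirit of a repeatability and reproducibility study; the data are invented, and a full ANOVA Gauge R&R partitions the variance far more carefully (including part-to-part and interaction terms):

```python
# Toy repeatability/reproducibility breakdown: two operators each
# measure the same part several times. Data values are invented.

from statistics import mean, pstdev

measurements = {
    "operator_A": [10.02, 10.01, 10.03, 10.02],
    "operator_B": [10.05, 10.06, 10.04, 10.05],
}

# Repeatability: spread of each operator's own repeated readings
within = [pstdev(vals) for vals in measurements.values()]
repeatability = mean(within)

# Reproducibility: spread between the operators' average readings
operator_means = [mean(vals) for vals in measurements.values()]
reproducibility = pstdev(operator_means)

print(f"repeatability   ~ {repeatability:.4f}")
print(f"reproducibility ~ {reproducibility:.4f}")
```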

Calibration is the process where metrology is applied to measurement equipment and processes to determine conformity with a known standard of measurement, usually traceable to a national standards board.

Wiki

Detail...

Monday, May 10, 2010

The Mechanics of Deception Cryptography

Secret communication is nothing new. Books on the history of cryptography reveal that hidden messages date to as far back as there are records. After millennia of efforts to conceal messages, one might think that every imaginable cryptographic technique was tried long ago and is now widely known. "Not so," according to Morten St. George, author of a book on cryptic thinking.

St. George maintains that forty-two of the Nostradamus prophecies employ a unique type of cryptography that until now has never been identified, catalogued, or reused. Moreover, St. George claims that it may be the most powerful form of cryptography ever devised. Surprisingly, however, its techniques are extremely simple; it employs no complex mathematical codes or anything like that. As St. George puts it: "Its enormous strength lies in deception. If you don't know that cryptographic techniques are being employed, if you don't suspect that there is a hidden communication underneath, then you look no further than what's on the surface and the real meaning forever evades you." St. George calls it "deception cryptography."



To illustrate deception cryptography and its techniques, I will return to the "historical secrets" of an earlier article to show how those secrets were uncovered. In case an acknowledgment box for this article doesn't appear in all places, I am inserting here an opportunity to thank Morten St. George for this follow-up interview, for technical assistance, and for permission to derive material from his cryptic thinking book. Note that the book's analysis often spans many pages and in a paragraph or two I can only hope to point to the key elements.

*** Napoleon Bonaparte was murdered on his island of captivity by poison in the wine, instigated by a woman enraged over the defeat of his army in 1813.

St. George informs me that countless Nostradamians have meticulously examined each and every stanza looking for tie-ins with Napoleon, and in the end they believe they found dozens of stanzas that apply to Napoleon. St. George observes that there are only two stanzas that really do apply to Napoleon, and to his knowledge, no Nostradamian has ever found either one of them.

The stanza that deals with Napoleon's death is numbered VIII-13. There, the road to Napoleon lies in the first few words of the third verse: "Army to a thousand years." You need a number for adding a thousand years, but internally this stanza has no numbers. You are therefore forced to resort to the stanza number, VIII-13, or 813, producing the year 1813. The army of 1813 was one of the most famous in all of history, Napoleon's Grande Armée. In 1813, in the aftermath of its retreat from Russia, that great army wholly disintegrated. In the last verse, the poison drink (inferred from elsewhere to be wine) kills two people. The second person was Napoleon's bodyguard.

Often, the prophecies affirm a theme elsewhere. In VIII-13, the affirmation lies in the second verse that invokes Greek mythology, a classic myth of betrayal, referring to the hero Bellerophon by name. Bellerophon was the name of the English ship that took Napoleon into captivity.

*** President John F. Kennedy was assassinated by a group of conspirators led by his vice-president, Lyndon Baines Johnson. Officially accused by the Warren Commission, Lee Harvey Oswald was completely innocent since the bullets that killed Kennedy were fired from a rooftop, not from an open window.

St. George derives this from the stanza numbered VI-37. "From a rooftop, evil ruin... Innocent of the act, after his death, he shall be accused." Then comes the all-critical final verse, beginning "The guilty one hidden." According to St. George, "hidden" is a magical word in the prophecies. You have to take it literally. The name of the guilty one is hidden in the French words that follow: "taillis" to the "bruyne." Symbolic of the death of a Roman Catholic (Kennedy) would be the Latin cross, so that's what you have to create. Put "taillis" directly on top of "bruyne." For the left side of the cross, bring down the three letters "B", "A", and "I". For the right side of the cross, bring down the three letters "N", "E", and "S". That spells out "Baines." Now looking at the middle of the cross, up on top, we see the letters "LY". Four letters have to be invented to complete the central shaft on the bottom end. The guilty one: LYndon BAINES.

*** The JFK conspirators were also behind the assassination of Kennedy's brother, Senator Robert F. Kennedy, several years later.

The answer here lies in stanza VI-11. This stanza begins by talking about a family with seven children, retroactively clarifying that the children at that time consisted of four sisters and three brothers, of which the oldest two will be surprised by death. The last two verses read: to kill the two brothers (an instance of grammatical deception) they shall be seduced; the conspirators shall pass away in their sleep. It's clear: these conspirators killed both Kennedy brothers, and since the conspirators died of natural causes, we can infer that they got away with it.

In The Mechanics of Deception Cryptography - Part II, I plan to show how deception cryptography ingeniously conveys information about the assassination of Martin Luther King, the attempted assassination of Pope John Paul II, and other secrets. At this point, I took a break from the technical stuff to ask Morten St. George if he can foresee future events in the prophecies.

Unfortunately, St. George answered in the negative. He says that beyond the inherent obscurity of deception cryptography, future events are rarely envisioned, and if you can't envision it, you can't predict it, no matter how clear or unclear the wording might be. St. George gave me a couple of examples: "Can you envision before the event that a phrase like 'dead alive like a stump' could be alluding to Ronald Reagan's brain-dead Press Secretary? But after the event you can see it. Or can you envision before the event that a phrase like 'from the sky shall come a great king of terror' could be alluding to hijacked airplanes on a terror mission? Only after the event does it make sense."

St. George continued: "For sure, anyone can use the prophecies to make predictions but that doesn't mean they will come true. I have a good one for you, a natural disaster beyond human manipulation: Earthquake Strikes in December 2006. It could be an earthquake in the North Atlantic, with tidal wave reaching London, or it could be an earthquake that hits the Greek city of Corinth. Mind you, the stanzas in question were already spectacularly successful for past events, and normally there would be no expectation of a second application. But here there is. I see a danger of failure, not of prophecy, but of one of the fundamental ciphers of deception cryptography. The prophecies do not fail."

Deception cryptography clearly has its weaknesses, and also a few unusual strengths, on which I plan to expand next time.

Gersiane De Brito

Detail...

Nondestructive testing

Nondestructive testing (NDT) is a wide group of analysis techniques used in science and industry to evaluate the properties of a material, component or system without causing damage.[1] Because NDT does not permanently alter the article being inspected, it is a highly valuable technique that can save both money and time in product evaluation, troubleshooting, and research. Common NDT methods include ultrasonic, magnetic-particle, liquid penetrant, radiographic, and eddy-current testing.[1] NDT is a commonly used tool in forensic engineering, mechanical engineering, electrical engineering, civil engineering, systems engineering, aeronautical engineering, medicine, and art.



Methods
NDT methods may rely upon use of electromagnetic radiation, sound, and inherent properties of materials to examine samples. This includes some kinds of microscopy to examine external surfaces in detail, although sample preparation techniques for metallography, optical microscopy and electron microscopy are generally destructive as the surfaces must be made smooth through polishing or the sample must be electron transparent in thickness. The inside of a sample can be examined with penetrating electromagnetic radiation, such as X-rays, or with sound waves in the case of ultrasonic testing. Contrast between a defect and the bulk of the sample may be enhanced for visual examination by the unaided eye by using liquids to penetrate fatigue cracks. One method (liquid penetrant testing) involves using dyes, fluorescent or non-fluorescing, in fluids for non-magnetic materials, usually metals. Another commonly used method for magnetic materials involves using a liquid suspension of fine iron particles applied to a part while it is in an externally applied magnetic field (magnetic-particle testing).

Example:

Weld verification

In manufacturing, welds are commonly used to join two or more metal surfaces. Because these connections may encounter loads and fatigue during product lifetime, there is a chance that they may fail if not created to proper specification. For example, the base metal must reach a certain temperature during the welding process, must cool at a specific rate, and must be welded with compatible materials or the joint may not be strong enough to hold the surfaces together, or cracks may form in the weld causing it to fail. The typical welding defects, lack of fusion of the weld to the base metal, cracks or porosity inside the weld, and variations in weld density, could cause a structure to break or a pipeline to rupture.

Welds may be tested using NDT techniques such as industrial radiography using X-rays or gamma rays, ultrasonic testing, liquid penetrant testing or via eddy current and flux linkage. In a proper weld, these tests would indicate a lack of cracks in the radiograph, show clear passage of sound through the weld and back, or indicate a clear surface without penetrant captured in cracks.

Welding techniques may also be actively monitored with acoustic emission techniques before production to design the best set of parameters to use to properly join two materials.

Structural mechanics
Structures can be complex systems that undergo different loads during their lifetime. Some complex structures, such as the turbomachinery in a liquid-fuel rocket, can also cost millions of dollars. Engineers will commonly model these structures as coupled second-order systems, approximating dynamic structure components with springs, masses, and dampers. These sets of differential equations can be used to derive a transfer function that models the behavior of the system.

In NDT, the structure undergoes a dynamic input, such as the tap of a hammer or a controlled impulse. Key properties, such as displacement or acceleration at different points of the structure, are measured as the corresponding output. This output is recorded and compared to the corresponding output given by the transfer function and the known input. Differences may indicate an inappropriate model (which may alert engineers to unpredicted instabilities or performance outside of tolerances), failed components, or an inadequate control system.
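A minimal sketch of the second-order model described here, for a single mass-spring-damper with assumed parameters; the natural frequency and damping ratio it predicts are the kinds of quantities compared against the measured response:

```python
# Sketch of a single mass-spring-damper, the building block of the
# coupled second-order models mentioned above, with transfer
# function H(s) = 1 / (m*s^2 + c*s + k). Parameters are assumed.

import math

m = 2.0      # kg, modal mass
c = 8.0      # N*s/m, damping coefficient
k = 5000.0   # N/m, stiffness

omega_n = math.sqrt(k / m)            # natural frequency, rad/s
zeta = c / (2 * math.sqrt(k * m))     # damping ratio

print(f"natural frequency: {omega_n:.1f} rad/s "
      f"({omega_n / (2 * math.pi):.1f} Hz)")
print(f"damping ratio: {zeta:.3f}")
# A measured response that deviates from these predictions hints
# at a failed component or an inadequate model.
```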


wikipedia.org

Detail...

Destructive testing

In destructive testing, tests are carried out to the specimen's failure, in order to understand a specimen's structural performance or material behaviour under different loads. These tests are generally much easier to carry out, yield more information, and are easier to interpret than nondestructive testing.

Destructive testing is most suitable, and economic, for objects which will be mass produced, as the cost of destroying a small number of specimens is negligible. It is usually not economic to do destructive testing where only one or very few items are to be produced (for example, in the case of a building).

Some types of destructive testing:

Stress tests
Crash tests
Hardness tests
Metallographic tests


Stress testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. Stress testing may have a more specific meaning in certain industries, such as fatigue testing for materials.

A crash test is a form of destructive testing usually performed in order to ensure safe design standards in crashworthiness and crash compatibility for automobiles or related components.

Hardness is the measure of how resistant solid matter is to various kinds of permanent shape change when a force is applied. Macroscopic hardness is generally characterized by strong intermolecular bonds; however, the behavior of solid materials under force is complex, so there are different measurements of hardness: scratch hardness, indentation hardness, and rebound hardness.

Hardness is dependent on ductility, elasticity, plasticity, strain, strength, toughness, viscoelasticity, and viscosity.

Common examples of hard matter are ceramics, concrete, metals, and superhard materials, which can be contrasted with soft matter.

Metallography is the study of the physical structure and components of metals, typically using microscopy.

Ceramic and polymeric materials may also be prepared using metallographic techniques, hence the terms ceramography, plastography and, collectively, materialography.

wikipedia.org

Detail...

Scientists marvel at “asphalt” volcanoes

Some 10 miles (16 km) off the coast of Santa Barbara, Calif., a series of strange landmarks rise from the ocean floor. They’ve been there for 40,000 years, hidden in the Pacific’s murky depths—until now, scientists say.

They’re called asphalt volcanoes.

“They’re massive features, and are made completely out of asphalt,” said David Valentine, a geoscientist at the University of California at Santa Barbara and the lead author of a paper published online this week in the journal Nature Geoscience. “They’re larger than a football field and as tall as a six-story building.”

The Ice Age domes lie at a depth of 700 feet (220 meters), too deep for scuba diving, which explains why humans haven’t seen them, said Don Rice, director of the U.S. National Science Foundation’s Chemical Oceanography Program, which funded the research.

Asphalt is a sticky black substance found in petroleum and often used for paving. In so-called “asphalt” roads, though, gravel or sand are mixed with the true asphalt, which solidifies at cooler temperatures.

Valentine and colleagues first viewed the volcanoes during a 2007 dive on a research submarine dubbed Alvin. Valentine credits Ed Keller, an earth scientist at the university, with guiding him and colleagues to the site. “Ed had looked at some bathymetry [sea floor topography] studies conducted in the 1990s and noted some very unusual features,” Valentine said.



A slab from an asphalt volcano discovered on the sea floor of the Santa Barbara Channel. (Credit: Oscar Pizarro, U. of Sydney)

Based on Keller’s research, Valentine and other scientists took Alvin into the area in 2007 and discovered the source of the mystery. Using the sub’s robotic arm, the researchers broke off samples and brought them to labs for testing. In 2009, Valentine and colleagues conducted a detailed survey of the area using an autonomous underwater vehicle, Sentry, which takes photos as it glides about nine feet above the ocean floor.

“When you ‘fly’ Sentry over the sea floor, you can see all of the cracking of the asphalt and flow features,” said Valentine. “All the textures are visible of a once-flowing liquid that has solidified in place. That’s one of the reasons we’re calling them volcanoes, because they have so many features that are indicative of a lava flow.”

Tests showed, however, that these aren’t your typical lava volcanoes, like those found in Hawaii and elsewhere around the Pacific Rim. Using an array of techniques, the scientists determined that the structures are asphalt, formed when petroleum flowed from the sea floor about 30,000-40,000 years ago.

“The volcanoes underscore a little-known fact: half the oil that enters the coastal environment is from natural oil seeps like the ones off the coast of California,” said Chris Reddy of the Woods Hole Oceanographic Institution in Woods Hole, Mass., a co-author of the paper.

world-science.net

Detail...

Sunday, May 9, 2010

Air Pressure

The weight of air resting on a given area of the Earth's surface is known as air pressure. Air pressure (or atmospheric pressure) is always greatest at sea level, where the air is at its most dense. At the top of a mountain the air is less dense, and the pressure is therefore lower.

The air is composed of billions of tiny particles that are constantly moving in all directions, bouncing off whatever they encounter. These collisions constitute what is known as air pressure. The more collisions occurring within a certain area, the greater the air pressure will be.



We are completely unaware of this, but the air is constantly exerting pressure on us; on average this is 14.7 pounds per square inch (about 1 kg per sq cm). Air molecules are naturally drawn towards the earth by gravity, and as a consequence the density of the air is greater near the surface of the earth. Therefore the number of molecules in a given area, and hence the air pressure, decreases with altitude. These molecules are in constant motion and this prevents them from settling at ground level.
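A rough sketch of how quickly pressure falls with altitude, using the isothermal barometric formula p = p0 · exp(−M g h / (R T)); real atmospheres are not isothermal, so this is only approximate:

```python
# Approximate pressure vs altitude from the isothermal barometric
# formula. The assumed average temperature makes this a rough guide.

import math

p0 = 1013.25   # sea-level pressure, millibars
M = 0.02896    # kg/mol, molar mass of air
g = 9.81       # m/s^2
R = 8.314      # J/(mol*K), gas constant
T = 288.0      # K, an assumed average temperature

for h in (0, 1000, 3000, 5000, 8848):
    p = p0 * math.exp(-M * g * h / (R * T))
    print(f"{h:5d} m: {p:7.1f} mb")
```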

At sea level, standard air pressure is 1013 millibars (mb), but typically the pressure varies between 980 and 1040 mb. As with any aspect of the atmosphere there are extremes, and the highest and lowest recorded pressures are as follows:

The highest recorded atmospheric pressure, 1085.7 mb, occurred at Tosontsengel, Mongolia, on 19 December 2001.

The lowest sea level air pressure ever recorded was 870 mb in the eye of Typhoon Tip over the Pacific Ocean on October 12th, 1979.


Air pressure is measured using a barometer. Although the changes are usually too slow to observe directly, air pressure is almost always changing.

Weather maps showing the pressure at the surface are drawn using millibars. Air pressure can also tell us what kind of weather to expect. Winds blow to even out differences in air pressure: wind is the movement of air over the surface of the Earth, from areas of high pressure to areas of low pressure. A large change in pressure over a relatively small distance, a large pressure gradient, can result in far stronger winds. When the isobars are tightly packed, locations within that large pressure gradient can expect windy conditions. As air rises and creates an area of low pressure, water vapour in the atmosphere will condense and form clouds. Sinking air, in an area of high pressure, means that no condensation will take place. This is why low pressure is associated with cloudy skies and unsettled conditions, and high pressure is associated with clearer skies and drier conditions.

Winds near the Earth's surface rotate anticlockwise toward the centre of areas of low pressure and clockwise outward from the centre of areas of high pressure in the Northern Hemisphere, with the opposite flow (clockwise around areas of low pressure and anticlockwise around areas of high pressure) occurring in the Southern Hemisphere. The main reason for this pattern is the Coriolis force, which results from the Earth's rotation on its axis and deflects wind to the right in the Northern Hemisphere and to the left in the Southern Hemisphere.

Mark Boardman

Detail...

A linear motor

A linear motor is, simply speaking, an electric motor that uses a linear force mechanism to generate the power needed for a given application. In contrast to a rotational electric motor (found in automobiles, appliances, and commonly used electrical equipment), a linear motor generates its energy output through exclusively linear principles; i.e. there is no torque or rotation to produce accelerated force through the relationship between electrical current and magnetic field. Linear motors are used for a variety of purposes, which include high velocity trains, military weaponry, spacecraft exploration, robotic technologies, medical advancement, and automated engineering systems whose job is to produce mass amounts of a specified product.


There are two basic types of linear motors: low-acceleration and high-acceleration. Low-acceleration motors are typically used for applications in which endurance is favored over high bursts of electromechanical power or energy. These types of linear motors are engineered for Maglev trains, automated application systems, etc. High-acceleration motors are the more common of the two, and produce higher velocity outputs for shorter amounts of time, such as in firearms, military equipment, spacecraft propulsion, and the like. Low-acceleration linear motors are designed to accelerate an object up to a continuous stable speed, while high-acceleration linear motors will accelerate an object up to a very high speed and then release the object. Typically, the low-acceleration linear motor will be engineered with one winding system on one side of the motor and magnets on the other side to create the electromagnetic repulsion necessary for successful application force; this is called linear synchronous design. The high-acceleration linear motor will generally be constructed of a three-phase winding on one side and a conductor plate on the other side of the motor to meet the intended engineering objective; this is called linear induction design.
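As a back-of-envelope illustration of the direct linear force both designs aim to produce (with no rotary torque conversion involved), the thrust on a current-carrying conductor in a magnetic field follows F = B·I·L; all the values below are assumptions:

```python
# Rough sketch of the direct linear force a motor winding develops,
# F = B * I * L per conductor. All values are assumed for illustration.

B = 0.8        # tesla, air-gap flux density
I = 10.0       # amperes through the winding
L = 0.2        # meters of conductor in the field, per turn
turns = 50     # conductors contributing to thrust

F = B * I * L * turns
print(f"thrust: {F:.0f} N")   # 80 N for these assumed values
```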

Linear motors offer a number of advantages in this ever-evolving technological world. Whether your application calls for a low- or high-acceleration linear motor system, linear motors can provide faster acceleration and higher velocities, along with gains in automated accuracy, repeatability, and long-term reliability.

For more information on and examples of linear motors, please visit Airex Corporation Linear Motors: http://www.airex.com/products/linear.htm

Alexis Gibrault

Detail...