Wednesday, February 11, 2009

Electronics

Electronics refers to the flow of charge (moving electrons) through nonmetal conductors (mainly semiconductors), whereas electrical refers to the flow of charge through metal conductors. For example, the flow of charge through silicon, which is not a metal, falls under electronics, whereas the flow of charge through copper, which is a metal, falls under electrical. This distinction began around 1906 with Lee De Forest's invention of the triode. Until 1950 the field was called "radio technology" because its principal application was the design and theory of radio transmitters, receivers and vacuum tubes.
The study of semiconductor devices and related technology is considered a branch of physics, whereas the design and construction of electronic circuits to solve practical problems comes under electronics engineering. This article focuses on the engineering aspects of electronics.

Electronic devices and components:

An electronic component is any physical entity in an electronic system whose purpose is to affect the electrons or their associated fields in a manner consistent with the intended function of the electronic system. Components are generally intended to be in mutual electromechanical contact, usually by being soldered to a printed circuit board (PCB), to create an electronic circuit with a particular function (for example an amplifier, radio receiver, or oscillator). Components may be packaged singly or in more complex groups as integrated circuits. Some common electronic components are capacitors, resistors, diodes, and transistors.
Analog circuits:
Most analog electronic appliances, such as radio receivers, are constructed from combinations of a few types of basic circuits. Analog circuits use a continuous range of voltage as opposed to discrete levels as in digital circuits.
The number of different analog circuits so far devised is huge, especially because a 'circuit' can be defined as anything from a single component to a system containing thousands of components.
Analog circuits are sometimes called linear circuits although many non-linear effects are used in analog circuits such as mixers, modulators, etc. Good examples of analog circuits include vacuum tube and transistor amplifiers, operational amplifiers and oscillators.
Some analog circuitry these days may use digital or even microprocessor techniques to improve upon the basic performance of the circuit. This type of circuit is usually called "mixed signal."
Sometimes it may be difficult to differentiate between analog and digital circuits as they have elements of both linear and non-linear operation. An example is the comparator which takes in a continuous range of voltage but puts out only one of two levels as in a digital circuit. Similarly, an overdriven transistor amplifier can take on the characteristics of a controlled switch having essentially two levels of output.
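To make the comparator example concrete, here is a minimal Python sketch; the reference, gain, and rail voltages are invented for illustration. Both functions accept a continuous input voltage, yet their outputs settle at only two levels, which is exactly the behavior that blurs the analog/digital boundary.

```python
# Minimal sketch of the analog/digital gray area described above.
# The reference, gain, and rail voltages are illustrative assumptions.

def comparator(v_in, v_ref=1.0, v_high=5.0, v_low=0.0):
    """Continuous input voltage, but only two possible output levels."""
    return v_high if v_in > v_ref else v_low

def overdriven_amplifier(v_in, gain=1000.0, v_rail=5.0):
    """A linear amplifier with so much gain that it slams into the
    supply rails, behaving in practice like a controlled switch."""
    return max(-v_rail, min(v_rail, gain * v_in))

for v in (0.99, 1.01):
    print(f"in={v:.2f} V  comparator={comparator(v):.1f} V  "
          f"amp={overdriven_amplifier(v - 1.0):+.1f} V")
```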

Electronics theory:

Mathematical methods are integral to the study of electronics. To become proficient in electronics it is also necessary to become proficient in the mathematics of circuit analysis.
Circuit analysis is the study of methods of solving generally linear systems for unknown variables such as the voltage at a certain node or the current through a certain branch of a network. A common analytical tool for this is the SPICE circuit simulator.
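As a concrete illustration, nodal analysis writes Kirchhoff's current law at each node and solves the resulting linear system G·v = i for the unknown node voltages. The following sketch solves an invented two-node resistive network with NumPy in place of a full SPICE engine; all component values are made up.

```python
# Minimal nodal-analysis sketch: solve G.v = i for two unknown node
# voltages. All component values are illustrative, not a real design.
import numpy as np

R1, R2, R3 = 1e3, 1e3, 1e3   # ohms: node1->gnd, node1->node2, node2->gnd
I_SRC = 1e-3                 # 1 mA current source injected into node 1

G = np.array([
    [1/R1 + 1/R2, -1/R2],          # KCL at node 1
    [-1/R2,       1/R2 + 1/R3],    # KCL at node 2
])
i = np.array([I_SRC, 0.0])

v = np.linalg.solve(G, i)
print(f"V1 = {v[0]:.3f} V, V2 = {v[1]:.3f} V")   # ~0.667 V and ~0.333 V
```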
Also important to electronics is the study and understanding of electromagnetic field theory.

Computer-aided design (CAD):

Today's electronics engineers have the ability to design circuits using premanufactured building blocks such as power supplies, semiconductors (such as transistors), and integrated circuits. Electronic design automation software programs include schematic capture programs and printed circuit board design programs. Popular names in the EDA software world are NI Multisim, Cadence (OrCAD), Eagle (PCB and Schematic), Mentor (PADS PCB and LOGIC Schematic), Altium (Protel), Labcenter Electronics (Proteus) and many others.

Construction methods:

Many different methods of connecting components have been used over the years. For instance, early electronics often used point-to-point wiring with components attached to wooden breadboards to construct circuits. Cordwood construction and wire wrap were other methods used. Most modern-day electronics use printed circuit boards (made of FR4) and highly integrated circuits. Health and environmental concerns associated with electronics assembly have gained increased attention in recent years, especially for products destined for the European Union, with its Restriction of Hazardous Substances Directive (RoHS) and Waste Electrical and Electronic Equipment Directive (WEEE), which came into force in July 2006.

Technique


Picks are usually gripped with two fingers (thumb and index) and played with the pointed end facing the strings. However, grip is a matter of personal preference and many notable musicians use different grips. For example, Eddie Van Halen holds the pick between his thumb and middle finger; James Hetfield and Steve Morse hold the pick with three fingers (thumb, middle and index); Pat Metheny also holds the pick with three fingers but plays using the rounded side of the plectrum. George Lynch also uses the rounded side of the pick. Stevie Ray Vaughan also played with the rounded edge of the pick, citing the fact that the edge allowed more string attack than the tip. His manic, aggressive picking style would wear through pickguards in short order, and over his years of playing wore a groove in his beloved Fender Stratocaster, Number One. Jimmy Rogers and Freddie King had a special technique utilizing two picks at once.
The motion of the pick against the string is also a personal choice. George Benson and Dave Mustaine, for example, hold the pick very stiffly between the thumb and index finger, locking the thumb joint and striking with the surface of the pick nearly parallel to the string, for a very positive, articulate, consistent tone. Other guitarists have developed a technique known as circle picking, where the thumb joint is bent on the downstroke, and straightened on the upstroke, causing the tip of the pick to move in a circular pattern. Circle picking can allow greater speed and fluidity. The angle of the pick against the string is also very personal and has a broad range of effects on tone and articulation. Many rock guitarists will use a flourish (called a pick slide or pick scrape) that involves scraping the pick along the length of a round wound string (a round wound string is a string with a coil of round wire wrapped around the outside, used for the heaviest three or four strings on a guitar; this wrapping creates a rippled surface that produces quite a distinct sound when scraped with a pick).
The two chief approaches to picking are alternate picking and economy picking. In alternate picking, the player strictly alternates each stroke between downstrokes and upstrokes, regardless of changing strings. In economy picking, the player uses the most economical stroke on each note. For example, if the first note is on the fifth string and the next note is on the fourth string, the player will use a downstroke on the fifth string and continue in the same direction to execute a downstroke on the fourth string. Economy picking sounds as though it would require more conscious thought to execute, but many guitarists learn it intuitively and find alternate picking the greater effort. Conversely, some guitarists maintain that the down-up "twitch" motion of alternate picking builds momentum better, and hence trumps economy picking at high speeds.
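For readers who think in code, the two stroke-choice rules can be stated as a toy algorithm. The Python sketch below is purely illustrative: the string numbering, the assumption that a line starts on a downstroke, and the tidy rules are simplifications of what real players do.

```python
# Toy model of the two picking rules. Strings are numbered 6 (lowest
# pitch) to 1 (highest); a downstroke travels from higher-numbered
# strings toward lower-numbered ones. Starting on a downstroke is an
# assumption of this sketch.

def alternate_picking(notes):
    """Strictly alternate down/up strokes, regardless of string changes."""
    return ["D" if i % 2 == 0 else "U" for i in range(len(notes))]

def economy_picking(notes):
    """Keep travelling in the same direction when crossing strings."""
    strokes = ["D"]
    for prev, cur in zip(notes, notes[1:]):
        if cur < prev:                      # crossing toward string 1
            strokes.append("D")
        elif cur > prev:                    # crossing toward string 6
            strokes.append("U")
        else:                               # same string: simply alternate
            strokes.append("U" if strokes[-1] == "D" else "D")
    return strokes

run = [5, 4, 4, 3]                  # the fifth-to-fourth-string example above
print(alternate_picking(run))       # ['D', 'U', 'D', 'U']
print(economy_picking(run))         # ['D', 'D', 'U', 'D']
```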
Jazz guitarist Tuck Andress has written a comprehensive article on picking technique, often cited on the web.
Picks wear out with use, and many guitarists prefer the playing "feel" of new picks.

In popular culture
Usually, a guitar pick is hidden within the player's hand, so a casual viewer may think that a guitarist plays with bare hands. However, some guitarists fling their picks into the audience for dramatic effect. Direct references to guitar picks are usually taken as a sign that someone is closely involved with playing the instrument.

Johnny Ramone's guitar pick.
Dreamweb, a 1994 computer game, opens with the main protagonist visiting a friend whose apartment floor is strewn with guitar picks, emphasizing that the friend is an avid guitarist.
Tenacious D in The Pick of Destiny features a mystical guitar pick carved from the tooth of Satan, which possesses "supara-natural" qualities (a "whole other level above super-natural").
Some fashion studios[3][4] offer jewelry made of guitar picks, such as guitar pick necklaces, earrings, pendants, and chains. Guitar pick jewelry complements the merchandise lines usually produced by an artist (e.g., t-shirts, bandannas and other memorabilia).
In the film Wild Zero, Guitar Wolf uses electric picks as a weapon against zombies.
Buddy Holly, the 1950s rocker, always hid an extra pick behind his pickguard. When his 1958 Fender Stratocaster was restored in 2006, the pick was discovered.[6]
The video game Guitar Hero: On Tour for the Nintendo DS includes a guitar-pick-shaped stylus which the player can use to strum the touchscreen.

Tuesday, February 10, 2009

Technology


Technology is a broad concept that deals with an animal species' usage and knowledge of tools and crafts, and how it affects an animal species' ability to control and adapt to its environment. Technology is a term with origins in the Greek "technologia", "τεχνολογία" — "techne", "τέχνη" ("craft") and "logia", "λογία" ("saying").[1] However, a strict definition is elusive; "technology" can refer to material objects of use to humanity, such as machines, hardware or utensils, but can also encompass broader themes, including systems, methods of organization, and techniques. The term can either be applied generally or to specific areas: examples include "construction technology", "medical technology", or "state-of-the-art technology".
The human race's use of technology began with the conversion of natural resources into simple tools. The prehistoric discovery of the ability to control fire increased the available sources of food, and the invention of the wheel helped humans travel in and control their environment. Recent technological developments, including the printing press, the telephone, and the Internet, have lessened physical barriers to communication and allowed humans to interact on a global scale. However, not all technology has been used for peaceful purposes; the development of weapons of ever-increasing destructive power has progressed throughout history, from clubs to nuclear weapons.
Technology has affected society and its surroundings in a number of ways. In many societies, technology has helped develop more advanced economies (including today's global economy) and has allowed the rise of a leisure class. Many technological processes produce unwanted by-products, known as pollution, and deplete natural resources, to the detriment of the Earth and its environment. Various implementations of technology influence the values of a society, and new technology often raises new ethical questions. Examples include the rise of the notion of efficiency in terms of human productivity, a term originally applied only to machines, and the challenge of traditional norms.
Philosophical debates have arisen over the present and future use of technology in society, with disagreements over whether technology improves the human condition or worsens it. Neo-Luddism, anarcho-primitivism, and similar movements criticise the pervasiveness of technology in the modern world, claiming that it harms the environment and alienates people; proponents of ideologies such as transhumanism and techno-progressivism view continued technological progress as beneficial to society and the human condition. Indeed, until recently, it was believed that the development of technology was restricted to human beings, but recent scientific studies indicate that other primates and certain dolphin communities have developed simple tools and learned to pass their knowledge to other generations.

Monday, February 9, 2009

Homogeneous exposure



A large number of homogeneous exposure units. The vast majority of insurance policies are provided for individual members of very large classes. Automobile insurance, for example, covered about 175 million automobiles in the United States in 2004.[2] The existence of a large number of homogeneous exposure units allows insurers to benefit from the so-called “law of large numbers,” which in effect states that as the number of exposure units increases, the actual results are increasingly likely to become close to expected results. There are exceptions to this criterion. Lloyd's of London is famous for insuring the life or health of actors, actresses and sports figures. Satellite Launch insurance covers events that are infrequent. Large commercial property policies may insure exceptional properties for which there are no ‘homogeneous’ exposure units. Despite failing on this criterion, many exposures like these are generally considered to be insurable.
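A toy simulation makes the point. Assuming an invented book of business in which each exposure unit has a 5% chance of a $10,000 loss, the observed average loss per unit drifts toward the $500 expectation as the pool grows:

```python
# Toy illustration of the law of large numbers for an insurer. The
# loss probability and severity below are invented figures.
import random

random.seed(1)
P_LOSS, SEVERITY = 0.05, 10_000      # 5% chance of a $10,000 loss
EXPECTED = P_LOSS * SEVERITY         # $500 expected loss per unit

for n in (100, 10_000, 1_000_000):
    total = sum(SEVERITY for _ in range(n) if random.random() < P_LOSS)
    print(f"{n:>9,} units: average loss per unit = ${total / n:,.2f} "
          f"(expected ${EXPECTED:,.2f})")
```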
Definite Loss. The event that gives rise to the loss that is subject to insurance should, at least in principle, take place at a known time, in a known place, and from a known cause. The classic example is death of an insured person on a life insurance policy. Fire, automobile accidents, and worker injuries may all easily meet this criterion. Other types of losses may only be definite in theory. Occupational disease, for instance, may involve prolonged exposure to injurious conditions where no specific time, place or cause is identifiable. Ideally, the time, place and cause of a loss should be clear enough that a reasonable person, with sufficient information, could objectively verify all three elements.
Accidental Loss. The event that constitutes the trigger of a claim should be fortuitous, or at least outside the control of the beneficiary of the insurance. The loss should be ‘pure,’ in the sense that it results from an event for which there is only the opportunity for cost. Events that contain speculative elements, such as ordinary business risks, are generally not considered insurable.
Large Loss. The size of the loss must be meaningful from the perspective of the insured. Insurance premiums need to cover both the expected cost of losses, plus the cost of issuing and administering the policy, adjusting losses, and supplying the capital needed to reasonably assure that the insurer will be able to pay claims. For small losses these latter costs may be several times the size of the expected cost of losses. There is little point in paying such costs unless the protection offered has real value to a buyer.
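A rough sketch of the arithmetic shows how these loadings swamp a small expected loss; the expense ratio and fixed per-policy cost here are invented figures, not industry standards.

```python
# Invented figures illustrating why trivial losses are uneconomical
# to insure: the loadings dwarf the expected loss itself.

def gross_premium(expected_loss, expense_ratio=0.25, fixed_cost=60.0):
    """Expected loss, plus a proportional loading (adjusting, capital),
    plus a fixed per-policy cost of issuing and administering."""
    return expected_loss * (1 + expense_ratio) + fixed_cost

for expected_loss in (20.0, 500.0, 5_000.0):
    p = gross_premium(expected_loss)
    print(f"expected loss ${expected_loss:>7,.0f} -> premium ${p:>8,.0f} "
          f"({p / expected_loss:.1f}x the expected loss)")
```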
Affordable Premium. If the likelihood of an insured event is so high, or the cost of the event so large, that the resulting premium is large relative to the amount of protection offered, it is not likely that anyone will buy insurance, even if on offer. Further, as the accounting profession formally recognizes in financial accounting standards, the premium cannot be so large that there is not a reasonable chance of a significant loss to the insurer. If there is no such chance of loss, the transaction may have the form of insurance, but not the substance. (See the U.S. Financial Accounting Standards Board standard number 113)
Calculable Loss. There are two elements that must be at least estimable, if not formally calculable: the probability of loss, and the attendant cost. Probability of loss is generally an empirical exercise, while cost has more to do with the ability of a reasonable person in possession of a copy of the insurance policy and a proof of loss associated with a claim presented under that policy to make a reasonably definite and objective evaluation of the amount of the loss recoverable as a result of the claim.
Limited risk of catastrophically large losses. The essential risk is often aggregation. If the same event can cause losses to numerous policyholders of the same insurer, the ability of that insurer to issue policies becomes constrained, not by factors surrounding the individual characteristics of a given policyholder, but by the factors surrounding the sum of all policyholders so exposed. Typically, insurers prefer to limit their exposure to a loss from a single event to some small portion of their capital base, on the order of 5 percent. Where the loss can be aggregated, or an individual policy could produce exceptionally large claims, the capital constraint will restrict an insurer's appetite for additional policyholders. The classic example is earthquake insurance, where the ability of an underwriter to issue a new policy depends on the number and size of the policies that it has already underwritten. Wind insurance in hurricane zones, particularly along coastlines, is another example of this phenomenon. In extreme cases, the aggregation can affect the entire industry, since the combined capital of insurers and reinsurers can be small compared to the needs of potential policyholders in areas exposed to aggregation risk. In commercial fire insurance it is possible to find single properties whose total exposed value is well in excess of any individual insurer's capital constraint. Such properties are generally shared among several insurers, or are insured by a single insurer who syndicates the risk into the reinsurance market.
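As a back-of-the-envelope illustration, the aggregation constraint can be put in numbers. The 5 percent single-event limit comes from the text above; the capital base, policy limit, and probable-maximum-loss ratio are invented for the example.

```python
# Back-of-the-envelope aggregation constraint. The 5% single-event
# limit is from the text; every other figure here is invented.

CAPITAL = 100_000_000                  # insurer's capital base, $
EVENT_LIMIT = 0.05 * CAPITAL           # tolerable loss from one event, $

POLICY_LIMIT = 250_000                 # limit of each earthquake policy, $
PML_RATIO = 0.4                        # assumed probable-maximum-loss share

max_policies = EVENT_LIMIT / (POLICY_LIMIT * PML_RATIO)
print(f"Policies writable in a single earthquake zone: {int(max_policies)}")  # 50
```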