1. In a diesel engine, engine power and speed are controlled by the quantity of fuel injected per cycle (unlike a petrol engine, the intake air is not throttled).





Similar Questions and Answers
QA->A car during its journey travels for 30 minutes at a speed of 40 km/hr, another 45 minutes at a speed of 60 km/hr, and for two hours at a speed of 70 km/hr. Find the average speed of the car?....
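For reference, the average speed is simply total distance divided by total time; a minimal worked sketch of the arithmetic in Python (illustrative only, not part of the original question; the variable names are mine):

```python
# Average speed = total distance / total time
legs = [(0.5, 40), (0.75, 60), (2.0, 70)]  # (duration in hours, speed in km/hr)

total_distance = sum(t * v for t, v in legs)   # 20 + 45 + 140 = 205 km
total_time = sum(t for t, _ in legs)           # 0.5 + 0.75 + 2.0 = 3.25 hr

print(round(total_distance / total_time, 2))   # 63.08 km/hr
```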
QA->The Indian government plans to construct a new corridor for high-speed trains with a speed range of 300-350 kmph. What is the present maximum speed of a long-distance train in India?....
QA->Which is not a common component between a petrol engine and a diesel engine?....
QA->Who invented the Diesel Engine?....
QA->Who invented the Diesel Oil Engine?....
MCQ-> The broad scientific understanding today is that our planet is experiencing a warming trend over and above natural and normal variations that is almost certainly due to human activities associated with large-scale manufacturing. The process began in the late 1700s with the Industrial Revolution, when manual labor, horsepower, and water power began to be replaced by or enhanced by machines. This revolution, over time, shifted Britain, Europe, and eventually North America from largely agricultural and trading societies to manufacturing ones, relying on machinery and engines rather than tools and animals.The Industrial Revolution was at heart a revolution in the use of energy and power. Its beginning is usually dated to the advent of the steam engine, which was based on the conversion of chemical energy in wood or coal to thermal energy and then to mechanical work primarily the powering of industrial machinery and steam locomotives. Coal eventually supplanted wood because, pound for pound, coal contains twice as much energy as wood (measured in BTUs, or British thermal units, per pound) and because its use helped to save what was left of the world's temperate forests. Coal was used to produce heat that went directly into industrial processes, including metallurgy, and to warm buildings, as well as to power steam engines. When crude oil came along in the mid- 1800s, still a couple of decades before electricity, it was burned, in the form of kerosene, in lamps to make light replacing whale oil. It was also used to provide heat for buildings and in manufacturing processes, and as a fuel for engines used in industry and propulsion.In short, one can say that the main forms in which humans need and use energy are for light, heat, mechanical work and motive power, and electricity which can be used to provide any of the other three, as well as to do things that none of those three can do, such as electronic communications and information processing. Since the Industrial Revolution, all these energy functions have been powered primarily, but not exclusively, by fossil fuels that emit carbon dioxide (CO2), To put it another way, the Industrial Revolution gave a whole new prominence to what Rochelle Lefkowitz, president of Pro-Media Communications and an energy buff, calls "fuels from hell" - coal, oil, and natural gas. All these fuels from hell come from underground, are exhaustible, and emit CO2 and other pollutants when they are burned for transportation, heating, and industrial use. These fuels are in contrast to what Lefkowitz calls "fuels from heaven" -wind, hydroelectric, tidal, biomass, and solar power. These all come from above ground, are endlessly renewable, and produce no harmful emissions.Meanwhile, industrialization promoted urbanization, and urbanization eventually gave birth to suburbanization. This trend, which was repeated across America, nurtured the development of the American car culture, the building of a national highway system, and a mushrooming of suburbs around American cities, which rewove the fabric of American life. Many other developed and developing countries followed the American model, with all its upsides and downsides. The result is that today we have suburbs and ribbons of highways that run in, out, and around not only America s major cities, but China's, India's, and South America's as well. 
And as these urban areas attract more people, the sprawl extends in every direction.All the coal, oil, and natural gas inputs for this new economic model seemed relatively cheap, relatively inexhaustible, and relatively harmless-or at least relatively easy to clean up afterward. So there wasn't much to stop the juggernaut of more people and more development and more concrete and more buildings and more cars and more coal, oil, and gas needed to build and power them. Summing it all up, Andy Karsner, the Department of Energy's assistant secretary for energy efficiency and renewable energy, once said to me: "We built a really inefficient environment with the greatest efficiency ever known to man."Beginning in the second half of the twentieth century, a scientific understanding began to emerge that an excessive accumulation of largely invisible pollutants-called greenhouse gases - was affecting the climate. The buildup of these greenhouse gases had been under way since the start of the Industrial Revolution in a place we could not see and in a form we could not touch or smell. These greenhouse gases, primarily carbon dioxide emitted from human industrial, residential, and transportation sources, were not piling up along roadsides or in rivers, in cans or empty bottles, but, rather, above our heads, in the earth's atmosphere. If the earth's atmosphere was like a blanket that helped to regulate the planet's temperature, the CO2 buildup was having the effect of thickening that blanket and making the globe warmer.Those bags of CO2 from our cars float up and stay in the atmosphere, along with bags of CO2 from power plants burning coal, oil, and gas, and bags of CO2 released from the burning and clearing of forests, which releases all the carbon stored in trees, plants, and soil. In fact, many people don't realize that deforestation in places like Indonesia and Brazil is responsible for more CO2 than all the world's cars, trucks, planes, ships, and trains combined - that is, about 20 percent of all global emissions. And when we're not tossing bags of carbon dioxide into the atmosphere, we're throwing up other greenhouse gases, like methane (CH4) released from rice farming, petroleum drilling, coal mining, animal defecation, solid waste landfill sites, and yes, even from cattle belching. Cattle belching? That's right-the striking thing about greenhouse gases is the diversity of sources that emit them. A herd of cattle belching can be worse than a highway full of Hummers. Livestock gas is very high in methane, which, like CO2, is colorless and odorless. And like CO2, methane is one of those greenhouse gases that, once released into the atmosphere, also absorb heat radiating from the earth's surface. "Molecule for molecule, methane's heat-trapping power in the atmosphere is twenty-one times stronger than carbon dioxide, the most abundant greenhouse gas.." reported Science World (January 21, 2002). “With 1.3 billion cows belching almost constantly around the world (100 million in the United States alone), it's no surprise that methane released by livestock is one of the chief global sources of the gas, according to the U.S. Environmental Protection Agency ... 'It's part of their normal digestion process,' says Tom Wirth of the EPA. 'When they chew their cud, they regurgitate [spit up] some food to rechew it, and all this gas comes out.' The average cow expels 600 liters of methane a day, climate researchers report." 
What is the precise scientific relationship between these expanded greenhouse gas emissions and global warming? Experts at the Pew Center on Climate Change offer a handy summary in their report "Climate Change 101. " Global average temperatures, notes the Pew study, "have experienced natural shifts throughout human history. For example; the climate of the Northern Hemisphere varied from a relatively warm period between the eleventh and fifteenth centuries to a period of cooler temperatures between the seventeenth century and the middle of the nineteenth century. However, scientists studying the rapid rise in global temperatures during the late twentieth century say that natural variability cannot account for what is happening now." The new factor is the human factor-our vastly increased emissions of carbon dioxide and other greenhouse gases from the burning of fossil fuels such as coal and oil as well as from deforestation, large-scale cattle-grazing, agriculture, and industrialization.“Scientists refer to what has been happening in the earth’s atmosphere over the past century as the ‘enhanced greenhouse effect’”, notes the Pew study. By pumping man- made greenhouse gases into the atmosphere, humans are altering the process by which naturally occurring greenhouse gases, because of their unique molecular structure, trap the sun’s heat near the earth’s surface before that heat radiates back into space."The greenhouse effect keeps the earth warm and habitable; without it, the earth's surface would be about 60 degrees Fahrenheit colder on average. Since the average temperature of the earth is about 45 degrees Fahrenheit, the natural greenhouse effect is clearly a good thing. But the enhanced greenhouse effect means even more of the sun's heat is trapped, causing global temperatures to rise. Among the many scientific studies providing clear evidence that an enhanced greenhouse effect is under way was a 2005 report from NASA's Goddard Institute for Space Studies. Using satellites, data from buoys, and computer models to study the earth's oceans, scientists concluded that more energy is being absorbed from the sun than is emitted back to space, throwing the earth's energy out of balance and warming the globe."Which of the following statements is correct? (I) Greenhouse gases are responsible for global warming. They should be eliminated to save the planet (II) CO2 is the most dangerous of the greenhouse gases. Reduction in the release of CO2 would surely bring down the temperature (III) The greenhouse effect could be traced back to the industrial revolution. But the current development and the patterns of life have enhanced their emissions (IV) Deforestation has been one of the biggest factors contributing to the emission of greenhouse gases Choose the correct option:....
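The livestock figures quoted in the passage above lend themselves to a quick back-of-the-envelope check; a minimal sketch in Python, using only the numbers given in the passage (illustrative only, not a rigorous estimate):

```python
# Rough annual methane volume from cattle, using only figures quoted in the passage.
cows = 1.3e9                  # "1.3 billion cows belching almost constantly around the world"
litres_per_cow_per_day = 600  # "The average cow expels 600 liters of methane a day"

litres_per_year = cows * litres_per_cow_per_day * 365
print(f"{litres_per_year:.2e} litres of methane per year")  # ~2.85e+14 litres
```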
MCQ-> Cells are the ultimate multi-taskers: they can switch on genes and carry out their orders, talk to each other, divide in two, and much more, all at the same time. But they couldn’t do any of these tricks without a power source to generate movement. The inside of a cell bustles with more traffic than Delhi roads, and, like all vehicles, the cell’s moving parts need engines. Physicists and biologists have looked ‘under the hood’ of the cell and laid out the nuts and bolts of molecular engines.The ability of such engines to convert chemical energy into motion is the envy nanotechnology researchers looking for ways to power molecule-sized devices. Medical researchers also want to understand how these engines work. Because these molecules are essential for cell division, scientists hope to shut down the rampant growth of cancer cells by deactivating certain motors. Improving motor-driven transport in nerve cells may also be helpful for treating diseases such as Alzheimer’s, Parkinson’s or ALS, also known as Lou Gehrig’s disease.We wouldn’t make it far in life without motor proteins. Our muscles wouldn’t contract. We couldn’t grow, because the growth process requires cells to duplicate their machinery and pull the copies apart. And our genes would be silent without the services of messenger RNA, which carries genetic instructions over to the cell’s protein-making factories. The movements that make these cellular activities possible occur along a complex network of threadlike fibers, or polymers, along which bundles of molecules travel like trams. The engines that power the cell’s freight are three families of proteins, called myosin, kinesin and dynein. For fuel, these proteins burn molecules of ATP, which cells make when they break down the carbohydrates and fats from the foods we eat. The energy from burning ATP causes changes in the proteins’ shape that allow them to heave themselves along the polymer track. The results are impressive: In one second, these molecules can travel between 50 and 100 times their own diameter. If a car with a five-foot-wide engine were as efficient, it would travel 170 to 340 kilometres per hour.Ronald Vale, a researcher at the Howard Hughes Medical Institute and the University of California at San Francisco, and Ronald Milligan of the Scripps Research Institute have realized a long-awaited goal by reconstructing the process by which myosin and kinesin move, almost down to the atom. The dynein motor, on the other hand, is still poorly understood. Myosin molecules, best known for their role in muscle contraction, form chains that lie between filaments of another protein called actin. Each myosin molecule has a tiny head that pokes out from the chain like oars from a canoe. Just as rowers propel their boat by stroking their oars through the water, the myosin molecules stick their heads into the actin and hoist themselves forward along the filament. While myosin moves along in short strokes, its cousin kinesin walks steadily along a different type of filament called a microtubule. Instead of using a projecting head as a lever, kinesin walks on two ‘legs’. Based on these differences, researchers used to think that myosin and kinesin were virtually unrelated. But newly discovered similarities in the motors’ ATP-processing machinery now suggest that they share a common ancestor — molecule. At this point, scientists can only speculate as to what type of primitive cell-like structure this ancestor occupied as it learned to burn ATP and use the energy to change shape. 
“We’ll never really know, because we can’t dig up the remains of ancient proteins, but that was probably a big evolutionary leap,” says Vale.On a slightly larger scale, loner cells like sperm or infectious bacteria are prime movers that resolutely push their way through to other cells. As L. Mahadevan and Paul Matsudaira of the Massachusetts Institute of Technology explain, the engines in this case are springs or ratchets that are clusters of molecules, rather than single proteins like myosin and kinesin. Researchers don’t yet fully understand these engines’ fueling process or the details of how they move, but the result is a force to be reckoned with. For example, one such engine is a spring-like stalk connecting a single-celled organism called a vorticellid to the leaf fragment it calls home. When exposed to calcium, the spring contracts, yanking the vorticellid down at speeds approaching three inches (eight centimetres) per second.Springs like this are coiled bundles of filaments that expand or contract in response to chemical cues. A wave of positively charged calcium ions, for example, neutralizes the negative charges that keep the filaments extended. Some sperm use spring-like engines made of actin filaments to shoot out a barb that penetrates the layers that surround an egg. And certain viruses use a similar apparatus to shoot their DNA into the host’s cell. Ratchets are also useful for moving whole cells, including some other sperm and pathogens. These engines are filaments that simply grow at one end, attracting chemical building blocks from nearby. Because the other end is anchored in place, the growing end pushes against any barrier that gets in its way.Both springs and ratchets are made up of small units that each move just slightly, but collectively produce a powerful movement. Ultimately, Mahadevan and Matsudaira hope to better understand just how these particles create an effect that seems to be so much more than the sum of its parts. Might such an understanding provide inspiration for ways to power artificial nano-sized devices in the future? “The short answer is absolutely,” says Mahadevan. “Biology has had a lot more time to evolve enormous richness in design for different organisms. Hopefully, studying these structures will not only improve our understanding of the biological world, it will also enable us to copy them, take apart their components and recreate them for other purpose.”According to the author, research on the power source of movement in cells can contribute to
 ....
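The car analogy in the passage above is a straightforward scaling calculation; a minimal sketch in Python (illustrative only; note that the arithmetic reproduces the quoted 170-340 range when the result is read in miles per hour):

```python
# Scale the quoted motor-protein speed (50-100 body lengths per second)
# up to a five-foot-wide car engine, as in the passage's analogy.
engine_width_ft = 5.0
for n in (50, 100):
    ft_per_s = n * engine_width_ft       # 250 or 500 ft/s
    mph = ft_per_s * 3600 / 5280         # ~170 or ~341 mph
    kmh = mph * 1.609344                 # ~274 or ~549 km/h
    print(f"{n}x per second -> {mph:.0f} mph (~{kmh:.0f} km/h)")
```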
MCQ-> Recently I spent several hours sitting under a tree in my garden with the social anthropologist William Ury, a Harvard University professor who specializes in the art of negotiation and wrote the bestselling book, Getting to Yes. He captivated me with his theory that tribalism protects people from their fear of rapid change. He explained that the pillars of tribalism that humans rely on for security would always counter any significant cultural or social change. In this way, he said, change is never allowed to happen too fast. Technology, for example, is a pillar of society. Ury believes that every time technology moves in a new or radical direction, another pillar such as religion or nationalism will grow stronger in effect, the traditional and familiar will assume greater importance to compensate for the new and untested. In this manner, human tribes avoid rapid change that leaves people insecure and frightened.But we have all heard that nothing is as permanent as change. Nothing is guaranteed. Pithy expressions, to be sure, but no more than cliches. As Ury says, people don’t live that way from day-to-day. On the contrary, they actively seek certainty and stability. They want to know they will be safe.Even so we scare ourselves constantly with the idea of change. An IBM CEO once said: ‘We only re-structure for a good reason, and if we haven’t re-structured in a while, that’s a good reason.’ We are scared that competitors, technology and the consumer will put us Out of business — so we have to change all the time just to stay alive. But if we asked our fathers and grandfathers, would they have said that they lived in a period of little change? Structure may not have changed much. It may just be the speed with which we do things.Change is over-rated, anyway, consider the automobile. It’s an especially valuable example, because the auto industry has spent tens of billions of dollars on research and product development in the last 100 years. Henry Ford’s first car had a metal chassis with an internal combustion, gasoline-powered engine, four wheels with rubber types, a foot operated clutch assembly and brake system, a steering wheel, and four seats, and it could safely do 1 8 miles per hour. A hundred years and tens of thousands of research hours later, we drive cars with a metal chassis with an internal combustion, gasoline-powered engine, four wheels with rubber tyres a foot operated clutch assembly and brake system, a steering wheel, four seats – and the average speed in London in 2001 was 17.5 miles per hour!That’s not a hell of a lot of return for the money. Ford evidently doesn’t have much to teach us about change. The fact that they’re still manufacturing cars is not proof that Ford Motor Co. is a sound organization, just proof that it takes very large companies to make cars in great quantities — making for an almost impregnable entry barrier.Fifty years after the development of the jet engine, planes are also little changed. They’ve grown bigger, wider and can carry more people. But those are incremental, largely cosmetic changes.Taken together, this lack of real change has come to man that in travel — whether driving or flying — time and technology have not combined to make things much better. 
The safety and design have of course accompanied the times and the new volume of cars and flights, but nothing of any significance has changed in the basic assumptions of the final product.At the same time, moving around in cars or aero-planes becomes less and less efficient all the time Not only has there been no great change, but also both forms of transport have deteriorated as more people clamour to use them. The same is true for telephones, which took over hundred years to become mobile or photographic film, which also required an entire century to change.The only explanation for this is anthropological. Once established in calcified organizations, humans do two things: sabotage changes that might render people dispensable, and ensure industry-wide emulation. In the 960s, German auto companies developed plans to scrap the entire combustion engine for an electrical design. (The same existed in the 1970s in Japan, and in the 1980s in France.) So for 40 years we might have been free of the wasteful and ludicrous dependence on fossil fuels. Why didn’t it go anywhere? Because auto executives understood pistons and carburettors, and would be loath to cannibalize their expertise, along with most of their factoriesAccording to the above passage, which of the following statements is true?
 ....
MCQ-> Governments looking for easy popularity have frequently been tempted into announcing give-aways of all sorts; free electricity, virtually free water, subsidised food, cloth at half price, and so on. The subsidy culture has gone to extremes. The richest farmers in the country get subsidised fertiliser. University education, typically accessed by the wealtier sections, is charged at a fraction of cost. Postal services are subsidised, and so are railway services. Bus fares cannot be raised to economical levels because there will be violent protests, so bus travel is subsidised too. In the past, price control on a variety of items, from steel to cement, meant that industrial consumers of these items got them at less than actual cost, while the losses of the public sector companies that produced them were borne by the taxpayer! A study, done a few years ago, came to the conclusion that subsidies in the Indian economy total as much as 14.5 per cent of gross domestic product. At today's level, that would work out to about Rs. 1,50,000 crore.And who pays the bill? The theory — and the political fiction on the basis of which it is sold to unsuspecting voters — is that subsidies go to the poor, and are paid for by the rich. The fact is that most subsidies go to the ‘rich’ (defined in the Indian context as those who are above the poverty line, and much of the tab goes indirectly to the poor. Because the hefty subsidy bill results in fiscal deficits, which in turn push up rates of inflation — which, as everyone knows, hits the poor the hardest of all. Indeed, that is why taxmen call inflation the most regressive form of taxation.The entire subsidy system is built on the thesis that people cannot help themselves, therefore governments must do so. That people cannot afford to pay for a variety of goods and services, and therefore the government must step in. This thesis has been applied not just in the poor countries but in the rich ones as well; hence the birth of the welfare state in the West, and an almost Utopian social security system; free medical care, food aid, old age security, et al. But with the passage of time, most of the wealthy nations have discovered that their economies cannot sustain this social safety net, which infact reduces the desire among people to pay their own way, and takes away some of the incentive to work. In short, the bill was unaffordable, and their societies were simply not willing to pay. To the regret of many, but because of the laws of economics are harsh, most Western societies have been busy pruning the welfare bill.In India, the lessons of this experience — over several decades, and in many countries — do not seem to have been learnt. Or, they are simply ignored in the pursuit of immediate votes. People who are promised cheap food or clothing do not in most cases look beyond the gift horses — to the question of who picks up the tab The uproar over higher petrol, diesel and cooking gas prices ignored this basic question: if the user of cooking gas does not want to pay for its cost, who should pay? Diesel in the country is subsidised, and if the trucker or owner of a diesel generator does not want to pay for its full cost, who does he or she think should pay the balance of the cost? It is a simple question, nevertheless it remains unasked.The Deve Gowda government has shown some courage in biting the bullet when it comes to the price of petroleum products. But it has been bitten by a much bigger subsidy bug. 
It wants to offer food at half its cost to everyone below the poverty line, supposedly estimated at some 380 million people. What will be the cost? And, of course, who will pick up the tab? The Andhra Pradesh Government has been bankrupted by selling rice at Rs. 2 per kg. Should the Central Government be bankrupted too, before facing up to the question of what is affordable and what is not? Already, India is perenially short of power because the subsidy on electricity has bankrupted most electricity boards, and made private investment wary unless it gets all manner of state guarantees.Delhi’s subsidised bus fares have bankrupted the Delhi Transport Corporation., whose buses have slowly disappeared from the capital's streets. It is easy to be soft and sentimental, by looking at programmes that will be popular. After all, who doesn't like a free lunch? But the evidence is surely mounting that the lunch isn't free at all. Somebody is paying the bill. And if you want to know who, take a look at the country's poor economic performance over the years.Which of the following should not be subsidised now, according to the passage?
 ....
MCQ-> In a modern computer, electronic and magnetic storage technologies play complementary roles. Electronic memory chips are fast but volatile (their contents are lost when the computer is unplugged). Magnetic tapes and hard disks are slower, but have the advantage that they are non-volatile, so that they can be used to store software and documents even when the power is off.In laboratories around the world, however, researchers are hoping to achieve the best of both worlds. They are trying to build magnetic memory chips that could be used in place of today’s electronics. These magnetic memories would be nonvolatile; but they would also he faster, would consume less power, and would be able to stand up to hazardous environments more easily. Such chips would have obvious applications in storage cards for digital cameras and music- players; they would enable handheld and laptop computers to boot up more quickly and to operate for longer; they would allow desktop computers to run faster; they would doubtless have military and space-faring advantages too. But although the theory behind them looks solid, there are tricky practical problems and need to be overcome.Two different approaches, based on different magnetic phenomena, are being pursued. The first, being investigated by Gary Prinz and his colleagues at the Naval Research Laboratory (NRL) in Washington, D.c), exploits the fact that the electrical resistance of some materials changes in the presence of magnetic field— a phenomenon known as magneto- resistance. For some multi-layered materials this effect is particularly powerful and is, accordingly, called “giant” magneto-resistance (GMR). Since 1997, the exploitation of GMR has made cheap multi-gigabyte hard disks commonplace. The magnetic orientations of the magnetised spots on the surface of a spinning disk are detected by measuring the changes they induce in the resistance of a tiny sensor. This technique is so sensitive that it means the spots can be made smaller and packed closer together than was previously possible, thus increasing the capacity and reducing the size and cost of a disk drive. Dr. Prinz and his colleagues are now exploiting the same phenomenon on the surface of memory chips, rather spinning disks. In a conventional memory chip, each binary digit (bit) of data is represented using a capacitor-reservoir of electrical charge that is either empty or fill -to represent a zero or a one. In the NRL’s magnetic design, by contrast, each bit is stored in a magnetic element in the form of a vertical pillar of magnetisable material. A matrix of wires passing above and below the elements allows each to be magnetised, either clockwise or anti-clockwise, to represent zero or one. Another set of wires allows current to pass through any particular element. By measuring an element’s resistance you can determine its magnetic orientation, and hence whether it is storing a zero or a one. Since the elements retain their magnetic orientation even when the power is off, the result is non-volatile memory. Unlike the elements of an electronic memory, a magnetic memory’s elements are not easily disrupted by radiation. And compared with electronic memories, whose capacitors need constant topping up, magnetic memories are simpler and consume less power. The NRL researchers plan to commercialise their device through a company called Non-V olatile Electronics, which recently began work on the necessary processing and fabrication techniques. 
But it will be some years before the first chips roll off the production line.Most attention in the field in focused on an alternative approach based on magnetic tunnel-junctions (MTJs), which are being investigated by researchers at chipmakers such as IBM, Motorola, Siemens and Hewlett-Packard. IBM’s research team, led by Stuart Parkin, has already created a 500-element working prototype that operates at 20 times the speed of conventional memory chips and consumes 1% of the power. Each element consists of a sandwich of two layers of magnetisable material separated by a barrier of aluminium oxide just four or five atoms thick. The polarisation of lower magnetisable layer is fixed in one direction, but that of the upper layer can be set (again, by passing a current through a matrix of control wires) either to the left or to the right, to store a zero or a one. The polarisations of the two layers are then either the same or opposite directions.Although the aluminum-oxide barrier is an electrical insulator, it is so thin that electrons are able to jump across it via a quantum-mechanical effect called tunnelling. It turns out that such tunnelling is easier when the two magnetic layers are polarised in the same direction than when they are polarised in opposite directions. So, by measuring the current that flows through the sandwich, it is possible to determine the alignment of the topmost layer, and hence whether it is storing a zero or a one.To build a full-scale memory chip based on MTJs is, however, no easy matter. According to Paulo Freitas, an expert on chip manufacturing at the Technical University of Lisbon, magnetic memory elements will have to become far smaller and more reliable than current prototypes if they are to compete with electronic memory. At the same time, they will have to be sensitive enough to respond when the appropriate wires in the control matrix are switched on, but not so sensitive that they respond when a neighbouring elements is changed. Despite these difficulties, the general consensus is that MTJs are the more promising ideas. Dr. Parkin says his group evaluated the GMR approach and decided not to pursue it, despite the fact that IBM pioneered GMR in hard disks. Dr. Prinz, however, contends that his plan will eventually offer higher storage densities and lower production costs.Not content with shaking up the multi-billion-dollar market for computer memory, some researchers have even more ambitious plans for magnetic computing. In a paper published last month in Science, Russell Cowburn and Mark Well and of Cambridge University outlined research that could form the basis of a magnetic microprocessor — a chip capable of manipulating (rather than merely storing) information magnetically. In place of conducting wires, a magnetic processor would have rows of magnetic dots, each of which could be polarised in one of two directions. Individual bits of information would travel down the rows as magnetic pulses, changing the orientation of the dots as they went. Dr. Cowbum and Dr. Welland have demonstrated how a logic gate (the basic element of a microprocessor) could work in such a scheme. In their experiment, they fed a signal in at one end of the chain of dots and used a second signal to control whether it propagated along the chain.It is, admittedly, a long way from a single logic gate to a full microprocessor, but this was true also when the transistor was first invented. Dr. 
Cowburn, who is now searching for backers to help commercialise the technology, says he believes it will be at least ten years before the first magnetic microprocessor is constructed. But other researchers in the field agree that such a chip, is the next logical step. Dr. Prinz says that once magnetic memory is sorted out “the target is to go after the logic circuits.” Whether all-magnetic computers will ever be able to compete with other contenders that are jostling to knock electronics off its perch — such as optical, biological and quantum computing — remains to be seen. Dr. Cowburn suggests that the future lies with hybrid machines that use different technologies. But computing with magnetism evidently has an attraction all its own.In developing magnetic memory chips to replace the electronic ones, two alternative research paths are being pursued. These are approaches based on:
 ....
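As a rough illustration of the read-out principle the passage describes (a bit stored as a magnetic orientation and recovered by measuring resistance), here is a minimal toy model in Python; the resistance values, the threshold, and the low-resistance-means-zero convention are assumptions made for illustration, not figures from the passage:

```python
# Toy model of reading a magnetoresistive memory element: the bit value is
# inferred from the element's resistance, which depends on its magnetic
# orientation (parallel vs. anti-parallel layers, or clockwise vs. anti-clockwise).
# All numbers below are invented for illustration.

R_PARALLEL = 1000.0       # ohms; lower resistance -> read as 0 (assumed convention)
R_ANTIPARALLEL = 1300.0   # ohms; higher resistance -> read as 1
THRESHOLD = (R_PARALLEL + R_ANTIPARALLEL) / 2

def write_bit(bit: int) -> float:
    """'Write' a bit by setting the element's magnetic state (modelled as its resistance)."""
    return R_ANTIPARALLEL if bit else R_PARALLEL

def read_bit(resistance: float) -> int:
    """'Read' a bit by comparing the measured resistance against a threshold."""
    return 1 if resistance > THRESHOLD else 0

memory = [write_bit(b) for b in (1, 0, 1, 1)]   # non-volatile: state persists without power
print([read_bit(r) for r in memory])            # [1, 0, 1, 1]
```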