1. The main memory of a computer can also be called - ?





Similar Questions and Answers
QA->A computer has 8 MB in main memory, 128 KB cache with block size of 4KB. If direct mapping scheme is used, how many different main memory blocks can map into a given physical cache block?....
QA->A computer with a 32 bit wide data bus implements its memory using 8 K x 8 static RAM chips. The smallest memory that this computer can have is:....
QA->A byte addressable computer has memory capacity of 4096 KB and can perform 64 operations. An instruction involving 3 memory operands and one operator needs:....
QA->Routine is not loaded until it is called. All routines are kept on disk in a relocatable load format. The main program is loaded into memory and is executed. This type of loading is called:....
QA->The maximum size of main memory of a computer is determined by:....
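The first three numeric questions in the list above (direct-mapped cache blocks, the 8 K x 8 SRAM bank, and the instruction size) are pure arithmetic. The sketch below works through that arithmetic under the usual textbook assumptions (byte-addressable memory, full memory addresses for each operand plus an opcode field, and no extra addressing-mode bits); it is an illustration, not part of the original questions.

```python
import math

# Direct-mapped cache: 8 MB main memory, 128 KB cache, 4 KB blocks.
main_memory = 8 * 1024 * 1024              # bytes of main memory
cache_size = 128 * 1024                    # bytes of cache
block_size = 4 * 1024                      # bytes per block
memory_blocks = main_memory // block_size  # 2048 blocks in main memory
cache_lines = cache_size // block_size     # 32 cache lines
print(memory_blocks // cache_lines)        # 64 memory blocks share each cache line

# 32-bit data bus built from 8 K x 8 static RAM chips.
bus_width = 32                             # bits delivered per memory access
chip_width = 8                             # data bits per chip
chip_bytes = 8 * 1024                      # 8 K words of 8 bits = 8 KB per chip
chips_in_parallel = bus_width // chip_width        # 4 chips form one 32-bit bank
print(chips_in_parallel * chip_bytes // 1024)      # 32 KB is the smallest memory

# Instruction with 3 memory operands and 1 operator,
# 4096 KB byte-addressable memory, 64 distinct operations.
address_bits = int(math.log2(4096 * 1024))         # 22-bit operand address
opcode_bits = int(math.log2(64))                   # 6-bit opcode
print(3 * address_bits + opcode_bits)              # 72 bits per instruction
```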
MCQ-> In a modern computer, electronic and magnetic storage technologies play complementary roles. Electronic memory chips are fast but volatile (their contents are lost when the computer is unplugged). Magnetic tapes and hard disks are slower, but have the advantage that they are non-volatile, so that they can be used to store software and documents even when the power is off.

In laboratories around the world, however, researchers are hoping to achieve the best of both worlds. They are trying to build magnetic memory chips that could be used in place of today’s electronics. These magnetic memories would be non-volatile; but they would also be faster, would consume less power, and would be able to stand up to hazardous environments more easily. Such chips would have obvious applications in storage cards for digital cameras and music-players; they would enable handheld and laptop computers to boot up more quickly and to operate for longer; they would allow desktop computers to run faster; they would doubtless have military and space-faring advantages too. But although the theory behind them looks solid, there are tricky practical problems that need to be overcome.

Two different approaches, based on different magnetic phenomena, are being pursued. The first, being investigated by Gary Prinz and his colleagues at the Naval Research Laboratory (NRL) in Washington, D.C., exploits the fact that the electrical resistance of some materials changes in the presence of a magnetic field — a phenomenon known as magneto-resistance. For some multi-layered materials this effect is particularly powerful and is, accordingly, called “giant” magneto-resistance (GMR). Since 1997, the exploitation of GMR has made cheap multi-gigabyte hard disks commonplace. The magnetic orientations of the magnetised spots on the surface of a spinning disk are detected by measuring the changes they induce in the resistance of a tiny sensor. This technique is so sensitive that it means the spots can be made smaller and packed closer together than was previously possible, thus increasing the capacity and reducing the size and cost of a disk drive.

Dr. Prinz and his colleagues are now exploiting the same phenomenon on the surface of memory chips, rather than spinning disks. In a conventional memory chip, each binary digit (bit) of data is represented using a capacitor, a reservoir of electrical charge that is either empty or full, to represent a zero or a one. In the NRL’s magnetic design, by contrast, each bit is stored in a magnetic element in the form of a vertical pillar of magnetisable material. A matrix of wires passing above and below the elements allows each to be magnetised, either clockwise or anti-clockwise, to represent zero or one. Another set of wires allows current to pass through any particular element. By measuring an element’s resistance you can determine its magnetic orientation, and hence whether it is storing a zero or a one. Since the elements retain their magnetic orientation even when the power is off, the result is non-volatile memory. Unlike the elements of an electronic memory, a magnetic memory’s elements are not easily disrupted by radiation. And compared with electronic memories, whose capacitors need constant topping up, magnetic memories are simpler and consume less power. The NRL researchers plan to commercialise their device through a company called Non-Volatile Electronics, which recently began work on the necessary processing and fabrication techniques. But it will be some years before the first chips roll off the production line.

Most attention in the field is focused on an alternative approach based on magnetic tunnel-junctions (MTJs), which are being investigated by researchers at chipmakers such as IBM, Motorola, Siemens and Hewlett-Packard. IBM’s research team, led by Stuart Parkin, has already created a 500-element working prototype that operates at 20 times the speed of conventional memory chips and consumes 1% of the power. Each element consists of a sandwich of two layers of magnetisable material separated by a barrier of aluminium oxide just four or five atoms thick. The polarisation of the lower magnetisable layer is fixed in one direction, but that of the upper layer can be set (again, by passing a current through a matrix of control wires) either to the left or to the right, to store a zero or a one. The polarisations of the two layers are then either in the same or in opposite directions.

Although the aluminium-oxide barrier is an electrical insulator, it is so thin that electrons are able to jump across it via a quantum-mechanical effect called tunnelling. It turns out that such tunnelling is easier when the two magnetic layers are polarised in the same direction than when they are polarised in opposite directions. So, by measuring the current that flows through the sandwich, it is possible to determine the alignment of the topmost layer, and hence whether it is storing a zero or a one.

To build a full-scale memory chip based on MTJs is, however, no easy matter. According to Paulo Freitas, an expert on chip manufacturing at the Technical University of Lisbon, magnetic memory elements will have to become far smaller and more reliable than current prototypes if they are to compete with electronic memory. At the same time, they will have to be sensitive enough to respond when the appropriate wires in the control matrix are switched on, but not so sensitive that they respond when a neighbouring element is changed. Despite these difficulties, the general consensus is that MTJs are the more promising idea. Dr. Parkin says his group evaluated the GMR approach and decided not to pursue it, despite the fact that IBM pioneered GMR in hard disks. Dr. Prinz, however, contends that his plan will eventually offer higher storage densities and lower production costs.

Not content with shaking up the multi-billion-dollar market for computer memory, some researchers have even more ambitious plans for magnetic computing. In a paper published last month in Science, Russell Cowburn and Mark Welland of Cambridge University outlined research that could form the basis of a magnetic microprocessor — a chip capable of manipulating (rather than merely storing) information magnetically. In place of conducting wires, a magnetic processor would have rows of magnetic dots, each of which could be polarised in one of two directions. Individual bits of information would travel down the rows as magnetic pulses, changing the orientation of the dots as they went. Dr. Cowburn and Dr. Welland have demonstrated how a logic gate (the basic element of a microprocessor) could work in such a scheme. In their experiment, they fed a signal in at one end of the chain of dots and used a second signal to control whether it propagated along the chain.

It is, admittedly, a long way from a single logic gate to a full microprocessor, but this was true also when the transistor was first invented. Dr. Cowburn, who is now searching for backers to help commercialise the technology, says he believes it will be at least ten years before the first magnetic microprocessor is constructed. But other researchers in the field agree that such a chip is the next logical step. Dr. Prinz says that once magnetic memory is sorted out “the target is to go after the logic circuits.” Whether all-magnetic computers will ever be able to compete with other contenders that are jostling to knock electronics off its perch — such as optical, biological and quantum computing — remains to be seen. Dr. Cowburn suggests that the future lies with hybrid machines that use different technologies. But computing with magnetism evidently has an attraction all its own.

In developing magnetic memory chips to replace the electronic ones, two alternative research paths are being pursued. These are approaches based on:
 ....
MCQ-> Read the following passage carefully and answer the questions given below it. Certain words/phrases have been given in bold to help you locate them while answering some of the questions: In every religion, culture and civilization, feeding the poor and hungry is considered one of the most noble deeds. However, such large-scale feeding will require huge investment both in resources and time. A better alternative is to create conditions by which proper wholesome food is available to all the rural poor at an affordable price. Getting this done will be the biggest charity.

Our work with the rural poor in villages of Western Maharashtra has shown that most of these people are landless laborers. After working the whole day in the fields in the scorching sun, they come home in the evening and have to cook for the whole family. The cooking is done on the most primitive chulha (wood stove), which results in tremendous indoor air pollution. Many of them also have no electricity, so they use primitive and polluting kerosene lamps. World Health Organization (WHO) data has shown that about 300,000 deaths/year in India can be directly attributed to indoor air pollution in such huts. At the same time this pollution results in many respiratory ailments, and these people spend close to Rs. 200-400 per month on medical bills. Besides the pollution, the rural poor also eat a very poor diet. They eat whatever is available daily at Public Distribution System (PDS) shops, and most of the time these shops are out of rations. Thus they cook whatever is available. The hard work together with poor eating takes a heavy toll on their health. Besides this, malnutrition also affects the physical and mental health of their children and may lead to the creation of a whole generation of mentally challenged citizens.

So I feel that the best way to provide adequate food for the rural poor is by setting up rural restaurants on a large scale. These restaurants will be similar to regular ones, but for people below the poverty line (BPL) they will provide meals at subsidized rates. These citizens will pay only Rs. 10 per meal and the rest, which is expected to be quite small, will come as a part of Government subsidy. With existing open-market prices of vegetables and groceries, the average cost of a simple meal for a family of four comes to Rs. 50 per meal, or Rs. 12.50 per person per meal. If the PDS prices are taken for the groceries, then the average cost will be Rs. 7.50 per person per meal. This makes the subsidy approximately Rs. 2.50 per person per meal only, and hence quite small. The buying of meals could be done by use of the UID (Aadhar) card by the rural poor. The total cost should be Rs. 30 per day for three vegetarian meals of breakfast, lunch and dinner.

The rural poor will get better nutrition and tasty food by eating in these restaurants. Besides, the time saved can be used for resting and other gainful activities like teaching children. Since the food will not be cooked in huts, this strategy will result in less pollution in rural households. This will be beneficial for their health. Besides, women's chores will be reduced drastically. Another advantage of eating in these restaurants will be increased social interaction of the rural poor, since this could also become a meeting place. Eating in restaurants will also require fewer utensils in the house and hence less expenditure. For other things like hot water for a bath, making tea, boiling milk and cooking on holidays, some utensils and fuel will be required.

Our Institute NARI has developed an extremely efficient and environment-friendly stove which simultaneously provides both light and heat for cooking, and hence may provide the necessary functions. Providing reasonably priced wholesome food is the basic aim and program of the Government of India (GOI). This is the basis of their much-touted food security program. However, in 65 years they have not been able to do so. Thus I feel a public-private partnership can help in this. To help the restaurant owners, the GOI or state Governments should provide them with soft loans and other lines of credit for setting up such facilities. The corporate world can take this up as a part of their corporate social responsibility activity. Their participation will help ensure good quality restaurants and services. Besides the charitable work, this will also make good business sense. McDonald's-type restaurant systems for rural areas can be a good model to be set up for quality control, both in terms of hygiene and in terms of quality of food material. However, the focus will be on availability of wholesome, simple vegetarian food in these restaurants.

More clientele (volumes) will make these restaurants economical. Existing models of dhabas, udipi-type restaurants, etc. can be used in this scheme. These restaurants may also be able to provide midday meals in rural schools. At present the midday meal program is faltering due to various reasons. Food coupons in western countries provide cheap food for the poor. However, quite a number of fast food restaurants in the US do not accept them. Besides, these coupons are most of the time used for non-food items. It will be mandatory for rural restaurants to accept payment via UID cards for BPL citizens. Existing soup kitchens, langars and temple food are based on charity. For large-scale rural use it should be based on a good social enterprise business model. Cooking food in these restaurants will also result in much more efficient use of energy, since energy/kg of food cooked in households is greater than that in restaurants. The main thing, however, will be to drastically reduce food wastage in these restaurants. Rural restaurants can also be forced to use clean fuels like LPG or locally produced biomass-based liquid fuels. This strategy is very difficult to enforce for individual households. Large-scale employment generation in rural areas may result because of this activity. With an average norm of 30 people employed per 100-chair restaurant, this program has the potential of generating about 20 million jobs permanently in rural areas. Besides, the infrastructure development in setting up restaurants and establishing the food chain, etc. will help the local farmers and will create huge wealth generation in these areas. In the long run this strategy may provide better food security for the rural poor than the existing one, which is based on cheap food availability in the PDS - a system which is prone to corruption and leakage.

In accordance with the view expressed by the writer of this article, what is the biggest charity?
 ....
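The cost figures quoted in the passage above can be checked with a few lines of arithmetic. The sketch below only restates the numbers the author gives (Rs. 50 per family meal at open-market prices, Rs. 7.50 per person at PDS prices, Rs. 10 paid by a BPL diner); it adds no data of its own.

```python
# Figures quoted in the passage (all amounts in Rs.).
family_size = 4
open_market_meal = 50                              # simple meal for a family of four
per_person_open = open_market_meal / family_size   # Rs. 12.50 per person per meal
per_person_pds = 7.50                              # per person at PDS grocery prices
bpl_price = 10                                     # what a BPL citizen pays per meal
subsidy = per_person_open - bpl_price              # ~Rs. 2.50 per person per meal
daily_outlay = 3 * bpl_price                       # Rs. 30 for breakfast, lunch and dinner
print(per_person_open, per_person_pds, subsidy, daily_outlay)
```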
MCQ-> Read the given passage carefully and select the best answer to each question out of the four given alternatives. Computers have become indispensable in modern times. From information to having fun, you can possibly do everything with the help of this amazing machine. For the modern-day child, computers are vital, and the amount of time that they devote to them has constantly been on the rise. One of the most popular things with children when it comes to the computer is video games, or computer games. From puzzles to racing, action to sports, strategy to adventure, computer games are possibly the biggest addiction with most children. With companies such as Sony and Microsoft going all out to promote Xbox and Playstation to children worldwide, the allure of these games has only got better. These video games not only help in making a child's brain sharper through mental stimulation but also help relieve them of anxiety or pain. In some cases, games have proved to aid dyslexic kids in reading better. Since adults also love playing games, it can be a time of bonding between adults and children, increasing the amount of time spent together, especially when time spent by parents and children is very limited nowadays. On the other hand, the addiction to these computer games can severely harm the child. Since children keep on playing for long hours, it can lead to eye damage. The impact of excessive visual media is evident, as a large number of children these days start wearing spectacles from an early age. Long hours of playing computer games can also result in headaches and dizziness.

One of the most popular things with children when it comes to the computer?
 ....
MCQ-> Read the passage carefully and answer the questions given at the end of each passage: Turning the business involved more than segmenting and pulling out of retail. It also meant maximizing every strength we had in order to boost our profit margins. In re-examining the direct model, we realized that inventory management was not just a core strength; it could be an incredible opportunity for us, and one that had not yet been discovered by any of our competitors. In Version 1.0 of the direct model, we eliminated the reseller, thereby eliminating the mark-up and the cost of maintaining a store. In Version 1.1, we went one step further to reduce inventory inefficiencies.

Traditionally, a long chain of partners was involved in getting a product to the customer. Let’s say you have a factory building a PC we’ll call model #4000. The system is then sent to the distributor, which sends it to the warehouse, which sends it to the dealer, who eventually pushes it on to the consumer by advertising, “I’ve got model #4000. Come and buy it.” If the consumer says, “But I want model #8000,” the dealer replies, “Sorry, I only have model #4000.” Meanwhile, the factory keeps building model #4000s and pushing the inventory into the channel. The result is a glut of model #4000s that nobody wants. Inevitably, someone ends up with too much inventory, and you see big price corrections. The retailer can’t sell it at the suggested retail price, so the manufacturer loses money on price protection (a practice common in our industry of compensating dealers for reductions in suggested selling price). Companies with long, multi-step distribution systems will often fill their distribution channels with products in an attempt to clear out older targets. This dangerous and inefficient practice is called “channel stuffing”. Worst of all, the customer ends up paying for it by purchasing systems that are already out of date.

Because we were building directly to fill our customers’ orders, we didn’t have finished goods inventory devaluing on a daily basis. Because we aligned our suppliers to deliver components as we used them, we were able to minimize raw material inventory. Reductions in component costs could be passed on to our customers quickly, which made them happier and improved our competitive advantage. It also allowed us to deliver the latest technology to our customers faster than our competitors.

The direct model turns conventional manufacturing inside out. Conventional manufacturing requires a stockpile of inventory, because your plant can’t keep going without it. But if you don’t know what you need to build because of dramatic changes in demand, you run the risk of ending up with terrific amounts of excess and obsolete inventory. That is not the goal. The concept behind the direct model has nothing to do with stockpiling and everything to do with information. The quality of your information is inversely proportional to the amount of assets required, in this case excess inventory. With less information about customer needs, you need massive amounts of inventory. So, if you have great information – that is, you know exactly what people want and how much – you need that much less inventory. Less inventory, of course, corresponds to less inventory depreciation. In the computer industry, component prices are always falling as suppliers introduce faster chips, bigger disk drives and modems with ever-greater bandwidth. Let’s say that Dell has six days of inventory. Compare that to an indirect competitor who has twenty-five days of inventory with another thirty in their distribution channel. That’s a difference of forty-nine days, and in forty-nine days, the cost of materials will decline about 6 percent.

Then there’s the threat of getting stuck with obsolete inventory if you’re caught in a transition to a next-generation product, as we were with those memory chips in 1989. As the product approaches the end of its life, the manufacturer has to worry about whether it has too much in the channel and whether a competitor will dump products, destroying profit margins for everyone. This is a perpetual problem in the computer industry, but with the direct model, we have virtually eliminated it. We know when our customers are ready to move on technologically, and we can get out of the market before its most precarious time. We don’t have to subsidize our losses by charging higher prices for other products. And ultimately, our customer wins.

Optimal inventory management really starts with the design process. You want to design the product so that the entire product supply chain, as well as the manufacturing process, is oriented not just for speed but for what we call velocity. Speed means being fast in the first place. Velocity means squeezing time out of every step in the process. Inventory velocity has become a passion for us. To achieve maximum velocity, you have to design your products in a way that covers the largest part of the market with the fewest number of parts. For example, you don’t need nine different disk drives when you can serve 98 percent of the market with only four. We also learned to take into account the variability of the low-cost and high-cost components. Systems were reconfigured to allow for a greater variety of low-cost parts and a limited variety of expensive parts. The goal was to decrease the number of components to manage, which increased the velocity, which decreased the risk of inventory depreciation, which increased the overall health of our business system.

We were also able to reduce inventory well below the levels anyone thought possible by constantly challenging and surprising ourselves with the results. We had our internal skeptics when we first started pushing for ever-lower levels of inventory. I remember the head of our procurement group telling me that this was like “flying low to the ground at 300 knots.” He was worried that we wouldn’t see the trees. In 1993, we had $2.9 billion in sales and $220 million in inventory. Four years later, we posted $12.3 billion in sales and had inventory of $33 million. We’re now down to six days of inventory and we’re starting to measure it in hours instead of days.

Once you reduce your inventory while maintaining your growth rate, a significant amount of risk comes from the transition from one generation of product to the next. Without traditional stockpiles of inventory, it is critical to precisely time the discontinuance of the older product line with the ramp-up in customer demand for the newer one. Since we were introducing new products all the time, it became imperative to avoid the huge drag effect from mistakes made during transitions. E&O – short for “excess and obsolete” – became taboo at Dell. We would debate about whether our E&O was 30 or 50 cents per PC. Since anything less than $20 per PC is not bad, when you’re down in the cents range, you’re approaching stellar performance.

Find out the TRUE statement:
 ....
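The inventory advantage described in the Dell passage is easy to restate numerically. The sketch below only rearranges figures given in the text (6 days of inventory versus 25 + 30 days, the roughly 6 percent component-cost decline over that gap, and the 1993 versus 1997 sales and inventory numbers); it is an illustrative check, not additional data.

```python
# Inventory figures quoted in the passage.
dell_days = 6
competitor_days = 25 + 30                 # own stock plus distribution channel
gap_days = competitor_days - dell_days    # 49-day head start on falling component prices
component_cost_decline = 0.06             # ~6% decline over those 49 days (as stated)

# Inventory relative to sales, 1993 vs. 1997 ($ millions, as stated).
ratio_1993 = 220 / 2900                   # ~7.6% of sales tied up in inventory
ratio_1997 = 33 / 12300                   # ~0.27% of sales
print(gap_days, component_cost_decline, round(ratio_1993, 3), round(ratio_1997, 4))
```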
MCQ-> Cells are the ultimate multi-taskers: they can switch on genes and carry out their orders, talk to each other, divide in two, and much more, all at the same time. But they couldn’t do any of these tricks without a power source to generate movement. The inside of a cell bustles with more traffic than Delhi roads, and, like all vehicles, the cell’s moving parts need engines. Physicists and biologists have looked ‘under the hood’ of the cell and laid out the nuts and bolts of molecular engines.

The ability of such engines to convert chemical energy into motion is the envy of nanotechnology researchers looking for ways to power molecule-sized devices. Medical researchers also want to understand how these engines work. Because these molecules are essential for cell division, scientists hope to shut down the rampant growth of cancer cells by deactivating certain motors. Improving motor-driven transport in nerve cells may also be helpful for treating diseases such as Alzheimer’s, Parkinson’s or ALS, also known as Lou Gehrig’s disease.

We wouldn’t make it far in life without motor proteins. Our muscles wouldn’t contract. We couldn’t grow, because the growth process requires cells to duplicate their machinery and pull the copies apart. And our genes would be silent without the services of messenger RNA, which carries genetic instructions over to the cell’s protein-making factories. The movements that make these cellular activities possible occur along a complex network of threadlike fibers, or polymers, along which bundles of molecules travel like trams. The engines that power the cell’s freight are three families of proteins, called myosin, kinesin and dynein. For fuel, these proteins burn molecules of ATP, which cells make when they break down the carbohydrates and fats from the foods we eat. The energy from burning ATP causes changes in the proteins’ shape that allow them to heave themselves along the polymer track. The results are impressive: In one second, these molecules can travel between 50 and 100 times their own diameter. If a car with a five-foot-wide engine were as efficient, it would travel 170 to 340 kilometres per hour.

Ronald Vale, a researcher at the Howard Hughes Medical Institute and the University of California at San Francisco, and Ronald Milligan of the Scripps Research Institute have realized a long-awaited goal by reconstructing the process by which myosin and kinesin move, almost down to the atom. The dynein motor, on the other hand, is still poorly understood. Myosin molecules, best known for their role in muscle contraction, form chains that lie between filaments of another protein called actin. Each myosin molecule has a tiny head that pokes out from the chain like oars from a canoe. Just as rowers propel their boat by stroking their oars through the water, the myosin molecules stick their heads into the actin and hoist themselves forward along the filament. While myosin moves along in short strokes, its cousin kinesin walks steadily along a different type of filament called a microtubule. Instead of using a projecting head as a lever, kinesin walks on two ‘legs’. Based on these differences, researchers used to think that myosin and kinesin were virtually unrelated. But newly discovered similarities in the motors’ ATP-processing machinery now suggest that they share a common ancestor molecule. At this point, scientists can only speculate as to what type of primitive cell-like structure this ancestor occupied as it learned to burn ATP and use the energy to change shape. “We’ll never really know, because we can’t dig up the remains of ancient proteins, but that was probably a big evolutionary leap,” says Vale.

On a slightly larger scale, loner cells like sperm or infectious bacteria are prime movers that resolutely push their way through to other cells. As L. Mahadevan and Paul Matsudaira of the Massachusetts Institute of Technology explain, the engines in this case are springs or ratchets that are clusters of molecules, rather than single proteins like myosin and kinesin. Researchers don’t yet fully understand these engines’ fueling process or the details of how they move, but the result is a force to be reckoned with. For example, one such engine is a spring-like stalk connecting a single-celled organism called a vorticellid to the leaf fragment it calls home. When exposed to calcium, the spring contracts, yanking the vorticellid down at speeds approaching three inches (eight centimetres) per second.

Springs like this are coiled bundles of filaments that expand or contract in response to chemical cues. A wave of positively charged calcium ions, for example, neutralizes the negative charges that keep the filaments extended. Some sperm use spring-like engines made of actin filaments to shoot out a barb that penetrates the layers that surround an egg. And certain viruses use a similar apparatus to shoot their DNA into the host’s cell. Ratchets are also useful for moving whole cells, including some other sperm and pathogens. These engines are filaments that simply grow at one end, attracting chemical building blocks from nearby. Because the other end is anchored in place, the growing end pushes against any barrier that gets in its way.

Both springs and ratchets are made up of small units that each move just slightly, but collectively produce a powerful movement. Ultimately, Mahadevan and Matsudaira hope to better understand just how these particles create an effect that seems to be so much more than the sum of its parts. Might such an understanding provide inspiration for ways to power artificial nano-sized devices in the future? “The short answer is absolutely,” says Mahadevan. “Biology has had a lot more time to evolve enormous richness in design for different organisms. Hopefully, studying these structures will not only improve our understanding of the biological world, it will also enable us to copy them, take apart their components and recreate them for other purposes.”

According to the author, research on the power source of movement in cells can contribute to
 ....