1. Read the passage carefully and answer the questions given at the end of each passage:

Turning the business around involved more than segmenting and pulling out of retail. It also meant maximizing every strength we had in order to boost our profit margins. In re-examining the direct model, we realized that inventory management was not just a core strength; it could be an incredible opportunity for us, and one that had not yet been discovered by any of our competitors. In Version 1.0 of the direct model, we eliminated the reseller, thereby eliminating the mark-up and the cost of maintaining a store. In Version 1.1, we went one step further to reduce inventory inefficiencies.

Traditionally, a long chain of partners was involved in getting a product to the customer. Let’s say you have a factory building a PC we’ll call model #4000. The system is then sent to the distributor, which sends it to the warehouse, which sends it to the dealer, who eventually pushes it on to the consumer by advertising, “I’ve got model #4000. Come and buy it.” If the consumer says, “But I want model #8000,” the dealer replies, “Sorry, I only have model #4000.” Meanwhile, the factory keeps building model #4000s and pushing the inventory into the channel. The result is a glut of model #4000s that nobody wants. Inevitably, someone ends up with too much inventory, and you see big price corrections. The retailer can’t sell it at the suggested retail price, so the manufacturer loses money on price protection (a practice common in our industry of compensating dealers for reductions in suggested selling price). Companies with long, multi-step distribution systems will often fill their distribution channels with products in an attempt to clear out older products and meet short-term targets. This dangerous and inefficient practice is called “channel stuffing”. Worst of all, the customer ends up paying for it by purchasing systems that are already out of date.

Because we were building directly to fill our customers’ orders, we didn’t have finished goods inventory devaluing on a daily basis. Because we aligned our suppliers to deliver components as we used them, we were able to minimize raw material inventory. Reductions in component costs could be passed on to our customers quickly, which made them happier and improved our competitive advantage. It also allowed us to deliver the latest technology to our customers faster than our competitors.

The direct model turns conventional manufacturing inside out. Conventional manufacturing requires you to stockpile inventory so that your plant can keep going. But if you don’t know what you need to build because of dramatic changes in demand, you run the risk of ending up with terrific amounts of excess and obsolete inventory. That is not the goal. The concept behind the direct model has nothing to do with stockpiling and everything to do with information. The quality of your information is inversely proportional to the amount of assets required, in this case excess inventory. With less information about customer needs, you need massive amounts of inventory. So, if you have great information – that is, you know exactly what people want and how much – you need that much less inventory. Less inventory, of course, corresponds to less inventory depreciation. In the computer industry, component prices are always falling as suppliers introduce faster chips, bigger disk drives and modems with ever-greater bandwidth. Let’s say that Dell has six days of inventory.
Compare that to an indirect competitor who has twenty-five days of inventory with another thirty in their distribution channel. That’s a difference of forty-nine days, and in forty-nine days, the cost of materials will decline about 6 percent. Then there’s the threat of getting stuck with obsolete inventory if you’re caught in a transition to a next-generation product, as we were with those memory chips in 1989. As the product approaches the end of its life, the manufacturer has to worry about whether it has too much in the channel and whether a competitor will dump products, destroying profit margins for everyone. This is a perpetual problem in the computer industry, but with the direct model, we have virtually eliminated it. We know when our customers are ready to move on technologically, and we can get out of the market before its most precarious time. We don’t have to subsidize our losses by charging higher prices for other products. And ultimately, our customer wins.

Optimal inventory management really starts with the design process. You want to design the product so that the entire product supply chain, as well as the manufacturing process, is oriented not just for speed but for what we call velocity. Speed means being fast in the first place. Velocity means squeezing time out of every step in the process. Inventory velocity has become a passion for us. To achieve maximum velocity, you have to design your products in a way that covers the largest part of the market with the fewest number of parts. For example, you don’t need nine different disk drives when you can serve 98 percent of the market with only four. We also learned to take into account the variability of the low-cost and high-cost components. Systems were reconfigured to allow for a greater variety of low-cost parts and a limited variety of expensive parts. The goal was to decrease the number of components to manage, which increased the velocity, which decreased the risk of inventory depreciation, which increased the overall health of our business system.

We were also able to reduce inventory well below the levels anyone thought possible by constantly challenging ourselves and surprising ourselves with the results. We had our internal skeptics when we first started pushing for ever-lower levels of inventory. I remember the head of our procurement group telling me that this was like “flying low to the ground at 300 knots.” He was worried that we wouldn’t see the trees.

In 1993, we had $2.9 billion in sales and $220 million in inventory. Four years later, we posted $12.3 billion in sales and had inventory of $233 million. We’re now down to six days of inventory and we’re starting to measure it in hours instead of days.

Once you reduce your inventory while maintaining your growth rate, a significant amount of risk comes from the transition from one generation of product to the next. Without traditional stockpiles of inventory, it is critical to precisely time the discontinuance of the older product line with the ramp-up in customer demand for the newer one. Since we were introducing new products all the time, it became imperative to avoid the huge drag effect from mistakes made during transitions. E&O – short for “excess and obsolete” – became taboo at Dell. We would debate about whether our E&O was 30 or 50 cents per PC. Since anything less than $20 per PC is not bad, when you’re down in the cents range, you’re approaching stellar performance.

Find out the TRUE statement:
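(Editor's aside: for readers who want to re-check the arithmetic quoted in the passage, the minimal Python sketch below simply re-runs the stated figures – the forty-nine-day exposure gap, the roughly 6 percent component-cost decline over that gap, and the inventory-to-sales comparison between 1993 and four years later. It is illustrative only; the variable names are ours, not the author's.)

    # Illustrative sketch: re-running the figures quoted in the passage.
    dell_days = 6                    # Dell's stated days of inventory
    competitor_days = 25 + 30        # competitor's inventory plus channel inventory
    gap_days = competitor_days - dell_days
    print(f"Exposure gap: {gap_days} days")            # 49 days, as the passage says

    cost_decline_over_gap = 0.06     # ~6 percent material-cost decline over that gap
    print(f"Implied daily decline: {cost_decline_over_gap / gap_days:.4%}")

    # Inventory relative to sales, 1993 vs. four years later (figures from the passage)
    sales_1993, inv_1993 = 2.9e9, 220e6
    sales_1997, inv_1997 = 12.3e9, 233e6
    print(f"1993 inventory/sales: {inv_1993 / sales_1993:.1%}")
    print(f"1997 inventory/sales: {inv_1997 / sales_1997:.1%}")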
 




