1. Let X be the number of successes, which follows B(n, p); then the distribution of the number of failures follows:
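A quick way to see the standard answer is to note that if X counts successes in n independent trials with success probability p, then the failure count is n - X, and its distribution is B(n, 1 - p). The sketch below is a minimal, illustrative check of that identity; the parameters n = 10, p = 0.3 are arbitrary and not from the question, and it assumes numpy and scipy are available.

```python
# Minimal sketch (not from the question source): if X ~ B(n, p) counts
# successes, the failure count n - X follows B(n, 1 - p).
# Assumes numpy and scipy; n and p below are arbitrary example values.
import numpy as np
from scipy.stats import binom

n, p = 10, 0.3
k = np.arange(n + 1)

# P(n - X = k) = P(X = n - k), which matches the B(n, 1 - p) pmf at k.
assert np.allclose(binom.pmf(n - k, n, p), binom.pmf(k, n, 1 - p))

# Simulation check on the failure counts themselves.
rng = np.random.default_rng(0)
failures = n - rng.binomial(n, p, size=100_000)
print(failures.mean(), n * (1 - p))  # both close to 7.0
```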





Similar Questions and Answers
QA->If 40% of a number is added to 120, the result is double the number. What is the number?....
QA->In a program graph, 'X' is an if-then-else node. If the number of paths from the start node to X is 'p', the number of paths from the if part to the end node is 'q', and the number from the else part to the end node is 'r', the total number of possible paths through X is: .... (see the path-counting sketch after this list)
QA->If 75% of a number is added to 75, the result is the number itself. What is the number?....
QA->A cyclist goes 40 km towards East and then, turning to the right, he goes 40 km. Again he turns to his left and goes 20 km. After this he turns to his left and goes 40 km, then again turns right and goes 10 km. How far is he from his starting point?....
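For the program-graph question above, the usual reasoning is that each of the p paths reaching X can be completed by any of the q paths through the if part or any of the r paths through the else part, giving p*(q+r) paths in total. The toy graph below is purely hypothetical (the node names are made up) and just sanity-checks that product rule by brute-force enumeration.

```python
# Toy control-flow graph with an if-then-else node X (all names hypothetical).
# It checks the product rule: p paths from start to X, q paths from the if
# part to end, r paths from the else part to end  ->  p * (q + r) paths total.

edges = {
    "start": ["a", "b"],   # two ways to reach X               -> p = 2
    "a": ["X"],
    "b": ["X"],
    "X": ["if", "else"],   # the two branches of the if-then-else node
    "if": ["c", "d"],      # two ways from the if part to end  -> q = 2
    "c": ["end"],
    "d": ["end"],
    "else": ["end"],       # one way from the else part to end -> r = 1
}

def count_paths(src, dst, graph):
    """Count distinct directed paths from src to dst in this acyclic graph."""
    if src == dst:
        return 1
    return sum(count_paths(nxt, dst, graph) for nxt in graph.get(src, []))

p = count_paths("start", "X", edges)
q = count_paths("if", "end", edges)
r = count_paths("else", "end", edges)
total = count_paths("start", "end", edges)  # every start-to-end path uses X

print(p, q, r, total)        # 2 2 1 6
assert total == p * (q + r)  # 2 * (2 + 1) = 6
```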
MCQ-> Read the following passage carefully and answer the questions given below it. Certain words/phrases have been printed in bold to help you locate them while answering some of the questions. The wisdom of learning from failure is incontrovertible. Yet organisations that do it well are extraordinarily rare. This gap is not due to a lack of commitment to learning. Managers in the vast majority of enterprises that I have studied over the past 20 years — pharmaceutical, financial services, product design, telecommunications, and construction companies; hospitals; and NASA's space shuttle program, among others — genuinely wanted to help their organisations learn from failures to improve future performance. In some cases they and their teams had devoted many hours to after-action reviews, postmortems, and the like. But time after time I saw that these painstaking efforts led to no real change. The reason: Those managers were thinking about failure the wrong way. Most executives I've talked to believe that failure is bad (of course!). They also believe that learning from it is pretty straightforward: Ask people to reflect on what they did wrong and exhort them to avoid similar mistakes in the future—or, better yet, assign a team to review and write a report on what happened and then distribute it throughout the organisation. These widely held beliefs are misguided. First, failure is not always bad. In organisational life it is sometimes bad, sometimes inevitable, and sometimes even good. Second, learning from organisational failures is anything but straightforward. The attitudes and activities required to effectively detect and analyze failures are in short supply in most companies, and the need for context-specific learning strategies is underappreciated. Organisations need new and better ways to go beyond lessons that are superficial ("Procedures weren't followed") or self-serving ("The market just wasn't ready for our great new product"). That means jettisoning old cultural beliefs and stereotypical notions of success and embracing failure's lessons. Leaders can begin by understanding how the blame game gets in the way. The Blame Game: Failure and fault are virtually inseparable in most households, organisations, and cultures. Every child learns at some point that admitting failure means taking the blame. That is why so few organisations have shifted to a culture of psychological safety in which the rewards of learning from failure can be fully realised. Executives I've interviewed in organisations as different as hospitals and investment banks admit to being torn: How can they respond constructively to failures without giving rise to an anything-goes attitude? If people aren't blamed for failures, what will ensure that they try as hard as possible to do their best work? This concern is based on a false dichotomy. In actuality, a culture that makes it safe to admit and report on failure can—and in some organisational contexts must—coexist with high standards for performance. To understand why, look at the exhibit "A Spectrum of Reasons for Failure," which lists causes ranging from deliberate deviation to thoughtful experimentation. Which of these causes involve blameworthy actions? Deliberate deviance, first on the list, obviously warrants blame. But inattention might not. If it results from a lack of effort, perhaps it's blameworthy. But if it results from fatigue near the end of an overly long shift, the manager who assigned the shift is more at fault than the employee. 
As we go down the list, it gets more and more difficult to find blameworthy acts. In fact, a failure resulting from thoughtful experimentation that generates valuable information may actually be praiseworthy. When I ask executives to consider this spectrum and then to estimate how many of the failures in their organisations are truly blameworthy, their answers are usually in single digits—perhaps 2% to 5%. But when I ask how many are treated as blameworthy, they say (after a pause or a laugh) 70% to 90%. The unfortunate consequence is that many failures go unreported and their lessons are lost. A sophisticated understanding of failure's causes and contexts will help to avoid the blame game and institute an effective strategy for learning from failure. Although an infinite number of things can go wrong in organisations, mistakes fall into three broad categories: preventable, complexity-related, and intelligent. Question: Which of the following statement(s) is/are true in the context of the given passage? I. Most executives believe that failure is bad and learning from it is pretty straightforward. II. The wisdom of learning from failure is disputable. III. Deliberate deviance, first on the list of the exhibit "A Spectrum of Reasons for Failure", obviously warrants blame.....
MCQ-> Read the passage carefully and answer the questions given at the end of each passage: Turning the business involved more than segmenting and pulling out of retail. It also meant maximizing every strength we had in order to boost our profit margins. In re-examining the direct model, we realized that inventory management was not just a core strength; it could be an incredible opportunity for us, and one that had not yet been discovered by any of our competitors. In Version 1.0 of the direct model, we eliminated the reseller, thereby eliminating the mark-up and the cost of maintaining a store. In Version 1.1, we went one step further to reduce inventory inefficiencies. Traditionally, a long chain of partners was involved in getting a product to the customer. Let's say you have a factory building a PC we'll call model #4000. The system is then sent to the distributor, which sends it to the warehouse, which sends it to the dealer, who eventually pushes it on to the consumer by advertising, "I've got model #4000. Come and buy it." If the consumer says, "But I want model #8000," the dealer replies, "Sorry, I only have model #4000." Meanwhile, the factory keeps building model #4000s and pushing the inventory into the channel. The result is a glut of model #4000s that nobody wants. Inevitably, someone ends up with too much inventory, and you see big price corrections. The retailer can't sell it at the suggested retail price, so the manufacturer loses money on price protection (a practice common in our industry of compensating dealers for reductions in suggested selling price). Companies with long, multi-step distribution systems will often fill their distribution channels with products in an attempt to clear out older targets. This dangerous and inefficient practice is called "channel stuffing". Worst of all, the customer ends up paying for it by purchasing systems that are already out of date. Because we were building directly to fill our customers' orders, we didn't have finished goods inventory devaluing on a daily basis. Because we aligned our suppliers to deliver components as we used them, we were able to minimize raw material inventory. Reductions in component costs could be passed on to our customers quickly, which made them happier and improved our competitive advantage. It also allowed us to deliver the latest technology to our customers faster than our competitors. The direct model turns conventional manufacturing inside out. Conventional manufacturing requires keeping stocks of inventory on hand, because your plant can't keep going without them. But if you don't know what you need to build because of dramatic changes in demand, you run the risk of ending up with terrific amounts of excess and obsolete inventory. That is not the goal. The concept behind the direct model has nothing to do with stockpiling and everything to do with information. The quality of your information is inversely proportional to the amount of assets required, in this case excess inventory. With less information about customer needs, you need massive amounts of inventory. So, if you have great information – that is, you know exactly what people want and how much – you need that much less inventory. Less inventory, of course, corresponds to less inventory depreciation. In the computer industry, component prices are always falling as suppliers introduce faster chips, bigger disk drives and modems with ever-greater bandwidth. Let's say that Dell has six days of inventory. 
Compare that to an indirect competitor who has twenty-five days of inventory with another thirty in their distribution channel. That's a difference of forty-nine days, and in forty-nine days, the cost of materials will decline about 6 percent. Then there's the threat of getting stuck with obsolete inventory if you're caught in a transition to a next-generation product, as we were with those memory chips in 1989. As the product approaches the end of its life, the manufacturer has to worry about whether it has too much in the channel and whether a competitor will dump products, destroying profit margins for everyone. This is a perpetual problem in the computer industry, but with the direct model, we have virtually eliminated it. We know when our customers are ready to move on technologically, and we can get out of the market before its most precarious time. We don't have to subsidize our losses by charging higher prices for other products. And ultimately, our customer wins. Optimal inventory management really starts with the design process. You want to design the product so that the entire product supply chain, as well as the manufacturing process, is oriented not just for speed but for what we call velocity. Speed means being fast in the first place. Velocity means squeezing time out of every step in the process. Inventory velocity has become a passion for us. To achieve maximum velocity, you have to design your products in a way that covers the largest part of the market with the fewest number of parts. For example, you don't need nine different disk drives when you can serve 98 percent of the market with only four. We also learned to take into account the variability of the low-cost and high-cost components. Systems were reconfigured to allow for a greater variety of low-cost parts and a limited variety of expensive parts. The goal was to decrease the number of components to manage, which increased the velocity, which decreased the risk of inventory depreciation, which increased the overall health of our business system. We were also able to reduce inventory well below the levels anyone thought possible by constantly challenging and surprising ourselves with the result. We had our internal skeptics when we first started pushing for ever-lower levels of inventory. I remember the head of our procurement group telling me that this was like "flying low to the ground at 300 knots." He was worried that we wouldn't see the trees. In 1993, we had $2.9 billion in sales and $220 million in inventory. Four years later, we posted $12.3 billion in sales and had inventory of $33 million. We're now down to six days of inventory and we're starting to measure it in hours instead of days. Once you reduce your inventory while maintaining your growth rate, a significant amount of risk comes from the transition from one generation of product to the next. Without traditional stockpiles of inventory, it is critical to precisely time the discontinuance of the older product line with the ramp-up in customer demand for the newer one. Since we were introducing new products all the time, it became imperative to avoid the huge drag effect from mistakes made during transitions. E&O (short for "excess and obsolete") became taboo at Dell. We would debate about whether our E&O was 30 or 50 cents per PC. Since anything less than $20 per PC is not bad, when you're down in the cents range, you're approaching stellar performance. Find out the TRUE statement (a small arithmetic sketch of the inventory figures follows this question):
 ....
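As flagged at the end of the question above, here is a small sketch of the inventory arithmetic the passage quotes: the 49-day gap between Dell's six days of inventory and a rival's 25 + 30 days, the roughly 6 percent component-cost decline over that gap, and inventory as a share of sales in 1993 versus four years later. Only the quoted figures are used; spreading the 6 percent decline evenly across the 49 days is an added assumption for illustration.

```python
# Inventory arithmetic from the passage (figures quoted there; the even
# spread of the 6% decline across the 49-day gap is an added assumption).

dell_days = 6
rival_days = 25 + 30                  # in-house inventory plus channel
gap_days = rival_days - dell_days
print(gap_days)                       # 49, as stated in the passage

decline_over_gap = 0.06               # ~6% component-cost fall over the gap
daily_decline = decline_over_gap / gap_days
print(round(daily_decline * 100, 3))  # ~0.122% per day under that assumption

# Inventory as a share of sales, from the 1993 and 1997 figures in the text.
print(220 / 2_900)                    # ~0.076  -> about 7.6% of sales in 1993
print(33 / 12_300)                    # ~0.0027 -> about 0.27% four years later
```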
MCQ-> Mathematicians are assigned a number called the Erdos number (named after the famous mathematician, Paul Erdos). Only Paul Erdos himself has an Erdos number of zero. Any mathematician who has written a research paper with Erdos has an Erdos number of 1. For other mathematicians, the calculation of his/her Erdos number is illustrated below: Suppose that a mathematician X has co-authored papers with several other mathematicians. From among them, mathematician Y has the smallest Erdos number. Let the Erdos number of Y be y. Then X has an Erdos number of y+1. Hence any mathematician with no co-authorship chain connected to Erdos has an Erdos number of infinity (a short graph-search sketch of this rule follows the question). In a seven-day-long mini-conference organized in memory of Paul Erdos, a close group of eight mathematicians, call them A, B, C, D, E, F, G and H, discussed some research problems. At the beginning of the conference, A was the only participant who had an infinite Erdos number. Nobody had an Erdos number less than that of F. On the third day of the conference F co-authored a paper jointly with A and C. This reduced the average Erdos number of the group of eight mathematicians to 3. The Erdos numbers of B, D, E, G and H remained unchanged with the writing of this paper. Further, no other co-authorship among any three members would have reduced the average Erdos number of the group of eight to as low as 3. • At the end of the third day, five members of this group had identical Erdos numbers while the other three had Erdos numbers distinct from each other. • On the fifth day, E co-authored a paper with F which reduced the group's average Erdos number by 0.5. The Erdos numbers of the remaining six were unchanged with the writing of this paper. • No other paper was written during the conference. The person having the largest Erdos number at the end of the conference must have had Erdos number (at that time):
 ....
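As noted in the question above, the Erdos-number rule (Erdos is 0; everyone else is one more than the smallest number among their co-authors; infinity if there is no chain to Erdos) is just shortest-path distance in the co-authorship graph. The sketch below computes it with breadth-first search on a small hypothetical graph; it does not attempt to reproduce the A-to-H conference data.

```python
# Erdos numbers as shortest-path distances in a co-authorship graph.
# The graph below is hypothetical, not the A..H conference data.
from collections import deque

coauthors = {
    "Erdos": {"Y1", "Y2"},
    "Y1": {"Erdos", "X"},
    "Y2": {"Erdos"},
    "X": {"Y1"},
    "Z": set(),            # no chain to Erdos -> Erdos number is infinity
}

def erdos_numbers(graph):
    """Breadth-first search from Erdos; unreachable authors stay at infinity."""
    numbers = {author: float("inf") for author in graph}
    numbers["Erdos"] = 0
    queue = deque(["Erdos"])
    while queue:
        current = queue.popleft()
        for peer in graph[current]:
            if numbers[peer] > numbers[current] + 1:
                numbers[peer] = numbers[current] + 1
                queue.append(peer)
    return numbers

print(erdos_numbers(coauthors))
# {'Erdos': 0, 'Y1': 1, 'Y2': 1, 'X': 2, 'Z': inf}
```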
MCQ-> Directions: Read the following passage carefully and answer the questions given below it. Certain words/phrases have been printed in bold to help you locate them while answering some of the questions. When times are hard, doomsayers are aplenty. The problem is that if you listen to them too carefully, you tend to overlook the most obvious signs of change. 2011 was a bad year. Can 2012 be any worse? Doomsday forecasts are the easiest to make these days. So let's try a contrarian's forecast instead. Let's start with the global economy. We have seen a steady flow of good news from the US. The employment situation seems to be improving rapidly and consumer sentiment, reflected in retail expenditures on discretionary items like electronics and clothes, has picked up. If these trends sustain, the US might post better growth numbers for 2012 than the 1.5 - 1.8 percent being forecast currently. Japan is likely to pull out of a recession in 2012 as post-earthquake reconstruction efforts gather momentum and the fiscal stimulus announced in 2011 begins to pay off. The consensus estimate for growth in Japan is a respectable 2 percent for 2012. The "hard landing" scenario for China remains and will remain a myth. Growth might decelerate further from the 9 percent that it is expected to clock in 2011 but is unlikely to drop below 8 - 8.5 percent in 2012. Europe is certainly in a spot of trouble. It is perhaps already in recession and for 2012 it is likely to post mildly negative growth. The risk of implosion has dwindled over the last few months: peripheral economies like Greece, Italy and Spain have new governments in place and have made progress towards genuine economic reform. Even with some of these positive factors in place, we have to accept the fact that global growth in 2012 will be tepid. But there is a flipside to this. Softer growth means lower demand for commodities, and this is likely to drive a correction in commodity prices. Lower commodity inflation will enable emerging market central banks to reverse their monetary stance. China, for instance, has already reversed its stance and has pared its reserve ratio twice. The RBI also seems poised for a reversal in its rate cycle as headline inflation seems well on its way to its target of 7 percent for March 2012. That said, oil might be an exception to the general trend in commodities. Rising geopolitical tensions, particularly the continuing face-off between Iran and the US, might lead to a spurt in prices. It might make sense for our oil companies to hedge this risk instead of buying oil in the spot market. As inflation fears abate, and emerging market central banks begin to cut rates, two things could happen. Lower commodity inflation would mean lower interest rates and better credit availability. This could set the floor to growth and slowly reverse the business cycle within these economies. Second, as the fear of untamed, runaway inflation in these economies abates, the global investor's comfort levels with their markets will increase. Which of the emerging markets will outperform and which will be left behind? In an environment in which global growth is likely to be weak, economies like India that have a powerful domestic consumption dynamic should lead; those dependent on exports should, prima facie, fall behind. Specifically for India, a fall in the exchange rate could not have come at a better time. It will help Indian exporters gain market share even if global trade remains depressed. 
More importantly, it could lead to massive import substitution that favours domestic producers. Let's now focus on India and start with a caveat. It is important not to confuse a short-run cyclical dip with a permanent derating of its long-term structural potential. The arithmetic is simple. Our growth rate can be in the range of 7-10 percent depending on policy action: 10 percent if we get everything right, 7 percent if we get it all wrong. Which policies and reforms are critical to taking us to our 10 percent potential? In judging this, let's again be careful. Let's not go by the laundry list of reforms that FIIs like to wave: the increase in foreign equity limits in foreign shareholding, greater voting rights for institutional shareholders in banks, FDI in retail, etc. These can have an impact only at the margin. We need not bend over backwards to appease the FIIs through these reforms; they will invest in our markets when momentum picks up and will be the first to exit when the momentum flags, reforms or not. The reforms that we need are the ones that can actually raise our sustainable long-term growth rate. These have to come in areas like better targeting of subsidies, making projects in infrastructure viable so that they draw capital, raising the productivity of agriculture, improving healthcare and education, bringing the parallel economy under the tax net, implementing fundamental reforms in taxation like GST and the direct tax code, and finally easing the myriad rules and regulations that make doing business in India such a nightmare. A number of these things do not require new legislation and can be done through executive order. Which of the following is not true according to the passage?
 ....