1. Alarms from 3 different clocks sound after every 2, 4 and 6 hours, respectively. If the clocks are started at the same time, how many times do the alarms ring together in 3 days?
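The alarms coincide at multiples of the least common multiple of the three intervals. A minimal Python sketch of that reasoning (variable names are illustrative; whether the simultaneous start itself is counted depends on the question's convention):

```python
from functools import reduce
from math import gcd

def lcm(a: int, b: int) -> int:
    """Least common multiple of two positive integers."""
    return a * b // gcd(a, b)

intervals = [2, 4, 6]                    # hours between rings for each clock
together_every = reduce(lcm, intervals)  # all three coincide every LCM hours
total_hours = 3 * 24                     # 3 days

# Number of times they ring together after the start;
# add 1 if the simultaneous start is also counted.
count = total_hours // together_every
print(together_every, count)  # 12 6
```

So the alarms coincide every 12 hours, i.e. 6 times in 3 days (7 if the initial simultaneous ring is included).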






Similar Questions and Answers
QA->A and B can do a work in 12 days. B and C together can do it in 15 days and C and A together in 20 days. If A, B, C work together, they will complete the work in ?....
QA->When A, B and C work together, they complete a job in 8 days. When A and C work together, they complete the job in 12 days. In how many days can B alone complete the work?....
QA->Anil can do a work in 12 days. Basheer can do it in 15 days. Chandran can do the same work in 20 days. If they all work together, the number of days need to complete the work is :....
QA->Six bells commence tolling together at regular intervals of 3, 6, 9, 12, 15 and 18 seconds respectively. In 30 minutes, how many times, do they toll together?....
MCQ-> Studies of the factors governing reading development in young children have achieved a remarkable degree of consensus over the past two decades. The consensus concerns the causal role of ‘phonological skills’ in young children’s reading progress. Children who have good phonological skills, or good ‘phonological awareness’, become good readers and good spellers. Children with poor phonological skills progress more poorly. In particular, those who have a specific phonological deficit are likely to be classified as dyslexic by the time that they are 9 or 10 years old.

Phonological skills in young children can be measured at a number of different levels. The term phonological awareness is a global one, and refers to the ability to recognise smaller units of sound within spoken words. Developmental work has shown that this ability can be at the level of syllables, of onsets and rimes, or of phonemes. For example, a 4-year-old child might have difficulty in recognising that a word like valentine has three syllables, suggesting a lack of syllabic awareness. A five-year-old might have difficulty in recognising that the odd word out in the set of words fan, cat, hat, mat is fan. This task requires an awareness of the sub-syllabic units of the onset and the rime. The onset corresponds to any initial consonants in a syllable, and the rime corresponds to the vowel and to any following consonants. Rimes correspond to rhyme in single-syllable words, and so the rime in fan differs from the rime in cat, hat and mat. In longer words, rime and rhyme may differ. The onsets in val:en:tine are /v/ and /t/, and the rimes correspond to the spelling patterns ‘al’, ‘en’ and ‘ine’.

A six-year-old might have difficulty in recognising that plea and pray begin with the same initial sound. This is a phonemic judgement. Although the initial phoneme /p/ is shared between the two words, in plea it is part of the onset ‘pl’ and in pray it is part of the onset ‘pr’.
Until children can segment the onset (or the rime), such phonemic judgements are difficult for them to make. In fact, a recent survey of different developmental studies has shown that the different levels of phonological awareness appear to emerge sequentially. The awareness of syllables, onsets, and rimes appears to emerge at around the ages of 3 and 4, long before most children go to school. The awareness of phonemes, on the other hand, usually emerges at around the age of 5 or 6, when children have been taught to read for about a year. An awareness of onsets and rimes thus appears to be a precursor of reading, whereas an awareness of phonemes at every serial position in a word only appears to develop as reading is taught. The onset-rime and phonemic levels of phonological structure, however, are not distinct. Many onsets in English are single phonemes, and so are some rimes (e.g. sea, go, zoo).

The early availability of onsets and rimes is supported by studies that have compared the development of phonological awareness of onsets, rimes, and phonemes in the same subjects using the same phonological awareness tasks. For example, a study by Treiman and Zukowski used a same/different judgement task based on the beginning or the end sounds of words. In the beginning-sound task, the words either began with the same onset, as in plea and plank, or shared only the initial phoneme, as in plea and pray. In the end-sound task, the words either shared the entire rime, as in spit and wit, or shared only the final phoneme, as in rat and wit. Treiman and Zukowski showed that four- and five-year-old children found the onset-rime version of the same/different task significantly easier than the version based on phonemes. Only the six-year-olds, who had been learning to read for about a year, were able to perform both versions of the tasks with an equal level of success.

From the following statements, pick out the true statement according to the passage.
 ....
MCQ-> Read the passage carefully and answer the questions given at the end of each passage: Turning the business involved more than segmenting and pulling out of retail. It also meant maximizing every strength we had in order to boost our profit margins. In re-examining the direct model, we realized that inventory management was not just a core strength; it could be an incredible opportunity for us, and one that had not yet been discovered by any of our competitors. In Version 1.0 of the direct model, we eliminated the reseller, thereby eliminating the mark-up and the cost of maintaining a store. In Version 1.1, we went one step further to reduce inventory inefficiencies.

Traditionally, a long chain of partners was involved in getting a product to the customer. Let’s say you have a factory building a PC we’ll call model #4000. The system is then sent to the distributor, which sends it to the warehouse, which sends it to the dealer, who eventually pushes it on to the consumer by advertising, “I’ve got model #4000. Come and buy it.” If the consumer says, “But I want model #8000,” the dealer replies, “Sorry, I only have model #4000.” Meanwhile, the factory keeps building model #4000s and pushing the inventory into the channel. The result is a glut of model #4000s that nobody wants. Inevitably, someone ends up with too much inventory, and you see big price corrections. The retailer can’t sell it at the suggested retail price, so the manufacturer loses money on price protection (a practice common in our industry of compensating dealers for reductions in suggested selling price). Companies with long, multi-step distribution systems will often fill their distribution channels with products in an attempt to clear out older products. This dangerous and inefficient practice is called “channel stuffing”.
Worst of all, the customer ends up paying for it by purchasing systems that are already out of date. Because we were building directly to fill our customers’ orders, we didn’t have finished goods inventory devaluing on a daily basis. Because we aligned our suppliers to deliver components as we used them, we were able to minimize raw material inventory. Reductions in component costs could be passed on to our customers quickly, which made them happier and improved our competitive advantage. It also allowed us to deliver the latest technology to our customers faster than our competitors.

The direct model turns conventional manufacturing inside out. Conventional manufacturing dictates that you stock up on inventory, because your plant can’t keep going without it. But if you don’t know what you need to build because of dramatic changes in demand, you run the risk of ending up with terrific amounts of excess and obsolete inventory. That is not the goal. The concept behind the direct model has nothing to do with stockpiling and everything to do with information. The quality of your information is inversely proportional to the amount of assets required, in this case excess inventory. With less information about customer needs, you need massive amounts of inventory. So, if you have great information – that is, you know exactly what people want and how much – you need that much less inventory. Less inventory, of course, corresponds to less inventory depreciation. In the computer industry, component prices are always falling as suppliers introduce faster chips, bigger disk drives and modems with ever-greater bandwidth. Let’s say that Dell has six days of inventory. Compare that to an indirect competitor who has twenty-five days of inventory with another thirty in their distribution channel. That’s a difference of forty-nine days, and in forty-nine days, the cost of materials will decline about 6 percent.
Then there’s the threat of getting stuck with obsolete inventory if you’re caught in a transition to a next-generation product, as we were with those memory chips in 1989. As the product approaches the end of its life, the manufacturer has to worry about whether it has too much in the channel and whether a competitor will dump products, destroying profit margins for everyone. This is a perpetual problem in the computer industry, but with the direct model, we have virtually eliminated it. We know when our customers are ready to move on technologically, and we can get out of the market before its most precarious time. We don’t have to subsidize our losses by charging higher prices for other products. And ultimately, our customer wins.

Optimal inventory management really starts with the design process. You want to design the product so that the entire product supply chain, as well as the manufacturing process, is oriented not just for speed but for what we call velocity. Speed means being fast in the first place. Velocity means squeezing time out of every step in the process. Inventory velocity has become a passion for us. To achieve maximum velocity, you have to design your products in a way that covers the largest part of the market with the fewest number of parts. For example, you don’t need nine different disk drives when you can serve 98 percent of the market with only four. We also learned to take into account the variability of the low-cost and high-cost components. Systems were reconfigured to allow for a greater variety of low-cost parts and a limited variety of expensive parts. The goal was to decrease the number of components to manage, which increased the velocity, which decreased the risk of inventory depreciation, which increased the overall health of our business system. We were also able to reduce inventory well below the levels anyone thought possible by constantly challenging and surprising ourselves with the results.
We had our internal skeptics when we first started pushing for ever-lower levels of inventory. I remember the head of our procurement group telling me that this was like “flying low to the ground at 300 knots.” He was worried that we wouldn’t see the trees. In 1993, we had $2.9 billion in sales and $220 million in inventory. Four years later, we posted $12.3 billion in sales and had inventory of $33 million. We’re now down to six days of inventory and we’re starting to measure it in hours instead of days.

Once you reduce your inventory while maintaining your growth rate, a significant amount of risk comes from the transition from one generation of product to the next. Without traditional stockpiles of inventory, it is critical to precisely time the discontinuance of the older product line with the ramp-up in customer demand for the newer one. Since we were introducing new products all the time, it became imperative to avoid the huge drag effect from mistakes made during transitions. E&O – short for “excess and obsolete” – became taboo at Dell. We would debate about whether our E&O was 30 or 50 cents per PC. Since anything less than $20 per PC is not bad, when you’re down in the cents range, you’re approaching stellar performance.

Find out the TRUE statement:
 ....
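As a rough cross-check of the figures quoted in this passage, "days of inventory" can be estimated as inventory divided by average daily sales. Strictly, the metric uses cost of goods sold rather than sales; sales is used here only as an illustrative proxy, so these are ballpark numbers, not Dell's reported metric:

```python
# Rough inventory-days sketch for the figures quoted in the passage.
# Annual sales is used as a proxy for cost of goods sold, so the
# results are illustrative only.
def days_of_inventory(inventory_m: float, annual_sales_m: float) -> float:
    """Inventory held, expressed in days of average daily sales."""
    return inventory_m / (annual_sales_m / 365)

days_1993 = days_of_inventory(220, 2_900)    # ~27.7 days
days_1997 = days_of_inventory(33, 12_300)    # ~1.0 day
print(f"{days_1993:.1f} days -> {days_1997:.1f} days")
```

Even on this crude proxy, the drop from roughly four weeks of inventory to about a day of it matches the passage's point that inventory was soon being measured in hours.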
MCQ-> Recently I spent several hours sitting under a tree in my garden with the social anthropologist William Ury, a Harvard University professor who specializes in the art of negotiation and wrote the bestselling book, Getting to Yes. He captivated me with his theory that tribalism protects people from their fear of rapid change. He explained that the pillars of tribalism that humans rely on for security would always counter any significant cultural or social change. In this way, he said, change is never allowed to happen too fast. Technology, for example, is a pillar of society. Ury believes that every time technology moves in a new or radical direction, another pillar such as religion or nationalism will grow stronger; in effect, the traditional and familiar will assume greater importance to compensate for the new and untested. In this manner, human tribes avoid rapid change that leaves people insecure and frightened.

But we have all heard that nothing is as permanent as change. Nothing is guaranteed. Pithy expressions, to be sure, but no more than cliches. As Ury says, people don’t live that way from day to day. On the contrary, they actively seek certainty and stability. They want to know they will be safe.

Even so we scare ourselves constantly with the idea of change. An IBM CEO once said: ‘We only re-structure for a good reason, and if we haven’t re-structured in a while, that’s a good reason.’ We are scared that competitors, technology and the consumer will put us out of business — so we have to change all the time just to stay alive. But if we asked our fathers and grandfathers, would they have said that they lived in a period of little change? Structure may not have changed much. It may just be the speed with which we do things.

Change is over-rated anyway. Consider the automobile. It’s an especially valuable example, because the auto industry has spent tens of billions of dollars on research and product development in the last 100 years.
Henry Ford’s first car had a metal chassis with an internal combustion, gasoline-powered engine, four wheels with rubber tyres, a foot-operated clutch assembly and brake system, a steering wheel, and four seats, and it could safely do 18 miles per hour. A hundred years and tens of thousands of research hours later, we drive cars with a metal chassis with an internal combustion, gasoline-powered engine, four wheels with rubber tyres, a foot-operated clutch assembly and brake system, a steering wheel, four seats – and the average speed in London in 2001 was 17.5 miles per hour!

That’s not a hell of a lot of return for the money. Ford evidently doesn’t have much to teach us about change. The fact that they’re still manufacturing cars is not proof that Ford Motor Co. is a sound organization, just proof that it takes very large companies to make cars in great quantities — making for an almost impregnable entry barrier.

Fifty years after the development of the jet engine, planes are also little changed. They’ve grown bigger, wider and can carry more people. But those are incremental, largely cosmetic changes. Taken together, this lack of real change has come to mean that in travel — whether driving or flying — time and technology have not combined to make things much better. The safety and design have of course kept pace with the times and the new volume of cars and flights, but nothing of any significance has changed in the basic assumptions of the final product.

At the same time, moving around in cars or aeroplanes becomes less and less efficient all the time. Not only has there been no great change, but also both forms of transport have deteriorated as more people clamour to use them. The same is true for telephones, which took over a hundred years to become mobile, or photographic film, which also required an entire century to change. The only explanation for this is anthropological.
Once established in calcified organizations, humans do two things: sabotage changes that might render people dispensable, and ensure industry-wide emulation. In the 1960s, German auto companies developed plans to scrap the entire combustion engine for an electrical design. (The same existed in the 1970s in Japan, and in the 1980s in France.) So for 40 years we might have been free of the wasteful and ludicrous dependence on fossil fuels. Why didn’t it go anywhere? Because auto executives understood pistons and carburettors, and would be loath to cannibalize their expertise, along with most of their factories.

According to the above passage, which of the following statements is true?
 ....
MCQ-> Before the internet, one of the most rapid changes to the global economy and trade was wrought by something so blatantly useful that it is hard to imagine a struggle to get it adopted: the shipping container. In the early 1960s, before the standard container became ubiquitous, freight costs were 10 per cent of the value of US imports, about the same barrier to trade as the average official government import tariff. Yet in a journey that went halfway round the world, half of those costs could be incurred in two ten-mile movements through the ports at either end. The predominant ‘break-bulk’ method, where each shipment was individually split up into loads that could be handled by a team of dockers, was vastly complex and labour-intensive. Ships could take weeks or months to load, as a huge variety of cargoes of different weights, shapes and sizes had to be stacked together by hand. Indeed, one of the most unreliable aspects of such a labour-intensive process was the labour. Ports, like mines, were frequently seething pits of industrial unrest. Irregular work on one side combined with what was often a tight-knit, well-organized labour community on the other.

In 1956, loading break-bulk cargo cost $5.83 per ton. The entrepreneurial genius who saw the possibilities for standardized container shipping, Malcolm McLean, floated his first containerized ship in that year and claimed to be able to shift cargo for 15.8 cents a ton. Boxes of the same size that could be loaded by crane and neatly stacked were much faster to load. Moreover, carrying cargo in a standard container would allow it to be shifted between truck, train and ship without having to be repacked each time. But between McLean’s container and the standardization of the global market were an array of formidable obstacles.
They began at home in the US with the official Interstate Commerce Commission, which could prevent price competition by setting rates for freight haulage by route and commodity, and the powerful International Longshoremen's Association (ILA) labour union. More broadly, the biggest hurdle was achieving what economists call ‘network effects’: the benefit of a standard technology rises exponentially as more people use it. To dominate world trade, containers had to be easily interchangeable between different shipping lines, ports, trucks and railcars. And to maximize efficiency, they all needed to be the same size. The adoption of a network technology often involves overcoming the resistance of those who are heavily invested in the old system. And while the efficiency gains are clear to see, there are very obvious losers as well as winners. For containerization, perhaps the most spectacular example was the demise of New York City as a port.

In the early 1950s, New York handled a third of US seaborne trade in manufactured goods. But it was woefully inefficient, even with existing break-bulk technology: 283 piers, 98 of which were able to handle ocean-going ships, jutted out into the river from Brooklyn and Manhattan. Trucks bound for the docks had to drive through the crowded, narrow streets of Manhattan, wait for an hour or two before even entering a pier, and then undergo a laborious two-stage process in which the goods were first unloaded into a transit shed and then loaded onto a ship. ‘Public loader’ work gangs held exclusive rights to load and unload on a particular pier, a power in effect granted by the ILA, which enforced its monopoly with sabotage and violence against their competitors. The ILA fought ferociously against containerization, correctly foreseeing that it would destroy their privileged position as bandits controlling the mountain pass. On this occasion, bypassing them simply involved going across the river.
A container port was built in New Jersey, where a 1500-foot wharf allowed ships to dock parallel to shore and containers to be lifted on and off by crane. Between 1963-4 and 1975-6, the number of days worked by longshoremen in Manhattan went from 1.4 million to 127,041.

Containers rapidly captured the transatlantic market, and then the growing trade with Asia. The effect of containerization is hard to see immediately in freight rates, since the oil price hikes of the 1970s kept them high, but the speed with which shippers adopted containerization made it clear it brought big benefits of efficiency and cost. The extraordinary growth of the Asian tiger economies of Singapore, Taiwan, Korea and Hong Kong, which based their development strategy on exports, was greatly helped by the container trade that quickly built up between the US and east Asia. Ocean-borne exports from South Korea were 2.9 million tons in 1969 and 6 million in 1973, and its exports to the US tripled.

But the new technology did not get adopted all on its own. It needed a couple of pushes from government - both, as it happens, largely to do with the military. As far as the ships were concerned, the same link between the merchant and military navy that had inspired the Navigation Acts in seventeenth-century England endured into twentieth-century America. The government's first helping hand was to give a spur to the system by adopting it to transport military cargo. The US armed forces, seeing the efficiency of the system, started contracting McLean’s company Pan-Atlantic, later renamed Sea-Land, to carry equipment to the quarter of a million American soldiers stationed in Western Europe. One of the few benefits of America's misadventure in Vietnam was a rapid expansion of containerization.
Because war involves massive movements of men and material, it is often armies that pioneer new techniques in supply chains.

The government’s other role was in banging heads together sufficiently to get all companies to accept the same size container. Standard sizes were essential to deliver the economies of scale that came from interchangeability - which, as far as the military was concerned, was vital if the ships had to be commandeered in case war broke out. This was a significant problem to overcome, not least because all the companies that had started using the container had settled on different sizes. Pan-Atlantic used 35-foot containers, because that was the maximum size allowed on the highways in its home base in New Jersey. Another of the big shipping companies, Matson Navigation, used a 24-foot container since its biggest trade was in canned pineapple from Hawaii, and a container bigger than that would have been too heavy for a crane to lift. Grace Line, which largely traded with Latin America, used a foot container that was easier to truck around winding mountain roads.

Establishing a US standard and then getting it adopted internationally took more than a decade. Indeed, not only did the US Maritime Administration have to mediate in these rivalries but also to fight its own turf battles with the American Standards Association, an agency set up by the private sector. The matter was settled by using the power of federal money: the Federal Maritime Board (FMB), which handed out public subsidies for shipbuilding, decreed that only the 8 x 8-foot containers in lengths of 10, 20, 30 or 40 feet would be eligible for handouts.

Identify the correct statement:
 ....
© 2002-2017 Omega Education PVT LTD