1. The best study design to test the association between a risk factor and a disease





Similar Questions and Answers
QA->In case of ………factoring, the factor assumes the risk of bad debts.....
QA->Scientists have discovered a new, safer gene therapy to reduce the risk of which disease?....
QA->Who has been elected the first President of the World Association of Newspapers and News Publishers (WAN-IFRA), the organization created by the July merger of the World Association of Newspapers and IFRA, the research and service organization for the news publishing industry?....
QA->The disease sometimes referred to as Bleeder's disease or Christmas disease is?....
QA->Which vitamin deficiency disease is also known by the names "Barlow's Disease" and "Cheadle's disease"?....
MCQ-> Throughout human history the leading causes of death have been infection and trauma. Modern medicine has scored significant victories against both, and the major causes of ill health and death are now the chronic degenerative diseases, such as coronary artery disease, arthritis, osteoporosis, Alzheimer's, macular degeneration, cataract and cancer. These have a long latency period before symptoms appear and a diagnosis is made. It follows that the majority of apparently healthy people are pre-ill. But are these conditions inevitably degenerative? A truly preventive medicine that focused on the pre-ill, analysing the metabolic errors which lead to clinical illness, might be able to correct them before the first symptom. Genetic risk factors are known for all the chronic degenerative diseases, and are important to the individuals who possess them. At the population level, however, migration studies confirm that these illnesses are linked for the most part to lifestyle factors — exercise, smoking and nutrition. Nutrition is the easiest of these to change, and the most versatile tool for effecting the metabolic changes needed to tilt the balance away from disease. Many national surveys reveal that malnutrition is common in developed countries. This is not the calorie and/or micronutrient deficiency associated with developing nations (Type A malnutrition), but multiple micronutrient depletion, usually combined with calorific balance or excess (Type B malnutrition). The incidence and severity of Type B malnutrition will be shown to be worse if newer micronutrient groups such as the essential fatty acids, xanthophylls and flavonoids are included in the surveys. Commonly ingested levels of these micronutrients seem to be far too low in many developed countries. There is now considerable evidence that Type B malnutrition is a major cause of chronic degenerative diseases. If this is the case, then it is logical to treat such diseases not with drugs but with multiple micronutrient repletion, or 'pharmaco-nutrition'. This can take the form of pills and capsules — 'nutraceuticals' — or food formats known as 'functional foods'. This approach has been neglected hitherto because it is relatively unprofitable for drug companies — the products are hard to patent — and it is a strategy which does not sit easily with modern medical interventionism. Over the last 100 years, the drug industry has invested huge sums in developing a range of subtle and powerful drugs to treat the many diseases we are subject to. Medical training is couched in pharmaceutical terms and this approach has provided us with an exceptional range of therapeutic tools in the treatment of disease and in acute medical emergencies. However, the pharmaceutical model has also created an unhealthy dependency culture, in which relatively few of us accept responsibility for maintaining our own health. Instead, we have handed over this responsibility to health professionals who know very little about health maintenance, or disease prevention. One problem for supporters of this argument is lack of the right kind of hard evidence. We have a wealth of epidemiological data linking dietary factors to health profiles/disease risks, and a great deal of information on mechanism: how food factors interact with our biochemistry. But almost all intervention studies with micronutrients, with the notable exception of the omega-3 fatty acids, have so far produced conflicting or negative results. In other words, our science appears to have no predictive value.
Does this invalidate the science? Or are we simply asking the wrong questions? Based on pharmaceutical thinking, most intervention studies have attempted to measure the impact of a single micronutrient on the incidence of disease. The classical approach says that if you give a compound formula to test subjects and obtain positive results, you cannot know which ingredient is exerting the benefit, so you must test each ingredient individually. But in the field of nutrition, this does not work. Each intervention on its own will hardly make enough difference to be measured. The best therapeutic response must therefore combine micronutrients to normalise our internal physiology. So do we need to analyse each individual's nutritional status and then tailor a formula specifically for him or her? While we do not have the resources to analyse millions of individual cases, there is no need to do so. The vast majority of people are consuming suboptimal amounts of most micronutrients, and most of the micronutrients concerned are very safe. Accordingly, a comprehensive and universal programme of micronutrient support is probably the most cost-effective and safest way of improving the general health of the nation. The author recommends micronutrient repletion for large-scale treatment of chronic degenerative diseases because
 ....
MCQ-> Read the following passage carefully and answer the questions given below it. Certain words/phrases have been printed in bold to help you locate them while answering some of the questions. During the last few years, a lot of hype has been heaped on the BRICS (Brazil, Russia, India, China, and South Africa). With their large populations and rapid growth, these countries, so the argument goes, will soon become some of the largest economies in the world and, in the case of China, the largest of all by as early as 2020. But the BRICS, as well as many other emerging-market economies, have recently experienced a sharp economic slowdown. So, is the honeymoon over? Brazil's GDP grew by only 1% last year, and may not grow by more than 2% this year, with its potential growth barely above 3%. Russia's economy may grow by barely 2% this year, with potential growth also at around 3%, despite oil prices being around $100 a barrel. India had a couple of years of strong growth recently (11.2% in 2010 and 7.7% in 2011) but slowed to 4% in 2012. China's economy grew by 10% a year for the last three decades, but slowed to 7.8% last year and risks a hard landing. And South Africa grew by only 2.5% last year and may not grow faster than 2% this year. Many other previously fast-growing emerging-market economies – for example, Turkey, Argentina, Poland, Hungary, and many in Central and Eastern Europe – are experiencing a similar slowdown. So, what is ailing the BRICS and other emerging markets? First, most emerging-market economies were overheating in 2010-2011, with growth above potential and inflation rising and exceeding targets. Many of them thus tightened monetary policy in 2011, with consequences for growth in 2012 that have carried over into this year. Second, the idea that emerging-market economies could fully decouple from economic weakness in advanced economies was far-fetched: recession in the eurozone, near-recession in the United Kingdom and Japan in 2011-2012, and slow economic growth in the United States were always likely to affect emerging-market performance negatively – via trade, financial links, and investor confidence. For example, the ongoing eurozone downturn has hurt Turkey and emerging-market economies in Central and Eastern Europe, owing to trade links. Third, most BRICS and a few other emerging markets have moved toward a variant of state capitalism. This implies a slowdown in reforms that increase the private sector's productivity and economic share, together with a greater economic role for state-owned enterprises (and for state-owned banks in the allocation of credit and savings), as well as resource nationalism, trade protectionism, import-substitution industrialization policies, and imposition of capital controls. This approach may have worked at earlier stages of development and when the global financial crisis caused private spending to fall; but it is now distorting economic activity and depressing potential growth. Indeed, China's slowdown reflects an economic model that is, as former Premier Wen Jiabao put it, "unstable, unbalanced, uncoordinated, and unsustainable," and that now is adversely affecting growth in emerging Asia and in commodity-exporting emerging markets from Asia to Latin America and Africa. The risk that China will experience a hard landing in the next two years may further hurt many emerging economies. Fourth, the commodity super-cycle that helped Brazil, Russia, South Africa, and many other commodity-exporting emerging markets may be over.
Indeed, a boom would be difficult to sustain, given China's slowdown, higher investment in energy-saving technologies, less emphasis on capital- and resource-oriented growth models around the world, and the delayed increase in supply that high prices induced. The fifth, and most recent, factor is the US Federal Reserve's signals that it might end its policy of quantitative easing earlier than expected, and its hints of an eventual exit from zero interest rates, both of which have caused turbulence in emerging economies' financial markets. Even before the Fed's signals, emerging-market equities and commodities had underperformed this year, owing to China's slowdown. Since then, emerging-market currencies and fixed-income securities (government and corporate bonds) have taken a hit. The era of cheap or zero-interest money that led to a wall of liquidity chasing high yields and assets – equities, bonds, currencies, and commodities – in emerging markets is drawing to a close. Finally, while many emerging-market economies tend to run current-account surpluses, a growing number of them – including Turkey, South Africa, Brazil, and India – are running deficits. And these deficits are now being financed in riskier ways: more debt than equity; more short-term debt than long-term debt; more foreign-currency debt than local-currency debt; and more financing from fickle cross-border interbank flows. These countries share other weaknesses as well: excessive fiscal deficits, above-target inflation, and stability risk (reflected not only in the recent political turmoil in Brazil and Turkey, but also in South Africa's labour strife and India's political and electoral uncertainties). The need to finance the external deficit and to avoid excessive depreciation (and even higher inflation) calls for raising policy rates or keeping them on hold at high levels. But monetary tightening would weaken already-slow growth. Thus, emerging economies with large twin deficits and other macroeconomic fragilities may experience further downward pressure on their financial markets and growth rates. These factors explain why growth in most BRICS and many other emerging markets has slowed sharply. Some factors are cyclical, but others – state capitalism, the risk of a hard landing in China, the end of the commodity super-cycle – are more structural. Thus, many emerging markets' growth rates in the next decade may be lower than in the last – as may the outsize returns that investors realised from these economies' financial assets (currencies, equities, bonds, and commodities). Of course, some of the better-managed emerging-market economies will continue to experience rapid growth and asset outperformance. But many of the BRICS, along with some other emerging economies, may hit a thick wall, with growth and financial markets taking a serious beating. Which of the following statement(s) is/are true as per the given information in the passage? A. Brazil's GDP grew by only 1% last year, and is expected to grow by approximately 2% this year. B. China's economy grew by 10% a year for the last three decades but slowed to 7.8% last year. C. BRICS is a group of nations — Brazil, Russia, India, China and South Africa.....
MCQ-> Read the following passage carefully and answer the questions given at the end. Passage 4: Public sector banks (PSBs) are pulling back on credit disbursement to lower-rated companies, as they keep a closer watch on using their own scarce capital and the banking regulator heightens its scrutiny on loans being sanctioned. Bankers say the Reserve Bank of India has started strictly monitoring how banks are utilizing their capital. Any big-ticket loan to lower-rated companies is being questioned. Almost all large public sector banks that reported their first quarter results so far have shown a contraction in credit disbursal on a year-to-date basis, as most banks have shifted to a strategy of lending largely to government-owned "Navratna" companies and highly rated private sector companies. On a sequential basis too, banks have grown their loan books at an anaemic rate. To be sure, in the first quarter, loan demand is not quite robust. However, in the first quarter last year, banks had healthier loan growth on a sequential basis than this year. The country's largest lender, State Bank of India, grew its loan book at only 1.21% quarter-on-quarter. Meanwhile, Bank of Baroda and Punjab National Bank shrank their loan books by 1.97% and 0.66% respectively in the first quarter on a sequential basis. Last year, State Bank of India had seen sequential loan growth of 3.37%, while Bank of Baroda had seen a smaller contraction of 0.22%. Punjab National Bank had seen a growth of 0.46% in loan book between the January-March and April-June quarters last year. On a year-to-date basis, SBI's credit growth fell more than 2%, Bank of Baroda's credit growth contracted 4.71% and Bank of India's credit growth shrank about 3%. SBI chief Arundhati Bhattacharya said the bank's year-to-date credit growth fell as the bank focused on 'A' rated customers. About 90% of the loans in the quarter were given to high-rated companies. "Part of this was a conscious decision and part of it is because we actually did not get good fresh proposals in the quarter," Bhattacharya said. According to bankers, while part of the credit contraction is due to the economic slowdown, capital constraints and reluctance to take on excessive risk have also played a role. "Most of the PSU banks are facing pressure on capital adequacy. It is challenging to maintain 9% core capital adequacy. The pressure on monitoring capital adequacy and maintaining capital buffer is so strict that you cannot grow aggressively," said Rupa Rege Nitsure, chief economist at Bank of Baroda. Nitsure said capital conservation pressures will substantially cut down "irrational expansion of loans" in some smaller banks, which used to grow at a rate much higher than the industry average. The companies coming to banks, in turn, will have to make themselves more creditworthy for banks to lend. "The conservation of capital is going to inculcate a lot of discipline in both banks and borrowers," she said. For every loan that a bank disburses, some amount of money is required to be set aside as provision. The lower the credit rating of the company, the riskier the loan is perceived to be. Thus, the bank is required to set aside more capital for a lower-rated company than it otherwise would for a higher-rated client. New international accounting norms, known as Basel III norms, require banks to maintain higher capital and higher liquidity. They also require a bank to set aside "buffer" capital to meet contingencies.
As per the norms, a bank's total capital adequacy ratio should be 12% at any time, of which tier-I, or core capital, should be at 9%. Capital adequacy is calculated by dividing total capital by risk-weighted assets. If the loans have been given to lower-rated companies, the risk weight goes up and capital adequacy falls. According to bankers, all loan decisions are now being assessed on the basis of the capital that needs to be set aside as provision against the loan, and as a result, loans to lower-rated companies are being avoided. According to a senior banker with a public sector bank, the capital adequacy situation is so precarious in some banks that if the risk weight increases a few basis points, the proposal gets cancelled. The banker did not wish to be named. One basis point is one hundredth of a percentage point. Bankers add that the Reserve Bank of India has also started strictly monitoring how banks are utilising their capital. Any big-ticket loan to lower-rated companies is being questioned. In this scenario, banks are looking for safe bets, even if it means that profitability is being compromised. "About 25% of our loans this quarter was given to Navratna companies, who pay at base rate. This resulted in contraction of our net interest margin (NIM)," said Bank of India chairperson V.R. Iyer, while discussing the bank's first quarter results with the media. Bank of India's NIM, or the difference between yields on advances and cost of deposits, a key gauge of profitability, fell in the first quarter to 2.45% from 3.07% a year ago, as the bank focused on lending to highly rated customers. Analysts, however, say the strategy being followed by banks is short-sighted. "A high-rated client will take loans at base rate and will not give any fee income to a bank. A bank will never be profitable that way. Besides, there are only so many PSU companies to chase. All banks cannot be chasing them all at a time. Fact is, the banks are badly hit by NPA and are afraid to lend now to big projects. They need capital, true, but they have become risk-averse," said a senior analyst with a local brokerage who did not wish to be named. Various estimates suggest that Indian banks would require more than Rs. 2 trillion of additional capital to have this kind of capital adequacy ratio by 2019. The central government, which owns the majority share of these banks, has been cutting down on its commitment to recapitalize the banks. In 2013-14, the government infused Rs. 14,000 crore in its banks. However, in 2014-15, the government will infuse just Rs. 11,200 crore. Which of the following statements is correct according to the passage?
 ....
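For readers who want to check the capital-adequacy arithmetic described in the passage above (total capital divided by risk-weighted assets, against a 12% overall and 9% tier-I norm), here is a minimal Python sketch. All exposures and risk weights below are hypothetical illustrations, not figures from any actual bank or from the Basel III risk-weight tables.

# Illustrative sketch of the capital-adequacy arithmetic in the passage.
# Every figure here is hypothetical.

def risk_weighted_assets(loans):
    """Sum of loan exposures scaled by their regulatory risk weights."""
    return sum(amount * weight for amount, weight in loans)

def capital_adequacy_ratio(total_capital, rwa):
    """Capital adequacy = total capital / risk-weighted assets."""
    return total_capital / rwa

# (exposure in Rs. crore, assumed risk weight): lower-rated borrowers
# carry higher risk weights, inflating the denominator.
loan_book = [
    (500.0, 0.20),  # highly rated, "Navratna"-style borrower
    (300.0, 0.50),  # mid-rated corporate
    (200.0, 1.00),  # lower-rated corporate
]

capital = 50.0  # total capital in Rs. crore (hypothetical)
car = capital_adequacy_ratio(capital, risk_weighted_assets(loan_book))
print(f"Capital adequacy ratio: {car:.1%}")  # ~11.1%, below the 12% norm

# Re-lending the same Rs. 200 crore to a higher-rated borrower (risk
# weight 0.50 instead of 1.00) shrinks risk-weighted assets, so the
# ratio rises even though total lending is unchanged.
safer_book = [(500.0, 0.20), (300.0, 0.50), (200.0, 0.50)]
car2 = capital_adequacy_ratio(capital, risk_weighted_assets(safer_book))
print(f"With a higher-rated mix: {car2:.1%}")  # ~14.3%

This is exactly the incentive the passage attributes to the banks: shifting exposure toward higher-rated borrowers frees scarce capital without shrinking the loan book.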
MCQ-> Read the passage carefully and answer the questions given at the end of each passage: Turning the business around involved more than segmenting and pulling out of retail. It also meant maximizing every strength we had in order to boost our profit margins. In re-examining the direct model, we realized that inventory management was not just a core strength; it could be an incredible opportunity for us, and one that had not yet been discovered by any of our competitors. In Version 1.0 of the direct model, we eliminated the reseller, thereby eliminating the mark-up and the cost of maintaining a store. In Version 1.1, we went one step further to reduce inventory inefficiencies. Traditionally, a long chain of partners was involved in getting a product to the customer. Let's say you have a factory building a PC we'll call model #4000. The system is then sent to the distributor, which sends it to the warehouse, which sends it to the dealer, who eventually pushes it on to the consumer by advertising, "I've got model #4000. Come and buy it." If the consumer says, "But I want model #8000," the dealer replies, "Sorry, I only have model #4000." Meanwhile, the factory keeps building model #4000s and pushing the inventory into the channel. The result is a glut of model #4000s that nobody wants. Inevitably, someone ends up with too much inventory, and you see big price corrections. The retailer can't sell it at the suggested retail price, so the manufacturer loses money on price protection (a practice common in our industry of compensating dealers for reductions in suggested selling price). Companies with long, multi-step distribution systems will often fill their distribution channels with products in an attempt to clear out older products. This dangerous and inefficient practice is called "channel stuffing". Worst of all, the customer ends up paying for it by purchasing systems that are already out of date. Because we were building directly to fill our customers' orders, we didn't have finished goods inventory devaluing on a daily basis. Because we aligned our suppliers to deliver components as we used them, we were able to minimize raw material inventory. Reductions in component costs could be passed on to our customers quickly, which made them happier and improved our competitive advantage. It also allowed us to deliver the latest technology to our customers faster than our competitors. The direct model turns conventional manufacturing inside out. Conventional manufacturing requires you to build to stock, because your plant can't keep going otherwise. But if you don't know what you need to build because of dramatic changes in demand, you run the risk of ending up with terrific amounts of excess and obsolete inventory. That is not the goal. The concept behind the direct model has nothing to do with stockpiling and everything to do with information. The quality of your information is inversely proportional to the amount of assets required, in this case excess inventory. With less information about customer needs, you need massive amounts of inventory. So, if you have great information – that is, you know exactly what people want and how much – you need that much less inventory. Less inventory, of course, corresponds to less inventory depreciation. In the computer industry, component prices are always falling as suppliers introduce faster chips, bigger disk drives and modems with ever-greater bandwidth. Let's say that Dell has six days of inventory.
Compare that to an indirect competitor who has twenty-five days of inventory with another thirty in their distribution channel. That's a difference of forty-nine days, and in forty-nine days, the cost of materials will decline about 6 percent. Then there's the threat of getting stuck with obsolete inventory if you're caught in a transition to a next-generation product, as we were with those memory chips in 1989. As the product approaches the end of its life, the manufacturer has to worry about whether it has too much in the channel and whether a competitor will dump products, destroying profit margins for everyone. This is a perpetual problem in the computer industry, but with the direct model, we have virtually eliminated it. We know when our customers are ready to move on technologically, and we can get out of the market before its most precarious time. We don't have to subsidize our losses by charging higher prices for other products. And ultimately, our customer wins. Optimal inventory management really starts with the design process. You want to design the product so that the entire product supply chain, as well as the manufacturing process, is oriented not just for speed but for what we call velocity. Speed means being fast in the first place. Velocity means squeezing time out of every step in the process. Inventory velocity has become a passion for us. To achieve maximum velocity, you have to design your products in a way that covers the largest part of the market with the fewest number of parts. For example, you don't need nine different disk drives when you can serve 98 percent of the market with only four. We also learned to take into account the variability of the low-cost and high-cost components. Systems were reconfigured to allow for a greater variety of low-cost parts and a limited variety of expensive parts. The goal was to decrease the number of components to manage, which increased the velocity, which decreased the risk of inventory depreciation, which increased the overall health of our business system. We were also able to reduce inventory well below the levels anyone thought possible by constantly challenging and surprising ourselves with the results. We had our internal skeptics when we first started pushing for ever-lower levels of inventory. I remember the head of our procurement group telling me that this was like "flying low to the ground at 300 knots." He was worried that we wouldn't see the trees. In 1993, we had $2.9 billion in sales and $220 million in inventory. Four years later, we posted $12.3 billion in sales and had inventory of $33 million. We're now down to six days of inventory, and we're starting to measure it in hours instead of days. Once you reduce your inventory while maintaining your growth rate, a significant amount of risk comes from the transition from one generation of product to the next. Without traditional stockpiles of inventory, it is critical to precisely time the discontinuance of the older product line with the ramp-up in customer demand for the newer one. Since we were introducing new products all the time, it became imperative to avoid the huge drag effect from mistakes made during transitions. E&O – short for "excess and obsolete" – became taboo at Dell. We would debate about whether our E&O was 30 or 50 cents per PC. Since anything less than $20 per PC is not bad, when you're down in the cents range, you're approaching stellar performance. Find out the TRUE statement:
 ....
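The inventory arithmetic quoted in the passage above can be checked with a small Python sketch. Annual sales are used here as a rough proxy for cost of goods sold (the passage gives no cost figures), so the computed day counts are illustrative approximations rather than Dell's reported numbers.

# Back-of-the-envelope check of the passage's inventory figures.
# Sales stand in for annual cost of goods sold, so these are rough.

def inventory_days(inventory_musd, annual_sales_musd):
    """Approximate days of inventory on hand."""
    return inventory_musd / (annual_sales_musd / 365.0)

print(f"1993: {inventory_days(220, 2_900):.1f} days")   # $220M on $2.9B sales
print(f"1997: {inventory_days(33, 12_300):.1f} days")   # $33M on $12.3B sales

# The passage's cost argument: component prices fall about 6% over 49
# days. A competitor holding 25 days of inventory plus 30 more in the
# channel sits 49 days behind a direct builder with 6 days on hand.
direct_days, indirect_days = 6, 25 + 30
gap_days = indirect_days - direct_days  # 49
decline_per_49_days = 0.06
disadvantage = decline_per_49_days * gap_days / 49
print(f"Material-cost disadvantage over the {gap_days}-day gap: ~{disadvantage:.1%}")

On these rough numbers, inventory fell from about four weeks of sales in 1993 to roughly a day of sales in 1997, which is the compression the passage describes.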
MCQ-> Read the passage carefully and answer the given questions. The complexity of modern problems often precludes any one person from fully understanding them. Factors contributing to rising obesity levels, for example, include transportation systems and infrastructure, media, convenience foods, changing social norms, human biology and psychological factors. . . . The multidimensional or layered character of complex problems also undermines the principle of meritocracy: the idea that the 'best person' should be hired. There is no best person. When putting together an oncological research team, a biotech company such as Gilead or Genentech would not construct a multiple-choice test and hire the top scorers, or hire people whose resumes score highest according to some performance criteria. Instead, they would seek diversity. They would build a team of people who bring diverse knowledge bases, tools and analytic skills. . . . Believers in a meritocracy might grant that teams ought to be diverse but then argue that meritocratic principles should apply within each category. Thus the team should consist of the 'best' mathematicians, the 'best' oncologists, and the 'best' biostatisticians from within the pool. That position suffers from a similar flaw. Even within a knowledge domain, no test or criteria applied to individuals will produce the best team. Each of these domains possesses such depth and breadth that no test can exist. Consider the field of neuroscience. Upwards of 50,000 papers were published last year covering various techniques, domains of enquiry and levels of analysis, ranging from molecules and synapses up through networks of neurons. Given that complexity, any attempt to rank a collection of neuroscientists from best to worst, as if they were competitors in the 50-metre butterfly, must fail. What could be true is that, given a specific task and the composition of a particular team, one scientist would be more likely to contribute than another. Optimal hiring depends on context. Optimal teams will be diverse. Evidence for this claim can be seen in the way that papers and patents that combine diverse ideas tend to rank as high-impact. It can also be found in the structure of the so-called random decision forest, a state-of-the-art machine-learning algorithm. Random forests consist of ensembles of decision trees. If classifying pictures, each tree makes a vote: is that a picture of a fox or a dog? A weighted majority rules. Random forests can serve many ends. They can identify bank fraud and diseases, recommend ceiling fans and predict online dating behaviour. When building a forest, you do not select the best trees, as they tend to make similar classifications. You want diversity. Programmers achieve that diversity by training each tree on different data, a technique known as bagging. They also boost the forest 'cognitively' by training trees on the hardest cases – those that the current forest gets wrong. This ensures even more diversity and more accurate forests. Yet the fallacy of meritocracy persists. Corporations, non-profits, governments, universities and even preschools test, score and hire the 'best'. This all but guarantees not creating the best team. Ranking people by common criteria produces homogeneity. . . . That's not likely to lead to breakthroughs. Which of the following conditions, if true, would invalidate the passage's main argument?
 ....
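The random-forest mechanism the passage above leans on (an ensemble of decision trees, each trained on a different bootstrap sample of the data, voting by majority) can be sketched in a few lines of Python. This sketch assumes scikit-learn is available; the dataset, tree count and parameters are arbitrary illustrations, and in practice one would use sklearn's built-in RandomForestClassifier.

# Minimal bagged ensemble of decision trees, illustrating the
# "diversity through bagging" idea described in the passage.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

trees = []
for _ in range(25):
    # Bagging: each tree sees a different bootstrap sample, which is
    # what gives the forest its diversity - the passage's central point.
    idx = rng.integers(0, len(X), size=len(X))
    tree = DecisionTreeClassifier(max_features="sqrt")
    trees.append(tree.fit(X[idx], y[idx]))

def forest_predict(X_new):
    """Simple majority vote across the diverse trees."""
    votes = np.stack([t.predict(X_new) for t in trees])  # (n_trees, n_samples)
    return (votes.mean(axis=0) > 0.5).astype(int)

accuracy = (forest_predict(X) == y).mean()
print(f"Training accuracy of the 25-tree bagged ensemble: {accuracy:.2%}")

Note that no single "best" tree is selected anywhere: keeping only the top-scoring tree (the meritocratic move the passage criticizes) would discard exactly the diversity that makes the vote accurate.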