1. A team of researchers recently created a three-dimensional lung, called an organoid. This technique could be used to study specifically…

Answer: Idiopathic

Similar Questions and Answers
QA->A team of researchers recently created a three-dimensional lung, called an organoid. This technique could be used to study specifically…
QA->The renowned South Indian playback singer who had rendered nearly 7,000 songs in many languages including Tamil, Kannada, Telugu, Hindi and Malayalam died recently due to cardiac arrest following an intestinal lung infection?....
QA->A two-dimensional chart that plots the activity of a unit on the Y-axis versus time on the X-axis:....
QA->Which team created a unique record in the match against Rajasthan by getting all out for a meager 21 runs, the lowest total by any team in the Ranji Trophy matches?....
QA->SARS (Severe Acute Respiratory Syndrome), a lung disease, is caused by?....
MCQ-> K, L, M, N, P, Q, R, S, U and W are the only ten members in a department. There is a proposal to form a team from within the members of the department, subject to the following conditions:
1. A team must include exactly one among P, R and S.
2. A team must include either M or Q, but not both.
3. If a team includes K, then it must also include L, and vice versa.
4. If a team includes one among S, U and W, then it should also include the other two.
5. L and N cannot be members of the same team.
6. L and U cannot be members of the same team.
The size of a team is defined as the number of members in the team. What could be the size of a team that includes K?
 ...
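Questions of this kind can be checked by brute force: enumerate every subset of the ten members and keep only those satisfying all six conditions. The sketch below (an illustrative solver, not part of the original question) lists the possible sizes of a valid team containing K:

```python
from itertools import combinations

members = ['K', 'L', 'M', 'N', 'P', 'Q', 'R', 'S', 'U', 'W']

def valid(team):
    t = set(team)
    if len(t & {'P', 'R', 'S'}) != 1:          # exactly one of P, R, S
        return False
    if len(t & {'M', 'Q'}) != 1:               # M or Q, but not both
        return False
    if ('K' in t) != ('L' in t):               # K and L go together
        return False
    suw = t & {'S', 'U', 'W'}
    if suw and suw != {'S', 'U', 'W'}:         # S, U, W all or none
        return False
    if {'L', 'N'} <= t or {'L', 'U'} <= t:     # L excludes N and U
        return False
    return True

# Sizes of all valid teams that include K.
sizes = sorted({len(t) for r in range(1, 11)
                for t in combinations(members, r)
                if 'K' in t and valid(t)})
print(sizes)
```

Since K forces L in, L excludes N and U, and excluding U excludes S and W as well, a K-team must be exactly {K, L, one of P/R, one of M/Q}, so only size 4 is possible.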
MCQ-> Study the following information carefully and answer the questions given below: A, B, C, D, F, G and H are seven football players, each playing for one of three teams, viz. Green, Red and Blue, with at least two of them in each of these teams. Each of them likes a fruit, viz. Apple, Guava, Banana, Orange, Mango, Papaya and Watermelon, not necessarily in the same order. B plays with F in team Blue and he likes Mango. None of those who play for either team Red or team Green likes either Guava or Banana. D plays with only the one who likes Watermelon. G likes Papaya and he plays in team Red. The one who likes Orange does not play in team Red. H likes Watermelon and he plays for team Green. A likes Apple and he plays for team Red. C does not like Guava. Which of the following players play for team Red?
 ...
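This puzzle, too, yields to a small exhaustive search. Most assignments are fixed by the clues; only C's and D's teams and three fruits remain open. The sketch below (an illustrative solver written for this answer, not from the original question) enumerates the remaining choices and prints every consistent Red-team roster:

```python
from itertools import permutations, product

players = ['A', 'B', 'C', 'D', 'F', 'G', 'H']
fixed_team = {'B': 'Blue', 'F': 'Blue', 'G': 'Red', 'A': 'Red', 'H': 'Green'}
fixed_fruit = {'B': 'Mango', 'G': 'Papaya', 'A': 'Apple', 'H': 'Watermelon'}

solutions = []
for d_team, c_team in product(['Green', 'Red', 'Blue'], repeat=2):
    team = dict(fixed_team, D=d_team, C=c_team)
    groups = {t: {p for p in players if team[p] == t}
              for t in ('Green', 'Red', 'Blue')}
    if any(len(g) < 2 for g in groups.values()):   # at least two per team
        continue
    # D plays with only the one who likes Watermelon (H),
    # so D's team must be exactly {D, H}.
    if groups[team['D']] != {'D', 'H'}:
        continue
    for fruits in permutations(['Guava', 'Banana', 'Orange']):
        fruit = dict(fixed_fruit, C=fruits[0], D=fruits[1], F=fruits[2])
        if fruit['C'] == 'Guava':                  # C does not like Guava
            continue
        if any(team[p] in ('Red', 'Green') and fruit[p] in ('Guava', 'Banana')
               for p in players):                  # no Guava/Banana in Red or Green
            continue
        if any(fruit[p] == 'Orange' and team[p] == 'Red'
               for p in players):                  # Orange-liker not in Red
            continue
        solutions.append(sorted(p for p in players if team[p] == 'Red'))

print(solutions)
```

The search leaves a single consistent arrangement, with A and G as the only Red-team players.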
MCQ-> In a modern computer, electronic and magnetic storage technologies play complementary roles. Electronic memory chips are fast but volatile (their contents are lost when the computer is unplugged). Magnetic tapes and hard disks are slower, but have the advantage that they are non-volatile, so that they can be used to store software and documents even when the power is off.In laboratories around the world, however, researchers are hoping to achieve the best of both worlds. They are trying to build magnetic memory chips that could be used in place of today’s electronics. These magnetic memories would be non-volatile; but they would also be faster, would consume less power, and would be able to stand up to hazardous environments more easily. Such chips would have obvious applications in storage cards for digital cameras and music-players; they would enable handheld and laptop computers to boot up more quickly and to operate for longer; they would allow desktop computers to run faster; they would doubtless have military and space-faring advantages too. But although the theory behind them looks solid, there are tricky practical problems that need to be overcome.Two different approaches, based on different magnetic phenomena, are being pursued. The first, being investigated by Gary Prinz and his colleagues at the Naval Research Laboratory (NRL) in Washington, DC, exploits the fact that the electrical resistance of some materials changes in the presence of a magnetic field, a phenomenon known as magneto-resistance. For some multi-layered materials this effect is particularly powerful and is, accordingly, called “giant” magneto-resistance (GMR). Since 1997, the exploitation of GMR has made cheap multi-gigabyte hard disks commonplace. The magnetic orientations of the magnetised spots on the surface of a spinning disk are detected by measuring the changes they induce in the resistance of a tiny sensor. 
This technique is so sensitive that it means the spots can be made smaller and packed closer together than was previously possible, thus increasing the capacity and reducing the size and cost of a disk drive. Dr. Prinz and his colleagues are now exploiting the same phenomenon on the surface of memory chips, rather than on spinning disks. In a conventional memory chip, each binary digit (bit) of data is represented using a capacitor (a reservoir of electrical charge) that is either empty or full, to represent a zero or a one. In the NRL’s magnetic design, by contrast, each bit is stored in a magnetic element in the form of a vertical pillar of magnetisable material. A matrix of wires passing above and below the elements allows each to be magnetised, either clockwise or anti-clockwise, to represent zero or one. Another set of wires allows current to pass through any particular element. By measuring an element’s resistance you can determine its magnetic orientation, and hence whether it is storing a zero or a one. Since the elements retain their magnetic orientation even when the power is off, the result is non-volatile memory. Unlike the elements of an electronic memory, a magnetic memory’s elements are not easily disrupted by radiation. And compared with electronic memories, whose capacitors need constant topping up, magnetic memories are simpler and consume less power. The NRL researchers plan to commercialise their device through a company called Non-Volatile Electronics, which recently began work on the necessary processing and fabrication techniques. But it will be some years before the first chips roll off the production line.Most attention in the field is focused on an alternative approach based on magnetic tunnel-junctions (MTJs), which are being investigated by researchers at chipmakers such as IBM, Motorola, Siemens and Hewlett-Packard. 
IBM’s research team, led by Stuart Parkin, has already created a 500-element working prototype that operates at 20 times the speed of conventional memory chips and consumes 1% of the power. Each element consists of a sandwich of two layers of magnetisable material separated by a barrier of aluminium oxide just four or five atoms thick. The polarisation of the lower magnetisable layer is fixed in one direction, but that of the upper layer can be set (again, by passing a current through a matrix of control wires) either to the left or to the right, to store a zero or a one. The polarisations of the two layers are then either in the same or in opposite directions.Although the aluminium-oxide barrier is an electrical insulator, it is so thin that electrons are able to jump across it via a quantum-mechanical effect called tunnelling. It turns out that such tunnelling is easier when the two magnetic layers are polarised in the same direction than when they are polarised in opposite directions. So, by measuring the current that flows through the sandwich, it is possible to determine the alignment of the topmost layer, and hence whether it is storing a zero or a one.To build a full-scale memory chip based on MTJs is, however, no easy matter. According to Paulo Freitas, an expert on chip manufacturing at the Technical University of Lisbon, magnetic memory elements will have to become far smaller and more reliable than current prototypes if they are to compete with electronic memory. At the same time, they will have to be sensitive enough to respond when the appropriate wires in the control matrix are switched on, but not so sensitive that they respond when a neighbouring element is changed. Despite these difficulties, the general consensus is that MTJs are the more promising idea. Dr. Parkin says his group evaluated the GMR approach and decided not to pursue it, despite the fact that IBM pioneered GMR in hard disks. Dr. 
Prinz, however, contends that his plan will eventually offer higher storage densities and lower production costs.Not content with shaking up the multi-billion-dollar market for computer memory, some researchers have even more ambitious plans for magnetic computing. In a paper published last month in Science, Russell Cowburn and Mark Welland of Cambridge University outlined research that could form the basis of a magnetic microprocessor — a chip capable of manipulating (rather than merely storing) information magnetically. In place of conducting wires, a magnetic processor would have rows of magnetic dots, each of which could be polarised in one of two directions. Individual bits of information would travel down the rows as magnetic pulses, changing the orientation of the dots as they went. Dr. Cowburn and Dr. Welland have demonstrated how a logic gate (the basic element of a microprocessor) could work in such a scheme. In their experiment, they fed a signal in at one end of the chain of dots and used a second signal to control whether it propagated along the chain.It is, admittedly, a long way from a single logic gate to a full microprocessor, but this was true also when the transistor was first invented. Dr. Cowburn, who is now searching for backers to help commercialise the technology, says he believes it will be at least ten years before the first magnetic microprocessor is constructed. But other researchers in the field agree that such a chip is the next logical step. Dr. Prinz says that once magnetic memory is sorted out “the target is to go after the logic circuits.” Whether all-magnetic computers will ever be able to compete with other contenders that are jostling to knock electronics off its perch — such as optical, biological and quantum computing — remains to be seen. Dr. Cowburn suggests that the future lies with hybrid machines that use different technologies. 
But computing with magnetism evidently has an attraction all its own.In developing magnetic memory chips to replace the electronic ones, two alternative research paths are being pursued. These are approaches based on:
 ...
MCQ-> Read the passage carefully and answer the given questionsThe complexity of modern problems often precludes any one person from fully understanding them. Factors contributing to rising obesity levels, for example, include transportation systems and infrastructure, media, convenience foods, changing social norms, human biology and psychological factors. . . . The multidimensional or layered character of complex problems also undermines the principle of meritocracy: the idea that the ‘best person’ should be hired. There is no best person. When putting together an oncological research team, a biotech company such as Gilead or Genentech would not construct a multiple-choice test and hire the top scorers, or hire people whose resumes score highest according to some performance criteria. Instead, they would seek diversity. They would build a team of people who bring diverse knowledge bases, tools and analytic skills. . . .Believers in a meritocracy might grant that teams ought to be diverse but then argue that meritocratic principles should apply within each category. Thus the team should consist of the ‘best’ mathematicians, the ‘best’ oncologists, and the ‘best’ biostatisticians from within the pool. That position suffers from a similar flaw. Even within a knowledge domain, no test or criterion applied to individuals will produce the best team. Each of these domains possesses such depth and breadth that no test can exist. Consider the field of neuroscience. Upwards of 50,000 papers were published last year covering various techniques, domains of enquiry and levels of analysis, ranging from molecules and synapses up through networks of neurons. Given that complexity, any attempt to rank a collection of neuroscientists from best to worst, as if they were competitors in the 50-metre butterfly, must fail. What could be true is that given a specific task and the composition of a particular team, one scientist would be more likely to contribute than another. 
Optimal hiring depends on context. Optimal teams will be diverse.Evidence for this claim can be seen in the way that papers and patents that combine diverse ideas tend to rank as high-impact. It can also be found in the structure of the so-called random decision forest, a state-of-the-art machine-learning algorithm. Random forests consist of ensembles of decision trees. If classifying pictures, each tree makes a vote: is that a picture of a fox or a dog? A weighted majority rules. Random forests can serve many ends. They can identify bank fraud and diseases, recommend ceiling fans and predict online dating behaviour. When building a forest, you do not select the best trees as they tend to make similar classifications. You want diversity. Programmers achieve that diversity by training each tree on different data, a technique known as bagging. They also boost the forest ‘cognitively’ by training trees on the hardest cases - those that the current forest gets wrong. This ensures even more diversity and accurate forests.Yet the fallacy of meritocracy persists. Corporations, non-profits, governments, universities and even preschools test, score and hire the ‘best’. This all but guarantees not creating the best team. Ranking people by common criteria produces homogeneity. . . . That’s not likely to lead to breakthroughs.Which of the following conditions, if true, would invalidate the passage’s main argument?
 ...
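The bagging technique the passage describes — training each tree on a different bootstrap resample so the ensemble stays diverse — can be sketched with a minimal stdlib-only example. The toy data, stump learner and ensemble size below are illustrative assumptions, not from the passage:

```python
import random

# Toy 1-D data: the label is 1 exactly when x > 5.
data = [(x, int(x > 5)) for x in range(11)]

def train_stump(sample):
    # A decision stump: pick the threshold t that best separates the sample,
    # predicting 1 for x > t.
    best_t, best_err = None, float('inf')
    for t in range(11):
        err = sum((x > t) != bool(y) for x, y in sample)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

random.seed(0)
stumps = []
for _ in range(25):
    # Bagging: each stump is trained on a different bootstrap resample,
    # so the "trees" do not all make identical classifications.
    sample = [random.choice(data) for _ in data]
    stumps.append(train_stump(sample))

def predict(x):
    # A majority vote across the ensemble, as in a random forest.
    votes = sum(x > t for t in stumps)
    return int(votes > len(stumps) / 2)

print([predict(x) for x in (2, 8)])
```

A full random forest would also randomise the features each tree may split on; bagging alone is shown here to keep the sketch short.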
MCQ-> In the modern scientific story, light was created not once but twice. The first time was in the Big Bang, when the universe began its existence as a glowing, expanding fireball, which cooled off into darkness after a few million years. The second time was hundreds of millions of years later, when the cold material condensed into dense clumps under the influence of gravity, and ignited to become the first stars.Sir Martin Rees, Britain’s astronomer royal, named the long interval between these two enlightenments the cosmic ‘Dark Age’. The name describes not only the poorly lit conditions, but also the ignorance of astronomers about that period. Nobody knows exactly when the first stars formed, or how they organized themselves into galaxies — or even whether stars were the first luminous objects. They may have been preceded by quasars, which are mysterious, bright spots found at the centres of some galaxies.Now two independent groups of astronomers, one led by Robert Becker of the University of California, Davis, and the other by George Djorgovski of Caltech, claim to have peered far enough into space with their telescopes (and therefore backwards enough in time) to observe the closing days of the Dark Age.The main problem that plagued previous efforts to study the Dark Age was not the lack of suitable telescopes, but rather the lack of suitable things at which to point them. Because these events took place over 13 billion years ago, if astronomers are to have any hope of unravelling them they must study objects that are at least 13 billion light years away. The best prospects are quasars, because they are so bright and compact that they can be seen across vast stretches of space. The energy source that powers a quasar is unknown, although it is suspected to be the intense gravity of a giant black hole. 
However, at the distances required for the study of the Dark Age, even quasars are extremely rare and faint.Recently some members of Dr Becker’s team announced their discovery of the four most distant quasars known. All the new quasars are terribly faint, a challenge that both teams overcame by peering at them through one of the twin Keck telescopes in Hawaii. These are the world’s largest, and can therefore collect the most light. The new work by Dr Becker’s team analysed the light from all four quasars. Three of them appeared to be similar to ordinary, less distant quasars. However, the fourth and most distant, unlike any other quasar ever seen, showed unmistakable signs of being shrouded in a fog of hydrogen gas, because new-born stars and quasars emit mainly ultraviolet light, and hydrogen gas is opaque to ultraviolet. Seeing this fog had been the goal of would-be Dark Age astronomers since 1965, when James Gunn and Bruce Peterson spelled out the technique for using quasars as backlighting beacons to observe the fog’s ultraviolet shadow.The fog prolonged the period of darkness until the heat from the first stars and quasars had the chance to ionise the hydrogen (breaking it into its constituent parts, protons and electrons). Ionised hydrogen is transparent to ultraviolet radiation, so at that moment the fog lifted and the universe became the well-lit place it is today. For this reason, the end of the Dark Age is called the ‘Epoch of Re-ionisation’. Because the ultraviolet shadow is visible only in the most distant of the four quasars, Dr Becker’s team concluded that the fog had dissipated completely by the time the universe was about 900 million years old, and one-seventh of its current size.In the passage, the Dark Age refers to
 ...
© 2002–2017 Omega Education PVT LTD. Privacy | Terms and Conditions