1. Which of the following statements regarding the formation of the Indian National Congress are true?
1. The Indian National Congress was formed at a National Convention held in Calcutta in December 1885 under the presidency of Motilal Nehru.
2. The Safety Valve theory regarding the formation of the Indian National Congress emerged from a biography of A.O. Hume written by William Wedderburn.
3. An early decision was that the President would be from the same region where the session was to be held.
4. W.C. Banerjee was the first president of the Indian National Congress.
Select the correct answer using the codes below.





Similar Questions and Answers
QA->……………. means the annual financial statements and other statements prescribed under Rule 65 of the Kerala Panchayat Raj (Accounts) Rules, 2011?
QA->Which Viceroy of British India formulated the Safety Valve theory?
QA->Consider a Program Graph (PG) with statements as nodes and control as edges. Which of the following is not true for any PG?
QA->Indian National Congress held its first session in 1885 at?
QA->The part of an overhead valve mechanism arranged between the valve tappet and rocker arm is:
MCQ-> Our propensity to look out for regularities, and to impose laws upon nature, leads to the psychological phenomenon of dogmatic thinking or, more generally, dogmatic behaviour: we expect regularities everywhere and attempt to find them even where there are none; events which do not yield to these attempts we are inclined to treat as a kind of 'background noise'; and we stick to our expectations even when they are inadequate and we ought to accept defeat. This dogmatism is to some extent necessary. It is demanded by a situation which can only be dealt with by forcing our conjectures upon the world. Moreover, this dogmatism allows us to approach a good theory in stages, by way of approximations: if we accept defeat too easily, we may prevent ourselves from finding that we were very nearly right.

It is clear that this dogmatic attitude, which makes us stick to our first impressions, is indicative of a strong belief; while a critical attitude, which is ready to modify its tenets, which admits doubt and demands tests, is indicative of a weaker belief. Now according to Hume's theory, and to the popular theory, the strength of a belief should be a product of repetition; thus it should always grow with experience, and always be greater in less primitive persons. But dogmatic thinking, an uncontrolled wish to impose regularities, a manifest pleasure in rites and in repetition as such, is characteristic of primitives and children; and increasing experience and maturity sometimes create an attitude of caution and criticism rather than of dogmatism.

My logical criticism of Hume's psychological theory, and the considerations connected with it, may seem a little removed from the field of the philosophy of science. But the distinction between dogmatic and critical thinking, or the dogmatic and the critical attitude, brings us right back to our central problem. For the dogmatic attitude is clearly related to the tendency to verify our laws and schemata by seeking to apply them and to confirm them, even to the point of neglecting refutations, whereas the critical attitude is one of readiness to change them: to test them, to refute them, to falsify them, if possible. This suggests that we may identify the critical attitude with the scientific attitude, and the dogmatic attitude with the one which we have described as pseudo-scientific. It further suggests that, genetically speaking, the pseudo-scientific attitude is more primitive than, and prior to, the scientific attitude: that it is a pre-scientific attitude. And this primitivity or priority also has its logical aspect. For the critical attitude is not so much opposed to the dogmatic attitude as superimposed upon it: criticism must be directed against existing and influential beliefs in need of critical revision, in other words, dogmatic beliefs. A critical attitude needs for its raw material, as it were, theories or beliefs which are held more or less dogmatically.

Thus, science must begin with myths, and with the criticism of myths; neither with the collection of observations, nor with the invention of experiments, but with the critical discussion of myths, and of magical techniques and practices. The scientific tradition is distinguished from the pre-scientific tradition in having two layers. Like the latter, it passes on its theories; but it also passes on a critical attitude towards them. The theories are passed on, not as dogmas, but rather with the challenge to discuss them and improve upon them.

The critical attitude, the tradition of free discussion of theories with the aim of discovering their weak spots so that they may be improved upon, is the attitude of reasonableness, of rationality. From the point of view here developed, all laws, all theories, remain essentially tentative, or conjectural, or hypothetical, even when we feel unable to doubt them any longer. Before a theory has been refuted we can never know in what way it may have to be modified.

In the context of science, according to the passage, the interaction of dogmatic beliefs and critical attitude can be best described as:
 ....
MCQ-> In a modern computer, electronic and magnetic storage technologies play complementary roles. Electronic memory chips are fast but volatile (their contents are lost when the computer is unplugged). Magnetic tapes and hard disks are slower, but have the advantage that they are non-volatile, so that they can be used to store software and documents even when the power is off.

In laboratories around the world, however, researchers are hoping to achieve the best of both worlds. They are trying to build magnetic memory chips that could be used in place of today's electronics. These magnetic memories would be non-volatile; but they would also be faster, would consume less power, and would be able to stand up to hazardous environments more easily. Such chips would have obvious applications in storage cards for digital cameras and music-players; they would enable handheld and laptop computers to boot up more quickly and to operate for longer; they would allow desktop computers to run faster; they would doubtless have military and space-faring advantages too. But although the theory behind them looks solid, there are tricky practical problems that need to be overcome.

Two different approaches, based on different magnetic phenomena, are being pursued. The first, being investigated by Gary Prinz and his colleagues at the Naval Research Laboratory (NRL) in Washington, DC, exploits the fact that the electrical resistance of some materials changes in the presence of a magnetic field, a phenomenon known as magneto-resistance. For some multi-layered materials this effect is particularly powerful and is, accordingly, called "giant" magneto-resistance (GMR). Since 1997, the exploitation of GMR has made cheap multi-gigabyte hard disks commonplace. The magnetic orientations of the magnetised spots on the surface of a spinning disk are detected by measuring the changes they induce in the resistance of a tiny sensor. This technique is so sensitive that it means the spots can be made smaller and packed closer together than was previously possible, thus increasing the capacity and reducing the size and cost of a disk drive.

Dr. Prinz and his colleagues are now exploiting the same phenomenon on the surface of memory chips, rather than spinning disks. In a conventional memory chip, each binary digit (bit) of data is represented using a capacitor, a reservoir of electrical charge that is either empty or full, to represent a zero or a one. In the NRL's magnetic design, by contrast, each bit is stored in a magnetic element in the form of a vertical pillar of magnetisable material. A matrix of wires passing above and below the elements allows each to be magnetised, either clockwise or anti-clockwise, to represent zero or one. Another set of wires allows current to pass through any particular element. By measuring an element's resistance you can determine its magnetic orientation, and hence whether it is storing a zero or a one. Since the elements retain their magnetic orientation even when the power is off, the result is non-volatile memory. Unlike the elements of an electronic memory, a magnetic memory's elements are not easily disrupted by radiation. And compared with electronic memories, whose capacitors need constant topping up, magnetic memories are simpler and consume less power. The NRL researchers plan to commercialise their device through a company called Non-Volatile Electronics, which recently began work on the necessary processing and fabrication techniques. But it will be some years before the first chips roll off the production line.

Most attention in the field is focused on an alternative approach based on magnetic tunnel-junctions (MTJs), which are being investigated by researchers at chipmakers such as IBM, Motorola, Siemens and Hewlett-Packard. IBM's research team, led by Stuart Parkin, has already created a 500-element working prototype that operates at 20 times the speed of conventional memory chips and consumes 1% of the power. Each element consists of a sandwich of two layers of magnetisable material separated by a barrier of aluminium oxide just four or five atoms thick. The polarisation of the lower magnetisable layer is fixed in one direction, but that of the upper layer can be set (again, by passing a current through a matrix of control wires) either to the left or to the right, to store a zero or a one. The polarisations of the two layers are then in either the same or opposite directions.

Although the aluminium-oxide barrier is an electrical insulator, it is so thin that electrons are able to jump across it via a quantum-mechanical effect called tunnelling. It turns out that such tunnelling is easier when the two magnetic layers are polarised in the same direction than when they are polarised in opposite directions. So, by measuring the current that flows through the sandwich, it is possible to determine the alignment of the topmost layer, and hence whether it is storing a zero or a one.

To build a full-scale memory chip based on MTJs is, however, no easy matter. According to Paulo Freitas, an expert on chip manufacturing at the Technical University of Lisbon, magnetic memory elements will have to become far smaller and more reliable than current prototypes if they are to compete with electronic memory. At the same time, they will have to be sensitive enough to respond when the appropriate wires in the control matrix are switched on, but not so sensitive that they respond when a neighbouring element is changed. Despite these difficulties, the general consensus is that MTJs are the more promising idea. Dr. Parkin says his group evaluated the GMR approach and decided not to pursue it, despite the fact that IBM pioneered GMR in hard disks. Dr. Prinz, however, contends that his plan will eventually offer higher storage densities and lower production costs.

Not content with shaking up the multi-billion-dollar market for computer memory, some researchers have even more ambitious plans for magnetic computing. In a paper published last month in Science, Russell Cowburn and Mark Welland of Cambridge University outlined research that could form the basis of a magnetic microprocessor, a chip capable of manipulating (rather than merely storing) information magnetically. In place of conducting wires, a magnetic processor would have rows of magnetic dots, each of which could be polarised in one of two directions. Individual bits of information would travel down the rows as magnetic pulses, changing the orientation of the dots as they went. Dr. Cowburn and Dr. Welland have demonstrated how a logic gate (the basic element of a microprocessor) could work in such a scheme. In their experiment, they fed a signal in at one end of the chain of dots and used a second signal to control whether it propagated along the chain.

It is, admittedly, a long way from a single logic gate to a full microprocessor, but this was true also when the transistor was first invented. Dr. Cowburn, who is now searching for backers to help commercialise the technology, says he believes it will be at least ten years before the first magnetic microprocessor is constructed. But other researchers in the field agree that such a chip is the next logical step. Dr. Prinz says that once magnetic memory is sorted out "the target is to go after the logic circuits." Whether all-magnetic computers will ever be able to compete with other contenders that are jostling to knock electronics off its perch (such as optical, biological and quantum computing) remains to be seen. Dr. Cowburn suggests that the future lies with hybrid machines that use different technologies. But computing with magnetism evidently has an attraction all its own.

In developing magnetic memory chips to replace the electronic ones, two alternative research paths are being pursued. These are approaches based on:
 ....
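The GMR and MTJ designs described in the passage share one read-out idea: a bit is written by magnetising an element through a matrix of control wires, and read back non-destructively by measuring the element's electrical resistance. The Python sketch below is a toy model of that idea only; the class names, resistance values and threshold are illustrative assumptions, not parameters of the NRL or IBM devices.

```python
# Toy model of a magnetic memory element read by resistance, loosely
# following the MTJ description in the passage. All numbers are
# illustrative assumptions, not measured device parameters.

R_PARALLEL = 1_000.0      # ohms: tunnelling is easier when layers align (stores 0 here)
R_ANTIPARALLEL = 2_000.0  # ohms: tunnelling is harder when layers oppose (stores 1)

class MTJElement:
    """One bit: a fixed lower layer plus a switchable upper layer."""
    def __init__(self):
        self.upper_aligned = True  # polarised the same way as the fixed layer

    def write(self, bit: int) -> None:
        # In hardware this is done by currents in the crossing control wires;
        # here we simply record the resulting orientation.
        self.upper_aligned = (bit == 0)

    def resistance(self) -> float:
        return R_PARALLEL if self.upper_aligned else R_ANTIPARALLEL

    def read(self) -> int:
        # Non-destructive read: compare measured resistance to a threshold.
        threshold = (R_PARALLEL + R_ANTIPARALLEL) / 2
        return 0 if self.resistance() < threshold else 1

class MagneticMemory:
    """A matrix of elements addressed by (row, column), as in the passage."""
    def __init__(self, rows: int, cols: int):
        self.grid = [[MTJElement() for _ in range(cols)] for _ in range(rows)]

    def write(self, row: int, col: int, bit: int) -> None:
        self.grid[row][col].write(bit)

    def read(self, row: int, col: int) -> int:
        return self.grid[row][col].read()

if __name__ == "__main__":
    mem = MagneticMemory(4, 4)
    mem.write(2, 3, 1)
    assert mem.read(2, 3) == 1  # orientation persists: no refresh needed
    print("bit at (2,3):", mem.read(2, 3))
```

The point of the model is persistence: the stored orientation, and hence the resistance, survives without any refresh current, which is the passage's contrast with capacitor-based memory that needs "constant topping up".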
MCQ-> Read the following passage carefully and answer the questions given below it. Certain words/phrases have been printed in "bold" to help you locate them while answering some of the questions.

As increasing dependence on information systems develops, the need for such systems to be reliable and secure also becomes more essential. As growing numbers of ordinary citizens use computer networks for banking, shopping, etc., network security is potentially a "massive" problem. Over the last few years, the need for computer and information security systems has become increasingly evident, as websites are being defaced with greater frequency, more and more denial-of-service attacks are being reported, credit card information is being stolen, there is increased sophistication of hacking tools that are openly available to the public on the Internet, and there is increasing damage being caused by viruses and worms to critical information system resources.

At the organizational level, institutional mechanisms have to be designed in order to review policies, practices, measures and procedures, to review e-security regularly and assess whether these are appropriate to their environment. It would be helpful if organizations shared information about threats and vulnerabilities, and implemented procedures of rapid and effective cooperation to prevent, detect and respond to security incidents. As new threats and vulnerabilities are continuously discovered, there is a strong need for cooperation among organizations and, if necessary, we could also consider cross-border information sharing. We need to understand the threats and dangers we could be "vulnerable" to and the steps that need to be taken to "mitigate" these vulnerabilities. We need to understand access control systems and methodology, telecommunications and network security, and security management practice. We should be well versed in the area of application and systems development security, cryptography, operations security and physical security.

The banking sector is "poised" for more challenges in the near future. Customers of banks can now look forward to a large array of new offerings by banks; from an "era" of mere competition, banks are now cooperating among themselves so that the synergistic benefits are shared among all the players. This would result in the formation of shared payment networks (a few shared ATM networks have already been commissioned by banks), offering payment services beyond the existing time zones. The Reserve Bank is also facilitating new projects such as the Multi Application Smart Card Project which, when implemented, would facilitate transfer of funds using electronic means in a safe and secure manner across the length and breadth of the country, with reduced dependence on paper currency. The opportunities of e-banking, or e-power in general, need to be harnessed so that banking is available to all customers in a manner they would find most convenient and, if required, without having to visit a branch of a bank. All this will have to be accompanied by a high level of comfort, which again boils down to the issue of e-security.

One of the biggest advantages accruing to banks in the future would be the benefits that arise from the introduction of Real Time Gross Settlement (RTGS). Funds management by treasuries of banks would be helped greatly by RTGS. With almost 70 banks having joined the RTGS system, more large-value funds transfers are taking place through this system. The implementation of Core Banking solutions by the banks is closely related to RTGS too. Core Banking will make anywhere banking a reality for customers of each bank, while RTGS bridges the need for inter-bank funds movement. Thus, the days of depositing a cheque for collection and a long wait for its realization would soon be a thing of the past for those customers who opt for electronic movement of funds using the RTGS system, where settlement would be on an almost "instantaneous" basis. Core Banking is already in vogue in many private sector and foreign banks, while its implementation is at different stages amongst the public sector banks.

IT would also facilitate better and more scientific decision-making within banks. Information systems now provide decision-makers in banks with a great deal of information which, along with historical data and trend analysis, helps in the building up of efficient Management Information Systems. This, in turn, would help in better Asset Liability Management (ALM) which, in today's world of hairline margins, is a key requirement for the success of banks in their operational activities. Another benefit which e-banking could provide relates to Customer Relationship Management (CRM). CRM helps in stratification of customers and evaluating customer needs on a holistic basis, which could pave the way for a competitive edge for banks and complete customer care for customers of banks.

The content of the passage "mainly" emphasizes----
 ....
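To make the RTGS contrast in the passage concrete (settlement that is final the moment it happens, versus depositing a cheque and waiting for realization), here is a minimal sketch assuming a toy central-bank ledger. The class, method and bank names are hypothetical; real RTGS systems add queuing, liquidity management and messaging infrastructure omitted here.

```python
# Toy contrast between deferred net settlement (cheque-style) and
# real-time gross settlement (RTGS-style). Names and balances are
# illustrative assumptions, not a model of any actual payment system.

class CentralBankLedger:
    def __init__(self, balances):
        self.balances = dict(balances)  # settlement balances per bank

    def rtgs_transfer(self, payer, payee, amount):
        """Gross settlement: each payment settles individually, at once."""
        if self.balances[payer] < amount:
            raise ValueError("insufficient settlement balance")
        self.balances[payer] -= amount
        self.balances[payee] += amount  # funds are final immediately

    def net_settle(self, pending):
        """Deferred netting: obligations accumulate, settle once at cut-off."""
        net = {bank: 0 for bank in self.balances}
        for payer, payee, amount in pending:
            net[payer] -= amount
            net[payee] += amount
        for bank, delta in net.items():
            self.balances[bank] += delta  # funds final only after cut-off

ledger = CentralBankLedger({"Bank A": 100, "Bank B": 100})
ledger.rtgs_transfer("Bank A", "Bank B", 30)  # no waiting for realization
print(ledger.balances)  # {'Bank A': 70, 'Bank B': 130}
```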
MCQ-> Before the internet, one of the most rapid changes to the global economy and trade was wrought by something so blatantly useful that it is hard to imagine a struggle to get it adopted: the shipping container. In the early 1960s, before the standard container became ubiquitous, freight costs were 10 per cent of the value of US imports, about the same barrier to trade as the average official government import tariff. Yet in a journey that went halfway round the world, half of those costs could be incurred in two ten-mile movements through the ports at either end. The predominant 'break-bulk' method, where each shipment was individually split up into loads that could be handled by a team of dockers, was vastly complex and labour-intensive. Ships could take weeks or months to load, as a huge variety of cargoes of different weights, shapes and sizes had to be stacked together by hand. Indeed, one of the most unreliable aspects of such a labour-intensive process was the labour. Ports, like mines, were frequently seething pits of industrial unrest. Irregular work on one side combined with what was often a tight-knit, well-organized labour community on the other.

In 1956, loading break-bulk cargo cost $5.83 per ton. The entrepreneurial genius who saw the possibilities for standardized container shipping, Malcolm McLean, floated his first containerized ship in that year and claimed to be able to shift cargo for 15.8 cents a ton. Boxes of the same size that could be loaded by crane and neatly stacked were much faster to load. Moreover, carrying cargo in a standard container would allow it to be shifted between truck, train and ship without having to be repacked each time.

But between McLean's container and the standardization of the global market were an array of formidable obstacles. They began at home in the US with the official Interstate Commerce Commission, which could prevent price competition by setting rates for freight haulage by route and commodity, and the powerful International Longshoremen's Association (ILA) labour union. More broadly, the biggest hurdle was achieving what economists call 'network effects': the benefit of a standard technology rises exponentially as more people use it. To dominate world trade, containers had to be easily interchangeable between different shipping lines, ports, trucks and railcars. And to maximize efficiency, they all needed to be the same size. The adoption of a network technology often involves overcoming the resistance of those who are heavily invested in the old system. And while the efficiency gains are clear to see, there are very obvious losers as well as winners. For containerization, perhaps the most spectacular example was the demise of New York City as a port.

In the early 1950s, New York handled a third of US seaborne trade in manufactured goods. But it was woefully inefficient, even with existing break-bulk technology: 283 piers, 98 of which were able to handle ocean-going ships, jutted out into the river from Brooklyn and Manhattan. Trucks bound for the docks had to drive through the crowded, narrow streets of Manhattan, wait for an hour or two before even entering a pier, and then undergo a laborious two-stage process in which the goods were first unloaded into a transit shed and then loaded onto a ship. 'Public loader' work gangs held exclusive rights to load and unload on a particular pier, a power in effect granted by the ILA, which enforced its monopoly with sabotage and violence against competitors. The ILA fought ferociously against containerization, correctly foreseeing that it would destroy their privileged position as bandits controlling the mountain pass. On this occasion, bypassing them simply involved going across the river. A container port was built in New Jersey, where a 1500-foot wharf allowed ships to dock parallel to shore and containers to be lifted on and off by crane. Between 1963-4 and 1975-6, the number of days worked by longshoremen in Manhattan went from 1.4 million to 127,041.

Containers rapidly captured the transatlantic market, and then the growing trade with Asia. The effect of containerization is hard to see immediately in freight rates, since the oil price hikes of the 1970s kept them high, but the speed with which shippers adopted containerization made it clear it brought big benefits of efficiency and cost. The extraordinary growth of the Asian tiger economies of Singapore, Taiwan, Korea and Hong Kong, which based their development strategy on exports, was greatly helped by the container trade that quickly built up between the US and east Asia. Ocean-borne exports from South Korea were 2.9 million tons in 1969 and 6 million in 1973, and its exports to the US tripled.

But the new technology did not get adopted all on its own. It needed a couple of pushes from government; both, as it happens, were largely to do with the military. As far as the ships were concerned, the same link between the merchant and military navy that had inspired the Navigation Acts in seventeenth-century England endured into twentieth-century America. The government's first helping hand was to give a spur to the system by adopting it to transport military cargo. The US armed forces, seeing the efficiency of the system, started contracting McLean's company Pan-Atlantic, later renamed Sea-Land, to carry equipment to the quarter of a million American soldiers stationed in Western Europe. One of the few benefits of America's misadventure in Vietnam was a rapid expansion of containerization. Because war involves massive movements of men and material, it is often armies that pioneer new techniques in supply chains.

The government's other role was in banging heads together sufficiently to get all companies to accept the same size of container. Standard sizes were essential to deliver the economies of scale that came from interchangeability, which, as far as the military was concerned, was vital if the ships had to be commandeered in case war broke out. This was a significant problem to overcome, not least because all the companies that had started using the container had settled on different sizes. Pan-Atlantic used 35-foot containers, because that was the maximum size allowed on the highways in its home base in New Jersey. Another of the big shipping companies, Matson Navigation, used a 24-foot container since its biggest trade was in canned pineapple from Hawaii, and a container bigger than that would have been too heavy for a crane to lift. Grace Line, which largely traded with Latin America, used a …-foot container that was easier to truck around winding mountain roads.

Establishing a US standard and then getting it adopted internationally took more than a decade. Indeed, not only did the US Maritime Administration have to mediate in these rivalries, but it also had to fight its own turf battles with the American Standards Association, an agency set up by the private sector. The matter was settled by using the power of federal money: the Federal Maritime Board (FMB), which handed out public subsidies for shipbuilding, decreed that only 8 x 8-foot containers in lengths of 10, 20, 30 or 40 feet would be eligible for handouts.

Identify the correct statement:
 ....
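The loading-cost figures quoted in the passage imply a striking ratio. A quick check, using only the numbers the passage gives:

```python
# Sanity check of the 1956 loading costs quoted in the passage.
break_bulk = 5.83  # dollars per ton, break-bulk loading
container = 0.158  # dollars per ton, McLean's claimed containerised rate

print(f"cost ratio: ~{break_bulk / container:.0f}x")            # ~37x cheaper
print(f"loading cost cut by {1 - container / break_bulk:.1%}")  # ~97.3%
```

In other words, McLean's claimed rate eliminated roughly 97 per cent of the loading cost, which is why, despite the obstacles the passage describes, shippers adopted containerization so quickly.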