1. Two piles, one a bored cast-in-situ pile and the other a precast driven pile, both of the same length and diameter, are constructed in a loose sand deposit. If the bearing capacity of the bored pile is Q1 and that of the precast driven pile is Q2, which of the following is correct?
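A brief sketch of the reasoning (using the standard static pile-capacity expression; the symbols below are the usual textbook ones and are not taken from the question itself): the ultimate capacity of a single pile is

Q_u = q_p A_p + f_s A_s, \qquad q_p \propto N_q(\varphi), \qquad f_s = K\,\sigma'_v \tan\delta .

Driving a precast pile into loose sand displaces and densifies the surrounding deposit, raising the friction angle \varphi (and hence N_q) and the lateral earth-pressure coefficient K along the shaft, whereas boring a cast-in-situ pile tends to loosen the sand around it. Both the point and shaft terms are therefore larger for the driven pile, so Q2 > Q1.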





Similar Questions and Answers
QA->What is known by the names ‘Singing sand, Whistling sand and Barking sand’?....
QA->In a club, 70% of the members read English newspapers and 75% read Malayalam newspapers, while 20% do not read both papers. If 325 members read both the newspapers, then the total number of members in the club is .........?....
QA->Which Indian sand artist won the People's Choice Prize at the World Cup of Sand Sculpting 2014?....
QA->The nine-million-pound lottery winner from Britain who became bored after he gave up his job following his lottery win recently drank himself to death. Name that lottery winner?....
QA->The quadrantal bearing of a line is S 40° W; its whole circle bearing is:....
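As a worked illustration of the conversion asked in the last item above (the listed answer options are not reproduced here): a quadrantal bearing in the south-west quadrant is measured from south towards west, so the corresponding whole circle bearing is obtained by adding the angle to 180°:

\text{WCB} = 180^\circ + 40^\circ = 220^\circ .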
MCQ->Consider the following statements pertaining to a pile group and a single pile at failure:
1. In loose and medium dense sands, the failure load per pile in a group will generally be greater than the failure load for a single pile.
2. In cohesive soils, the failure load per pile in a group will be greater than the failure load for a single pile.
3. For piles driven in dense sands, the failure load per pile in a group is greater than the failure load for a single pile.
4. When the pile spacing is greater than 10 times the pile diameter, the failure loads per pile in a group and for a single pile are nearly the same in both sands and clays.
Of these statements:....
MCQ-> In a modern computer, electronic and magnetic storage technologies play complementary roles. Electronic memory chips are fast but volatile (their contents are lost when the computer is unplugged). Magnetic tapes and hard disks are slower, but have the advantage that they are non-volatile, so that they can be used to store software and documents even when the power is off.

In laboratories around the world, however, researchers are hoping to achieve the best of both worlds. They are trying to build magnetic memory chips that could be used in place of today's electronics. These magnetic memories would be non-volatile; but they would also be faster, would consume less power, and would be able to stand up to hazardous environments more easily. Such chips would have obvious applications in storage cards for digital cameras and music players; they would enable handheld and laptop computers to boot up more quickly and to operate for longer; they would allow desktop computers to run faster; they would doubtless have military and space-faring advantages too. But although the theory behind them looks solid, there are tricky practical problems that need to be overcome.

Two different approaches, based on different magnetic phenomena, are being pursued. The first, being investigated by Gary Prinz and his colleagues at the Naval Research Laboratory (NRL) in Washington, DC, exploits the fact that the electrical resistance of some materials changes in the presence of a magnetic field — a phenomenon known as magneto-resistance. For some multi-layered materials this effect is particularly powerful and is, accordingly, called “giant” magneto-resistance (GMR). Since 1997, the exploitation of GMR has made cheap multi-gigabyte hard disks commonplace. The magnetic orientations of the magnetised spots on the surface of a spinning disk are detected by measuring the changes they induce in the resistance of a tiny sensor. This technique is so sensitive that it means the spots can be made smaller and packed closer together than was previously possible, thus increasing the capacity and reducing the size and cost of a disk drive.

Dr. Prinz and his colleagues are now exploiting the same phenomenon on the surface of memory chips, rather than spinning disks. In a conventional memory chip, each binary digit (bit) of data is represented using a capacitor, a reservoir of electrical charge that is either empty or full, to represent a zero or a one. In the NRL's magnetic design, by contrast, each bit is stored in a magnetic element in the form of a vertical pillar of magnetisable material. A matrix of wires passing above and below the elements allows each to be magnetised, either clockwise or anti-clockwise, to represent zero or one. Another set of wires allows current to pass through any particular element. By measuring an element's resistance you can determine its magnetic orientation, and hence whether it is storing a zero or a one. Since the elements retain their magnetic orientation even when the power is off, the result is non-volatile memory. Unlike the elements of an electronic memory, a magnetic memory's elements are not easily disrupted by radiation. And compared with electronic memories, whose capacitors need constant topping up, magnetic memories are simpler and consume less power. The NRL researchers plan to commercialise their device through a company called Non-Volatile Electronics, which recently began work on the necessary processing and fabrication techniques. But it will be some years before the first chips roll off the production line.

Most attention in the field is focused on an alternative approach based on magnetic tunnel-junctions (MTJs), which are being investigated by researchers at chipmakers such as IBM, Motorola, Siemens and Hewlett-Packard. IBM's research team, led by Stuart Parkin, has already created a 500-element working prototype that operates at 20 times the speed of conventional memory chips and consumes 1% of the power. Each element consists of a sandwich of two layers of magnetisable material separated by a barrier of aluminium oxide just four or five atoms thick. The polarisation of the lower magnetisable layer is fixed in one direction, but that of the upper layer can be set (again, by passing a current through a matrix of control wires) either to the left or to the right, to store a zero or a one. The polarisations of the two layers are then either in the same or in opposite directions.

Although the aluminium-oxide barrier is an electrical insulator, it is so thin that electrons are able to jump across it via a quantum-mechanical effect called tunnelling. It turns out that such tunnelling is easier when the two magnetic layers are polarised in the same direction than when they are polarised in opposite directions. So, by measuring the current that flows through the sandwich, it is possible to determine the alignment of the topmost layer, and hence whether it is storing a zero or a one.

To build a full-scale memory chip based on MTJs is, however, no easy matter. According to Paulo Freitas, an expert on chip manufacturing at the Technical University of Lisbon, magnetic memory elements will have to become far smaller and more reliable than current prototypes if they are to compete with electronic memory. At the same time, they will have to be sensitive enough to respond when the appropriate wires in the control matrix are switched on, but not so sensitive that they respond when a neighbouring element is changed. Despite these difficulties, the general consensus is that MTJs are the more promising idea. Dr. Parkin says his group evaluated the GMR approach and decided not to pursue it, despite the fact that IBM pioneered GMR in hard disks. Dr. Prinz, however, contends that his plan will eventually offer higher storage densities and lower production costs.

Not content with shaking up the multi-billion-dollar market for computer memory, some researchers have even more ambitious plans for magnetic computing. In a paper published last month in Science, Russell Cowburn and Mark Welland of Cambridge University outlined research that could form the basis of a magnetic microprocessor — a chip capable of manipulating (rather than merely storing) information magnetically. In place of conducting wires, a magnetic processor would have rows of magnetic dots, each of which could be polarised in one of two directions. Individual bits of information would travel down the rows as magnetic pulses, changing the orientation of the dots as they went. Dr. Cowburn and Dr. Welland have demonstrated how a logic gate (the basic element of a microprocessor) could work in such a scheme. In their experiment, they fed a signal in at one end of the chain of dots and used a second signal to control whether it propagated along the chain.

It is, admittedly, a long way from a single logic gate to a full microprocessor, but this was true also when the transistor was first invented. Dr. Cowburn, who is now searching for backers to help commercialise the technology, says he believes it will be at least ten years before the first magnetic microprocessor is constructed. But other researchers in the field agree that such a chip is the next logical step. Dr. Prinz says that once magnetic memory is sorted out “the target is to go after the logic circuits.” Whether all-magnetic computers will ever be able to compete with other contenders that are jostling to knock electronics off its perch — such as optical, biological and quantum computing — remains to be seen. Dr. Cowburn suggests that the future lies with hybrid machines that use different technologies. But computing with magnetism evidently has an attraction all its own.

In developing magnetic memory chips to replace the electronic ones, two alternative research paths are being pursued. These are approaches based on:
 ....
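To make the read/write mechanism described in the passage above concrete, here is a small illustrative sketch in plain Python (all names and resistance values are invented for illustration and are not tied to any real device or vendor API): each element stores a bit in the orientation of its free layer, and a read compares the measured resistance with a threshold, exploiting the fact that the parallel configuration tunnels more easily (lower resistance) than the antiparallel one.

# Toy model of a magnetic tunnel-junction (MTJ) memory element (illustrative only).
class MTJElement:
    R_PARALLEL = 1_000.0      # ohms: layers aligned, tunnelling is easier
    R_ANTIPARALLEL = 2_000.0  # ohms: layers opposed, tunnelling is harder

    def __init__(self):
        self.fixed_layer = +1  # polarisation of the pinned lower layer (never changes)
        self.free_layer = +1   # polarisation of the writable upper layer

    def write(self, bit: int) -> None:
        # Writing sets the free layer's polarisation: +1 stores a one, -1 stores a zero.
        self.free_layer = +1 if bit else -1

    def resistance(self) -> float:
        # Parallel layers let electrons tunnel more easily, so resistance is lower.
        return self.R_PARALLEL if self.free_layer == self.fixed_layer else self.R_ANTIPARALLEL

    def read(self) -> int:
        # A read measures the current (i.e. the resistance) and compares it with a threshold.
        threshold = (self.R_PARALLEL + self.R_ANTIPARALLEL) / 2
        return 1 if self.resistance() < threshold else 0

# The stored bit needs no refreshing, which is why such a memory is non-volatile.
cell = MTJElement()
cell.write(1)
assert cell.read() == 1
cell.write(0)
assert cell.read() == 0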
MCQ-> Before the internet, one of the most rapid changes to the global economy and trade was wrought by something so blatantly useful that it is hard to imagine a struggle to get it adopted: the shipping container. In the early 1960s, before the standard container became ubiquitous, freight costs were 10 per cent of the value of US imports, about the same barrier to trade as the average official government import tariff. Yet in a journey that went halfway round the world, half of those costs could be incurred in two ten-mile movements through the ports at either end. The predominant 'break-bulk' method, where each shipment was individually split up into loads that could be handled by a team of dockers, was vastly complex and labour-intensive. Ships could take weeks or months to load, as a huge variety of cargoes of different weights, shapes and sizes had to be stacked together by hand. Indeed, one of the most unreliable aspects of such a labour-intensive process was the labour. Ports, like mines, were frequently seething pits of industrial unrest. Irregular work on one side combined with what was often a tight-knit, well-organized labour community on the other.

In 1956, loading break-bulk cargo cost $5.83 per ton. The entrepreneurial genius who saw the possibilities for standardized container shipping, Malcolm McLean, floated his first containerized ship in that year and claimed to be able to shift cargo for 15.8 cents a ton. Boxes of the same size that could be loaded by crane and neatly stacked were much faster to load. Moreover, carrying cargo in a standard container would allow it to be shifted between truck, train and ship without having to be repacked each time.

But between McLean's container and the standardization of the global market were an array of formidable obstacles. They began at home in the US with the official Interstate Commerce Commission, which could prevent price competition by setting rates for freight haulage by route and commodity, and the powerful International Longshoremen's Association (ILA) labour union. More broadly, the biggest hurdle was achieving what economists call 'network effects': the benefit of a standard technology rises exponentially as more people use it. To dominate world trade, containers had to be easily interchangeable between different shipping lines, ports, trucks and railcars. And to maximize efficiency, they all needed to be the same size. The adoption of a network technology often involves overcoming the resistance of those who are heavily invested in the old system. And while the efficiency gains are clear to see, there are very obvious losers as well as winners. For containerization, perhaps the most spectacular example was the demise of New York City as a port.

In the early 1950s, New York handled a third of US seaborne trade in manufactured goods. But it was woefully inefficient, even with existing break-bulk technology: 283 piers, 98 of which were able to handle ocean-going ships, jutted out into the river from Brooklyn and Manhattan. Trucks bound for the docks had to drive through the crowded, narrow streets of Manhattan, wait for an hour or two before even entering a pier, and then undergo a laborious two-stage process in which the goods were first unloaded into a transit shed and then loaded onto a ship. 'Public loader' work gangs held exclusive rights to load and unload on a particular pier, a power in effect granted by the ILA, which enforced its monopoly with sabotage and violence against competitors.

The ILA fought ferociously against containerization, correctly foreseeing that it would destroy their privileged position as bandits controlling the mountain pass. On this occasion, bypassing them simply involved going across the river. A container port was built in New Jersey, where a 1500-foot wharf allowed ships to dock parallel to shore and containers to be lifted on and off by crane. Between 1963-4 and 1975-6, the number of days worked by longshoremen in Manhattan went from 1.4 million to 127,041.

Containers rapidly captured the transatlantic market, and then the growing trade with Asia. The effect of containerization is hard to see immediately in freight rates, since the oil price hikes of the 1970s kept them high, but the speed with which shippers adopted containerization made it clear it brought big benefits of efficiency and cost. The extraordinary growth of the Asian tiger economies of Singapore, Taiwan, Korea and Hong Kong, which based their development strategy on exports, was greatly helped by the container trade that quickly built up between the US and east Asia. Ocean-borne exports from South Korea were 2.9 million tons in 1969 and 6 million in 1973, and its exports to the US tripled.

But the new technology did not get adopted all on its own. It needed a couple of pushes from government - both, as it happens, largely to do with the military. As far as the ships were concerned, the same link between the merchant and military navy that had inspired the Navigation Acts in seventeenth-century England endured into twentieth-century America. The government's first helping hand was to give a spur to the system by adopting it to transport military cargo. The US armed forces, seeing the efficiency of the system, started contracting McLean's company Pan-Atlantic, later renamed Sea-Land, to carry equipment to the quarter of a million American soldiers stationed in Western Europe. One of the few benefits of America's misadventure in Vietnam was a rapid expansion of containerization. Because war involves massive movements of men and material, it is often armies that pioneer new techniques in supply chains.

The government's other role was in banging heads together sufficiently to get all companies to accept the same size container. Standard sizes were essential to deliver the economies of scale that came from interchangeability - which, as far as the military was concerned, was vital if the ships had to be commandeered in case war broke out. This was a significant problem to overcome, not least because all the companies that had started using the container had settled on different sizes. Pan-Atlantic used 35-foot containers, because that was the maximum size allowed on the highways in its home base in New Jersey. Another of the big shipping companies, Matson Navigation, used a 24-foot container since its biggest trade was in canned pineapple from Hawaii, and a container bigger than that would have been too heavy for a crane to lift. Grace Line, which largely traded with Latin America, used a foot container that was easier to truck around winding mountain roads.

Establishing a US standard and then getting it adopted internationally took more than a decade. Indeed, not only did the US Maritime Administration have to mediate in these rivalries but also fight its own turf battles with the American Standards Association, an agency set up by the private sector. The matter was settled by using the power of federal money: the Federal Maritime Board (FMB), which handed out public subsidies for shipbuilding, decreed that only 8 x 8-foot containers in lengths of 10, 20, 30 or 40 feet would be eligible for handouts.

Identify the correct statement:
 ....
MCQ-> The current debate on intellectual property rights (IPRs) raises a number of important issues concerning the strategy and policies for building a more dynamic national agricultural research system, the relative roles of public and private sectors, and the role of agribusiness multinational corporations (MNCs). This debate has been stimulated by the international agreement on Trade Related Intellectual Property Rights (TRIPs), negotiated as part of the Uruguay Round. TRIPs, for the first time, seeks to bring innovations in agricultural technology under a new worldwide IPR regime. The agribusiness MNCs (along with pharmaceutical companies) played a leading part in lobbying for such a regime during the Uruguay Round negotiations. The argument was that incentives are necessary to stimulate innovations, and that this calls for a system of patents which gives innovators the sole right to use (or sell/lease the right to use) their innovations for a specified period and protects them against unauthorised copying or use. With strong support of their national governments, they were influential in shaping the agreement on TRIPs, which eventually emerged from the Uruguay Round. The current debate on TRIPs in India - as indeed elsewhere - echoes wider concerns about ‘privatisation’ of research and allowing a free field for MNCs in the sphere of biotechnology and agriculture. The agribusiness corporations, and those with unbounded faith in the power of science to overcome all likely problems, point to the vast potential that new technology holds for solving the problems of hunger, malnutrition and poverty in the world. The exploitation of this potential should be encouraged and this is best done by the private sector for which patents are essential. Some, who do not necessarily accept this optimism, argue that fears of MNC domination are exaggerated and that farmers will accept their products only if they decisively outperform the available alternatives. Those who argue against agreeing to introduce an IPR regime in agriculture and encouraging private sector research are apprehensive that this will work to the disadvantage of farmers by making them more and more dependent on monopolistic MNCs. A different, though related apprehension is that extensive use of hybrids and genetically engineered new varieties might increase the vulnerability of agriculture to outbreaks of pests and diseases. The larger, longer-term consequences of reduced biodiversity that may follow from the use of specially bred varieties are also another cause for concern. Moreover, corporations, driven by the profit motive, will necessarily tend to underplay, if not ignore, potential adverse consequences, especially those which are unknown and which may manifest themselves only over a relatively long period. On the other hand, high-pressure advertising and aggressive sales campaigns by private companies can seduce farmers into accepting varieties without being aware of potential adverse effects and the possibility of disastrous consequences for their livelihood if these varieties happen to fail. There is no provision under the laws, as they now exist, for compensating users against such eventualities. Excessive preoccupation with seeds and seed material has obscured other important issues involved in reviewing the research policy. We need to remind ourselves that improved varieties by themselves are not sufficient for sustained growth of yields. 
In our own experience, some of the early high yielding varieties (HYVs) of rice and wheat were found susceptible to widespread pest attacks; and some had problems of grain quality. Further research was necessary to solve these problems. This largely successful research was almost entirely done in public research institutions. Of course, it could in principle have been done by private companies, but whether they choose to do so depends crucially on the extent of the loss in market for their original introductions on account of the above factors and whether the companies are financially strong enough to absorb the 'losses', invest in research to correct the deficiencies and recover the lost market. Public research, which is not driven by profit, is better placed to take corrective action.

Research for improving common pool resource management, maintaining ecological health and ensuring sustainability is both critical and demanding in terms of technological challenge and resource requirements. As such research is crucial to the impact of new varieties, chemicals and equipment in the farmer's field, private companies should be interested in such research. But their primary interest is in the sale of seed materials, chemicals, equipment and other inputs produced by them. Knowledge and techniques for resource management are not 'marketable' in the same way as those inputs. Their application to land, water and forests has a long gestation and their efficacy depends on resolving difficult problems such as designing institutions for proper and equitable management of common pool resources. Only public or quasi-public research institutions informed by broader, long-term concerns can do such work. The public sector must therefore continue to play a major role in the national research system.

It is both wrong and misleading to pose the problem in terms of public sector versus private sector or of privatisation of research. We need to address problems likely to arise on account of the public-private sector complementarity, and ensure that the public research system performs efficiently.

Complementarity between various elements of research raises several issues in implementing an IPR regime. Private companies do not produce new varieties and inputs entirely as a result of their own research. Almost all technological improvement is based on knowledge and experience accumulated from the past, and the results of basic and applied research in public and quasi-public institutions (universities, research organisations). Moreover, as is increasingly recognised, the accumulated stock of knowledge does not reside only in the scientific community and its academic publications, but is also widely diffused in the traditions and folk knowledge of local communities all over. The deciphering of the structure and functioning of DNA forms the basis of much of modern biotechnology. But this fundamental breakthrough is a 'public good' freely accessible in the public domain and usable free of any charge. Various techniques developed using that knowledge can however be, and are, patented for private profit. Similarly, private corporations draw extensively, and without any charge, on germplasm available in varieties of plant species (neem and turmeric are by now famous examples). Publicly funded gene banks as well as new varieties bred by public sector research stations can also be used freely by private enterprises for developing their own varieties and seeking patent protection for them.
Should private breeders be allowed free use of basic scientific discoveries? Should the repositories of traditional knowledge and germplasm, which are maintained and improved by publicly funded organisations, also be open to free use? Or should users be made to pay for such use? If they are to pay, what should be the basis of compensation? Should the compensation be for individuals or for communities/institutions to which they belong? Should individual institutions be given the right of patenting their innovations? These are some of the important issues that deserve more attention than they now get and need serious detailed study to evolve reasonably satisfactory, fair and workable solutions.

Finally, the tendency to equate the public sector with the government is wrong. The public space is much wider than government departments and includes co-operatives, universities, public trusts and a variety of non-governmental organisations (NGOs). Giving greater autonomy to research organisations from government control and giving non-government public institutions the space and resources to play a larger, more effective role in research is therefore an issue of direct relevance in restructuring the public research system.

Which one of the following statements describes an important issue, or important issues, not being raised in the context of the current debate on IPRs?
 ....