1. Which of the below statement/s regarding COP 21 is/are correct: 1. India to ratify COP 21 global climate agreement. 2. 140 other nations would also ratify the COP-21 Global Climate Agreement. 3. A high-level signing ceremony to be convened at the UN Headquarters in New York





Show Similar Questions And Answers
QA->The Union Cabinet on 20 April 2016 approved signing of an agreement adopted at the 21st Conference of Parties (COP21) for dealing with climate change. What is the agreement called?....
QA->The United Nations Monetary and Financial Conference, a gathering of delegates from 44 nations that convened from July 1 to 22, 1944, in Bretton Woods, New Hampshire, is better known as?....
QA->Before signing the paper on which issue special care should be given by the signing authority:....
QA->The 2015 United Nations Climate Change Conference, COP 21 or CMP 11 was held in?....
QA->The United Nations climate change conference 2013, COP 19, was held in? Warsaw,....
MCQ->Which of the below statement/s regarding COP 21 is/are correct: 1. India to ratify COP 21 global climate agreement. 2. 140 other nations would also ratify the COP-21 Global Climate Agreement. 3. A high-level signing ceremony to be convened at the UN Headquarters in New York....
MCQ-> The broad scientific understanding today is that our planet is experiencing a warming trend over and above natural and normal variations that is almost certainly due to human activities associated with large-scale manufacturing. The process began in the late 1700s with the Industrial Revolution, when manual labor, horsepower, and water power began to be replaced by or enhanced by machines. This revolution, over time, shifted Britain, Europe, and eventually North America from largely agricultural and trading societies to manufacturing ones, relying on machinery and engines rather than tools and animals.
The Industrial Revolution was at heart a revolution in the use of energy and power. Its beginning is usually dated to the advent of the steam engine, which was based on the conversion of chemical energy in wood or coal to thermal energy and then to mechanical work, primarily the powering of industrial machinery and steam locomotives. Coal eventually supplanted wood because, pound for pound, coal contains twice as much energy as wood (measured in BTUs, or British thermal units, per pound) and because its use helped to save what was left of the world's temperate forests. Coal was used to produce heat that went directly into industrial processes, including metallurgy, and to warm buildings, as well as to power steam engines. When crude oil came along in the mid-1800s, still a couple of decades before electricity, it was burned, in the form of kerosene, in lamps to make light, replacing whale oil. It was also used to provide heat for buildings and in manufacturing processes, and as a fuel for engines used in industry and propulsion.
In short, one can say that the main forms in which humans need and use energy are for light, heat, mechanical work and motive power, and electricity, which can be used to provide any of the other three, as well as to do things that none of those three can do, such as electronic communications and information processing. Since the Industrial Revolution, all these energy functions have been powered primarily, but not exclusively, by fossil fuels that emit carbon dioxide (CO2). To put it another way, the Industrial Revolution gave a whole new prominence to what Rochelle Lefkowitz, president of Pro-Media Communications and an energy buff, calls "fuels from hell" - coal, oil, and natural gas. All these fuels from hell come from underground, are exhaustible, and emit CO2 and other pollutants when they are burned for transportation, heating, and industrial use. These fuels are in contrast to what Lefkowitz calls "fuels from heaven" - wind, hydroelectric, tidal, biomass, and solar power. These all come from above ground, are endlessly renewable, and produce no harmful emissions.
Meanwhile, industrialization promoted urbanization, and urbanization eventually gave birth to suburbanization. This trend, which was repeated across America, nurtured the development of the American car culture, the building of a national highway system, and a mushrooming of suburbs around American cities, which rewove the fabric of American life. Many other developed and developing countries followed the American model, with all its upsides and downsides. The result is that today we have suburbs and ribbons of highways that run in, out, and around not only America's major cities, but China's, India's, and South America's as well. And as these urban areas attract more people, the sprawl extends in every direction.
All the coal, oil, and natural gas inputs for this new economic model seemed relatively cheap, relatively inexhaustible, and relatively harmless - or at least relatively easy to clean up afterward. So there wasn't much to stop the juggernaut of more people and more development and more concrete and more buildings and more cars and more coal, oil, and gas needed to build and power them. Summing it all up, Andy Karsner, the Department of Energy's assistant secretary for energy efficiency and renewable energy, once said to me: "We built a really inefficient environment with the greatest efficiency ever known to man."
Beginning in the second half of the twentieth century, a scientific understanding began to emerge that an excessive accumulation of largely invisible pollutants - called greenhouse gases - was affecting the climate. The buildup of these greenhouse gases had been under way since the start of the Industrial Revolution in a place we could not see and in a form we could not touch or smell. These greenhouse gases, primarily carbon dioxide emitted from human industrial, residential, and transportation sources, were not piling up along roadsides or in rivers, in cans or empty bottles, but, rather, above our heads, in the earth's atmosphere. If the earth's atmosphere was like a blanket that helped to regulate the planet's temperature, the CO2 buildup was having the effect of thickening that blanket and making the globe warmer.
Those bags of CO2 from our cars float up and stay in the atmosphere, along with bags of CO2 from power plants burning coal, oil, and gas, and bags of CO2 released from the burning and clearing of forests, which releases all the carbon stored in trees, plants, and soil. In fact, many people don't realize that deforestation in places like Indonesia and Brazil is responsible for more CO2 than all the world's cars, trucks, planes, ships, and trains combined - that is, about 20 percent of all global emissions. And when we're not tossing bags of carbon dioxide into the atmosphere, we're throwing up other greenhouse gases, like methane (CH4) released from rice farming, petroleum drilling, coal mining, animal defecation, solid waste landfill sites, and yes, even from cattle belching. Cattle belching? That's right - the striking thing about greenhouse gases is the diversity of sources that emit them. A herd of cattle belching can be worse than a highway full of Hummers. Livestock gas is very high in methane, which, like CO2, is colorless and odorless. And like CO2, methane is one of those greenhouse gases that, once released into the atmosphere, also absorb heat radiating from the earth's surface. "Molecule for molecule, methane's heat-trapping power in the atmosphere is twenty-one times stronger than carbon dioxide, the most abundant greenhouse gas," reported Science World (January 21, 2002). "With 1.3 billion cows belching almost constantly around the world (100 million in the United States alone), it's no surprise that methane released by livestock is one of the chief global sources of the gas, according to the U.S. Environmental Protection Agency ... 'It's part of their normal digestion process,' says Tom Wirth of the EPA. 'When they chew their cud, they regurgitate [spit up] some food to rechew it, and all this gas comes out.' The average cow expels 600 liters of methane a day, climate researchers report."
What is the precise scientific relationship between these expanded greenhouse gas emissions and global warming? Experts at the Pew Center on Climate Change offer a handy summary in their report "Climate Change 101." Global average temperatures, notes the Pew study, "have experienced natural shifts throughout human history. For example, the climate of the Northern Hemisphere varied from a relatively warm period between the eleventh and fifteenth centuries to a period of cooler temperatures between the seventeenth century and the middle of the nineteenth century. However, scientists studying the rapid rise in global temperatures during the late twentieth century say that natural variability cannot account for what is happening now." The new factor is the human factor - our vastly increased emissions of carbon dioxide and other greenhouse gases from the burning of fossil fuels such as coal and oil as well as from deforestation, large-scale cattle-grazing, agriculture, and industrialization.
"Scientists refer to what has been happening in the earth's atmosphere over the past century as the 'enhanced greenhouse effect'", notes the Pew study. By pumping man-made greenhouse gases into the atmosphere, humans are altering the process by which naturally occurring greenhouse gases, because of their unique molecular structure, trap the sun's heat near the earth's surface before that heat radiates back into space.
"The greenhouse effect keeps the earth warm and habitable; without it, the earth's surface would be about 60 degrees Fahrenheit colder on average. Since the average temperature of the earth is about 45 degrees Fahrenheit, the natural greenhouse effect is clearly a good thing. But the enhanced greenhouse effect means even more of the sun's heat is trapped, causing global temperatures to rise. Among the many scientific studies providing clear evidence that an enhanced greenhouse effect is under way was a 2005 report from NASA's Goddard Institute for Space Studies. Using satellites, data from buoys, and computer models to study the earth's oceans, scientists concluded that more energy is being absorbed from the sun than is emitted back to space, throwing the earth's energy out of balance and warming the globe."
Which of the following statements is correct?
(I) Greenhouse gases are responsible for global warming. They should be eliminated to save the planet
(II) CO2 is the most dangerous of the greenhouse gases. Reduction in the release of CO2 would surely bring down the temperature
(III) The greenhouse effect could be traced back to the industrial revolution. But the current development and the patterns of life have enhanced their emissions
(IV) Deforestation has been one of the biggest factors contributing to the emission of greenhouse gases
Choose the correct option:....
MCQ->The business consulting division of TCS has overseas operations in 3 locations: Singapore, New York and London. The Company has 22 analysts covering Singapore, 28 covering New York and 24 covering London. 6 analysts cover Singapore and New York but not London, 4 analysts cover Singapore and London but not New York, and 8 analysts cover New York and London but not Singapore. If TCS has a total of 42 business analysts covering at least one of the three locations: Singapore, New York and London, then the number of analysts covering New york alone is:....
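The analyst-coverage item above is a standard three-set counting problem. The snippet below is a minimal Python sketch (not from the source) of one way to reason it out with inclusion-exclusion style bookkeeping, assuming the usual reading that the per-city figures are totals, the pairwise figures are "exactly two cities" counts, and every one of the 42 analysts covers at least one city; the variable names are illustrative.

```python
# Illustrative sketch of the three-set counting behind the analyst-coverage
# question. Assumed reading: city figures are totals, pairwise figures are
# "exactly two cities" counts, and all 42 analysts cover at least one city.

singapore, newyork, london = 22, 28, 24   # total analysts covering each city
sg_ny, sg_ld, ny_ld = 6, 4, 8             # cover exactly the two named cities
union_total = 42                          # cover at least one city

# Let t = analysts covering all three cities; each "city alone" count is that
# city's total minus its two exactly-two overlaps minus t.
for t in range(min(singapore, newyork, london) + 1):
    sg_only = singapore - sg_ny - sg_ld - t
    ny_only = newyork - sg_ny - ny_ld - t
    ld_only = london - sg_ld - ny_ld - t
    union = sg_only + ny_only + ld_only + sg_ny + sg_ld + ny_ld + t
    if union == union_total and min(sg_only, ny_only, ld_only) >= 0:
        print(f"all three cities: {t}, New York alone: {ny_only}")  # -> 7 and 7
```

Under that reading the union condition reduces to 56 - 2t = 42, which forces t = 7 and hence 7 analysts covering New York alone; if the overlap figures were instead meant to include the triple overlap, the same enumeration can be rerun with the bookkeeping adjusted.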
MCQ-> Read the following passage carefully and answer the questions given at the end. The second issue I want to address is one that comes up frequently - that Indian banks should aim to become global. Most people who put forward this view have not thought through the costs and benefits analytically; they only see this as an aspiration consistent with India's growing international profile. In its 1998 report, the Narasimham (II) Committee envisaged a three-tier structure for the Indian banking sector: 3 or 4 large banks having an international presence on the top, 8-10 mid-sized banks, with a network of branches throughout the country and engaged in universal banking, in the middle, and local banks and regional rural banks operating in smaller regions forming the bottom layer. However, the Indian banking system has not consolidated in the manner envisioned by the Narasimham Committee. The current structure is that India has 81 scheduled commercial banks of which 26 are public sector banks, 21 are private sector banks and 34 are foreign banks. Even a quick review would reveal that there is no segmentation in the banking structure along the lines of Narasimham II.
A natural sequel to this issue of the envisaged structure of the Indian banking system is the Reserve Bank's position on bank consolidation. Our view on bank consolidation is that the process should be market-driven, based on profitability considerations and brought about through a process of mergers & amalgamations (M&As). The initiative for this has to come from the boards of the banks concerned, which have to make a decision based on a judgment of the synergies involved in the business models and the compatibility of the business cultures. The Reserve Bank's role in the reorganisation of the banking system will normally be only that of a facilitator.
It should be noted, though, that bank consolidation through mergers is not always a totally benign option. On the positive side are a higher exposure threshold, international acceptance and recognition, improved risk management and improvement in financials due to economies of scale and scope. This can be achieved both through organic and inorganic growth. On the negative side, experience shows that consolidation would fail if there are no synergies in the business models and there is no compatibility in the business cultures and technology platforms of the merging banks.
Having given that broad brush position on bank consolidation, let me address two specific questions: (i) can Indian banks aspire to global size?; and (ii) should Indian banks aspire to global size? On the first question, as per the current global league tables based on the size of assets, our largest bank, the State Bank of India (SBI), together with its subsidiaries, comes in at No. 74, followed by ICICI Bank at No. 145 and Bank of Baroda at 188. It is, therefore, unlikely that any of our banks will jump into the top ten of the global league even after reasonable consolidation.
Then comes the next question of whether Indian banks should become global. Opinion on this is divided. Those who argue that we must go global contend that the issue is not so much the size of our banks in global rankings but of Indian banks having a strong enough global presence. The main argument is that the increasing global size and influence of Indian corporates warrant a corresponding increase in the global footprint of Indian banks. The opposing view is that Indian banks should look inwards rather than outwards, and focus their efforts on financial deepening at home rather than aspiring to global size.
It is possible to take a middle path and argue that looking outwards towards increased global presence and looking inwards towards deeper financial penetration are not mutually exclusive; it should be possible to aim for both. With the onset of the global financial crisis, there has definitely been a pause to the rapid expansion overseas of our banks. Nevertheless, notwithstanding the risks involved, it will be opportune for some of our larger banks to be looking out for opportunities for consolidation both organically and inorganically. They should look out more actively in regions which hold out a promise of attractive acquisitions. The surmise, therefore, is that Indian banks should increase their global footprint opportunistically even if they do not get to the top of the league table.
Identify the correct statement from the following:
 ....
MCQ-> Before the internet, one of the most rapid changes to the global economy and trade was wrought by something so blatantly useful that it is hard to imagine a struggle to get it adopted: the shipping container. In the early 1960s, before the standard container became ubiquitous, freight costs were 10 per cent of the value of US imports, about the same barrier to trade as the average official government import tariff. Yet in a journey that went halfway round the world, half of those costs could be incurred in two ten-mile movements through the ports at either end. The predominant ‘break-bulk’ method, where each shipment was individually split up into loads that could be handled by a team of dockers, was vastly complex and labour-intensive. Ships could take weeks or months to load, as a huge variety of cargoes of different weights, shapes and sizes had to be stacked together by hand. Indeed, one of the most unreliable aspects of such a labour-intensive process was the labour. Ports, like mines, were frequently seething pits of industrial unrest. Irregular work on one side combined with what was often a tight-knit, well-organized labour community on the other.
In 1956, loading break-bulk cargo cost $5.83 per ton. The entrepreneurial genius who saw the possibilities for standardized container shipping, Malcolm McLean, floated his first containerized ship in that year and claimed to be able to shift cargo for 15.8 cents a ton. Boxes of the same size that could be loaded by crane and neatly stacked were much faster to load. Moreover, carrying cargo in a standard container would allow it to be shifted between truck, train and ship without having to be repacked each time.
But between McLean's container and the standardization of the global market were an array of formidable obstacles. They began at home in the US with the official Interstate Commerce Commission, which could prevent price competition by setting rates for freight haulage by route and commodity, and the powerful International Longshoremen's Association (ILA) labour union. More broadly, the biggest hurdle was achieving what economists call ‘network effects’: the benefit of a standard technology rises exponentially as more people use it. To dominate world trade, containers had to be easily interchangeable between different shipping lines, ports, trucks and railcars. And to maximize efficiency, they all needed to be the same size. The adoption of a network technology often involves overcoming the resistance of those who are heavily invested in the old system. And while the efficiency gains are clear to see, there are very obvious losers as well as winners. For containerization, perhaps the most spectacular example was the demise of New York City as a port.
In the early 1950s, New York handled a third of US seaborne trade in manufactured goods. But it was woefully inefficient, even with existing break-bulk technology: 283 piers, 98 of which were able to handle ocean-going ships, jutted out into the river from Brooklyn and Manhattan. Trucks bound for the docks had to drive through the crowded, narrow streets of Manhattan, wait for an hour or two before even entering a pier, and then undergo a laborious two-stage process in which the goods were first unloaded into a transit shed and then loaded onto a ship. ‘Public loader’ work gangs held exclusive rights to load and unload on a particular pier, a power in effect granted by the ILA, which enforced its monopoly with sabotage and violence against their competitors.
The ILA fought ferociously against containerization, correctly foreseeing that it would destroy their privileged position as bandits controlling the mountain pass. On this occasion, bypassing them simply involved going across the river. A container port was built in New Jersey, where a 1500-foot wharf allowed ships to dock parallel to shore and containers to be lifted on and off by crane. Between 1963-4 and 1975-6, the number of days worked by longshoremen in Manhattan went from 1.4 million to 127,041.
Containers rapidly captured the transatlantic market, and then the growing trade with Asia. The effect of containerization is hard to see immediately in freight rates, since the oil price hikes of the 1970s kept them high, but the speed with which shippers adopted containerization made it clear it brought big benefits of efficiency and cost. The extraordinary growth of the Asian tiger economies of Singapore, Taiwan, Korea and Hong Kong, which based their development strategy on exports, was greatly helped by the container trade that quickly built up between the US and east Asia. Ocean-borne exports from South Korea were 2.9 million tons in 1969 and 6 million in 1973, and its exports to the US tripled.
But the new technology did not get adopted all on its own. It needed a couple of pushes from government - both, as it happens, largely to do with the military. As far as the ships were concerned, the same link between the merchant and military navy that had inspired the Navigation Acts in seventeenth-century England endured into twentieth-century America. The government's first helping hand was to give a spur to the system by adopting it to transport military cargo. The US armed forces, seeing the efficiency of the system, started contracting McLean's company Pan-Atlantic, later renamed Sea-Land, to carry equipment to the quarter of a million American soldiers stationed in Western Europe. One of the few benefits of America's misadventure in Vietnam was a rapid expansion of containerization. Because war involves massive movements of men and material, it is often armies that pioneer new techniques in supply chains.
The government's other role was in banging heads together sufficiently to get all companies to accept the same size container. Standard sizes were essential to deliver the economies of scale that came from interchangeability - which, as far as the military was concerned, was vital if the ships had to be commandeered in case war broke out. This was a significant problem to overcome, not least because all the companies that had started using the container had settled on different sizes. Pan-Atlantic used 35-foot containers, because that was the maximum size allowed on the highways in its home base in New Jersey. Another of the big shipping companies, Matson Navigation, used a 24-foot container since its biggest trade was in canned pineapple from Hawaii, and a container bigger than that would have been too heavy for a crane to lift. Grace Line, which largely traded with Latin America, used a foot container that was easier to truck around winding mountain roads.
Establishing a US standard and then getting it adopted internationally took more than a decade. Indeed, not only did the US Maritime Administration have to mediate in these rivalries but also to fight its own turf battles with the American Standards Association, an agency set up by the private sector. The matter was settled by using the power of federal money: the Federal Maritime Board (FMB), which handed out public subsidies for shipbuilding, decreed that only the 8 x 8-foot containers in the lengths of 10, 20, 30 or 40 feet would be eligible for handouts.
Identify the correct statement:
 ....