1. Which consumer electronics and durables major recently announced the inauguration of its split air-conditioner manufacturing line at its Sriperumbudur facility?

Answer: Samsung.

Similar Questions and Answers
QA-> Which company recently decided to shut down its matchbox manufacturing unit in Chennai (i.e. the Chennai plant of Wimco) and has given voluntary retirement to all its employees?
QA-> Who discovered the air conditioner?
QA-> The air conditioner was invented by whom?
QA-> Recently, PM Narendra Modi was invited by which country for the inauguration of the Salma Dam?
MCQ-> Read the following passages carefully and answer the questions given at the end of each passage. PASSAGE 1: In a study of 150 emerging nations looking back fifty years, it was found that the single most powerful driver of economic booms was sustained growth in exports, especially of manufactured products. Exporting simple manufactured goods not only increases income and consumption at home, it generates foreign revenues that allow the country to import the machinery and materials needed to improve its factories without running up huge foreign bills and debts. In short, in the case of manufacturing, one good investment leads to another. Once an economy starts down the manufacturing path, its momentum can carry it in the right direction for some time. When the ratio of investment to GDP surpasses 30 percent, it tends to stick at that level for almost nine years (on average). The reason is that many of these nations seemed to show a strong leadership commitment to investment, particularly to investment in manufacturing. Today various international authorities have estimated that the emerging world needs many trillions of dollars of investment in these kinds of transport and communication networks. The modern outlier is India, where investment as a share of the economy exceeded 30 percent of GDP over the course of the 2000s, but little of that money went into factories. Indian manufacturing had been stagnant for decades at around 15 percent of GDP. The stagnation stems from the failures of the state to build functioning ports and power plants and to create an environment in which the rules governing labour, land and capital are designed and enforced in a way that encourages entrepreneurs to invest, particularly in factories. India has disappointed on both counts: creating labour-friendly rules and workable land-acquisition norms. Between 1989 and 2010, India generated about ten million new jobs in manufacturing, but nearly all those jobs were created in enterprises that are small and informal and thus better suited to dodge India's bureaucracy and its extremely restrictive rules regarding firing workers. It is commonly said in India that the labour laws are so onerous that it is practically impossible to comply with even half of them without violating the other half. Informal shops, many of them one-man operations, now account for 39 percent of India's manufacturing workforce, up from 19 percent in 1989, and they are simply too small to compete in global markets. Harvard economist Dani Rodrik calls manufacturing the "automatic escalator" of development, because once a country finds a niche in global manufacturing, productivity often seems to start rising automatically. During its boom years India was growing in large part on the strength of investment in technology service industries, not manufacturing. This was put forward as a development strategy. Instead of growing richer by exporting even more advanced manufactured products, India could grow rich by exporting the services demanded in this new information age. These arguments began to gain traction early in the 2010s. In new research on the "service escalators", a 2014 working paper from the World Bank made the case that the old growth escalator in manufacturing was already giving way to a new one in service industries. The report argued that while manufacturing is in retreat as a share of the global economy and is producing fewer jobs, services are still growing, contributing more to growth in output and jobs for nations rich and poor.
However, one basic problem with the idea of the service escalator is that in the emerging world most of the new service jobs are still in very traditional ventures. A decade on, India's tech sector is still providing relatively simple IT services, mainly in the same back-office operations it started with, and the number of new jobs it is creating is relatively small. In India, only about two million people work in IT services, or less than 1 percent of the workforce. So far the rise of these service industries has not been big enough to drive the mass modernisation of rural farm economies. People can move quickly from working in the fields to working on an assembly line, because both rely for the most part on manual labour. The leap from the farm to the modern service sector is much tougher, since those jobs often require advanced skills. Workers who have moved into IT service jobs have generally come from a pool of relatively better-educated members of the urban middle class, who speak English and have at least some facility with computers. Finding jobs for the underemployed middle class is important, but there are limits to how deeply it can transform the economy, because it is a relatively small part of the population. For now, the rule is still factories first, not services first.

According to the information in the above passage, manufacturing in India has been stagnant because there is
 ...
MCQ-> The broad scientific understanding today is that our planet is experiencing a warming trend over and above natural and normal variations that is almost certainly due to human activities associated with large-scale manufacturing. The process began in the late 1700s with the Industrial Revolution, when manual labor, horsepower, and water power began to be replaced by or enhanced by machines. This revolution, over time, shifted Britain, Europe, and eventually North America from largely agricultural and trading societies to manufacturing ones, relying on machinery and engines rather than tools and animals. The Industrial Revolution was at heart a revolution in the use of energy and power. Its beginning is usually dated to the advent of the steam engine, which was based on the conversion of chemical energy in wood or coal to thermal energy and then to mechanical work - primarily the powering of industrial machinery and steam locomotives. Coal eventually supplanted wood because, pound for pound, coal contains twice as much energy as wood (measured in BTUs, or British thermal units, per pound) and because its use helped to save what was left of the world's temperate forests. Coal was used to produce heat that went directly into industrial processes, including metallurgy, and to warm buildings, as well as to power steam engines. When crude oil came along in the mid-1800s, still a couple of decades before electricity, it was burned, in the form of kerosene, in lamps to make light, replacing whale oil. It was also used to provide heat for buildings and in manufacturing processes, and as a fuel for engines used in industry and propulsion. In short, one can say that the main forms in which humans need and use energy are for light, heat, mechanical work and motive power, and electricity, which can be used to provide any of the other three, as well as to do things that none of those three can do, such as electronic communications and information processing. Since the Industrial Revolution, all these energy functions have been powered primarily, but not exclusively, by fossil fuels that emit carbon dioxide (CO2). To put it another way, the Industrial Revolution gave a whole new prominence to what Rochelle Lefkowitz, president of Pro-Media Communications and an energy buff, calls "fuels from hell" - coal, oil, and natural gas. All these fuels from hell come from underground, are exhaustible, and emit CO2 and other pollutants when they are burned for transportation, heating, and industrial use. These fuels are in contrast to what Lefkowitz calls "fuels from heaven" - wind, hydroelectric, tidal, biomass, and solar power. These all come from above ground, are endlessly renewable, and produce no harmful emissions. Meanwhile, industrialization promoted urbanization, and urbanization eventually gave birth to suburbanization. This trend, which was repeated across America, nurtured the development of the American car culture, the building of a national highway system, and a mushrooming of suburbs around American cities, which rewove the fabric of American life. Many other developed and developing countries followed the American model, with all its upsides and downsides. The result is that today we have suburbs and ribbons of highways that run in, out, and around not only America's major cities, but China's, India's, and South America's as well.
And as these urban areas attract more people, the sprawl extends in every direction. All the coal, oil, and natural gas inputs for this new economic model seemed relatively cheap, relatively inexhaustible, and relatively harmless - or at least relatively easy to clean up afterward. So there wasn't much to stop the juggernaut of more people and more development and more concrete and more buildings and more cars and more coal, oil, and gas needed to build and power them. Summing it all up, Andy Karsner, the Department of Energy's assistant secretary for energy efficiency and renewable energy, once said to me: "We built a really inefficient environment with the greatest efficiency ever known to man." Beginning in the second half of the twentieth century, a scientific understanding began to emerge that an excessive accumulation of largely invisible pollutants - called greenhouse gases - was affecting the climate. The buildup of these greenhouse gases had been under way since the start of the Industrial Revolution in a place we could not see and in a form we could not touch or smell. These greenhouse gases, primarily carbon dioxide emitted from human industrial, residential, and transportation sources, were not piling up along roadsides or in rivers, in cans or empty bottles, but, rather, above our heads, in the earth's atmosphere. If the earth's atmosphere was like a blanket that helped to regulate the planet's temperature, the CO2 buildup was having the effect of thickening that blanket and making the globe warmer. Those bags of CO2 from our cars float up and stay in the atmosphere, along with bags of CO2 from power plants burning coal, oil, and gas, and bags of CO2 released from the burning and clearing of forests, which releases all the carbon stored in trees, plants, and soil. In fact, many people don't realize that deforestation in places like Indonesia and Brazil is responsible for more CO2 than all the world's cars, trucks, planes, ships, and trains combined - that is, about 20 percent of all global emissions. And when we're not tossing bags of carbon dioxide into the atmosphere, we're throwing up other greenhouse gases, like methane (CH4) released from rice farming, petroleum drilling, coal mining, animal defecation, solid waste landfill sites, and yes, even from cattle belching. Cattle belching? That's right - the striking thing about greenhouse gases is the diversity of sources that emit them. A herd of cattle belching can be worse than a highway full of Hummers. Livestock gas is very high in methane, which, like CO2, is colorless and odorless. And like CO2, methane is one of those greenhouse gases that, once released into the atmosphere, also absorb heat radiating from the earth's surface. "Molecule for molecule, methane's heat-trapping power in the atmosphere is twenty-one times stronger than carbon dioxide, the most abundant greenhouse gas," reported Science World (January 21, 2002). "With 1.3 billion cows belching almost constantly around the world (100 million in the United States alone), it's no surprise that methane released by livestock is one of the chief global sources of the gas, according to the U.S. Environmental Protection Agency ... 'It's part of their normal digestion process,' says Tom Wirth of the EPA. 'When they chew their cud, they regurgitate [spit up] some food to rechew it, and all this gas comes out.' The average cow expels 600 liters of methane a day, climate researchers report."
What is the precise scientific relationship between these expanded greenhouse gas emissions and global warming? Experts at the Pew Center on Climate Change offer a handy summary in their report "Climate Change 101." Global average temperatures, notes the Pew study, "have experienced natural shifts throughout human history. For example, the climate of the Northern Hemisphere varied from a relatively warm period between the eleventh and fifteenth centuries to a period of cooler temperatures between the seventeenth century and the middle of the nineteenth century. However, scientists studying the rapid rise in global temperatures during the late twentieth century say that natural variability cannot account for what is happening now." The new factor is the human factor - our vastly increased emissions of carbon dioxide and other greenhouse gases from the burning of fossil fuels such as coal and oil as well as from deforestation, large-scale cattle-grazing, agriculture, and industrialization. "Scientists refer to what has been happening in the earth's atmosphere over the past century as the 'enhanced greenhouse effect'", notes the Pew study. By pumping man-made greenhouse gases into the atmosphere, humans are altering the process by which naturally occurring greenhouse gases, because of their unique molecular structure, trap the sun's heat near the earth's surface before that heat radiates back into space." The greenhouse effect keeps the earth warm and habitable; without it, the earth's surface would be about 60 degrees Fahrenheit colder on average. Since the average temperature of the earth is about 45 degrees Fahrenheit, the natural greenhouse effect is clearly a good thing. But the enhanced greenhouse effect means even more of the sun's heat is trapped, causing global temperatures to rise. Among the many scientific studies providing clear evidence that an enhanced greenhouse effect is under way was a 2005 report from NASA's Goddard Institute for Space Studies. Using satellites, data from buoys, and computer models to study the earth's oceans, scientists concluded that more energy is being absorbed from the sun than is emitted back to space, throwing the earth's energy out of balance and warming the globe."

Which of the following statements is correct?
(I) Greenhouse gases are responsible for global warming. They should be eliminated to save the planet.
(II) CO2 is the most dangerous of the greenhouse gases. Reduction in the release of CO2 would surely bring down the temperature.
(III) The greenhouse effect could be traced back to the Industrial Revolution. But the current development and the patterns of life have enhanced their emissions.
(IV) Deforestation has been one of the biggest factors contributing to the emission of greenhouse gases.
Choose the correct option:...
MCQ-> In a modern computer, electronic and magnetic storage technologies play complementary roles. Electronic memory chips are fast but volatile (their contents are lost when the computer is unplugged). Magnetic tapes and hard disks are slower, but have the advantage that they are non-volatile, so that they can be used to store software and documents even when the power is off. In laboratories around the world, however, researchers are hoping to achieve the best of both worlds. They are trying to build magnetic memory chips that could be used in place of today's electronics. These magnetic memories would be non-volatile; but they would also be faster, would consume less power, and would be able to stand up to hazardous environments more easily. Such chips would have obvious applications in storage cards for digital cameras and music players; they would enable handheld and laptop computers to boot up more quickly and to operate for longer; they would allow desktop computers to run faster; they would doubtless have military and space-faring advantages too. But although the theory behind them looks solid, there are tricky practical problems that need to be overcome. Two different approaches, based on different magnetic phenomena, are being pursued. The first, being investigated by Gary Prinz and his colleagues at the Naval Research Laboratory (NRL) in Washington, D.C., exploits the fact that the electrical resistance of some materials changes in the presence of a magnetic field - a phenomenon known as magneto-resistance. For some multi-layered materials this effect is particularly powerful and is, accordingly, called "giant" magneto-resistance (GMR). Since 1997, the exploitation of GMR has made cheap multi-gigabyte hard disks commonplace. The magnetic orientations of the magnetised spots on the surface of a spinning disk are detected by measuring the changes they induce in the resistance of a tiny sensor. This technique is so sensitive that it means the spots can be made smaller and packed closer together than was previously possible, thus increasing the capacity and reducing the size and cost of a disk drive. Dr. Prinz and his colleagues are now exploiting the same phenomenon on the surface of memory chips, rather than spinning disks. In a conventional memory chip, each binary digit (bit) of data is represented using a capacitor - a reservoir of electrical charge that is either empty or full - to represent a zero or a one. In the NRL's magnetic design, by contrast, each bit is stored in a magnetic element in the form of a vertical pillar of magnetisable material. A matrix of wires passing above and below the elements allows each to be magnetised, either clockwise or anti-clockwise, to represent zero or one. Another set of wires allows current to pass through any particular element. By measuring an element's resistance you can determine its magnetic orientation, and hence whether it is storing a zero or a one. Since the elements retain their magnetic orientation even when the power is off, the result is non-volatile memory. Unlike the elements of an electronic memory, a magnetic memory's elements are not easily disrupted by radiation. And compared with electronic memories, whose capacitors need constant topping up, magnetic memories are simpler and consume less power. The NRL researchers plan to commercialise their device through a company called Non-Volatile Electronics, which recently began work on the necessary processing and fabrication techniques.
But it will be some years before the first chips roll off the production line. Most attention in the field is focused on an alternative approach based on magnetic tunnel-junctions (MTJs), which are being investigated by researchers at chipmakers such as IBM, Motorola, Siemens and Hewlett-Packard. IBM's research team, led by Stuart Parkin, has already created a 500-element working prototype that operates at 20 times the speed of conventional memory chips and consumes 1% of the power. Each element consists of a sandwich of two layers of magnetisable material separated by a barrier of aluminium oxide just four or five atoms thick. The polarisation of the lower magnetisable layer is fixed in one direction, but that of the upper layer can be set (again, by passing a current through a matrix of control wires) either to the left or to the right, to store a zero or a one. The polarisations of the two layers are then either in the same or in opposite directions. Although the aluminium-oxide barrier is an electrical insulator, it is so thin that electrons are able to jump across it via a quantum-mechanical effect called tunnelling. It turns out that such tunnelling is easier when the two magnetic layers are polarised in the same direction than when they are polarised in opposite directions. So, by measuring the current that flows through the sandwich, it is possible to determine the alignment of the topmost layer, and hence whether it is storing a zero or a one. To build a full-scale memory chip based on MTJs is, however, no easy matter. According to Paulo Freitas, an expert on chip manufacturing at the Technical University of Lisbon, magnetic memory elements will have to become far smaller and more reliable than current prototypes if they are to compete with electronic memory. At the same time, they will have to be sensitive enough to respond when the appropriate wires in the control matrix are switched on, but not so sensitive that they respond when a neighbouring element is changed. Despite these difficulties, the general consensus is that MTJs are the more promising idea. Dr. Parkin says his group evaluated the GMR approach and decided not to pursue it, despite the fact that IBM pioneered GMR in hard disks. Dr. Prinz, however, contends that his plan will eventually offer higher storage densities and lower production costs. Not content with shaking up the multi-billion-dollar market for computer memory, some researchers have even more ambitious plans for magnetic computing. In a paper published last month in Science, Russell Cowburn and Mark Welland of Cambridge University outlined research that could form the basis of a magnetic microprocessor - a chip capable of manipulating (rather than merely storing) information magnetically. In place of conducting wires, a magnetic processor would have rows of magnetic dots, each of which could be polarised in one of two directions. Individual bits of information would travel down the rows as magnetic pulses, changing the orientation of the dots as they went. Dr. Cowburn and Dr. Welland have demonstrated how a logic gate (the basic element of a microprocessor) could work in such a scheme. In their experiment, they fed a signal in at one end of the chain of dots and used a second signal to control whether it propagated along the chain. It is, admittedly, a long way from a single logic gate to a full microprocessor, but this was true also when the transistor was first invented.
Dr. Cowburn, who is now searching for backers to help commercialise the technology, says he believes it will be at least ten years before the first magnetic microprocessor is constructed. But other researchers in the field agree that such a chip is the next logical step. Dr. Prinz says that once magnetic memory is sorted out "the target is to go after the logic circuits." Whether all-magnetic computers will ever be able to compete with other contenders that are jostling to knock electronics off its perch - such as optical, biological and quantum computing - remains to be seen. Dr. Cowburn suggests that the future lies with hybrid machines that use different technologies. But computing with magnetism evidently has an attraction all its own.

In developing magnetic memory chips to replace the electronic ones, two alternative research paths are being pursued. These are approaches based on:
 ...
MCQ-> Read the passage carefully and answer the questions given at the end of each passage: Turning the business around involved more than segmenting and pulling out of retail. It also meant maximizing every strength we had in order to boost our profit margins. In re-examining the direct model, we realized that inventory management was not just a core strength; it could be an incredible opportunity for us, and one that had not yet been discovered by any of our competitors. In Version 1.0 of the direct model, we eliminated the reseller, thereby eliminating the mark-up and the cost of maintaining a store. In Version 1.1, we went one step further to reduce inventory inefficiencies. Traditionally, a long chain of partners was involved in getting a product to the customer. Let's say you have a factory building a PC we'll call model #4000. The system is then sent to the distributor, which sends it to the warehouse, which sends it to the dealer, who eventually pushes it on to the consumer by advertising, "I've got model #4000. Come and buy it." If the consumer says, "But I want model #8000," the dealer replies, "Sorry, I only have model #4000." Meanwhile, the factory keeps building model #4000s and pushing the inventory into the channel. The result is a glut of model #4000s that nobody wants. Inevitably, someone ends up with too much inventory, and you see big price corrections. The retailer can't sell it at the suggested retail price, so the manufacturer loses money on price protection (a practice common in our industry of compensating dealers for reductions in suggested selling price). Companies with long, multi-step distribution systems will often fill their distribution channels with products in an attempt to clear out older targets. This dangerous and inefficient practice is called "channel stuffing". Worst of all, the customer ends up paying for it by purchasing systems that are already out of date. Because we were building directly to fill our customers' orders, we didn't have finished goods inventory devaluing on a daily basis. Because we aligned our suppliers to deliver components as we used them, we were able to minimize raw material inventory. Reductions in component costs could be passed on to our customers quickly, which made them happier and improved our competitive advantage. It also allowed us to deliver the latest technology to our customers faster than our competitors. The direct model turns conventional manufacturing inside out. Conventional manufacturing, because your plant can't keep going. But if you don't know what you need to build because of dramatic changes in demand, you run the risk of ending up with terrific amounts of excess and obsolete inventory. That is not the goal. The concept behind the direct model has nothing to do with stockpiling and everything to do with information. The quality of your information is inversely proportional to the amount of assets required, in this case excess inventory. With less information about customer needs, you need massive amounts of inventory. So, if you have great information - that is, you know exactly what people want and how much - you need that much less inventory. Less inventory, of course, corresponds to less inventory depreciation. In the computer industry, component prices are always falling as suppliers introduce faster chips, bigger disk drives and modems with ever-greater bandwidth. Let's say that Dell has six days of inventory.
Compare that to an indirect competitor who has twenty-five days of inventory with another thirty in their distribution channel. That's a difference of forty-nine days, and in forty-nine days, the cost of materials will decline about 6 percent. Then there's the threat of getting stuck with obsolete inventory if you're caught in a transition to a next-generation product, as we were with those memory chips in 1989. As the product approaches the end of its life, the manufacturer has to worry about whether it has too much in the channel and whether a competitor will dump products, destroying profit margins for everyone. This is a perpetual problem in the computer industry, but with the direct model, we have virtually eliminated it. We know when our customers are ready to move on technologically, and we can get out of the market before its most precarious time. We don't have to subsidize our losses by charging higher prices for other products. And ultimately, our customer wins. Optimal inventory management really starts with the design process. You want to design the product so that the entire product supply chain, as well as the manufacturing process, is oriented not just for speed but for what we call velocity. Speed means being fast in the first place. Velocity means squeezing time out of every step in the process. Inventory velocity has become a passion for us. To achieve maximum velocity, you have to design your products in a way that covers the largest part of the market with the fewest number of parts. For example, you don't need nine different disk drives when you can serve 98 percent of the market with only four. We also learned to take into account the variability of the low-cost and high-cost components. Systems were reconfigured to allow for a greater variety of low-cost parts and a limited variety of expensive parts. The goal was to decrease the number of components to manage, which increased the velocity, which decreased the risk of inventory depreciation, which increased the overall health of our business system. We were also able to reduce inventory well below the levels anyone thought possible by constantly challenging and surprising ourselves with the results. We had our internal skeptics when we first started pushing for ever-lower levels of inventory. I remember the head of our procurement group telling me that this was like "flying low to the ground at 300 knots." He was worried that we wouldn't see the trees. In 1993, we had $2.9 billion in sales and $220 million in inventory. Four years later, we posted $12.3 billion in sales and had inventory of $33 million. We're now down to six days of inventory and we're starting to measure it in hours instead of days. Once you reduce your inventory while maintaining your growth rate, a significant amount of risk comes from the transition from one generation of product to the next. Without traditional stockpiles of inventory, it is critical to precisely time the discontinuance of the older product line with the ramp-up in customer demand for the newer one. Since we were introducing new products all the time, it became imperative to avoid the huge drag effect from mistakes made during transitions. E&O - short for "excess and obsolete" - became taboo at Dell. We would debate about whether our E&O was 30 or 50 cents per PC. Since anything less than $20 per PC is not bad, when you're down in the cents range, you're approaching stellar performance.

Find out the TRUE statement:
 ...
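As a quick aside to the inventory passage above, here is a back-of-the-envelope check of the figures it cites. The day counts and dollar amounts are taken directly from the text; the percentages are simple derived ratios and should be read as illustrative arithmetic, not as figures stated by the author.

$$
\underbrace{25 + 30}_{\text{competitor + channel}} = 55 \text{ days}, \qquad 55 - 6 = 49 \text{ days of extra inventory exposure,}
$$
$$
\text{over which component costs fall} \approx 6\% \;\Rightarrow\; \text{roughly } 0.12\% \text{ of cost per day held.}
$$
$$
\text{Inventory-to-sales ratio: } \frac{\$220\text{M}}{\$2.9\text{B}} \approx 7.6\% \ (1993) \;\longrightarrow\; \frac{\$33\text{M}}{\$12.3\text{B}} \approx 0.27\% \ (1997).
$$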
MCQ-> One of the criteria by which we judge the vitality of a style of painting is its ability to renew itself - its responsiveness to the changing nature and quality of experience, the degree of conceptual and formal innovation that it exhibits. By this criterion, it would appear that the practice of abstractionism has failed to engage creatively with the radical change in human experience in recent decades. It has, seemingly, been unwilling to re-invent itself in relation to the systems of artistic expression and viewers' expectations that have developed under the impact of the mass media. The judgement that abstractionism has slipped into 'inertia gear' is gaining endorsement, not only among discerning viewers and practitioners of other art forms, but also among abstract painters themselves. Like their companions elsewhere in the world, abstractionists in India are asking themselves an overwhelming question today: Does abstractionism have a future? The major crisis that abstractionists face is that of revitalising their picture surface; few have improvised any solutions beyond the ones that were exhausted by the 1970s. Like all revolutions, whether in politics or in art, abstractionism must now confront its moment of truth: having begun life as a new and radical pictorial approach to experience, it has become an entrenched orthodoxy itself. Indeed, when viewed against a historical situation in which a variety of subversive, interactive and richly hybrid forms are available to the art practitioner, abstractionism assumes the remote and defiant air of an aristocracy that has outlived its age; trammelled by formulaic conventions yet buttressed by a rhetoric of sacred mystery, it seems condemned to being the last citadel of the self-regarding 'fine art' tradition, the last hurrah of painting for painting's sake. The situation is further complicated in India by the circumstances in which an indigenous abstractionism came into prominence here during the 1960s. From the beginning it was propelled by the dialectic between two motives, one revolutionary and the other conservative - it was inaugurated as an act of emancipation from the dogmas of the nascent Indian nation state, when art was officially viewed as an indulgence at worst, and at best, as an instrument for the celebration of the republic's hopes and aspirations. Having rejected these dogmas, the pioneering abstractionists also went on to reject the various figurative styles associated with the Santiniketan circle and others. In such a situation, abstractionism was a revolutionary move. It led art towards the exploration of the subconscious mind, the spiritual quest and the possible expansion of consciousness. Indian painting entered into a phase of self-inquiry, a meditative inner space where cosmic symbols and non-representational images ruled. Often, the transition from figurative idioms to abstractionist ones took place within the same artist. At the same time, Indian abstractionists have rarely committed themselves wholeheartedly to a non-representational idiom. They have been preoccupied with the fundamentally metaphysical project of aspiring to the mystical-holy without altogether renouncing the symbolic. This has been sustained by a hereditary reluctance to give up the murti, the inviolable iconic form, which explains why abstractionism is marked by the conservative tendency to operate with images from the sacred repertoire of the past.
Abstractionism thus entered India as a double-edged device in a complex cultural transaction. Ideologically, it served as an internationalist legitimisation of the emerging revolutionary local trends. However, on entry, it was conscripted to serve local artistic preoccupations. A survey of indigenous abstractionism will show that its most obvious points of affinity with European and American abstract art were with the more mystically oriented of the major sources of abstractionist philosophy and practice, for instance the Kandinsky-Klee school. There have been no takers for Malevich's Suprematism, which militantly rejected both the artistic forms of the past and the world of appearances, privileging the new-minted geometric symbol as an autonomous sign of the desire for infinity. Against this backdrop, we can identify three major abstractionist idioms in Indian art. The first develops from a love of the earth, and assumes the form of a celebration of the self's dissolution in the cosmic panorama; the landscape is no longer a realistic transcription of the scene, but is transformed into a visionary occasion for contemplating the cycles of decay and regeneration. The second idiom phrases its departures from symbolic and archetypal devices as invitations to heightened planes of awareness. Abstractionism begins with the establishment or dissolution of the motif, which can be drawn from diverse sources, including the hieroglyphic tablet, the Sufi meditation dance or the Tantric diagram. The third idiom is based on the lyric play of forms guided by gesture or allied with formal improvisations like the assemblage. Here, sometimes, the line dividing abstract image from patterned design or quasi-random expressive marking may blur. The flux of forms can also be regimented through the poetics of pure colour arrangements, vector-diagrammatic spaces and gestural design. In this genealogy, some pure lines of descent follow their logic to the inevitable point of extinction, others engage in cross-fertilisation, and yet others undergo mutation to maintain their energy. However, this genealogical survey demonstrates the wave at its crests, those points where the metaphysical and the painterly have been fused in images of abiding potency, ideas sensuously ordained rather than fabricated programmatically to a concept. It is equally possible to enumerate the troughs where the two principles do not come together, thus arriving at a very different account. Uncharitable as it may sound, the history of Indian abstractionism records a series of attempts to avoid the risks of abstraction by resorting to an overt and near-generic symbolism, which many Indian abstractionists embrace when they find themselves bereft of the imaginative energy to negotiate the union of metaphysics and painterliness. Such symbolism falls into a dual trap: it succumbs to the pompous vacuity of pure metaphysics when the burden of intention is passed off as justification; or it is desiccated by the arid formalism of pure painterliness, with delight in the measure of chance or pattern guiding the execution of a painting. The ensuing conflict of purpose stalls the progress of abstractionism in an impasse. The remarkable Indian abstractionists are precisely those who have overcome this and addressed themselves to the basic elements of their art with a decisive sense of independence from prior models. In their recent work, we see the logic of Indian abstractionism pushed almost to the furthest it can be taken.
Beyond such artists stands a lost generation of abstractionists whose work invokes a wistful, delicate beauty but stops there. Abstractionism is not a universal language; it is an art that points up the loss of a shared language of signs in society. And yet, it affirms the possibility of its recovery through the effort of awareness. While its rhetoric has always emphasised a call for new forms of attention, abstractionist practice has tended to fall into a complacent pride in its own incomprehensibility - a complacency fatal in an ethos where vibrant new idioms compete for the viewers' attention. Indian abstractionists ought to really return to basics, to reformulate and replenish their understanding of the nature of the relationship between the painted image and the world around it. But will they abandon their favourite conceptual habits and formal conventions, if this becomes necessary?

Which one of the following is not stated by the author as a reason for abstractionism losing its vitality?
 ...