1. The elevator E and its freight have a total mass of 400 kg. Hoisting is provided by the motor M and the 60-kg block C. If the motor has an efficiency of e = 0.6, determine the power that must be supplied to the motor when the elevator is hoisted upward at a constant speed of vE = m/s.
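The hoisting speed is missing from the question text, and the figure showing the cable and pulley arrangement is unavailable, so the sketch below uses a deliberately simplified model: block C is treated as a plain counterweight hanging on the elevator cable, and a speed of vE = 4 m/s is assumed purely for illustration. Both of these are assumptions, not values from the source.

```python
# Simplified sketch of the motor-power calculation for constant-speed hoisting.
# Assumptions (the original figure is unavailable): block C acts as a simple
# counterweight on the same cable, and v_E = 4 m/s is assumed because the
# speed is missing from the question text.

g = 9.81           # gravitational acceleration, m/s^2
m_elevator = 400   # total mass of elevator E and freight, kg
m_block = 60       # mass of counterweight block C, kg
efficiency = 0.6   # motor efficiency e
v_E = 4.0          # assumed hoisting speed, m/s

# At constant speed there is no acceleration, so the motor's cable force
# only has to balance the net weight (elevator minus counterweight).
force = (m_elevator - m_block) * g        # net force supplied via cable, N
power_output = force * v_E                # mechanical power delivered, W
power_input = power_output / efficiency   # electrical power supplied to motor, W

print(f"Power supplied to motor: {power_input / 1000:.1f} kW")
```

With a different pulley arrangement the speed of block C would differ from vE and the force balance would change, so the actual textbook answer may differ from this simplified estimate.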





Similar Questions and Answers
QA->A car during its journey travels 30 minutes at the speed of 40 km/hr, another 45 minutes at the speed of 60 km/hr, and two hours at a speed of 70 km/hr. Find the average speed of the car?....
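The average-speed question above reduces to total distance divided by total time, which a short sketch makes concrete:

```python
# Average speed = total distance / total time, for the journey described above.
legs = [(0.5, 40), (0.75, 60), (2.0, 70)]     # (hours, km/h) for each leg

total_distance = sum(t * v for t, v in legs)  # 20 + 45 + 140 = 205 km
total_time = sum(t for t, _ in legs)          # 3.25 h
average_speed = total_distance / total_time

print(f"Average speed: {average_speed:.2f} km/h")  # ~63.08 km/h
```

Note that simply averaging the three speeds (40, 60, 70) would be wrong, since the legs last different amounts of time.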
QA->Which remains constant while throwing a ball upward?....
QA->For a body moving with constant speed in a horizontal circle, what remains constant?....
QA->A computer has 8 MB in main memory, 128 KB cache with block size of 4KB. If direct mapping scheme is used, how many different main memory blocks can map into a given physical cache block?....
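The direct-mapping question above is pure arithmetic: count the blocks in main memory and in the cache, then divide. A quick sketch:

```python
# Direct-mapped cache arithmetic for the question above:
# 8 MB main memory, 128 KB cache, 4 KB block size.
MAIN_MEMORY = 8 * 1024 * 1024   # bytes
CACHE_SIZE = 128 * 1024         # bytes
BLOCK_SIZE = 4 * 1024           # bytes

main_blocks = MAIN_MEMORY // BLOCK_SIZE   # 2048 main-memory blocks
cache_blocks = CACHE_SIZE // BLOCK_SIZE   # 32 cache blocks

# In direct mapping, main-memory block i maps to cache block (i mod cache_blocks),
# so each cache block receives main_blocks / cache_blocks distinct memory blocks.
blocks_per_cache_line = main_blocks // cache_blocks

print(blocks_per_cache_line)  # 64
```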
MCQ-> I suggest that the essential character of the Trade Cycle and, especially, the regularity of time-sequence and of duration which justifies us in calling it a cycle, is mainly due to the way in which the marginal efficiency of capital fluctuates. The Trade Cycle is best regarded, I think, as being occasioned by a cyclical change in the marginal efficiency of capital, though complicated and often aggravated by associated changes in the other significant short-period variables of the economic system. By a cyclical movement we mean that as the system progresses in, e.g. the upward direction, the forces propelling it upwards at first gather force and have a cumulative effect on one another but gradually lose their strength until at a certain point they tend to be replaced by forces operating in the opposite direction; which in turn gather force for a time and accentuate one another, until they too, having reached their maximum development, wane and give place to their opposite. We do not, however, merely mean by a cyclical movement that upward and downward tendencies, once started, do not persist for ever in the same direction but are ultimately reversed. We mean also that there is some recognizable degree of regularity in the time-sequence and duration of the upward and downward movements. There is, however, another characteristic of what we call the Trade Cycle which our explanation must cover if it is to be adequate; namely, the phenomenon of the ‘crisis’: the fact that the substitution of a downward for an upward tendency often takes place suddenly and violently, whereas there is, as a rule, no such sharp turning-point when an upward is substituted for a downward tendency. Any fluctuation in investment not offset by a corresponding change in the propensity to consume will, of course, result in a fluctuation in employment.
Since, therefore, the volume of investment is subject to highly complex influences, it is highly improbable that all fluctuations either in investment itself or in the marginal efficiency of capital will be of a cyclical character. We have seen above that the marginal efficiency of capital depends, not only on the existing abundance or scarcity of capital-goods and the current cost of production of capital-goods, but also on current expectations as to the future yield of capital-goods. In the case of durable assets it is, therefore, natural and reasonable that expectations of the future should play a dominant part in determining the scale on which new investment is deemed advisable. But, as we have seen, the basis for such expectations is very precarious. Being based on shifting and unreliable evidence, they are subject to sudden and violent changes. Now, we have been accustomed in explaining the ‘crisis’ to lay stress on the rising tendency of the rate of interest under the influence of the increased demand for money both for trade and speculative purposes. At times this factor may certainly play an aggravating and, occasionally perhaps, an initiating part. But I suggest that a more typical, and often the predominant, explanation of the crisis is, not primarily a rise in the rate of interest, but a sudden collapse in the marginal efficiency of capital. The later stages of the boom are characterized by optimistic expectations as to the future yield of capital goods sufficiently strong to offset their growing abundance and their rising costs of production and, probably, a rise in the rate of interest also.
It is of the nature of organized investment markets, under the influence of purchasers largely ignorant of what they are buying and of speculators who are more concerned with forecasting the next shift of market sentiment than with a reasonable estimate of the future yield of capital-assets, that, when disillusion falls upon an over-optimistic and over-bought market, it should fall with sudden and even catastrophic force. Moreover, the dismay and uncertainty as to the future which accompanies a collapse in the marginal efficiency of capital naturally precipitates a sharp increase in liquidity-preference and hence a rise in the rate of interest. Thus the fact that a collapse in the marginal efficiency of capital tends to be associated with a rise in the rate of interest may seriously aggravate the decline in investment. But the essence of the situation is to be found, nevertheless, in the collapse in the marginal efficiency of capital, particularly in the case of those types of capital which have been contributing most to the previous phase of heavy new investment. Liquidity preference, except those manifestations of it which are associated with increasing trade and speculation, does not increase until after the collapse in the marginal efficiency of capital. It is this, indeed, which renders the slump so intractable. Which of the following does not describe the features of cyclical movement?
 ....
MCQ-> Before the internet, one of the most rapid changes to the global economy and trade was wrought by something so blatantly useful that it is hard to imagine a struggle to get it adopted: the shipping container. In the early 1960s, before the standard container became ubiquitous, freight costs were 10 per cent of the value of US imports, about the same barrier to trade as the average official government import tariff. Yet in a journey that went halfway round the world, half of those costs could be incurred in two ten-mile movements through the ports at either end. The predominant ‘break-bulk’ method, where each shipment was individually split up into loads that could be handled by a team of dockers, was vastly complex and labour-intensive. Ships could take weeks or months to load, as a huge variety of cargoes of different weights, shapes and sizes had to be stacked together by hand. Indeed, one of the most unreliable aspects of such a labour-intensive process was the labour. Ports, like mines, were frequently seething pits of industrial unrest. Irregular work on one side combined with what was often a tight-knit, well-organized labour community on the other. In 1956, loading break-bulk cargo cost $5.83 per ton. The entrepreneurial genius who saw the possibilities for standardized container shipping, Malcolm McLean, floated his first containerized ship in that year and claimed to be able to shift cargo for 15.8 cents a ton. Boxes of the same size that could be loaded by crane and neatly stacked were much faster to load. Moreover, carrying cargo in a standard container would allow it to be shifted between truck, train and ship without having to be repacked each time. But between McLean’s container and the standardization of the global market were an array of formidable obstacles.
They began at home in the US with the official Interstate Commerce Commission, which could prevent price competition by setting rates for freight haulage by route and commodity, and the powerful International Longshoremen's Association (ILA) labour union. More broadly, the biggest hurdle was achieving what economists call ‘network effects’: the benefit of a standard technology rises exponentially as more people use it. To dominate world trade, containers had to be easily interchangeable between different shipping lines, ports, trucks and railcars. And to maximize efficiency, they all needed to be the same size. The adoption of a network technology often involves overcoming the resistance of those who are heavily invested in the old system. And while the efficiency gains are clear to see, there are very obvious losers as well as winners. For containerization, perhaps the most spectacular example was the demise of New York City as a port. In the early 1950s, New York handled a third of US seaborne trade in manufactured goods. But it was woefully inefficient, even with existing break-bulk technology: 283 piers, 98 of which were able to handle ocean-going ships, jutted out into the river from Brooklyn and Manhattan. Trucks bound for the docks had to drive through the crowded, narrow streets of Manhattan, wait for an hour or two before even entering a pier, and then undergo a laborious two-stage process in which the goods were first unloaded into a transit shed and then loaded onto a ship. ‘Public loader’ work gangs held exclusive rights to load and unload on a particular pier, a power in effect granted by the ILA, which enforced its monopoly with sabotage and violence against their competitors. The ILA fought ferociously against containerization, correctly foreseeing that it would destroy their privileged position as bandits controlling the mountain pass. On this occasion, bypassing them simply involved going across the river.
A container port was built in New Jersey, where a 1500-foot wharf allowed ships to dock parallel to shore and containers to be lifted on and off by crane. Between 1963-4 and 1975-6, the number of days worked by longshoremen in Manhattan went from 1.4 million to 127,041. Containers rapidly captured the transatlantic market, and then the growing trade with Asia. The effect of containerization is hard to see immediately in freight rates, since the oil price hikes of the 1970s kept them high, but the speed with which shippers adopted containerization made it clear it brought big benefits of efficiency and cost. The extraordinary growth of the Asian tiger economies of Singapore, Taiwan, Korea and Hong Kong, which based their development strategy on exports, was greatly helped by the container trade that quickly built up between the US and east Asia. Ocean-borne exports from South Korea were 2.9 million tons in 1969 and 6 million in 1973, and its exports to the US tripled. But the new technology did not get adopted all on its own. It needed a couple of pushes from government - both, as it happens, largely to do with the military. As far as the ships were concerned, the same link between the merchant and military navy that had inspired the Navigation Acts in seventeenth-century England endured into twentieth-century America. The government's first helping hand was to give a spur to the system by adopting it to transport military cargo. The US armed forces, seeing the efficiency of the system, started contracting McLean’s company Pan-Atlantic, later renamed Sea-Land, to carry equipment to the quarter of a million American soldiers stationed in Western Europe. One of the few benefits of America's misadventure in Vietnam was a rapid expansion of containerization.
Because war involves massive movements of men and material, it is often armies that pioneer new techniques in supply chains. The government’s other role was in banging heads together sufficiently to get all companies to accept the same size container. Standard sizes were essential to deliver the economies of scale that came from interchangeability - which, as far as the military was concerned, was vital if the ships had to be commandeered in case war broke out. This was a significant problem to overcome, not least because all the companies that had started using the container had settled on different sizes. Pan-Atlantic used 35-foot containers, because that was the maximum size allowed on the highways in its home base in New Jersey. Another of the big shipping companies, Matson Navigation, used a 24-foot container since its biggest trade was in canned pineapple from Hawaii, and a container bigger than that would have been too heavy for a crane to lift. Grace Line, which largely traded with Latin America, used a foot container that was easier to truck around winding mountain roads. Establishing a US standard and then getting it adopted internationally took more than a decade. Indeed, not only did the US Maritime Administration have to mediate in these rivalries but also to fight its own turf battles with the American Standards Association, an agency set up by the private sector. The matter was settled by using the power of federal money: the Federal Maritime Board (FMB), which handed out public subsidies for shipbuilding, decreed that only the 8 x 8-foot containers in the lengths of 10, 20, 30 or 40 feet would be eligible for handouts. Identify the correct statement:
 ....
MCQ-> Read carefully the four passages that follow and answer the questions given at the end of each passage: PASSAGE I The most important task is revitalizing the institution of independent directors. The independent directors of a company should be faithful fiduciaries protecting the long-term interests of shareholders while ensuring fairness to employees, investors, customers, regulators, the government of the land and society. Unfortunately, very often, directors are chosen based on friendship and, sadly, pliability. Today, unfortunately, in the majority of cases, independence is only true on paper. The need of the hour is to strengthen the independence of the board. We have to put in place stringent standards for the independence of directors. The board should adopt global standards for director-independence, and should disclose how each independent director meets these standards. It is desirable to have a comprehensive report showing the names of the company employees or fellow board members who are related to each director on the board. This report should accompany the annual report of all listed companies. Another important step is to regularly assess the board members for performance. The assessment should focus on issues like competence, preparation, participation and contribution. Ideally, this evaluation should be performed by a third party. Underperforming directors should be allowed to leave at the end of their term in a gentle manner so that they do not lose face. Rather than being the rubber stamp of a company’s management policies, the board should become a true active partner of the management. For this, independent directors should be trained in their roles and responsibilities. Independent directors should be trained on the business model and risk model of the company, on the governance practices, and the responsibilities of various committees of the board of the company.
The board members should interact frequently with executives to understand operational issues. As part of the board meeting agenda, the independent directors should have a meeting among themselves without the management being present. The independent board members should periodically review the performance of the company’s CEO, the internal directors and the senior management. This has to be based on clearly defined objective criteria, and these criteria should be known to the CEO and other executive directors well before the start of the evaluation period. Moreover, there should be a clearly laid down procedure for communicating the board’s review to the CEO and his/her team of executive directors. Managerial remuneration should be based on such reviews. Additionally, senior management compensation should be determined by the board in a manner that is fair to all stakeholders. We have to look at three important criteria in deciding managerial remuneration: fairness, accountability and transparency. Fairness of compensation is determined by how employees and investors react to the compensation of the CEO. Accountability is enhanced by splitting the total compensation into a small fixed component and a large variable component. In other words, the CEO, other executive directors and the senior management should rise or fall with the fortunes of the company. The variable component should be linked to achieving the long-term objectives of the firm. Senior management compensation should be reviewed by the compensation committee of the board consisting of only the independent directors. This should be approved by the shareholders. It is important that no member of the internal management has a say in the compensation of the CEO, the internal board members or the senior management. The SEBI regulations and the CII code of conduct have been very helpful in enhancing the level of accountability of independent directors.
The independent directors should decide voluntarily how they want to contribute to the company. Their performance should be appraised through a peer evaluation process. Ideally, the compensation committee should decide on the compensation of each independent director based on such a performance appraisal. Auditing is another major area that needs reforms for effective corporate governance. An audit is the independent examination of financial transactions of any entity to provide assurance to shareholders and other stakeholders that the financial statements are free of material misstatement. Auditors are qualified professionals appointed by the shareholders to report on the reliability of financial statements prepared by the management. Financial markets look to the auditor’s report for an independent opinion on the financial and risk situation of a company. We have to separate such auditing from other services. For a truly independent opinion, the auditing firm should not provide services that are perceived to be materially in conflict with the role of the auditor. These include investigations, consulting advice, subcontracting of operational activities normally undertaken by the management, due diligence on potential acquisitions or investments, advice on deal structuring, designing/implementing IT systems, bookkeeping, valuations and executive recruitment. Any departure from this practice should be approved by the audit committee in advance. Further, information on any such exceptions must be disclosed in the company’s quarterly and annual reports. To ensure the integrity of the audit team, it is desirable to rotate audit partners. The lead audit partner and the audit partner responsible for reviewing a company’s audit must be rotated at least once every three to five years.
This eliminates the possibility of the lead auditor and the company management getting into the kind of close, cozy relationship that results in lower objectivity in audit opinions. Further, a registered auditor should not audit a company whose chief accounting officer was associated with the auditing firm. It is best that members of the audit teams are prohibited from taking up employment in the audited corporations for at least a year after they have stopped being members of the audit team. A competent audit committee is essential to effectively oversee the financial accounting and reporting process. Hence, each member of the audit committee must be ‘financially literate’; further, at least one member of the audit committee, preferably the chairman, should be a financial expert: a person who has an understanding of financial statements and accounting rules, and has experience in auditing. The audit committee should establish procedures for the treatment of complaints received through anonymous submission by employees and whistleblowers. These complaints may be regarding questionable accounting or auditing issues, any harassment to an employee or any unethical practice in the company. The whistleblowers must be protected. Any related-party transaction should require prior approval by the audit committee, the full board and the shareholders if it is material. Related parties are those that are able to control or exercise significant influence. These include: parent-subsidiary relationships; entities under common control; individuals who, through ownership, have significant influence over the enterprise and close members of their families; and key management personnel. Accounting standards provide a framework for preparation and presentation of financial statements and assist auditors in forming an opinion on the financial statements. However, today, accounting standards are issued by bodies comprising primarily accountants.
Therefore, accounting standards do not always keep pace with changes in the business environment. Hence, the accounting standards-setting body should include members drawn from the industry, the profession and regulatory bodies. This body should be independently funded. Currently, an independent oversight of the accounting profession does not exist. Hence, an independent body should be constituted to oversee the functioning of auditors for independence, the quality of audit and professional competence. This body should comprise a majority of non-practicing accountants to ensure independent oversight. To avoid any bias, the chairman of this body should not have practiced as an accountant during the preceding five years. Auditors of all public companies must register with this body. It should enforce compliance with the laws by auditors and should mandate that auditors must maintain audit working papers for at least seven years. To ensure the materiality of information, the CEO and CFO of the company should certify annual and quarterly reports. They should certify that the information in the reports fairly presents the financial condition and results of operations of the company, and that all material facts have been disclosed. Further, CEOs and CFOs should certify that they have established internal controls to ensure that all information relating to the operations of the company is freely available to the auditors and the audit committee. They should also certify that they have evaluated the effectiveness of these controls within ninety days prior to the report. False certifications by the CEO and CFO should be subject to significant criminal penalties (fines and imprisonment, if willful and knowing).
If a company is required to restate its reports due to material non-compliance with the laws, the CEO and CFO must face severe punishment including loss of job and forfeiting bonuses or equity-based compensation received during the twelve months following the filing. The problem with the independent directors has been that: I. Their selection has been based upon their compatibility with the company management II. There has been lack of proper training and development to improve their skill set III. Their independent views have often come in conflict with the views of company management. This has hindered the company’s decision-making process IV. Stringent standards for independent directors have been lacking....
MCQ-> In a modern computer, electronic and magnetic storage technologies play complementary roles. Electronic memory chips are fast but volatile (their contents are lost when the computer is unplugged). Magnetic tapes and hard disks are slower, but have the advantage that they are non-volatile, so that they can be used to store software and documents even when the power is off. In laboratories around the world, however, researchers are hoping to achieve the best of both worlds. They are trying to build magnetic memory chips that could be used in place of today’s electronics. These magnetic memories would be non-volatile; but they would also be faster, would consume less power, and would be able to stand up to hazardous environments more easily. Such chips would have obvious applications in storage cards for digital cameras and music-players; they would enable handheld and laptop computers to boot up more quickly and to operate for longer; they would allow desktop computers to run faster; they would doubtless have military and space-faring advantages too. But although the theory behind them looks solid, there are tricky practical problems that need to be overcome. Two different approaches, based on different magnetic phenomena, are being pursued. The first, being investigated by Gary Prinz and his colleagues at the Naval Research Laboratory (NRL) in Washington, DC, exploits the fact that the electrical resistance of some materials changes in the presence of a magnetic field — a phenomenon known as magneto-resistance. For some multi-layered materials this effect is particularly powerful and is, accordingly, called “giant” magneto-resistance (GMR). Since 1997, the exploitation of GMR has made cheap multi-gigabyte hard disks commonplace. The magnetic orientations of the magnetised spots on the surface of a spinning disk are detected by measuring the changes they induce in the resistance of a tiny sensor.
This technique is so sensitive that it means the spots can be made smaller and packed closer together than was previously possible, thus increasing the capacity and reducing the size and cost of a disk drive. Dr. Prinz and his colleagues are now exploiting the same phenomenon on the surface of memory chips, rather than spinning disks. In a conventional memory chip, each binary digit (bit) of data is represented using a capacitor, a reservoir of electrical charge that is either empty or full, to represent a zero or a one. In the NRL’s magnetic design, by contrast, each bit is stored in a magnetic element in the form of a vertical pillar of magnetisable material. A matrix of wires passing above and below the elements allows each to be magnetised, either clockwise or anti-clockwise, to represent zero or one. Another set of wires allows current to pass through any particular element. By measuring an element’s resistance you can determine its magnetic orientation, and hence whether it is storing a zero or a one. Since the elements retain their magnetic orientation even when the power is off, the result is non-volatile memory. Unlike the elements of an electronic memory, a magnetic memory’s elements are not easily disrupted by radiation. And compared with electronic memories, whose capacitors need constant topping up, magnetic memories are simpler and consume less power. The NRL researchers plan to commercialise their device through a company called Non-Volatile Electronics, which recently began work on the necessary processing and fabrication techniques. But it will be some years before the first chips roll off the production line. Most attention in the field is focused on an alternative approach based on magnetic tunnel-junctions (MTJs), which are being investigated by researchers at chipmakers such as IBM, Motorola, Siemens and Hewlett-Packard.
IBM’s research team, led by Stuart Parkin, has already created a 500-element working prototype that operates at 20 times the speed of conventional memory chips and consumes 1% of the power. Each element consists of a sandwich of two layers of magnetisable material separated by a barrier of aluminium oxide just four or five atoms thick. The polarisation of the lower magnetisable layer is fixed in one direction, but that of the upper layer can be set (again, by passing a current through a matrix of control wires) either to the left or to the right, to store a zero or a one. The polarisations of the two layers are then either in the same or in opposite directions. Although the aluminium-oxide barrier is an electrical insulator, it is so thin that electrons are able to jump across it via a quantum-mechanical effect called tunnelling. It turns out that such tunnelling is easier when the two magnetic layers are polarised in the same direction than when they are polarised in opposite directions. So, by measuring the current that flows through the sandwich, it is possible to determine the alignment of the topmost layer, and hence whether it is storing a zero or a one. To build a full-scale memory chip based on MTJs is, however, no easy matter. According to Paulo Freitas, an expert on chip manufacturing at the Technical University of Lisbon, magnetic memory elements will have to become far smaller and more reliable than current prototypes if they are to compete with electronic memory. At the same time, they will have to be sensitive enough to respond when the appropriate wires in the control matrix are switched on, but not so sensitive that they respond when a neighbouring element is changed. Despite these difficulties, the general consensus is that MTJs are the more promising idea. Dr. Parkin says his group evaluated the GMR approach and decided not to pursue it, despite the fact that IBM pioneered GMR in hard disks. Dr.
Prinz, however, contends that his plan will eventually offer higher storage densities and lower production costs. Not content with shaking up the multi-billion-dollar market for computer memory, some researchers have even more ambitious plans for magnetic computing. In a paper published last month in Science, Russell Cowburn and Mark Welland of Cambridge University outlined research that could form the basis of a magnetic microprocessor — a chip capable of manipulating (rather than merely storing) information magnetically. In place of conducting wires, a magnetic processor would have rows of magnetic dots, each of which could be polarised in one of two directions. Individual bits of information would travel down the rows as magnetic pulses, changing the orientation of the dots as they went. Dr. Cowburn and Dr. Welland have demonstrated how a logic gate (the basic element of a microprocessor) could work in such a scheme. In their experiment, they fed a signal in at one end of the chain of dots and used a second signal to control whether it propagated along the chain. It is, admittedly, a long way from a single logic gate to a full microprocessor, but this was true also when the transistor was first invented. Dr. Cowburn, who is now searching for backers to help commercialise the technology, says he believes it will be at least ten years before the first magnetic microprocessor is constructed. But other researchers in the field agree that such a chip is the next logical step. Dr. Prinz says that once magnetic memory is sorted out “the target is to go after the logic circuits.” Whether all-magnetic computers will ever be able to compete with other contenders that are jostling to knock electronics off its perch — such as optical, biological and quantum computing — remains to be seen. Dr. Cowburn suggests that the future lies with hybrid machines that use different technologies.
But computing with magnetism evidently has an attraction all its own. In developing magnetic memory chips to replace the electronic ones, two alternative research paths are being pursued. These are approaches based on:
 ....
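The matrix-addressed magnetic memory the passage describes (each element stores a bit as a magnetic orientation, written by addressing row and column wires and read back by sensing resistance) can be sketched as a toy software model. This is purely illustrative: the class, method names and the two resistance values are invented for the sketch, not taken from the NRL or IBM designs.

```python
# Toy model of the matrix-addressed magnetic memory described above.
# All names and the two resistance levels are illustrative assumptions.

R_LOW, R_HIGH = 100.0, 150.0  # ohms; hypothetical GMR resistance levels

class MagneticMemory:
    def __init__(self, rows, cols):
        # 0 = clockwise orientation, 1 = anti-clockwise; the state persists
        # with no refresh, modelling the non-volatility of magnetic elements.
        self.cells = [[0] * cols for _ in range(rows)]

    def write(self, row, col, bit):
        # Energising the matching row and column wires sets the element's
        # magnetic orientation.
        self.cells[row][col] = bit

    def read(self, row, col):
        # Pass a current through the element and sense its resistance;
        # the stored bit follows from which resistance level is seen.
        resistance = R_HIGH if self.cells[row][col] else R_LOW
        return 1 if resistance == R_HIGH else 0

mem = MagneticMemory(4, 4)
mem.write(2, 3, 1)
print(mem.read(2, 3), mem.read(0, 0))  # 1 0
```

The read path is the key difference from a capacitor-based DRAM cell: the bit is inferred from resistance rather than from stored charge, so no periodic refresh is needed.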