1. Why does the data communication industry use the layered OSI reference model?

A. It divides the network communication process into smaller and simpler components, thus aiding component development, design, and troubleshooting.
B. It enables equipment from different vendors to use the same electronic components, thus saving research and development funds.
C. It supports the evolution of multiple competing standards and thus provides business opportunities for equipment manufacturers.
D. It encourages industry standardization by defining what functions occur at each layer of the model.
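The point behind options A and D, that layering splits the communication process into smaller pieces with a defined function per layer, can be illustrated with a short sketch. The following is a minimal, hypothetical Python example: the layer labels and bracketed "headers" are illustrative assumptions only, not any real protocol's format, and the four-layer stack is a simplification of the seven-layer OSI model.

```python
# Minimal sketch (illustrative only): each "layer" adds its own header on the
# way down and strips it on the way up, so each layer can be developed,
# designed, and troubleshot independently of the others.

from dataclasses import dataclass


@dataclass
class Layer:
    name: str  # hypothetical label, e.g. "Transport", "Network"

    def encapsulate(self, payload: str) -> str:
        # Wrap the data handed down from the layer above with this layer's header.
        return f"[{self.name}]{payload}"

    def decapsulate(self, frame: str) -> str:
        # Remove this layer's header and pass the rest up to the layer above.
        prefix = f"[{self.name}]"
        assert frame.startswith(prefix), f"{self.name}: unexpected header"
        return frame[len(prefix):]


# A toy four-layer stack (the real OSI model defines seven layers).
stack = [Layer("Application"), Layer("Transport"), Layer("Network"), Layer("DataLink")]

# Sender side: walk down the stack, encapsulating at each layer.
pdu = "hello"
for layer in stack:
    pdu = layer.encapsulate(pdu)
print(pdu)  # [DataLink][Network][Transport][Application]hello

# Receiver side: walk back up, decapsulating in the reverse order.
for layer in reversed(stack):
    pdu = layer.decapsulate(pdu)
print(pdu)  # hello
```

Because each layer only touches its own header, a change inside one layer (say, a new data-link technology) does not require rewriting the layers above it, which is exactly why the model aids standardization and troubleshooting.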




