1. The design concept of using building blocks of circuits in a PLD program is called a(n):
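For reference, the concept this question points at is commonly called hierarchical design: a PLD program is assembled from smaller, reusable circuit blocks rather than written as one flat netlist. A hardware description language is the natural setting, but as a rough Python sketch of the same idea (all function names here are invented for the illustration, not taken from any real HDL or tool), gate-level blocks can be composed into a half-adder, which is itself reused inside a full adder:

```python
# A loose illustration of hierarchical design: small circuit "building
# blocks" are defined once and composed into larger blocks, the way a
# PLD program reuses lower-level modules. Names are hypothetical.

def xor_gate(a: int, b: int) -> int:
    return a ^ b

def and_gate(a: int, b: int) -> int:
    return a & b

def or_gate(a: int, b: int) -> int:
    return a | b

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Block built from gate-level blocks: returns (sum, carry)."""
    return xor_gate(a, b), and_gate(a, b)

def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """Higher-level block built from two half-adders plus an OR gate."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, or_gate(c1, c2)

# The full adder never rebuilds its carry logic from raw gates; it
# reuses the half-adder block, which is the essence of the approach.
print(full_adder(1, 1, 1))  # -> (1, 1)
```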
Similar Questions and Answers
QA->Which electrical circuit is used to get a smooth DC output from a rectifier circuit?....
QA->What can be considered the basic building blocks of a digital circuit?....
QA->In program design, if the number of conditions in a decision table is 'n', the maximum number of rules (columns) possible is:.... (see the sketch just below this list)
QA->Stored Program Concept was postulated by:....
QA->What is the social welfare program "Integrated Rural Development Program (IRDP) -1978" intended for?....
MCQ->The design concept of using building blocks of circuits in a PLD program is called a(n):....
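As a side note on the decision-table question in the list above: each rule column of a decision table is one distinct combination of yes/no outcomes for the n conditions, so the maximum number of rules is 2^n. A quick sketch that enumerates them (illustrative only):

```python
# With n binary (Y/N) conditions, every possible rule column is one
# combination of outcomes, so there are at most 2**n rule columns.
from itertools import product

def rule_columns(n: int) -> list[tuple[str, ...]]:
    """Enumerate all possible rule columns for n binary conditions."""
    return list(product("YN", repeat=n))

cols = rule_columns(3)
print(len(cols))  # 8, i.e. 2**3
print(cols[:4])   # [('Y','Y','Y'), ('Y','Y','N'), ('Y','N','Y'), ('Y','N','N')]
```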
MCQ-> Read the passage carefully and answer the questions given at the end of each passage:

Turning the business around involved more than segmenting and pulling out of retail. It also meant maximizing every strength we had in order to boost our profit margins. In re-examining the direct model, we realized that inventory management was not just a core strength; it could be an incredible opportunity for us, and one that had not yet been discovered by any of our competitors. In Version 1.0 of the direct model, we eliminated the reseller, thereby eliminating the mark-up and the cost of maintaining a store. In Version 1.1, we went one step further to reduce inventory inefficiencies.

Traditionally, a long chain of partners was involved in getting a product to the customer. Let's say you have a factory building a PC we'll call model #4000. The system is then sent to the distributor, which sends it to the warehouse, which sends it to the dealer, who eventually pushes it on to the consumer by advertising, "I've got model #4000. Come and buy it." If the consumer says, "But I want model #8000," the dealer replies, "Sorry, I only have model #4000." Meanwhile, the factory keeps building model #4000s and pushing the inventory into the channel. The result is a glut of model #4000s that nobody wants. Inevitably, someone ends up with too much inventory, and you see big price corrections. The retailer can't sell it at the suggested retail price, so the manufacturer loses money on price protection (a practice common in our industry of compensating dealers for reductions in suggested selling price). Companies with long, multi-step distribution systems will often fill their distribution channels with products in an attempt to clear out older products. This dangerous and inefficient practice is called "channel stuffing". Worst of all, the customer ends up paying for it by purchasing systems that are already out of date.

Because we were building directly to fill our customers' orders, we didn't have finished goods inventory devaluing on a daily basis. Because we aligned our suppliers to deliver components as we used them, we were able to minimize raw material inventory. Reductions in component costs could be passed on to our customers quickly, which made them happier and improved our competitive advantage. It also allowed us to deliver the latest technology to our customers faster than our competitors.

The direct model turns conventional manufacturing inside out. Conventional manufacturing requires you to keep building product, because your plant can't keep stopping and starting. But if you don't know what you need to build because of dramatic changes in demand, you run the risk of ending up with terrific amounts of excess and obsolete inventory. That is not the goal. The concept behind the direct model has nothing to do with stockpiling and everything to do with information. The quality of your information is inversely proportional to the amount of assets required, in this case excess inventory. With less information about customer needs, you need massive amounts of inventory. So, if you have great information, that is, you know exactly what people want and how much, you need that much less inventory. Less inventory, of course, corresponds to less inventory depreciation. In the computer industry, component prices are always falling as suppliers introduce faster chips, bigger disk drives and modems with ever-greater bandwidth. Let's say that Dell has six days of inventory. Compare that to an indirect competitor who has twenty-five days of inventory with another thirty in their distribution channel. That's a difference of forty-nine days, and in forty-nine days, the cost of materials will decline about 6 percent. Then there's the threat of getting stuck with obsolete inventory if you're caught in a transition to a next-generation product, as we were with those memory chips in 1989. As the product approaches the end of its life, the manufacturer has to worry about whether it has too much in the channel and whether a competitor will dump products, destroying profit margins for everyone. This is a perpetual problem in the computer industry, but with the direct model, we have virtually eliminated it. We know when our customers are ready to move on technologically, and we can get out of the market before its most precarious time. We don't have to subsidize our losses by charging higher prices for other products. And ultimately, our customer wins.

Optimal inventory management really starts with the design process. You want to design the product so that the entire product supply chain, as well as the manufacturing process, is oriented not just for speed but for what we call velocity. Speed means being fast in the first place. Velocity means squeezing time out of every step in the process. Inventory velocity has become a passion for us. To achieve maximum velocity, you have to design your products in a way that covers the largest part of the market with the fewest number of parts. For example, you don't need nine different disk drives when you can serve 98 percent of the market with only four. We also learned to take into account the variability of the low-cost and high-cost components. Systems were reconfigured to allow for a greater variety of low-cost parts and a limited variety of expensive parts. The goal was to decrease the number of components to manage, which increased the velocity, which decreased the risk of inventory depreciation, which increased the overall health of our business system.

We were also able to reduce inventory well below the levels anyone thought possible by constantly challenging and surprising ourselves with the results. We had our internal skeptics when we first started pushing for ever-lower levels of inventory. I remember the head of our procurement group telling me that this was like "flying low to the ground at 300 knots." He was worried that we wouldn't see the trees. In 1993, we had $2.9 billion in sales and $220 million in inventory. Four years later, we posted $12.3 billion in sales and had inventory of $33 million. We're now down to six days of inventory and we're starting to measure it in hours instead of days.

Once you reduce your inventory while maintaining your growth rate, a significant amount of risk comes from the transition from one generation of product to the next. Without traditional stockpiles of inventory, it is critical to precisely time the discontinuance of the older product line with the ramp-up in customer demand for the newer one. Since we were introducing new products all the time, it became imperative to avoid the huge drag effect from mistakes made during transitions. E&O, short for "excess and obsolete", became taboo at Dell. We would debate whether our E&O was 30 or 50 cents per PC. Since anything less than $20 per PC is not bad, when you're down in the cents range, you're approaching stellar performance.

Find out the TRUE statement:
 ....
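The inventory arithmetic in the passage above is easy to check: six days of inventory against a competitor's twenty-five days plus thirty more in the channel gives a forty-nine-day gap, over which the passage says component costs fall about 6 percent. A small sketch of that calculation (the 6 percent figure is the passage's own; the per-unit cost is an invented illustration):

```python
# Back-of-the-envelope version of the passage's inventory arithmetic.
dell_days = 6
competitor_days = 25 + 30          # own inventory + distribution channel
gap_days = competitor_days - dell_days
print(gap_days)                    # 49

decline_over_gap = 0.06            # passage: ~6% price decline in 49 days
unit_material_cost = 1000.0        # hypothetical component cost, in $

# The competitor effectively sells parts bought 49 days earlier, at
# prices ~6% above what the direct seller pays today.
competitor_cost = unit_material_cost
dell_cost = unit_material_cost * (1 - decline_over_gap)
print(competitor_cost - dell_cost)  # $60.0 advantage per $1000 of parts
```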
MCQ-> Analyse the following passage and provide appropriate answers for the questions that follow.

The understanding that the brain has areas of specialization has brought with it the tendency to teach in ways that reflect these specialized functions. For example, research concerning the specialized functions of the left and right hemispheres has led to left- and right-hemisphere teaching. Recent research suggests that such an approach neither reflects how the brain learns, nor how it functions once learning has occurred. To the contrary, in most 'higher vertebrates', brain systems interact together as a whole brain with the external world. Learning is about making connections within the brain and between the brain and the outside world.

What does this mean? Until recently, the idea that the neural basis for learning resided in connections between neurons remained a speculation. Now, there is direct evidence that when learning occurs, neuro-chemical communication between neurons is facilitated, and less input is required to activate established connections over time. This evidence also indicates that learning creates connections not only between adjacent neurons but also between distant neurons, and that connections are made from simple circuits to complex ones and from complex circuits to simple ones.

As connections are formed among adjacent neurons to form circuits, connections also begin to form with neurons in other regions of the brain that are associated with visual, tactile, and even olfactory information related to the sound of the word. Meaning is attributed to the 'sounds of words' because of these connections. Some of the brain sites for these other neurons are far from the neural circuits that correspond to the component sounds of the words; they include sites in other areas of the left hemisphere and even sites in the right hemisphere. The whole complex of interconnected neurons that are activated by the word is called a neural network.

In early stages of learning, neural circuits are activated piecemeal, incompletely, and weakly. It is like getting a glimpse of a partially exposed and blurry picture. With more experience, practice, and exposure, the picture becomes clearer and more detailed. As the exposure is repeated, less input is needed to activate the entire network. With time, activation and recognition become relatively automatic, and the learner can direct her attention to other parts of the task. This also explains why learning takes time. Time is needed to establish new neural networks and connections between networks. This suggests that the neural mechanism for learning is essentially the same as the products of learning. Learning is a process that establishes new connections among networks. The newly acquired skills or knowledge are nothing but the formation of neural circuits and networks.

It can be inferred that, for a nursery student, learning will ...
 ....
MCQ-> Cells are the ultimate multi-taskers: they can switch on genes and carry out their orders, talk to each other, divide in two, and much more, all at the same time. But they couldn't do any of these tricks without a power source to generate movement. The inside of a cell bustles with more traffic than Delhi roads, and, like all vehicles, the cell's moving parts need engines. Physicists and biologists have looked 'under the hood' of the cell and laid out the nuts and bolts of molecular engines.

The ability of such engines to convert chemical energy into motion is the envy of nanotechnology researchers looking for ways to power molecule-sized devices. Medical researchers also want to understand how these engines work. Because these molecules are essential for cell division, scientists hope to shut down the rampant growth of cancer cells by deactivating certain motors. Improving motor-driven transport in nerve cells may also be helpful for treating diseases such as Alzheimer's, Parkinson's or ALS, also known as Lou Gehrig's disease.

We wouldn't make it far in life without motor proteins. Our muscles wouldn't contract. We couldn't grow, because the growth process requires cells to duplicate their machinery and pull the copies apart. And our genes would be silent without the services of messenger RNA, which carries genetic instructions over to the cell's protein-making factories. The movements that make these cellular activities possible occur along a complex network of threadlike fibers, or polymers, along which bundles of molecules travel like trams. The engines that power the cell's freight are three families of proteins, called myosin, kinesin and dynein. For fuel, these proteins burn molecules of ATP, which cells make when they break down the carbohydrates and fats from the foods we eat. The energy from burning ATP causes changes in the proteins' shape that allow them to heave themselves along the polymer track. The results are impressive: in one second, these molecules can travel between 50 and 100 times their own diameter. If a car with a five-foot-wide engine were as efficient, it would travel 170 to 340 kilometres per hour.

Ronald Vale, a researcher at the Howard Hughes Medical Institute and the University of California at San Francisco, and Ronald Milligan of the Scripps Research Institute have realized a long-awaited goal by reconstructing the process by which myosin and kinesin move, almost down to the atom. The dynein motor, on the other hand, is still poorly understood. Myosin molecules, best known for their role in muscle contraction, form chains that lie between filaments of another protein called actin. Each myosin molecule has a tiny head that pokes out from the chain like oars from a canoe. Just as rowers propel their boat by stroking their oars through the water, the myosin molecules stick their heads into the actin and hoist themselves forward along the filament. While myosin moves along in short strokes, its cousin kinesin walks steadily along a different type of filament called a microtubule. Instead of using a projecting head as a lever, kinesin walks on two 'legs'. Based on these differences, researchers used to think that myosin and kinesin were virtually unrelated. But newly discovered similarities in the motors' ATP-processing machinery now suggest that they share a common ancestor molecule. At this point, scientists can only speculate as to what type of primitive cell-like structure this ancestor occupied as it learned to burn ATP and use the energy to change shape. "We'll never really know, because we can't dig up the remains of ancient proteins, but that was probably a big evolutionary leap," says Vale.

On a slightly larger scale, loner cells like sperm or infectious bacteria are prime movers that resolutely push their way through to other cells. As L. Mahadevan and Paul Matsudaira of the Massachusetts Institute of Technology explain, the engines in this case are springs or ratchets that are clusters of molecules, rather than single proteins like myosin and kinesin. Researchers don't yet fully understand these engines' fueling process or the details of how they move, but the result is a force to be reckoned with. For example, one such engine is a spring-like stalk connecting a single-celled organism called a vorticellid to the leaf fragment it calls home. When exposed to calcium, the spring contracts, yanking the vorticellid down at speeds approaching three inches (eight centimetres) per second.

Springs like this are coiled bundles of filaments that expand or contract in response to chemical cues. A wave of positively charged calcium ions, for example, neutralizes the negative charges that keep the filaments extended. Some sperm use spring-like engines made of actin filaments to shoot out a barb that penetrates the layers that surround an egg. And certain viruses use a similar apparatus to shoot their DNA into the host's cell. Ratchets are also useful for moving whole cells, including some other sperm and pathogens. These engines are filaments that simply grow at one end, attracting chemical building blocks from nearby. Because the other end is anchored in place, the growing end pushes against any barrier that gets in its way.

Both springs and ratchets are made up of small units that each move just slightly, but collectively produce a powerful movement. Ultimately, Mahadevan and Matsudaira hope to better understand just how these particles create an effect that seems to be so much more than the sum of its parts. Might such an understanding provide inspiration for ways to power artificial nano-sized devices in the future? "The short answer is absolutely," says Mahadevan. "Biology has had a lot more time to evolve enormous richness in design for different organisms. Hopefully, studying these structures will not only improve our understanding of the biological world, it will also enable us to copy them, take apart their components and recreate them for other purposes."

According to the author, research on the power source of movement in cells can contribute to
 ....
MCQ-> Read the following discussion/passage and provide an appropriate answer for the questions that follow.

Of the several features of the Toyota Production System that have been widely studied, the most important is the mode of governance of the shop-floor at Toyota. Work and inter-relations between workers are highly scripted in extremely detailed 'operating procedures' that have to be followed rigidly, without any deviation, at Toyota. Despite such rule-bound rigidity, however, Toyota does not become a 'command-control system'. It is able to retain the character of a learning organization. In fact, many observers characterize it as a community of scientists carrying out several small experiments simultaneously.

The design of the operating procedure is the key. Every principle must find an expression in the operating procedure; that is how it has an effect in the domain of action. Workers on the shop-floor, often in teams, design the 'operating procedure' jointly with the supervisor, through a series of hypotheses that are proposed and validated or refuted through experiments in action. The rigid and detailed 'operating procedure' specification throws up problems of the most minute kind, while their resolution leads to a reframing of the procedure and specifications. This inter-temporal change (or flexibility) of the specification (or operating procedure) is done at the lowest level of the organization, i.e. closest to the site of action.

One implication of this arrangement is that system design can no longer be rationally optimal and standardized across the organization. It is quite common to find different work norms in contiguous assembly lines, because each might have faced a different set of problems and devised different counter-measures to tackle them. Design of the coordinating process that essentially imposes the discipline required in large-scale complex manufacturing systems is therefore customized to variations in the man-machine context of the site of action. It evolves through numerous points of negotiation throughout the organization. It implies, then, that the higher levels of the hierarchy do not exercise the power of fiat in setting work rules, for such work rules are no longer a standard set across the whole organization.

It might be interesting to go through the basic Toyota philosophy that underlies its system-designing practices. The notion of the ideal production system at Toyota embraces the following: 'the ability to deliver just-in-time (or on demand) a customer order in the exact specification demanded, in a batch size of one (and hence an infinite proliferation of variants, models and specifications), defect-free, without wastage of material, labour, energy or motion, in a safe and (physically and emotionally) fulfilling production environment'. It did not embrace the concept of a standardized product that can be cheap by giving up variations. Preserving consumption variety was seen, in fact, as one mode of serving society. It is interesting to note that the articulation of the Toyota philosophy was made around roughly the same time that the Fordist system was establishing itself in the US automotive industry.

What can best be defended as the asset which the Toyota model of production leverages to deliver the vast range of models in a defect-free fashion?
 ....