Wednesday, May 20, 2020

Comparing Thoreau's Civil Disobedience and Orwell's 1984...

Civil Disobedience and 1984. In Orwell's 1984, the government is all-controlling, all-manipulating, and all-knowing. It maintains every aspect of its members' lives and monitors them constantly. Conversely, in the context of Civil Disobedience, the government is a form of direct democracy: people have the right to vote and the right to openly express their opinions. The main character of 1984 lives in constant fear of his government, while Thoreau argues with his and suggests a variety of ways to bring about reform; Thoreau has a freedom of expression that Winston entirely lacks. This is an essential point when trying to apply any of Thoreau's ideas to reforming 1984's socialist government. There is also no hope of rebellion from actual Party members. Being a member of the Party, Winston must maintain constant loyalty to the government, or at the very least sustain the appearance that he does. "A Party member lives from birth to death under the eye of the Thought Police. Even when he is alone he can never be sure that he is alone." (Orwell, p. 210) With the incorporation of spies along with telescreens, Winston has no true privacy. He could not decide to completely or even remotely remove himself from the government, because he would immediately be caught by the Thought Police. Not only that, but he continues the unjust act of rewriting and reworking pieces of media to accommodate the need for Big Brother to always be right. This is his government job, and with it, he works for a cause he stands against. The idea of disassociating oneself from the Party is irrational: the government supplies food, clothing, housing, and a purpose. Although the war is a vast ruse that keeps all its members in poverty, they are still dependent on the Party to survive. To leave the Party is to openly admit Thought Crime, which is the same as committing suicide.
When Winston and Julia are captured by the Thought Police and sent to the Ministry of Love, one assumes they will soon endure ruthless torture before being killed, but that is not truly the case. The two

Monday, May 18, 2020

Maximizing Shareholders' Wealth As A Primary Corporate Objective Finance Essay

Shareholders' wealth is basically the value of shareholders' ownership of shares in a firm at a particular period of time. Shareholders' wealth is measured by two major financial concepts: by capital gain, which results from an increase in the prices of shares, or by an increase in dividend payments. Consequently, maximizing the capital gains or dividend payments of a firm can maximize shareholders' wealth. Notwithstanding, an optimal level needs to be maintained, as a firm needs to balance the risk and return involved in growing the capital gains or dividend payments arising from the firm's investing activities. On the other hand, managing the firm requires engaging in purposeful business activities on the part of top management staff members and members of the board of directors. In turn, all purposeful business activities are aligned with corporate goals and objectives. Parsons (1960) has argued that the firm, as a legal corporate entity, is a collectivity whose defining characteristic is the realization of a specific goal or purpose. On this premise, it is evident that no discussion of corporate entities would be complete without an unequivocal mention of the normative and positive scopes of the firm's corporate objectives, let alone the structure and processes of its governance. Therefore, questions that often arise include: Who should the firm serve? Who does it serve?
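The two components of shareholder wealth described above, capital gains and dividend payments, combine into a simple total-return calculation. The sketch below is purely illustrative: the function name and all figures are invented for the example, not drawn from any source cited here.

```python
def shareholder_return(buy_price, sell_price, dividends_per_share):
    """Total shareholder return = capital gain + dividend yield,
    both expressed relative to the purchase price."""
    capital_gain = (sell_price - buy_price) / buy_price
    dividend_yield = dividends_per_share / buy_price
    return capital_gain + dividend_yield

# Hypothetical figures: a share bought at 50, sold at 56,
# having paid 2 in dividends over the holding period.
tsr = shareholder_return(buy_price=50.0, sell_price=56.0, dividends_per_share=2.0)
print(f"Total shareholder return: {tsr:.1%}")  # 12% gain + 4% dividends = 16.0%
```

The point of the decomposition is the one made in the text: management can raise shareholder wealth through either term, share-price appreciation or dividend policy, and must balance the risk taken in pursuit of each.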
The debate on corporate purpose is by no means new in the financial literature; however, there have been varying levels of consideration from different fields of study (ranging from management and business strategy to law and ethics) of the purposes and accountability of a corporation, and sharply differing views still exist. In finance, the debate is established on the notion of shareholder value maximization (in economics, an equivalent notion arises as the maximization of private wealth in a competitive economy). The notion of shareholder value maximization has been widely and generally accepted in the financial world, and has formed part of very strong assertions in the financial literature and textbooks. A deviation from this corporate objective is typically viewed in the light of an agency problem, which results from the separation of ownership and control, an integral feature of modern corporate financial practice. Berle and Means (1932) emphasized the problems of managerial carefulness and self-dealing when handling issues that pertain to the preservation of shareholders' wealth under the regime of the separation of ownership and control, as a major issue that characterizes the widely held corporation. Based on assumptions of property rights in democratic capitalist societies, Berle and Means (1932) premised their arguments on the view that managing the firm's business activities on behalf of the shareholders was the prerequisite of managerial decision-making, since shareholders were the property owners. In Brealey and Myers (2000: 24-26), for instance, the assertion that "...a financial manager should act in the interest of the firm's owners, the stockholders..." is not an accident.
Like every human, each stockholder (or shareholder) desires to be as rich as possible, craves the ability to seamlessly transform the wealth generated from investing in the firm into whatever time pattern of consumption he or she chooses, and wants to choose the risk characteristics of that consumption plan. In situations where the firm's management fails to act on the shareholders' interests in the firm, the response may be an intervention by the firm's corporate board; or verbal articulation, whereby shareholders call a meeting to replace the corporate board; or exit, whereby shareholders dispose of their stake in the firm (selling off shareholders' stakes in a firm can send a powerful signal to the firm's entire system and its immediate environment); or a collective shareholder decision to remove top management through the market for corporate control. Notwithstanding, it should be noted that when managers and directors do not maximize the value of (or their stake in) a firm, the usual result is the threat of a hostile takeover by competitors. Rappaport (1986) has provided a more simplified assertion of how shareholder wealth creation should be viewed in relation to a firm's corporate objectives. Therein, it was held that any management that contravenes the objective of maximizing shareholder value, no matter how influential or independent, does so at its own risk. This can be taken seriously, since shareholders make up the most powerful reference point within a corporate organization, from which management's financial power is derived. In contrast to the finance view, in recent years scholars in the management and strategy disciplines have increasingly leaned towards one of two overlapping viewpoints that are sharply at odds with the financial view of shareholder value maximization. One of the viewpoints is that governance should be understood using a stakeholder lens.
The second viewpoint posits that, rather than debating whether stakeholders or shareholders matter, corporate organizations should have multiple goals existing in a convoluted hierarchy (Freeman and McVea, 2001; Quinn, 1980). Similarly, Drucker (2001) argued that shareholder sovereignty is bound to struggle; "...it is a fair-weather model that works well only in times of prosperity" (Drucker, 2001: 17). On this note a constructive conclusion can be drawn. Given the importance of preserving shareholders' wealth while ensuring good governance, it is imperative that the firm's top management strike a balance among the three scopes (or objective functions) of the corporate organization, which involve regarding the corporation: as an economic organization, whose aim is to maximize profit (or returns on investment); as a human organization, which should form a seamless relationship with other human organizations within its immediate environment, without the fear of one dominating the other; and as an increasingly important social organization that cares about and prides itself on corporate social responsibility to the immediate community in which it belongs and/or operates. The Role of the Efficient Market Hypothesis in the Post-Financial Crisis Period. The sharp economic slump in the financial markets around the globe, typically and generally referred to as the global financial crisis, has generated a remarkable spate of blame directed at the active market players (banks and other financial institutions, as well as, surprisingly, consumers) by different economic stakeholders; free-market economics has been attacked vigorously. Particular attention has been paid to the notion of the Efficient Market Hypothesis (EMH), an idea which holds that a competitive financial market should exploit all available market information when setting security prices. The EMH asserts that the financial market is informationally efficient.
In other words, given the publicly available market information at the time of making an investment, one cannot consistently achieve returns in excess of average market returns on a risk-adjusted basis. Since the wake of the recent financial crisis, many people have called for careful scrutiny, renewed criticism, and evaluation of the EM hypothesis. In fact, the crisis has urged many to conclude that excessive negligence in the proper regulation and supervision of financial market activities, due to an immensely mistaken belief in the supremacy of the thinking behind the EMH, gave rise to the current financial crisis. For instance, Jeremy Grantham, popularly referred to as a market strategist, has stated without reservation that the EMH is responsible for the global financial crisis that currently rocks the world's financial markets. He believes that the general acceptance of the idea behind the EM hypothesis led financiers to habitually underestimate the underlying dangers of breaking asset bubbles. Justin Fox, author of The Myth of the Rational Market, appears to support the same claim made by Grantham. Ray Ball wrote: "...swayed by the notion that market prices reflect all available information, investors and regulators felt too little need to look into and verify the true values of publicly traded securities, and so failed to detect an asset price bubble..." The following excerpt was also taken from Ball (2009: 11) (cited from the UK's Turner Review): "The predominant assumption behind financial market regulation (in the US, the UK and increasingly across the world) has been that financial markets are capable of being both efficient and rational and that a key goal of financial market regulation is to remove the impediments which might produce inefficient and illiquid markets..."
In the face of the worst financial crisis for a century, however, the assumptions of efficient market theory have been subject to increasingly effective criticism. Others who believe that the EMH is not unconnected with the failure of the financial system include the financial journalist Roger Lowenstein, who stated: "The upside of the current Great Recession is that it could drive a stake through the heart of the academic nostrum known as the efficient-market hypothesis." The chief economics commentator of the Financial Times, Martin Wolf, has dismissed the EM hypothesis on the premise that it is a useless way of carrying out a careful examination of how the market functions. Nevertheless, Paul McCulley, the MD of PIMCO, said that the hypothesis did not fail but was seriously flawed in its neglect of human behaviour. According to Ball (2009: 11), the picture the EMH creates in the minds of regulators makes sense in one respect: regulators can focus on ensuring an adequate flow of reliable information to the public, where the market can be relied upon to incorporate public information into asset prices, while less attention is paid to investors' propensity to invest even in the riskiest assets without fear of losing everything. This view is, however, consistent with the fact that in recent times there does appear to have been increased emphasis by regulatory and supervisory bodies worldwide on ensuring adequate and fair public disclosure. However, the notable Robert R. McCormick Distinguished Service Professor of Finance at the University of Chicago Booth School of Business and grand proponent of the EM hypothesis, Eugene Fama, has refuted the above claims, stating that the hypothesis held up well during the crisis and that the markets were a casualty of the recession, not the cause of it.
Ball (2009: 2) says: "I have argued in the past and will argue below that the EMH, like all good theories, has major limitations, even though it continues to be the source of important and enduring insights. Despite the theory's undoubted limitations, the claim that it is responsible for the current worldwide crisis seems wildly exaggerated. If the EMH is responsible for asset bubbles, one wonders how bubbles could have happened before the words 'efficient market' were first set in print, and that was not until 1965, in an article by Eugene Fama... ...But all of these episodes occurred well before the advent of the EMH and modern financial economic theory... It's only the idea of market efficiency that is relatively new to the scene." After all is said and done, one would like to know what awaits the EM hypothesis in the post-financial-crisis era. As the saying goes, you do not throw the baby out with the bath-water. There is a need to step back and critically evaluate the entirety of the EM hypothesis in relation to what it can help achieve in the market and the limits inherent in applying it to ascertain how the market behaves. It is overtly true that anomalies in the market efficiency hypothesis abound. These include over-reactions of prices and excess volatility; under-reactions of prices and momentum, especially with respect to earnings announcements; the relation between future returns and many variables such as accounting accruals, market-to-book ratios, price-earnings ratios, market capitalization, and dividend yields; and seasonal patterns in returns. One should therefore expect that, while not entirely relying on the EM hypothesis in assessing market activities, the hypothesis would still be expected to hold sway. This is consistent with the results of Aroskar et al. (2004) and Kan and Andreosso-O'Callaghan (2007).
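The weak-form claim at the heart of this debate, that past returns should not predict future returns, can be illustrated with a toy statistical check. The sketch below uses synthetic, randomly generated returns as a stand-in for an "efficient" market; it is an illustration of the concept only, not an analysis of real market data.

```python
import random

def lag1_autocorrelation(returns):
    """Lag-1 autocorrelation of a return series. Values near zero are
    consistent with weak-form efficiency (no linear predictability);
    large positive or negative values suggest momentum or reversal."""
    n = len(returns)
    mean = sum(returns) / n
    cov = sum((returns[i] - mean) * (returns[i - 1] - mean) for i in range(1, n))
    var = sum((r - mean) ** 2 for r in returns)
    return cov / var

# Synthetic i.i.d. returns play the role of an informationally
# efficient market: nothing in yesterday's return helps predict today's.
random.seed(42)
returns = [random.gauss(0.0005, 0.01) for _ in range(5000)]
print(f"lag-1 autocorrelation: {lag1_autocorrelation(returns):.4f}")
```

Documented anomalies such as momentum correspond to this statistic deviating systematically from zero, which is why tests of this general kind feature in the empirical literature the passage cites.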
Furthermore, market regulators are expected to carry out proper regulatory functions irrespective of the presence or absence of the market hypothesis. A Reflective Statement. After critical discussions of these sorts (first on the adoption of maximizing shareholders' wealth as a primary corporate objective, followed closely by a discussion of the role of the efficient market hypothesis in the post-financial-crisis period), one might wonder at the efficacy of what can be achieved in such a short piece. Still, it is not length alone that matters when crucial issues like the ones discussed here are considered; while length may be important, what matters most is the depth of what has been discussed. Looking back, two schools of thought were covered in the first section: those who believe that maximizing shareholders' wealth should form a core part of the corporate objectives of any corporate organization, and those who believe that corporate governance should be discharged under the watchful eyes of the stakeholders, with the organization built around a complex hierarchy of a collection of goals. One might then wish to take a stance between these two schools. Notwithstanding, there is sense in both positions; a deep look and critical evaluation of both may well reveal common ground. Therefore, one should not be tempted to judge the supremacy of one school over the other. As for the discussion of the EM hypothesis in relation to its role in the financial market after the crisis, it is proper to evaluate its worth just as every theory ever propagated has been evaluated. As Ray Ball said, one cannot blame a theory for people misusing it: every theory comes as an abstraction, and no theory can be taken in its raw and literal form.

Wednesday, May 6, 2020

The Effects Of Marijuana On Cancer Cells

∆9-Tetrahydrocannabinol, more commonly known as THC, a major component of Cannabis sativa, has been found to possess anti-tumor properties against many cancer types. However, the use of THC is limited, particularly during chemotherapy, due to its psychotropic activity, that is, its ability to affect mental activity, behavior, or perception. In addition, the exact mechanism by which THC produces this activity is not fully known. For these reasons, there has been debate about its incorporation as a common treatment for cancer. There is growing evidence that some pharmacological effects of marijuana are due to Cannabis components other than THC. C. sativa contains at least 400 chemical components, 66 of which have been identified as cannabinoids. Still, the endocannabinoid anandamide has been shown to produce its effects on cancerous cell growth via a mechanism utilizing the transient receptor potential vanilloid type-1 (TRPV1) receptors and noncannabinoid, nonvanilloid receptors¹. Furthermore, cannabidiol supposedly inhibits the growth of glioma through a completely independent mechanism in vitro and in vivo. Today, cannabinoids have been effectively used to treat the two most prominent side effects of chemotherapy: nausea and vomiting. The main reason that future use of THC is unlikely is principally the effects it produces within the central nervous system. These effects include perceptual abnormalities, occasional hallucinations, dysphoria, abnormal thinking, depersonalization, and somnolence (long periods of sleepiness or drowsiness).¹ One way to avoid these effects is to use non-THC plant cannabinoids, which do not seem to produce psychotropic effects. Cannabidiol, for example, is considered nonpsychotropic. A proposed mechanism by which cannabidiol allays this effect is by preventing the usual conversion of THC to the more psychoactive 11-hydroxy-THC.
Recently, scientists have found that systematic variations in the constituents accompanying THC (i.e., cannabidiol and cannabichromene) do not affect the behavioral or neurophysiological responses to marijuana.¹

Evaluation, Validation and Design Analysis of Light Rail Transit

Question: Describe the evaluation, validation and design analysis for Light Rail Transit. Answer: Introduction. As cities continue to increase in both size and population, the available transport systems come under increasing pressure as their design capacities are overrun. In addition, higher levels of pollution from the larger number of automobiles become a concern to both the public and the authorities. This has prompted various governments to come up with a solution in the form of Light Rail Vehicles (LRVs), which can carry a higher number of passengers, thereby significantly reducing the number of vehicles on the road. Besides, LRVs are environmentally friendly, as they rely on electrical energy rather than fossil fuels. For most developing nations, this has proven to be a preferred and attractive option for urban public transport. However, just because LRVs or Light Rail Networks (LRNs) have proved successful in other countries does not guarantee that they will work anywhere. The success of any LRN project depends on how the project is handled, from feasibility studies, planning, designing, testing and commissioning, to evaluation, validation, and human factors, among other activities of the entire project lifecycle. All these activities need to be managed properly to achieve the goals of the LRN and to minimize or eliminate extra costs during the operation and maintenance of the railroads. This report, in particular, addresses the testing, evaluation, and validation of the LRN, in addition to looking at how design can be optimized to improve reliability and maintenance, aside from the human factors. Light Rail System Testing, Evaluation and Validation. Sharma (2011) points out that the Testing and Commissioning (TC) of a light rail system starts after the concept and detailed design phase.
The primary purpose of TC is to ensure that the technical and project requirements are met, and this can be done in parallel with evaluation and validation. For testing to be successful, it is vital to involve the employer (owner of the infrastructure), the contractor (and sub-contractors, if any), manufacturers or suppliers of the equipment, the railway operator, and, to an extent, though not compulsorily, a third-party organization outside the main contract. As Sharma (2011) states, it is the responsibility of the employer to provide the basic framework outlining the TC process and who is responsible for overseeing it. The contractor then develops a detailed test plan that defines all the rail systems, their interfaces, the tests to be carried out, and the expected results according to the employer and the approval bodies. The test plan also defines the reporting and authorizing procedures for all tests, the schedule, the resources (equipment/staff) required for each test, safety, and the documentation for all tests carried out. The model typically employed during TC is FAT-SIT-SAT-SATOV, split into four stages: Factory Acceptance/Inspection Test (FAT), Site Installation Test (SIT), Site Acceptance Test (SAT), and Overall Site Acceptance/Performance Test (SATOV). Factory Acceptance/Inspection Test (FAT). In this stage, all the light rail equipment and components are tested at the factory or manufacturer's site during production. This is meant to ensure that the equipment and components meet the specifications and requirements of the design and the overall project. Tests of all equipment are carried out for both the system's software and hardware (Sharma, 2011). When it comes to hardware, there are two forms of test that the contractor is required to perform: routine tests and type tests. In a routine test, each piece of hardware or equipment is tested independently.
Some of the tests could include checks of dimensions, insulation, electrical conductivity, mechanical properties, calibration, hydraulics, and visual inspection, among other compliance tests, before the equipment is released to the contractor. Type tests, on the other hand, are done on complete equipment of a given type or rating according to the standards or technical specifications set out in the contract. In most cases, these hardware tests include testing the mechanical strength of the hardware, the electrical characteristics, and electromagnetic compatibility, to mention a few. For every software system, such as a vehicle detection system, line signaling, or supervisory control, it is recommended that a test bench be used to simulate the inputs and outputs in an environment matching the real operating environment. Furthermore, integration testing of all the rail systems should be carried out at the factory site to minimize or eliminate possible integration risks during assembly at the construction site. Site Installation Test (SIT). Here, tests are done on equipment after its installation on site. The purpose of SIT is to ensure that all sub-systems and equipment are installed and wired correctly and that they can perform the intended operation without any damage after installation. The tests can be performed in phases, on a site-by-site basis, as the various sections of the railway line are built. Note that the sections can be defined according to infrastructure or line constraints such as crossover locations, track layout, the overhead contact system, or the location of the sub-stations (Sharma, 2011). For on-board train equipment, SIT must be conducted on the train both at the manufacturer's and at the employer's site. Examples of tests carried out in this stage include stand-alone operation tests, electrical conductivity and insulation tests, and data exchange or communication tests.
Site Acceptance Test (SAT). SAT, considered a pre-commissioning stage, is done when all equipment and sub-systems have been installed, in order to identify and minimize modifications and related costs at a later date. It is more of an integration test. It is of the utmost importance to demonstrate that all the functional and performance requirements are met. This stage can be sub-divided into SAT-internal and SAT-external. According to Sharma (2011), in SAT-internal all the systems are put under a pre-defined scope, whereas in SAT-external at least one of the systems undergoing the integration test lies outside the pre-defined scope. This can depend on the complexity of the project's relationships or interfaces with third parties, the type of contract, or the contractors or sub-contractors involved. Just as in SIT, tests for on-board train systems, such as the vehicle detection system, are done at this stage. Overall Site Acceptance/Performance Test (SATOV). The goal of this test is to ascertain that the entire system will operate as intended and offer the required service without any hiccups. The railway operator, in addition to all project parties, must be involved in this stage, as all the functional requirements of the system and equipment in service are to be tested. The tests in this instance can be split into SATOV-Equipment tests, done on all equipment supplied to the project, and SATOV-Line tests, done on the equipment or system during actual train running for a given trial period. Usually, SATOV-Line is the responsibility of the employer and the railway operator, with technical support from the contractor. Some of the tests conducted in this stage include full-load tests, functional tests, degraded-mode tests, and endurance tests. From these results, employers or contractors can evaluate the actual performance of the system in relation to the requirements and expectations outlined in the contract.
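The FAT-SIT-SAT-SATOV sequence described above is, in effect, a gated process: no stage may begin until its predecessor has passed, and a failed stage sends the work back for rework. A minimal sketch of that gating logic is shown below; the stage names come from the text, but the data structure and function are purely illustrative, not part of any standard.

```python
# Order matters: each stage gates the next.
STAGES = ["FAT", "SIT", "SAT", "SATOV"]

def next_stage(results):
    """Given a dict mapping stage name -> passed (True/False),
    return the next stage to run, or None once every stage has
    passed (system ready for handover). Raise on a failed stage."""
    for stage in STAGES:
        passed = results.get(stage)
        if passed is None:
            return stage  # not yet attempted
        if not passed:
            raise RuntimeError(f"{stage} failed; rework before proceeding")
    return None

print(next_stage({"FAT": True}))                # SIT
print(next_stage({"FAT": True, "SIT": True}))   # SAT
print(next_stage(dict.fromkeys(STAGES, True)))  # None
```

The `None` result corresponds to the point in the text where the contractors can hand the railway system over to the employer and operator.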
Once the system has passed all the tests and evaluation, the contractors can hand the railway system over to the employer and operator. Optimization in the Design and Operations of Light Rail Transit (LRT). According to Twum and Aspinwall (2013), reliability is a measure of the ability of a system to carry out its intended function without failure for a particular period under given pre-determined conditions. Reliability has far-reaching consequences for the availability, durability, and life-cycle cost of a system. For this reason, engineers are required to make informed decisions on the components and design configurations to be used in a system (Selvik and Aven, 2011). In order to optimize the design and operations of an LRT for reliability and maintenance, it is vital first to identify the factors which can impact the efficiency and effectiveness of the LRT, namely: route design, right-of-way (ROW), and track layout and configuration. The design of the LRT is likely to affect the reliability of operations and to impact ridership, in terms of the number of passengers opting for LRT. Given that most urban centers opt for LRT for public transportation in order to reduce traffic congestion and minimize automobile pollution, it is recommended that LRT routes should connect high-activity regions along major corridors, highways, and arterials (LRT: Light Rail Transit Service Guidelines, 2007). These regions could include airports, employment centers, shopping centers, educational institutions, or high-density residential areas. In cases where new LRT lines are to be developed, the new lines should intersect with old lines so as to enable multiple transfer opportunities for light rail cars. The type of ROW has a significant influence on the operation and speed of the LRT system, given that in urban areas trains have to interact or crisscross with pedestrians and other means of transport.
To improve the LRT's reliability, safety, and operating speed, the LRT should operate within designated semi- or fully-exclusive ROWs. Also, in areas where there are shared ROWs, Li et al. (2007) propose the use of a mixed-integer quadratic programming (MIQP) model for signal timing at rail intersections to reduce traffic delays and their impact on LRT and other traffic. Regarding track layout and configuration for reliability, double tracking is touted as an optimal operating environment. This is because it allows bi-directional LRT lines to operate simultaneously along the same segment of track while at the same time allowing stationary or disabled trains to be bypassed at switches and crossovers. Besides, compared to ballasted tracks, non-ballasted (slab) LRT tracks are favored, as they have advantages such as lower maintenance cost and requirements, increased durability (service life), and high lateral track resistance that permits future increases in speed (Fazhou Wang and Yunpeng Liu, 2012; Ćirović et al., 2014). Maintainability. According to Langford (2007), system maintainability is the measure of a system's ability to be restored to the usual operational level after a planned or unplanned interruption, within a given time, using the available resources. Maintainability is mostly considered design-related and is usually assessed to give estimates of system maintenance, downtime, and the resources required to carry out maintenance. This helps in optimizing or reducing the time and cost of maintenance work. Note that maintenance can be either corrective, meaning the unplanned actions taken to restore system performance after a failure, or preventive, meaning the planned actions taken to maintain or improve system performance. In that regard, maintainability is usually measured by Mean Time To Repair (MTTR), Mean Time Between Maintenance (MTBM), and Mean Time Between Failures (MTBF).
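Two of the metrics just listed, MTTR and MTBF, can be sketched numerically. The example below uses an invented repair log and the usual assumption of exponentially distributed repair times, under which the maintainability function is M(t) = 1 - e^(-t/MTTR); all figures are hypothetical, chosen only to illustrate the calculation (MTBM follows analogously from maintenance intervals).

```python
import math

# Hypothetical maintenance log for an LRT sub-system (hours).
repair_times_h = [2.0, 3.5, 1.5, 3.0]     # duration of each repair
uptimes_h = [120.0, 200.0, 160.0, 180.0]  # operating time between failures

# MTTR: total repair time divided by the number of repairs.
mttr = sum(repair_times_h) / len(repair_times_h)
# MTBF: total operating time divided by the number of failures.
mtbf = sum(uptimes_h) / len(uptimes_h)

def maintainability(t, mttr):
    """Probability of completing a repair within t hours, assuming
    exponentially distributed repair times: M(t) = 1 - exp(-t/MTTR)."""
    return 1 - math.exp(-t / mttr)

print(f"MTTR = {mttr:.2f} h, MTBF = {mtbf:.2f} h")
print(f"P(repair within 4 h) = {maintainability(4.0, mttr):.2f}")
```

A maintenance engineer can read the last figure as a service-restoration guarantee: with an MTTR of 2.5 hours, roughly four out of five failures would be repaired within a 4-hour window, which feeds directly into the availability planning discussed in the text.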
MTTR is the average amount of time it takes to repair a system and restore service. It is used to calculate maintainability under corrective maintenance. Mathematically, it can be expressed as the total maintenance time divided by the number of repairs conducted over a given period. Langford (2007) points out that we can determine the probability of completing a repair within a specified time t by using the following formula: M(t) = 1 - e^(-t/MTTR). This is critical for reliability and maintenance engineers, as it helps them decide whether to replace or repair a system, or how to optimize maintenance schedules. In the end, this impacts the availability of the system, in this case the LRT, when maintenance is carried out. On the other hand, MTBM is the average time between maintenance actions, taking into account both the mean time between corrective maintenance (MTBMct) and the mean time between preventive maintenance (MTBMpt); Langford gives a corresponding mathematical expression (Figure 1: MTBM). MTBF is also a measure used by engineers during design to enhance the safety of systems and equipment, thereby giving an indication of their performance, reliability, and availability. It is the mean time between recorded system failures. MTBF can be determined by taking the operating periods between system downtimes and dividing their total by the number of failures (Figure 2: MTBF). This is useful in projecting the likelihood of a particular piece of equipment or system breaking down within a given time interval. Human Factors in Designing Concepts. Human factors is a discipline in which knowledge generated by ergonomics, psychology, physiology, and sociology is applied to improve the interaction between humans and technological systems. Its purpose is to understand the capabilities and limitations of human beings and to implement these findings in developing safer and more efficient technological systems.
Human factors play a significant role in transportation systems with respect to customer experience, safety, operations, and maintenance, so any designer must treat them as an important element of the design process. According to Naumann et al. (2013) and Wilson et al. (2012), railway system designs have to consider human operators as a key factor in operations. Aspects of the workplace relating to system design that should be examined include perception, communication, attention distribution, cognitive processes and overload, vigilance, and reaction to stressful situations, among others. Even with the current high level of automation in railway systems, train drivers, system operators, and traffic operators still play a key role in providing vital information for system design. Wilson et al. (2012) further note that light rail imposes a higher workload and more stressful situations on train drivers given the characteristics of suburban and urban areas: a high number of stops, platforms shared with other transportation modes, interaction with passengers, and other factors that can affect driver concentration. For this reason, such systems must be designed with particular care. According to Wilson et al. (2012), the on-board information systems in new urban rolling stock that monitor the train's status and warn the driver can themselves cause high workload at times; the design and assessment of on-board information systems should therefore be based on cognitive design guidelines. In addition, given that rail cars at some points share platforms with pedestrians as well as vehicles, it is recommended that the cab be designed to provide good visibility through the windscreen, matched to driver anthropometrics, to improve early detection of danger and reaction in emergencies.
Furthermore, ergonomic assessment of visibility, posture, and workplace health risks, together with the use of heuristic models in evaluating the human-machine interface, is vital in designing safe rail systems. As Rail Engineer (2014) states, even though humans are good at adapting to different circumstances through rapid thinking and reaction, they are not best placed to handle the stress and work overload of emergency situations where immediate intervention is required. To ensure these issues are addressed, all aspects of the human-machine interface must be considered during the design phase so that their impact on system function and performance is recognised. Designers can achieve this by conducting interviews, administering questionnaires, or running prototype assessments with railway operators, who will provide the concrete feedback required to design efficient, high-performance systems that are easy to operate (Schwencke et al., 2013).

Conclusion and recommendation

Light Rail Transit systems are increasingly being adopted by cities as a solution to traffic congestion in urban areas, given their high passenger capacity and high speed. In addition, fewer accidents and fatalities are recorded for LRT than for automobiles, and their eco-friendly nature makes them attractive to environmentally conscious cities. However, successful implementation of LRT projects requires extensive design processes that factor in human factors to make the systems more reliable and safe. As discussed, it is therefore vital that system testing and evaluation be performed on all components and equipment; this will help reduce the additional costs that could arise from faulty components or accidents. The testing and commissioning of such projects should strictly adhere to the FAT-SIT-SAT-SATOV model as previously discussed.
With reference to operations optimisation for LRT reliability, it is recommended that an extensive analysis or feasibility study be conducted of the areas the light rail network will pass through. This will provide concrete feedback on how best to design and lay the tracks in areas with high trip generation while minimising interruption to other traffic.

References

Ćirović, G., Mitrović, S., Branković, V. and Tomičić-Torlaković, M. (2014). Optimisation and ranking of permanent way types for light rail systems. JCE, 66(10), pp.917-927.

Fazhou, W. and Yunpeng, L. (2012). The Compatibility and Preparation of the Key Components for Cement and Asphalt Mortar in High-Speed Railway. INTECH Open Access Publisher.

Langford, J. (2007). Logistics: Principles and Applications. 2nd ed. McGraw-Hill Education, pp.55-70.

Li, M., Wu, G., Li, Y., Bu, F. and Zhang, W. (2007). Active Signal Priority for Light Rail Transit at Grade Crossings. Transportation Research Record: Journal of the Transportation Research Board, 2035(16), pp.141-149.

LRT: Light Rail Transit Service Guidelines. (2007). 1st ed. [ebook] New York, USA: The National Association of City Transportation Officials (NACTO). Available at: https://nacto.org/docs/usdg/lrtserviceguidelines_vta.pdf [Accessed 16 Sep. 2016].

Naumann, A., Grippenkoven, J., Giesemann, S., Stein, J. and Dietsch, S. (2013). Rail human factors: human-centred design for railway systems. In: 12th IFAC Symposium on Analysis, Design, and Evaluation of Human-Machine Systems. Las Vegas, NV, USA: IFAC.

Rail Engineer. (2014). Automation in railway control: the human factors. [online] Available at: https://www.railengineer.uk/2014/03/10/automation-control-factors/ [Accessed 17 Sep. 2016].

Schwencke, D., Grippenkoven, J. and Lemmer, K. (2013). Modelling human-machine interaction for the assessment of human reliability. Rail Human Factors 2013 Proceedings. London.

Selvik, J. and Aven, T. (2011). A framework for reliability and risk centered maintenance. Reliability Engineering & System Safety, 96(2), pp.324-331.

Sharma, R. (2011). Testing and Commissioning Process for a Light Rail Project. 1st ed. [ebook] Solihull, United Kingdom: Ove Arup & Partners Ltd, Infrastructure and Planning Midlands (Rail). Available at: https://www.theiet.org/communities/railway/best-papers/documents/light-rail-paper.cfm?type=pdf [Accessed 15 Sep. 2016].

Twum, S. and Aspinwall, E. (2013). Models in design for reliability optimisation. American Journal of Scientific and Industrial Research, 4(1), pp.95-110.

Wilson, J., Mills, A., Clarke, T., Rajan, J. and Dadashi, N. (2012). Rail Human Factors Around the World. Boca Raton, FL: CRC Press/Balkema.

Accounting and Financial Management: Fiscal Crisis

Question: Discuss the Accounting and Financial Management issues arising from the Fiscal Crisis.

Answer: The recent fiscal crisis has generated a huge debate concerning fair value accounting. Several critics have argued that fair value accounting, often known as mark-to-market accounting, considerably contributed to the fiscal crisis or at least aggravated its severity (Rey, 2015). Fair value accounting consists of reporting assets and liabilities on the balance sheet at their fair value and recognising changes in that value as gains and losses in the income statement; when market prices are used to determine fair value, it is known as mark-to-market accounting. Critics argue that fair value accounting was a major cause of the global financial crisis. The principal accusations are that it permits a significant build-up of leverage during boom periods and forces extreme write-downs in busts. The write-downs reinforce declining market prices, which deplete bank capital and set off a downward spiral, since banks are forced to dispose of their assets at fire-sale prices (Haas and Lelyveld, 2014). This in turn produces contagion, as the fire-sale prices realised by one bank become relevant to the valuation of assets held by other banks. Beginning in 2007, falling house prices, defaults by sub-prime borrowers, and disclosures of mortgage fraud created problems for mortgage securities and made these instruments increasingly difficult to value; as housing prices fell and mortgage default rates rose, the market for such securities dried up for reasons unrelated to accounting.
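The downward spiral described by the critics, in which write-downs force sales that depress prices and trigger further write-downs, can be made concrete with a deliberately simplified toy model. The following Python sketch is purely illustrative: the bank, its leverage cap, and the linear price-impact rule are invented for this example and are not drawn from the cited literature.

```python
def fire_sale_spiral(units, debt, price, max_leverage, impact, rounds=4):
    """Toy model of one bank marking assets to market.

    Each round the bank revalues its holdings at the prevailing price.
    If leverage (assets / equity) breaches max_leverage, it sells assets
    to restore the cap; the forced sale depresses the market price
    (hypothetical linear impact), forcing deeper write-downs next round.
    Returns a list of (price, equity) snapshots per round.
    """
    path = []
    for _ in range(rounds):
        assets = units * price          # mark-to-market asset value
        equity = assets - debt
        if equity <= 0:                 # capital wiped out: spiral complete
            path.append((price, 0.0))
            break
        if assets / equity > max_leverage:
            target_assets = max_leverage * equity
            sold_value = assets - target_assets
            units -= sold_value / price     # sell at the marked price
            debt -= sold_value              # proceeds repay debt
            price *= 1 - impact * (sold_value / assets)  # price impact
        path.append((price, equity))
    return path

# A bank starting at 20x leverage against a 15x cap:
path = fire_sale_spiral(units=100, debt=95, price=1.0,
                        max_leverage=15, impact=0.2)
```

Running this, each round of forced selling pushes the price lower and equity shrinks round after round until it is exhausted, which is the fire-sale contagion mechanism the critics attribute to mark-to-market rules.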
As default rates rose with the onset of the crisis, hedge funds experienced large outflows of capital during mid-2007 (Bénétrix et al., 2015). Many of the organisations that had originated these investment funds suspended withdrawals and did not allow redemptions. One may argue that fair value accounting played a significant role in the decisions of financial institutions to bail out their investment funds: supposedly, they feared that selling fund assets into an illiquid market would depress prices and force write-downs of similar assets held by other funds or by themselves (Reddy et al., 2014). The fear of contagion may have played an important role in this decision-making, but it is doubtful whether it was the first-order effect on the investment banks. For investment banks, concerns about reputation mattered more: they feared that if one fund failed, further withdrawals would follow. More precisely, the financial problems of investment banks during the crisis are best understood as the outcome of poor investments, short-term financing, and high leverage. Shareholders were concerned about the worth of the underlying assets rather than about aggressive write-downs forced by fair value accounting (Claessens and Van Horen, 2015). Finally, it is improbable that fair value accounting caused the global financial crisis, since researchers have found that the crisis was driven mainly by short-term collateralised borrowing.
During the early phase of the global financial crisis, regulators looked to the accounting standard setters, especially the IASB and FASB, to examine what contribution the fair value measurement of financial instruments had made to the crisis (Cavusgil et al., 2014). In measuring fair value, inadequate consideration was regularly given to the various inputs to the valuation, and practical problems arose in determining fair value, particularly where markets had become distorted or had vanished. Several observers criticised the inadequate guidance available, particularly in the face of illiquid markets, including insufficient scrutiny of the inputs and models used to arrive at an instrument's fair value. Extensive guidance was consequently released on assessing fair value in illiquid markets and on the policies, including risk adjustments, recommended for determining it (Goh et al., 2015). The abundance of exotic products, such as collateralised debt obligations, that organisations had chosen to hold at fair value had to be measured accordingly, which contributed to volatility in profit and loss. A large amount of criticism was directed at the IASB, since many stakeholders were of the opinion that inadequate disclosure was provided on numerous elements of fair value measurement, including the sensitivity of the valuation to its inputs and the impact of fair value measurement on profit and loss (Bowen and Khan, 2014). The IASB has since issued amendments to IFRS 7, the standard governing financial instrument disclosures.
The amended disclosures, required for entities with financial reporting periods beginning on or after 1 January 2009, are based on the US GAAP standard FAS 157, Fair Value Measurements. The alterations to IFRS 7 require financial instruments measured at fair value to be classified under a three-level measurement hierarchy. At Level 1, fair value is obtained directly from quoted market prices (Blankespoor et al., 2013). At Level 2, fair value is derived mainly from market prices, with a minimal amount of unobservable market inputs. At Level 3, the value of the instrument is derived principally from unobservable market inputs, including model-based valuations. The modifications also require disclosure of transfers between the three levels of the hierarchy, a detailed reconciliation of the amounts recognised for Level 3 financial instruments measured at fair value, and disclosure of the sensitivity of the fair value assessment to changes in inputs (Amel-Zadeh et al., 2016). The volume of work required to comply with the standard must not be underestimated, since it requires the categorisation and disclosure of all financial instruments. In the autumn of 2008, unstable financial markets prompted action by the Australian Accounting Standards Board (AASB) and the International Accounting Standards Board. One of the important issues addressed by the AASB was off-balance-sheet accounting, which was widely used in securitisation (Hull and White, 2014).
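The three-level hierarchy just described is essentially a decision procedure, which the following Python sketch makes explicit. It is purely illustrative: the class and field names are invented for this example, and real classification under IFRS 7 or FAS 157 involves judgement about market activity and valuation inputs, not two boolean flags.

```python
from dataclasses import dataclass

@dataclass
class Instrument:
    """Hypothetical, simplified view of an instrument's valuation inputs."""
    name: str
    quoted_in_active_market: bool   # an identical item has a quoted price
    observable_inputs: bool         # e.g. prices of similar instruments

def fair_value_level(inst: Instrument) -> int:
    """Assign the disclosure level under the three-level hierarchy."""
    if inst.quoted_in_active_market:
        return 1   # Level 1: fair value taken directly from a market price
    if inst.observable_inputs:
        return 2   # Level 2: derived mainly from observable market data
    return 3       # Level 3: model-based, unobservable inputs

# Hypothetical instruments spanning the three levels:
listed_share = Instrument("listed equity", True, True)
swap = Instrument("interest-rate swap", False, True)
cdo_tranche = Instrument("bespoke CDO tranche", False, False)
```

Under this sketch the bespoke CDO tranche lands in Level 3, the category for which the amended disclosures demand the expanded reconciliation and input-sensitivity information discussed above.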
The accounting standards of the time recognised off-balance-sheet treatment for passive entities and special purpose entities that existed to hold the incoming payments on securitised assets and pass them on to investors in the securities of those entities. The standard setters were criticised over why the accounting standards authorised certain transactions to be derecognised from the balance sheet and allowed special purpose vehicles established by a group not to be consolidated. This ultimately meant that the loan obligations, associated financial assets, and profits or losses arising from such vehicles were not included in the group's financial results. In assessing whether control existed, a legalistic approach was followed. With regard to the derecognition of financial instruments, the standards written by the IASB and FASB comprised a multifaceted set of rules that required organisations to assess specific transactions; the AASB has constantly attempted instead to create principles-based standards for derecognising financial instruments from the balance sheet (Ettredge et al., 2014). Furthermore, the AASB issued an exposure draft covering the procedure for determining the fair value of financial liabilities. In adopting the IASB's standards, the overall approach of the AASB is to converge the content and wording of its standards with IFRS, with an emphasis on profit-making entities. The AASB is responsible for setting accounting standards for all types of reporting entities, and its standards deal with a limited number of cases where different or supplementary requirements are needed for non-profit entities; the additions made did not affect the requirements for profit entities (Bischof et al., 2014).
In developing new or amended standards, the AASB releases exposure drafts containing the proposed changes and invites comments on any aspects of the Australian equivalent of an IFRS that are affected by the Australian environment. The influence of the IASB led the AASB to adopt several additions to and deletions from the standards, including the deletion of optional treatments from an IFRS where the existing Australian standard allowed only one of those treatments. The AASB decided to make the Australian requirements similar to the IFRS in respect of profit entities (deJager, 2014). To attain this objective, it proposed removing a large number of differences from the IFRS other than those dealing specifically with non-profit entities, and proposed that options currently existing in the IFRS be included in the Australian equivalents, with additional disclosure, only where they were considered important in the Australian reporting environment.

Reference List:

Amel-Zadeh, A., Barth, M. E. and Landsman, W. R. (2016). The Contribution of Bank Regulation and Fair Value Accounting to Procyclical Leverage.

Bénétrix, A. S., Lane, P. R. and Shambaugh, J. C. (2015). International currency exposures, valuation effects and the global financial crisis. Journal of International Economics, 96, S98-S109.

Bischof, J., Brüggemann, U. and Daske, H. (2014). Fair value reclassifications of financial assets during the financial crisis.

Blankespoor, E., Linsmeier, T. J., Petroni, K. R. and Shakespeare, C. (2013). Fair value accounting for financial instruments: Does it improve the association between bank leverage and credit risk? The Accounting Review, 88(4), 1143-1177.

Bowen, R. M. and Khan, U. (2014). Market reactions to policy deliberations on fair value accounting and impairment rules during the financial crisis of 2008-2009. Journal of Accounting and Public Policy, 33(3), 233-259.

Cavusgil, S. T., Knight, G., Riesenberger, J. R., Rammal, H. G. and Rose, E. L. (2014). International Business. Pearson Australia.

Claessens, S. and Van Horen, N. (2015). The impact of the global financial crisis on banking globalization. IMF Economic Review, 63(4), 868-918.

deJager, P. (2014). Fair value accounting, fragile bank balance sheets and crisis: A model. Accounting, Organizations and Society, 39(2), 97-116.

Ettredge, M. L., Xu, Y. and Yi, H. S. (2014). Fair value measurements and audit fees: evidence from the banking industry. Auditing: A Journal of Practice & Theory, 33(3), 33-58.

Goh, B. W., Li, D., Ng, J. and Yong, K. O. (2015). Market pricing of banks' fair value assets reported under SFAS 157 since the 2008 financial crisis. Journal of Accounting and Public Policy, 34(2), 129-145.

Haas, R. and Lelyveld, I. (2014). Multinational banks and the global financial crisis: Weathering the perfect storm? Journal of Money, Credit and Banking, 46(s1), 333-364.

Hull, J. C. and White, A. (2014). Valuing derivatives: Funding value adjustments and fair value.

Reddy, K. S., Nangia, V. K. and Agrawal, R. (2014). The 2007-2008 global financial crisis, and cross-border mergers and acquisitions: A 26-nation exploratory study. Global Journal of Emerging Market Economies, 6(3), 257-281.

Rey, H. (2015). Dilemma not trilemma: the global financial cycle and monetary policy independence (No. w21162). National Bureau of Economic Research.