Saturday, August 31, 2019

Net Neutrality Outline

Net Neutrality Presentation

1. What is Net Neutrality?
   a. Net Neutrality is best defined as a network design principle. The idea is that a maximally useful public information network aspires to treat all content, sites, and platforms equally. This allows the network to carry every form of information and support every kind of application. The principle suggests that information networks are often more valuable when they are less specialized – when they are a platform for multiple uses, present and future.
      i. Basically what the Internet is today: an Open Network.
      ii. The opposite of a Closed Network, where the provider determines content.
   b. Net Neutrality is a network design paradigm that argues for broadband network providers to be completely detached from what information is sent over their networks.
   c. What keeps the Internet open is Net Neutrality: the longstanding principle that preserves our right to communicate freely online. This is the definition of an open Internet.
   d. With Net Neutrality, the network's only job is to move data, not to choose which data to privilege with higher quality service.
      i. Think of another open network, like the electric grid.
      ii. An innovation-driving network.
2. Why should you care?
   a. Censorship
   b. Blocking / Discrimination
      i. All data delivered at the same speed regardless of content.
      ii. No preference given to a particular service over another (think Skype over FaceTime).
      iii. Net neutrality also means that carriers can't tack on an extra cost for heavy users; everyone can stream and download as much content as they like.
      iv. No penalty fees attached to visiting different categories of websites. Devices share and share alike; carriers treat a smartphone no differently than a desktop.
      v. A tiered Internet would also make it easier for content streams from corporate giants to rule the Web; without net neutrality, innovative startups like Craigslist and Google might never have seen enough traffic to get off the ground.
   c. Bandwidth Throttling (see the sketch after this outline)
      i. Bandwidth throttling is the intentional slowing of Internet service by an Internet service provider. It is a reactive measure employed in communication networks in an apparent attempt to regulate network traffic and minimize bandwidth congestion.
      ii. Verizon's policy, for example: "To help achieve this, if you use an extraordinary amount of data and fall within the top 5% of Verizon Wireless data users we may reduce your data throughput speeds periodically for the remainder of your then current and immediately following billing cycle to ensure high quality network performance for other users at locations and times of peak demand. Our proactive management of the Verizon Wireless network is designed to ensure that the remaining 95% of data customers aren't negatively affected by the inordinate data consumption of just a few users."
   d. Digital rights and freedoms
      i. Telecommunication companies are merely a means to an end. In other words, they are merely the gateway to the Internet; they don't own the Internet themselves.
   e. Privacy
      i. Wiretapping violations
3. Arguments Against Net Neutrality
   a. Enforcement
      i. Who is supposed to regulate the Internet?
      ii. It spans multiple countries.
   b. Government Regulation
      i. Too much control for the government.
      ii. Censorship (e.g. China).
   c. Network Optimization
      i. The greater good: preventing 5% of users from ruining network performance for the other 95%.
   d. Antipiracy
      i. Gives providers the ability to stop piracy.
      ii. Shutting down "rogue" websites providing pirated content.
   e. Special Services
      i. Certain services that are needed should perhaps have first run at the network / higher, faster speeds.
4. Conclusion
   a. Who owns the Internet?
      i. Telecommunication companies are merely a means to an end. In other words, they are merely the gateway to the Internet; they don't own the Internet themselves.
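The throttling point above can be made concrete with a small code sketch. The following is a minimal, illustrative token-bucket rate limiter written in Python; the class name, the byte-per-second figures and the consume method are assumptions chosen for the example, not any carrier's actual traffic-management code, but the mechanism (tokens refill at a fixed rate and a transfer is slowed once they run out) is the same basic idea behind the throttling described in the outline.

import time

class TokenBucket:
    # Tokens refill at `rate` bytes per second, up to `capacity` bytes.
    # A transfer is allowed only while enough tokens remain; otherwise it is throttled.
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def consume(self, nbytes):
        # Top up the bucket for the time elapsed since the last call.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True   # within the allowance, send at full speed
        return False      # throttled: caller must wait for tokens to refill

# Example: cap a heavy user at 128 KB/s with a 256 KB burst allowance.
bucket = TokenBucket(rate=128 * 1024, capacity=256 * 1024)
print(bucket.consume(200 * 1024))   # True  - fits inside the burst allowance
print(bucket.consume(200 * 1024))   # False - throttled until tokens refill

In a real network the same decision is made per flow inside the provider's traffic shaper rather than in application code, which is why throttling is invisible to the user except as a slower connection.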

Friday, August 30, 2019

What Set You from, Fool

After reading the article "What Set You From, Fool?" I must admit that I am confused. It was difficult to determine what the point of it was. The author expressed some clear points about the difficulties he faced growing up as a black man in Los Angeles, but the article seemed more of a story than a statement in the end. It is possible that, having grown up in New York City myself, the difference in culture is the reason for my confusion.

Overall, most of the piece is awkwardly written and hard to follow. I'm not sure if this was done intentionally. The author's technique of switching back and forth between Standard English, when he expressed himself through intellectual thoughts and words, and what seemed like forced "Ebonics" did not work for me. In my opinion, the article did not flow smoothly at all. I found myself having to reread and translate words to grasp the full meaning of sentences. Having to do this continuously lessened my interest in the reading.

However, I liked the connection the author tries to make between blacks having as tough a time being accepted into the black community as whites do. He mentions a few instances where an entire thought process was involved in situations that could have been disastrous for both races (the white boys greeting blacks using the word "nigga", and the author entering a store to buy St. Ides with a friend and encountering gang members).

It was interesting that the author was born a black man but, until approximately middle school age, had never experienced urban life. Apparently, before he moved to L.A., he was surrounded by people in Santa Monica who called him "nigger". He didn't know how to react, or whether to react, because the word had never been defined for him. When he arrived in the L.A. school system and heard the students refer to him, and to themselves, as "nigga", he associated the word with what the whites in Santa Monica had called him, and he identified himself, and the other kids there, by it. His mom had taught him that "nigga" was a bad word and that he should not be one. He finally had a reference group for the slurs and bullshit he had tolerated for nine years without knowing what it was, knowing only that he should not be one.

Experiencing life in L.A. had an obviously deep effect on him. He went from a happy-go-lucky kid to a hyper-vigilant state of mind. There seemed to have been a period where his identity was vague. He was uncomfortable cruising on the edge of social circles (hanging out with white and black friends) and thinking as an activist (visiting a friend whose parents were Afro-centric), until he read the autobiography of Malcolm X and seemed to finally develop his own identity.

The author despised games, the rituals that many kids endured among each other, whether on the courts or in the streets, to be a part of a set in order to survive. Whether the players are white or black (curiously, no mention of Latinos), the author seemed genuinely annoyed at the thought of playing any games at all. I feel the author adopted a "can't we all just get along" theme. Overall, this was just an OK piece, not very enjoyable and very confusing; if that was the aim of the writer, then he has done his job!

Thursday, August 29, 2019

Law & Ethics Essay

According to this model, whenever a lawyer is representing a client, it does not amount to the lawyer endorsing the economic, sociopolitical or moral outlook of the client (Zacharias, n.p.). This therefore means that a lawyer is distinct from the activities of a client even in appointed representation, and thus should not be party to activities of the client that amount either to fraud or misconduct. This way, the model requires the lawyer to act in a way that does not entangle him/her in misconduct or fraud committed by the client, and thus to act as a gatekeeper who prevents such occurrences within an organization (Wan, 502). Further, the ABA Model 2004 defines and limits the scope of representation of a client by the lawyer, through stipulating that a lawyer may not represent a client or assist the client in conduct that is deemed to be illegal or fraudulent, but that the lawyer should instead discuss and offer legal counsel to the client regarding such matters (Zacharias, n.p.).

... an integral part of corporate governance, through defining the corporate organization as a client, and through providing for the course of action that a lawyer should take in protecting the client against adversarial intentions and activities (Zacharias, n.p.). The Model provides that whenever an organization's lawyer knows that an employee, an officer or any other person associated with the organization is engaged in an action, or is intending to engage in an action, that is likely to harm the organization, the lawyer is duty-bound to act in the best interest of the organization to prevent the occurrence of the same (Wan, 512). According to the provisions of this model, unless the lawyer reasonably believes that it will not be in the best interest of the organization, the lawyer should refer the matter to a higher authority within the organization, and if it is warranted, to another higher authority outside the organization, that will act in the best interest of the organization (Zacharias, n.p.). This way, the model places the lawyer in a gatekeeping position, and requires that the lawyer should always act in the best interest that protects the client, or hinder the client from conducting fraudulent, criminal or immoral activities (Wan, 488).

Question 2: The incentives framework that rentier-state theory introduced in Kuwait and how it impacted the business environment

The rentier-state theory introduced a political autonomy framework of incentives in Kuwait, allowing the country to discharge its internal affairs without being overly influenced by external and foreign forces regarding the internal matters of governance and administration (Al-Zumai, 7). This incentive framework is an essential aspect for the establishment of a legal framework that works for

Wednesday, August 28, 2019

LD Research Paper

In reference to Goodman (20-22), a classroom is a physical environment with psychological connections. The classroom atmosphere should provide a comfortable, serene area for learning in both the physical furnishing and the psychological setting. Such a comfortable environment is fundamental for a 4th grade student who is young and eager to learn and explore. Goodman (23) outlines that the teacher, as the leader in the classroom, promotes community thinking among the students. Kids have one thing in common that bonds them together: "they are of the same age of less life experience". This makes them think, act, learn and behave alike when together; they like to learn in groups and clubs. Ballantine et al (29) indicate that, in an ideal classroom, the excellent teacher instills community thinking into the children's mindsets. The teacher's communication is particularly significant to the students, e.g. by saying "In our class, we work together" the students begin thinking in a broader perspective as a class and not as individuals. This is particularly essential for the junior 4th grade students who still want a feeling of connection to one another. The 4th grade children in Solomon Schechter schools act and behave like a community in and out of the classroom. This is because the Hebrew language instills core Jewish culture in their learning and community relationships. Gurock (26) argues that the teacher connects to the students in the classroom by showing interest in the students' lives and showing them how valuable they are as members of the class. Through empathizing with children or encouraging them, the students feel connected not only to the tutor but also to the classroom as a whole. Warshawaky (52) outlines that, as the leader in the classroom and an example that the students should follow, the teacher must act, communicate and behave in a respectful manner in the classroom. Young students often copy what their teacher does and believe what their teacher tells them or what the teacher says (Marcus 22). The students will mirror the behavior, actions and communication techniques of their teacher. Jewish culture (in all the Jewish movements: Orthodox, Conservative or Reform Jews) demands a child upbringing that is religious and that shares in the norms, beliefs and rituals of Judaism. A teacher of 4th grade Jewish students ought to instill respect into the students while in the classroom environment. According to Gurock (32), the Orthodox Jewish schools place more focus on religious studies of Torah and Hebrew culture. They often devote almost half of the school day to religious practices and instruction. The curriculum of most of the Orthodox Jewish schools (where all students are Jews and practice Jewish culture and prayers to the letter) promotes Judaism and religious studies. The primary responsibilities of the teachers in the Orthodox Jewish schools are to train the students in skills as well as proper religious, moral and social behavior (National Institute of Education 44). For example, the teacher should encourage students to embrace the use of phrases such as "thank you", "you are welcome", and "excuse me" among other respectful statements. Weitherman (41-44) explains that the classroom is a democratic place where everybody's view is respected. Fourth grade students have a mind that can

Tuesday, August 27, 2019

In the light of the global financial crisis, discuss how the remuneration of chief executives of banks should be determined

There are also macroeconomic factors behind the occurrence of the crisis, which include practices in accounting and lack of transparency, among others. It has also been observed that major risks or weaknesses related to the financial crisis lay in the fact that the crisis occurred due to certain pre-crisis conditions in the supervision and regulation of various activities. A few of the micro prudential regulations were poorly structured, which contributed to systematic risks. Most of the banks became solvent due to the Basel capital rules (World Bank, 2012). Moreover, the global financial crisis has underlined a critical agency problem which occurred due to the excessive rise in chief executives' remuneration, especially during the period of 2004 to 2007 in countries such as Australia and the United States (Ariff et al., 2012). With this consideration, the paper intends to discuss how the remuneration of chief executives of banks should be determined in order to ensure that the ill-effects of the crisis can be mitigated.

Discussion

The global financial crisis brought about a greater concern regarding the usage and structure of remuneration which was based on incentive systems. The executives of the banks were observed to be yielding their benefits on short-term visions, which became apparent in the value and stability of the organization on a long-term basis. It has been observed that banks with both large and small amounts of compensation had undertaken risks which resulted in significant losses during the crisis. It has further been observed that there were various variations in the structure of remuneration paid to the chief executives of banks in different countries. According to a study, in around six investment banks in the US the remuneration subscribed to the executives was nearly 2% of the total compensation on an annual basis, which was much lower than the remuneration provided to executives in the European countries, which ranged from 20 to 35%. Consequently, most of the European countries were in support of and adapted the framework of corporate governance relating to the concern of the remuneration aspect. The concept of surpassing payments to executives has also received greater attention. In this regard, it can be observed that the issues relating to the financial crisis are specified to the corporate governance relating to the separation of the position of Chairman from that of the CEO, as the requirements of both roles are quite similar in issues relating to remuneration (The World Bank Group, 2011). With regard to the remuneration of the chief executives of banks, the remuneration committee should take responsibility for ensuring that organizations select comprehensible policies of remuneration with respect to every employee in the organization. In response to the financial crisis which occurred in 2008, it can be observed that the payment of bonuses to the executives during and after the period of the global financial crisis played a pivotal role in the remuneration aspect. The executives were paid their remunerations according to their performances in most of the banks. The rescheduling or rearrangement of incentives can be paid to the employees for showing greater sustainable performances.

Monday, August 26, 2019

4 journal questions Essay

When children go to school they acquire civic development. The school also plays a role in the emotional development of the children, as well as their cognitive, vocational and social development. Recently, the role of the school has been changing as parents take their children to school at an early age. Parents take their children to school at an early age for the purpose of having them taken care of. Parents who go to work decide to take their children to school instead of employing house help to take care of them. School also plays a role in career development, as children are told to study hard in order to get a good job (Clark 71).

Economic, gender, culture and learning style factors help students to succeed in different ways while at school. Economics helps students to develop management skills; it enables students to understand how they can manage their funds and budget their income in future. Gender helps students to develop social skills: gender interaction makes students more social and teaches them how to interact with people regardless of their gender. Culture helps students to develop interaction skills and socialization; students learn different cultures and how to interact with people from different cultural backgrounds and ethnic groups. Through the use of different learning styles, students become critical thinkers (Clark 64).

Parents expect their children to learn new concepts and ideas in school. Parents expect the students to be in the hands of effective and committed staff who will not expose the children to drugs, harassment and bullying. Parents also expect their children to learn the democratic values of multiculturalism and also the society's culture. Parents also expect their children to develop social competencies in school. Parents are left home believing that children learn positive things and not negative things like alcoholism and

Sunday, August 25, 2019

Exhibition Design Coursework

At some points, the ants converge, revealing the exact natural behavior of ants while travelling or searching for food in their natural habitat. The selection of natural lighting, the use of white for the background and black for the ants, presents a lovable piece of work. The audience gets a mouthwatering appreciation of the long streak of the path taken by the ants, and the meandering path they take is excellently natural. The room equally presents a consistent lighting scenario. The lighting presents a scene where all the edges are darker compared to the inner sections of the wall. The selective lighting creates brighter sections, popping out the arts while neglecting the edges, and thus presents a sense of a bigger space in the room. Labeling of the arts is undertaken at selected points, with a few words elaborating on the source of the art. The labeling is undertaken at a common point. While some arts can appear useless, it is after reading the attached label that the audience appreciates them. All labeling is concentrated at certain selected points. The labels present some materials and arts sourced from Ivory Coast, and also proceed to highlight the original application of the communicated art message. Floors are either brown or dark

Saturday, August 24, 2019

Structure and Agency in Media and Culture Essay

The paper tells that a debate, ongoing for decades, persists in determining the relationship between structure and agency. There is a constant struggle to bridge the structure-agency void, and many approaches and theories have been presented in this regard. This paper suggests that Critical Realism Theory offers a solution by presenting a practical way to encounter the problem of structure-agency relations and contexts. Whether considering the voluntary or planned actions of subjects, or micro/macro analysis of a society or individual, the debate on the structure and agency relationship keeps recurring. This critical realist approach contends that the structure and agency relationship must be studied in order to better understand and explain society or social actions. This ought to be accomplished to achieve a stable state of society and also to accommodate positive social change encompassing individual innovation. Critical Theory is born with the assumption that the social world needs improvement and reform, as it is deeply flawed. This theory also refutes prediction and explanation as means to control the social world. The sole aim of this theory is to study the social world in order to change it for the better. It criticizes and seeks to change the imposing social order. Critical theory is political in nature as it challenges and confronts the way people are ruled. It is also critical of the organizations that exercise unleashed power to obtain their goals. The theory believes that the social world is the result of interaction between structure and agency.... In order to get close to the solutions of the agency-structure problem, we ought to consider the evolutionary cycles of behavior for each ontological position (Hay). The intertwined relationship between agency and structure presents many solutions in a coherent and systematic manner. In contrast to the assumption that agency and structure are clearly differentiated domains and that each action requires a pre-existing structure (Archer, 198), the critical realist theory asserts that the conditions and medium of agency conduct are necessary pre-requisites for the constitution of structures. Their existence is relationally dialectical and neither can exist in isolation from the other (Hay). We are not in substantial control of the social contexts in which we live; however, it definitely requires the exercise of agency to become someone and be labeled as such (Agency Textbook, 2). The agent's particular decisions and acts are influenced by the contexts in which the decision is made. Engaging in certain acts is a result of contextual factors, and our choices are a by-product of the uncontrollable contexts, as we learn to want things in relation to our contextual surroundings. Rather than coming from an inborn source, our dreams, aspirations, and agency are intertwined with contextual sources (Agency Textbook, 2). The dynamics of power determine the enabling and constraint of agency, as one has to demonstrate a certain amount of power to exercise agency. Power is continually negotiated and shifts frequently, with a multiple, decentralized, and diffused structure (Agency Textbook, 3). The complex structure of power influences agency significantly because how power is exercised on us and how we demonstrate power,

History of Aviation Essay

Joseph and Jacques Montgolfier designed a hot air balloon with the help of their father's paper factory. This balloon flew at a height of 6500 feet, the first thing to fly at such a great height. The astonishing thing about this flight was not its height but the animals it carried. There were a goose, a rooster and a sheep in a basket carried by the balloon. These were probably the first living things to fly at such a height, proving the fact that flying is possible for a living thing. Later in the year 1783, the two brothers were finally successful in convincing two men, Pilatre de Rozier and Marquis d'Arlandes, to be a part of their experiment. This time their balloon flew at a height of 300 feet. The balloon traveled 7.5 miles in less than half an hour. The flight is considered to be the first manned balloon flight in history. It proved the safety of flying against all the odds and speculations. The flight also showed the less time-consuming side of traveling by air. The first balloon flight of 1783 is still considered the most important breakthrough in aviation; even so, the desire of man to conquer the skies was not yet fulfilled. The ballooning industry was at its top, and no way of flying other than by air balloon was considered possible. Attempts were made to design wings that could enable a man to fly, but all ended up in failure. Then in the year 1853, George Cayley built a triplane glider that carried a man over 290 feet across a valley. This flight is considered the first flight of a man in an aircraft.

The Wright Brothers

The research and attempts to build a machine that could carry a man were at their peak in the early 20th century. The Wright brothers were among those who had dedicated themselves to this dream of the world. They taught themselves the methodologies of flying and were always trying to develop a machine that could carry a man in the air. Their work has been considered the most important in the history of flying. After many successful glider test flights in the years 1901-02, the Flyer was ready to fly in December 1903. The flight of the Flyer acted as a major stimulant to the aviation industry, and the industry was at its peak in not more than a decade, when fighter planes were used in the First World War. The Wright brothers clearly made the first flying machine and caused the aviation industry to boom. No matter what, they are still considered the pioneers of aviation.

Modern Aviation

By the middle of the 20th century, there was remarkable progress in the aviation industry. International and intercontinental air travelling started. Fighter planes were being manufactured on a large scale. But the propeller-driven planes were still too slow for a man who always wanted more. The quest for the best continued, and jet engines were developed, which have almost completely replaced propellers in modern aviation, making aircraft faster than sound. The development of safety guidelines has made them much safer than they were earlier. Travelling by air is much more pleasant now than it

Friday, August 23, 2019

On Agency Essay

This is because any decision made will be on the basis that someone is watching. Agency is determined or limited by various factors of influence referred to as structure. Examples of structure include customs, religion, ethnicity, gender and social class.

The Panopticon is a building with a central tower. At the periphery of the Panopticon is an annular building; at the center is the tower. The tower is fitted with wide windows which open on to the inner side of the ring. The annular building is divided into cells which extend the whole width of the building. The cells have two windows. One window is on the inner side, facing the central tower, while the other is at the far end of the cell. It is strategically placed there to allow light to enter the cell, traversing its whole length. The Panopticon is a prison. The prisoners in the Panopticon are watched by the inspector from the central tower. However, they are not able to see inside the tower. This way, they never know when they are being watched. The prisoners therefore have to behave at all times, because they feel that they are being watched even if no one is watching, because they cannot tell. The only way that the prisoners counter the watch of the inspector is by turning their backs on him. They face the outer window, leaving the inspector to watch their backs. When the inspector notices this, he has to go and issue a warning to the prisoners against hiding from his watch.

This situation is symbolic of how agency is inhibited by society. Society has been structured in a manner similar to the Panopticon. Just like the prisoners, someone somewhere is always watching us. The problem is, you can never know who it is and where exactly they are watching you from. The two windows are symbolic of the options of power and rules, and agency. The inner opening represents the rules which have been laid down to dictate the manner in which things are to be done. The outer window symbolizes the possibility of independence: agency. This can be verified by the manner in which the prisoners turn their backs on the inspector and face the outer window. Similarly, we as individuals turn our backs on what has been decided for us and seek comfort by looking for the possibility of independence. It takes little time, however, for society to come running to us, in many forms, reminding us that our actions and decisions should be in line with its expectations. This is one perspective on why agency cannot be and has not been fully realized in society.

As pointed out earlier, agency is the extent to which individuals make their own free choices; alternatively, it is the ability of an individual to act on their own will. However, this ability will be limited by more than just being watched by society. Personal experiences, and individual and societal perceptions, with respect to the circumstances that an individual is in and the environment that they have been born into or are part of, form a cognitive belief structure. These beliefs will affect one's ability to act on one's own will, as they often cause conflicts between the parties involved. For example, a child who wants to be a musician and has been born into a family of scholars such as engineers will have a hard time convincing their parents. This is because the cognitive belief that the family has formed is that success can only be achieved when one finds a career in books rather than in

Thursday, August 22, 2019

National Minimum Drinking Age Act Essay

The universal question: should the age for drinking be lowered? In my personal opinion, I believe that the drinking age should be lowered to eighteen from twenty-one for various reasons. The legal drinking age is currently twenty-one, but the illegal drinking age is everything under. "Why?" is the main question asked. Some people believe that twenty-one is too high to be the minimum age to be able to legally drink, and others feel that it is the perfect age. This topic is huge and has been debated for years. The United States drinking age has gone up and down, and in 1984 it went up to twenty-one. Many events took place before the drinking age went up. It all began when the United States slowly tried to ban alcohol in every state for every person, no matter what your age was. They did succeed. This is called Prohibition. Prohibition started in 1919 and lasted until 1933. When Prohibition started, the Constitution gained the 18th Amendment. The 18th Amendment "prohibited the manufacture, sale, transport, import, or export of alcoholic beverages". However, this amendment was removed in 1933 by the 21st Amendment, which made beer and other alcohol legal. Once Prohibition ended, each state created its own set of drinking laws. Some were twenty-one; others were eighteen, and then some in between. This lasted for a few years, but then the Minimum Drinking Age Act of 1984 came along. This act forced all states to change their drinking age to twenty-one or lose part of their Federal-aid highway funds. It also said the states should pass laws that helped fight drunk driving. So, the drinking ages were set to twenty-one, but this can change. Prohibition and safety issues, like underground drinking, are all factors that must be considered in making this decision. Because of these factors, the national drinking age of the United States should be lowered from twenty-one to eighteen.

Think for a moment about how many young adults or teenagers illegally drink underage. It is a fact that more than three in four teenagers consume alcohol when they are high school seniors. A big issue for underage drinking is where the underage drinkers actually do the drinking. It is known that underage drinking goes on, but where and when they do it is something to look for. Because no one wants to get in trouble for drinking, those who are underage start taking part in "underground drinking". Underground drinking is when people under the drinking age drink alcohol without the knowledge of anyone. People will bring alcohol anywhere, like a party, and drink it without their parents knowing. The police search for underage drinking, but even they know that once the underage drinkers are caught, they will keep doing it. The difference is that this time, they will be smarter about it and hide it better than the time before. "We'd find a party where we know there's underage drinking. We would seal the house. Surround the house with officers... We wrote hundreds and hundreds of tickets those years. All we did is we pushed it further underground." (Mark Beckner, the chief of police in Boulder, Colo.) Drinking without anyone knowing can be very dangerous: someone could get seriously sick from alcohol poisoning, or go completely out of control and hurt themselves. If people under the drinking age hide when they drink, they will not want to tell anyone, like an adult, because they do not want to get in trouble.
So, if no one wants to get in trouble, then no one will tell anyone if someone gets seriously sick. There are cases in which people have died because the friends they were drinking with were afraid of the police. As a result, it took a couple of hours for anyone to say anything, and by the time they did, it was too late to really do anything. "...a college freshman, Gordie Bailey, who died of alcohol poisoning during a fraternity celebration. The fraternity members left him on a couch for 9 hours before someone called 911. He died because, according to Gordie's parents, the other college kids were too scared to call for help because the drinking was underage." A lesson can be learned from this event and all the others just like it. If people under twenty-one were more supervised, then adults could stop those who are drinking from getting hurt or be there to make sure they get help. It is very difficult to completely stop underage drinking, but we could work against it. If the drinking age was lowered, then there could be more supervision. Young adults at the age of eighteen generally go to college, where there is a lot of alcohol usage. Perhaps if the drinking age was eighteen, those underage drinkers would not hide, could be monitored, and would therefore stay much safer.

Prohibition took place in the 1850s for certain states and the 1920s for the entire country. Prohibition was the time when the whole country, every age, was banned from drinking. Prohibition was supposed to lower crime and corruption, reduce the tax burden created by prisons and poorhouses, solve social problems, and make health and hygiene in America better. At first, alcohol consumption lowered, but as time went by, it increased once more. Of course, everything they were trying to fix or lower went higher and out of control. "Alcohol became more dangerous to consume; crime increased and became organized; the court and prison systems were stretched to the breaking point; and corruption of public officials was rampant... Prohibition removed a significant source of tax revenue and greatly increased government spending." (Mark Thornton, O. P. Alford III Assistant Professor of Economics at Auburn University.) People also did drugs because of the lack of alcohol. Just think: if that hadn't taken place, then these dangerous things wouldn't have happened. However, we did learn from it. For instance, we learned that banning it didn't work. Alcohol consumption grew during Prohibition to "about 60-70 percent of its pre-Prohibition level," then slowly dropped to 70 percent, but after Prohibition ended, alcohol consumption went from 70 percent to 40 percent. The reason it went up is because people were protesting. Basically, Prohibition didn't completely stop the use of alcohol; it just made things worse. Since alcohol is banned for those under twenty-one, people under twenty-one are drinking more, so moving the drinking age down to eighteen would definitely work.

As always, there are those who disagree with lowering the drinking age back to eighteen. They feel that twenty-one is the proper age to start to take part in drinking. People feel that those under the age of twenty-one can't handle alcohol and tend not to know when to stop. One reason is that they become drunk more quickly than adults, and adults don't become drunk as often. There are also facts that state how many lives have been saved and how many fewer accidents there have been.
It is true that the number of fatal car accidents has decreased by thirteen percent for those between the ages of eighteen and twenty. That decrease saved about 21,887 people between the years 1975-2002. Others believe that the Minimum Legal Drinking Age (MLDA) can hurt a student's academic career and also cause him or her to become an alcoholic more easily. There are also some people who are afraid that young drinkers will become more vulnerable to certain things. For example, young adults might be more likely to become involved in drug abuse, depression, unplanned or unprotected sex, violence, and other social ills if they drink. Also, people are worried about driving, because Americans drive more than Europeans, who have a drinking age of sixteen, seventeen, or eighteen. On the other side, there are those people who are for lowering the drinking age to eighteen. One reason is that people won't get much of a thrill out of drinking if they are used to being able to do it. Eventually, drinking alcohol will start to feel normal and will not seem as important anymore. Besides, it is all about how responsible the person is. Anyone over or under the drinking age can drink too much and end up hurting themselves. People must also consider the fact that underage drinking does go on, and it goes on unsupervised. If the drinking age is lowered, then those who aren't supervised can be.

Prohibition and safety issues, like underground drinking, are truly good reasons to consider lowering the drinking age to eighteen. This topic has truly been discussed for years. People are either for lowering it, against it, or just do not know. But there are surely plenty of facts for the pro and con sides. To make the right decision, one must look at the history. As learned from history, banning alcohol only made things worse. Then, if one looks at how underground drinking can kill people when others around are too afraid to call 911 and risk getting in trouble, they should realize that if people were more supervised and didn't have to hide, then those unfortunate events wouldn't happen. The choice is simple. Lowering the drinking age to eighteen can be safer.

Citations

G., Harold, Wyoming, and MI. "Drinking Age Should Be Lowered" | Teen Essay on Drugs | Teen Ink. Teen Ink | A teen literary magazine and website. N.p., n.d. Web. 27 Oct. 2011. http://teenink.com/opinion/drugs_alcohol_smoking/article/48104/Drinking-Age-Should-Be-Lowered/.

Engs, Ruth C. "Why the drinking age should be lowered: An opinion based upon research." N.p., n.d. Web. 27 Oct. 2011. http://www.indiana.edu/~engs/articles/cqoped.html.

National Youth Rights Association. "Legislative Analysis of the National Minimum Drinking Age Act." N.p., n.d. Web. 27 Oct. 2011. http://www.youthrights.org/research/library/legislative-analysis-of-the-national-minimum-drinking-age-act/.

"Should the drinking age be lowered from 21 to a younger age?" Drinking Age ProCon.org. N.p., n.d. Web. 27

Wednesday, August 21, 2019

Learning From Interprofessional Collaboration In Practice Social Work Essay

Learning From Interprofessional Collaboration In Practice Social Work Essay Interprofessional working (IPW) in health and social care is essential for effective service provision and is a key driver of modern healthcare. In a changing and more pressured working environment, health and social care professionals need to be partners in delivering services, embrace collective accountability, be flexible and adaptable, and have shared goals in integrating care around service users (Fletcher 2010a, Pollard et al, 2010). According to Tope and Thomas (2007), analysis of policies from as early as 1920 in health and social care has recommended professional collaboration, improved communication and teamwork to improve outcomes for service users. There have been similar recommendations in government policy since this time (Tope and Thomas, 2007).

High profile investigations since 2000 highlight deficiencies in IPW across health and social care. Inadequate communication between professionals in the cases of the Bristol Royal Infirmary Inquiry (HM Government 2001), the Victoria Climbie Inquiry Report (Laming, 2003), and The Protection of Children in England: A Progress Report (Laming, 2009) caused nationwide concern beyond the professions and services involved, prompting a frenzy of media comment and public debate. Core recommendations are for professionals to improve communication between agencies, to have an ethos based around teams and working together, and to improve professional accountability. The investigations provide evidence that collaborative working can only improve outcomes, and underpin the real need to find out how best to develop a workforce that can work together effectively (Leathard, 1994, Anderson et al, 2006 and Weinstein et al, 2003). Policy also recommends putting service users at the forefront of care and coordinating services across the authorities, voluntary and private sector organisations (DoH, 1997, DoH, 2000a, DoH, 2000b, DoH, 2001a, DoH, 2001b, DoH, 2001c, DoH, 2002a, DoH, 2006, DfES, 2006, HM Government 2004, HM Government 2007).

Literature suggests that IPW improvements begin in interprofessional education (IPE) (DoH 2000b, DoH 2002b, Fletcher 2010a, Freeth et al 2002, Higgs and Edwards 1999, HM Government 2007, Reynolds 2005). IPE has been defined as learning which occurs when two or more professions learn from and about each other to improve collaboration and quality of care (CAIPE, 1997). The need to produce practitioners who are adaptable, flexible and collaborative team workers has focused attention on IPE, which aims to reduce prejudices between professional groups by bringing them together to learn with and from each other, to enhance understanding of other professional roles and practice contexts, and to develop the skills needed for effective teamwork (Barr et al. 2005, Hammick et al. 2009, Parsell et al, 1998).

At our interprofessional conference, we worked in teams of mixed student professionals. We introduced ourselves, our disciplines and our course structures, elected a chair and a scribe and set about completing our tasks. Cooper et al (2001) identify one of the benefits of IPE as understanding other professional roles and team working. In their study, they found evidence to suggest that early learning experiences were most beneficial in developing healthy attitudes towards IPW (Cooper et al, 2001). None of the members of my group knew what a social worker did, and I explained my training and professional role to them.
McPherson et al (2001) describe how a lack of knowledge of the capabilities and contributions of other professions can be a barrier to IPW. In our discussions, we talked about our preconceived ideas. Social workers were described as hippies and doctors described as arrogant. Leaviss (2000) describes IPE as being effective in combating negative stereotypes before these develop and become ingrained. Atwal (2002) suggested that a lack of understanding of different professionals' roles, as well as a lack of awareness of the different pressures faced by different team members, could make communication and decision making problematic. The conference provided an opportunity for us to interact with each other and was conducive to making positive changes in intergroup stereotypes (Barnes et al, 2000, Carpenter et al, 2003). Barr et al (1999) describe how IPE can change attitudes and counter negative stereotyping. The role play exercise gave us an understanding of the differing pressures faced by each professional.

Our team worked well together, taking turns to let each other speak, listening, challenging appropriately when needed and creating our sentences by the end of the conference. I feel that our friendly and motivated characters made communication, and thus teamwork, easy in the group. Weber and Karman (1991) found that the ability to blend different professional viewpoints in a team is a key skill for effective IPW. Pettigrew (1998) emphasises that the ability to make friends in a group of other professionals can reduce prejudice and encourage cooperation in future IPW. We agreed that teamwork was essential to IPW and can assist in the development and promotion of interprofessional communication (Opie, 1997). We felt that IPE allowed us to teach each other while encouraging reflection on our own roles (Parsell et al, 1999). We were very clear on how we worked as a group and effective at meeting our tasks, and I feel we reached Tuckman's performing stage (Tuckman, 1965). Bailey (2004) describes how team members who are unable to work together to share knowledge will be ineffective in practice, although there is an argument that this is more likely to happen in teams where the concept of IPW is new and team members lack the skills to understand the benefits of IPW or adopt new ways of working (Kenny, 2002). Being in our second year of study and having all had experience of working in an interprofessional setting, we were very motivated at the conference and in achieving our objectives. It is noted that personal commitment is important for effective IPW (Pirrie et al, 1998).

We acknowledged the issue of power in our professional social hierarchies. In our role play exercise, we found that we all looked to the doctors first for management of the service user's treatment, and they commanded the most respect. We agreed that medicine was the most established of all the healthcare professions (Page and Meerabeau, 2004, Hafferty and Light, 1995) and that other professions have faced challenges in establishing status (Saks, 2000). I felt this was especially relevant to social workers, who have recently extended their professional training to degree status to bring it in line with other professions. Reynolds (2005) suggested that hierarchies within teams could contribute to communication difficulties; for example, where input from some of the team members was not given equal value. Leathard (1994) describes how rivalry between professional groups, especially in terms of perceived seniority, is a barrier to IPW.
The Shipman Report (2005) noted the importance of ensuring all team members are valued, recommending less hierarchy in practice and more equality among staff, regardless of their position. We talked about valuing and respecting each other's professional opinion. Irvine et al (2002) discuss how IPW can break the monopoly of any single profession in providing sole expert care, promoting shared responsibility and accountability. We discussed understanding, supporting and respecting every individual in the workplace to promote diversity and fairness.

We also concluded that institutions and differing professional pressures could be a barrier to IPW. Having previously worked in an interprofessional HIV team for Swansea NHS Trust, I found that team members were given priorities by their managers which impacted on their availability to attend team meetings. Wilson and Pirrie (2000) suggest that a barrier to IPW can be a lack of support from managers and the workplace structure. Drinka et al (1996) describe how, during times of work related stress, individuals can withdraw from IPW. We acknowledged that institutional support would be essential to effective IPW. Dalrymple and Burke (2006) discuss how different professionals have different priorities, values, pressures and constraints, obligations and expectations, which can lead to tension and mistrust and go on to cause discriminatory and oppressive practice in IPW.

In light of the above learning, we all felt that IPW had occurred naturally in our first year placements, where it was considered the norm in our working environments and where the concept was understood and encouraged. The conference had highlighted some of the barriers to IPW and we will take this knowledge into our practice settings.

Word Count 1348

Section 2

How would you take what you have learnt about IP working into practice?

The conference highlighted some key issues about IPW that I will take into practice. One of the most significant developments in health and social care policy in recent years has been the move away from the professional being the expert with the power and knowledge, towards patient centred care with professionals applying their knowledge to the needs and rights of the service user (Barrett et al, 2005). The social model of care identifies issues of power in the traditional medical model approach to care and looks at how dependency on the professional can be a side effect of the helping relationship and be disempowering for service users (Shakespeare, 2000). Informing, consulting with and incorporating the views of service users and carers is critical to effective interagency interprofessional practice. There is a drive in recent policy for service users and carers to be engaged in service provision, and the recent white paper Liberating the NHS (HM Government, 2010a) calls for more autonomy for service users, making them more accountable through choice and able to access services that are transparent, fair and promote power and control over decisions made. "Nothing about me without me" (HM Government, 2010a, page 13) is a commitment that will shift power from professionals to service users, a huge change in current culture. The service user is the central vision, a team member involved in decisions made about their care, transforming the NHS to deliver better joined up services, partnerships and productivity (HM Government, 2010a). My learning has reiterated the importance of service user involvement and I have reflected on ways to implement this in practice.
In previous employment, I helped to run a patient and public involvement group at the HIV service, Swansea NHS Trust. This enabled service users to give feedback and make suggestions for improvements (e.g. having evening nurse-led clinics, introducing the home delivery of medication). In my experience, service users were actively involved in shaping services in their communities and it was very successful. In my practice, I will continue to value the service user as part of the interprofessional team as well as encourage this practice in my places of employment. In my placement at a supported housing charity for young mothers, ways to achieve service user involvement were being introduced. One of my roles was to carry out a questionnaire with the aim of getting feedback and empowering the service users. Reflecting on this, I can now see how valuable this exercise was, and I will continue to see the value in gaining service user feedback and always aim to do this in practice. I discussed this with my group and this added to our learning.

Informal unpaid carers and the voluntary and private sector are also essential team players, and the value of their contribution is increasingly being acknowledged as part of the success of an interprofessional workforce (Tope and Thomas 2007). In my role within the HIV service, Swansea NHS Trust, I coordinated an interprofessional team and ran a support group for African women living with or affected by HIV in conjunction with social services and the Terrence Higgins Trust. I understand the value that third sector organisations can have for service users, often filling gaps in statutory services. The Terrence Higgins Trust was able to provide funding for activities as well as support sessions, training opportunities and counselling. The Social Care Institute for Excellence (2010), in a response to the white paper Liberating the NHS (HM Government, 2010a), discusses how around 90% of direct social care services are delivered in the private and voluntary sector. The Joseph Rowntree Foundation, a social policy research and development charity, discusses how the state is withdrawing from many welfare functions and increasingly relying on the voluntary sector to fill gaps in care (Joseph Rowntree Foundation, 1996). The recent strategy document, Building a Stronger Civil Society (HM Government, 2010b), discusses how integration with the voluntary sector will be essential to meet the challenges faced by health and social care provision. The report focuses on our society being able to access wider sources of support and encourages better public sector partnerships, shifting power from elites to local communities. The government is also keen to support and strengthen the sector and promote citizen and community action (HM Government, 2010b). My learning has made me aware that future teams will include professionals across all sectors, and communication with these sectors will be essential to our professional roles.

Working with the voluntary and private sector as well as statutory services will require skills to acknowledge different agencies' focus on care. Petrie (1976) acknowledges that each profession holds its own focus on care and that it can be challenging to communicate. Laming (2003) called for the training bodies for people working in medicine, nursing, housing, schools, the police etc. to demonstrate effective joint working in their training. I feel that it would be useful in the future to incorporate more of these professional groups in the IPE conference.
Fletcher (2010a) discussed how he would hope this could be achieved in future IPW programmes at UWE. I feel that the addition of these extra professions would really add to the learning. Fletcher (2010b) discusses the central dilemma in ethics between health and social care professionals, who have a different focus on the best angle for patient care. These value differences can cause conflict (Mariano, 1999). I feel that, in practice, it will be important to take time to find out what each agency and professional does, and I will always remember that in IPW we have a common goal: providing a good service for the service user. Leathard (2003) identifies that what people have in common is more important than difference, as professionals acknowledge the value of sharing knowledge and expertise.

In my practice, I will uphold professional responsibility and personal conduct to facilitate respect in IPW. Carr (1999) explained that the professional has to be someone who possesses, in addition to theoretical or technical expertise, a range of distinctly moral attitudes and values designed to elevate the interests and needs of the service user above self-interest. According to Davis and Elliston (1986), each professional field has social responsibilities within it and no one can be professional unless he or she obtains a social sensibility. Therefore, each profession must seek its own form of social good because, unless there is social sensibility, professionals cannot perform their social roles (Davis and Elliston, 1986). The conference highlighted the benefits of professional codes of ethics: setting standards for our professional work, providing guidance as to our responsibilities and obligations, and conferring the status and legitimacy of professionals (Bibby, 1998). I feel that it is important to always uphold our values and ethics to create respect in our communities, and with this comes respecting each other's roles. I believe that shared values will underpin this in practice. Dalrymple and Burke (2006) discuss that we have a shared concern that the work we do makes society fairer in some small way and we have a commitment to social justice.

I feel that IPE has facilitated respect and mutual understanding across our professions. It has made me aware of the importance of professional development, of how we are part of the wider team of health and social care services, and of how our common values can underpin effective partnership working. It reinforces that collaboration is required, as no one profession alone can meet all of a service user's needs (Irvine et al. 2002). My social work degree is a combination of theory and practical learning. It is through combining this learning and reflecting on my experiences throughout the course that I will build my knowledge base, relate theory to practice, and test my ideas and thinking while identifying areas that need further research, becoming a reflective practitioner (Rolfe and Gardner, 2006; Schon, 1983). As a group we discussed that we all value continued professional development, reflection and awareness, and personal responsibility for our learning (Bankert and Kozel, 2005). It is this that we agreed we would carry forward as we start our working careers.

Word count 1352

Section 3

References
Atwal, A. (2002) A world apart: how occupational therapists, nurses and care managers perceive each other in acute care. British Journal of Occupational Therapy, 65(10), 446-452.
Bailey, D. (2004) The Contribution of Work-based Supervision to Interprofessional Learning on a Masters Programme in Community Mental Health. Active Learning in Higher Education, 5(3), 263-278.
Bankert, E. G. and Kozel, V. V. (2005) Transforming pedagogy in nursing education: a caring learning environment for adult students. Nursing Education Perspectives, 26(4), 227-229.
Barnes, D., Carpenter, J. and Dickinson, C. (2000) Interprofessional education for community mental health: attitudes to community care and professional stereotypes. Social Work Education, 565-583.
Barr, H., Hammick, M., Koppel, I. and Reeves, S. (1999) Evaluating interprofessional education: two systematic reviews for health and social care. British Educational Research Journal, 25(4), 533-544.
Barr, H., Koppel, I., Reeves, S., Hammick, M. and Freeth, D. (2005) Effective Interprofessional Education: Argument, Assumption and Evidence. Oxford: Blackwell Publishing.
Barrett, G., Sellman, D. and Thomas, J. (2005) Interprofessional Working in Health and Social Care. London: Palgrave.
CAIPE (Centre for the Advancement of Interprofessional Education) (1997) Interprofessional Education: a definition. CAIPE Bulletin no. 13.
Carpenter, J., Barnes, D. and Dickinson, C. (2003) The making of a modern careforce. External evaluation of the Birmingham University programme in community mental health. Durham: Centre for Applied Social Studies. Available at http://www.dur.ac.uk/resources/sass/research/ipe.pdf (accessed 24/10/10).
Carr, D. (1999) Professional education and professional ethics. Journal of Applied Philosophy, 16(1), 33-46.
Cooper, H., Carlisle, C., Gibbs, T. and Watkins, C. (2001) Developing an evidence base for interdisciplinary learning: a systematic review. Journal of Advanced Nursing, 35(2), 228-237.
Dalrymple, J. and Burke, B. (2006) Anti-Oppressive Practice: Social Care and the Law. Berkshire: Open University Press.
Davis, M. and Elliston, F. (Eds.) (1986) Ethics and the Legal Profession. New York: Prometheus Books.
DfES (Department for Education and Skills) (2006) The Lead Professional: Manager's Guide. Integrated working to improve outcomes for children and young people. Nottingham.
DoH (Department of Health) (1997) The New NHS: Modern, Dependable. London: HMSO.
DoH (Department of Health) (2000a) A Health Service of All the Talents: Developing the NHS Workforce. London.
DoH (Department of Health) (2000b) The NHS Plan: A Plan for Investment, A Plan for Reform. London.
DoH (Department of Health) (2001a) Working Together, Learning Together: a Framework for Lifelong Learning for the NHS. London.
DoH (Department of Health) (2001b) Valuing People: A new strategy for learning disability in the 21st century. Norwich: Stationery Office.
DoH (Department of Health) (2001c) The National Service Framework for Older People. Norwich: Stationery Office.
DoH (Department of Health) (2002a) Shifting the balance of power: securing delivery. London.
DoH (Department of Health) (2002b) Chronic disease management and self care national service frameworks. A practical aid to implementation in primary care. London.
DoH (Department of Health) (2006) Our health, our care, our say: A new direction for community services. London.
Drinka, T. J. K., Miller, T. F. and Goodman, B. M. (1996) Characterizing motivational styles of professionals who work on interdisciplinary healthcare teams. Journal of Interprofessional Care, 10(1), 51-62.
Fletcher, I. (2010a) Interprofessional Education: Origins, rationale and outcomes. UWE Bristol, IPE Level 2 Conference.
Fletcher, I. (2010b) Ethics and Interprofessional Education. UWE Bristol, IPE Level 2 Conference.
Freeth, D., Hammick, M., Koppel, I., Reeves, S. and Barr, H. (2002) A critical review of evaluations of interprofessional education. London: Higher Education Academy.
Hafferty, F. and Light, D. (1995) Professional dynamics and the changing nature of medical work. Journal of Health and Social Behavior, 35, Extra Issue: forty years of medical sociology: the state of the art and directions for the future, 132-153.
Hammick, M., Freeth, D., Goodsman, D. and Copperman, J. (2009) Being Interprofessional. UK: Polity Press.
Higgs, J. and Edwards, H. (1999) Educating beginning practitioners: challenges for health professional education. Oxford: Butterworth-Heinemann.
HM Government (2001) Learning from Bristol: the report of the public inquiry into children's heart surgery at the Bristol Royal Infirmary 1984-1995. London: HMSO. http://www.bristol-inquiry.org.uk/final_report/report/index.htm (accessed 06/10/10).
HM Government (2004) Every Child Matters: Change for Children 2004. London: HMSO. http://www.opsi.gov.uk/Acts/acts2004/ukpga_20040031_en_1 (accessed 05/10/10).
HM Government (2007) Creating an Interprofessional Workforce: An Education and Training Framework for Health and Social Care in England. London: HMSO. http://www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/documents/digitalasset/dh_078442.pdf (accessed 20/10/10).
HM Government (2010a) Liberating the NHS. Crown Copyright. http://www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/@dh/@en/documents/digitalasset/dh_117705.pdf (accessed 07/10/10).
HM Government (2010b) Building a stronger civil society: A strategy for voluntary and community groups, charities and social enterprises. Crown Copyright. http://www.cabinetoffice.gov.uk/media/426261/building-stronger-civil-society.pdf (accessed 15/10/10).
Irvine, R., Kerridge, I., McPhee, J. and Freeman (2002) Interprofessionalism and ethics: consensus or clash of cultures? Journal of Interprofessional Care, 16(3), 199-210.
Kenny, G. (2002) Inter-professional working: opportunities and challenges. Nursing Standard, 17(6), 33-35.
Laming, Lord (2003) The Victoria Climbie Inquiry: a report on the inquiry by Lord Laming. London: HMSO. http://www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_4008654 (accessed 20/10/10).
Laming, Lord (2009) The Protection of Children in England: A Progress Report. Norwich: HMSO.
Leathard, A. (1994) Going Inter-Professional: Working Together for Health and Welfare. London and New York: Routledge.
Leaviss, J. (2000) Exploring the perceived effect of an undergraduate multiprofessional educational intervention. Medical Education, 34(6), 483-486.
Mariano, C. (1999) The case for interdisciplinary collaboration. Nursing Outlook, 37(6), 285-288.
McPherson, K., Headrick, L. and Moss, F. (2001) Working and learning together: good quality care depends on it, but how can we achieve it? Quality in Health Care, 10, Supplement II, 46-53.
Opie, A. (1997) Thinking teams, thinking clients: Issues of discourse and representation in the work of health care teams. Sociology of Health and Illness, 19, 259-280.
Page, S. and Meerabeau, L. (2004) Hierarchies of evidence and hierarchies of education: reflections on a multiprofessional education initiative. Learning in Health and Social Care, 3(3), 118-218.
Parsell, G., Spalding, R. and Bligh, J. (1998) Shared goals, shared learning: Evaluation of a multiprofessional course for undergraduate students. Medical Education, 32, 304-311.
Petrie, H. G. (1976) Do you see what I see? The epistemology of interdisciplinary inquiry. Educational Researcher, February, 9-15.
Pettigrew, T. (1998) Intergroup contact theory. Annual Review of Psychology, 49, 65-85.
Pirrie, A., Wilson, V., Elsegood, J., Hall, J., Hamilton, S., Harden, R., Ledd, D. and Stead, J. (1998) Evaluating multidisciplinary education in health care. Edinburgh: SCRE.
Pollard, K. C., Thomas, J. and Miers, M. (eds) (2010) Understanding Interprofessional Working in Health and Social Care: Theory and Practice. Basingstoke: Palgrave Macmillan.
Reynolds, F. (2005) Communication and clinical effectiveness in rehabilitation. Edinburgh: Elsevier Butterworth-Heinemann.
Rolfe, G. and Gardner, L. (2006) Do not ask who I am: confession, emancipation and (self-)management through reflection. Journal of Nursing Management, 14, 593-600.
Saks, M. (2000) Professionalism and Health Care. In C. Davies, L. Findlay and A. Bullman (Eds.), Changing Practice in Health and Social Care. London: Sage.
SCIE (Social Care Institute for Excellence) (2010) Response to Liberating the NHS White Paper and associated consultation papers. http://www.scie.org.uk/news/nhswhitepaper.asp (accessed 20/10/10).
Schön, D. (1983) The Reflective Practitioner. New York: Basic Books.
Shakespeare, T. (2000) Help. Birmingham: Venture Press.
The Joseph Rowntree Foundation (1996) The future of the voluntary sector. Social Policy Summary. http://www.jrf.org.uk/sites/files/jrf/sp9.pdf (accessed 19/10/10).
The Shipman Inquiry (2005) Fifth report, safeguarding patients: lessons from the past, proposals for the future. London: HMSO.
Tope, R. and Thomas, E. (2007) Health and Social Care Policy and the Interprofessional Agenda. A supplement to Creating an Interprofessional Workforce: an education and training framework for health and social care. http://www.caipe.org.uk/resources/creating-an-interprofessional-workforce-framework/ (accessed 25/10/2010).
Tuckman, B. (1965) Developmental sequence in small groups. Psychological Bulletin, 63, 384-399.
Wilson, V. and Pirrie, A. (2000) Multi-Disciplinary Team Working: Beyond the Barriers. Edinburgh: The Scottish Council for Research in Education.
Weber, M. D. and Karman, T. A. (1991) Student group approach to teaching using Tuckman Model of Group Development. American Journal of Physiology, 261, 12-16.
Weinstein, J. et al. (2003) Collaboration in Social Work Practice. Jessica Kingsley Publishers.

Tuesday, August 20, 2019

Programming Languages for Data Analysis

Programming Languages for Data Analysis: R and Python for Data Analysis

Abstract
This paper discusses the comparison between popular programming languages for data analysis. Although there are plenty of choices of programming language for data science, such as Java, R and Python, and a great deal of research has been carried out on the strengths of these languages, we are going to discuss two of them. Data analytics has become the most important and trusted tool for business and markets, and nowadays it often makes use of SaaS (Software as a Service). For this literature review, two popular languages (R and Python) have been studied and their characteristics evaluated to decide which one is the right language for data analysis. Both languages show their own strengths and weaknesses and, based on these, we aim to understand data-based processing environments in distributed file systems.

Keywords: programming language; data analytics; R; Python; big data

For an industry to grow in a market is not an easy task. With the help of data analytics, it can grow bigger and better. It can help to deliver quick corporate results and value to the business. The major challenge with data is to process it and then make decisions worth their value. Data crunching requires proper tools and powerful analysis. Out of all the languages, we chose two popular languages, R and Python, for data analysis. We are going to discuss the need for using a programming language in data analysis and list some of the characteristics of these two languages. In the end, we will conclude which language performs and delivers best in the field of data analysis.

While carrying out research in data analytics, we came across multiple programming languages apart from R and Python, which are described below.
Julia: Not a well-recognized language, but hackers surely talk of Julia. It is said to be faster than R and more scalable than Python. [5]
Java: In comparison to R and Python, Java seems less capable in terms of data visualization but can be the first choice for a prototype of a statistical system. [6]
MATLAB: Became popular and was widely used before the release of Python and R.

To be a good fit as a programming language, we should consider different aspects of data analysis. For this review we will broadly classify them as follows:
Collection of raw data: Data is available in a variety of formats. Programming languages were evaluated in terms of support for various data formats and efficiency in handling them.
Data processing: Once imported into a program, datasets might require cleansing in terms of missing values, unrelated or redundant data values, etc. Capabilities to deal with such data were evaluated for each programming language.
Data exploration: The simplicity of applying commonly used statistical methods like grouping, pattern recognition, switching and sorting is evaluated for each programming language.
Data analysis: The availability of special-purpose built-in functions and various methods of machine learning and deep analysis are used as evaluation measures.
Data visualization: Visualization is an important aspect of data analytics. The visualization capabilities of programming languages were evaluated on the basis of ease of creation, simplicity and sharing in various formats.

In addition to these capabilities, we will discuss a bit about the history and accolades of each programming language. We will also discuss popular choices of IDE (Integrated Development Environment) for these languages.
Introduced in 1995 by Ross Ihaka and Robert Gentleman, R is an implementation of the S programming language (Bell Labs). The latest version is 3.1.3, which was released in March 2015. R's architectural design and evolution are maintained by the R Foundation and the R Core Group. [1] R's software environment is written primarily in C, Fortran, and R. RStudio is a very popular IDE used to perform data analysis with R. Primarily used for academic research, R is rapidly expanding into the enterprise market. [1]

A. Collection of Raw Data
You can import data from a variety of formats like Excel, CSV, and text files. Data frames, the primary data structure in R, can import files from SPSS or Minitab. Basically, R can handle data from most common sources without a glitch. Where R is not so great is data collection from the web. A lot of work is being carried out to address this limitation. To name a few, the rvest package will perform basic web scraping, while magrittr will parse the information on web pages. [1][3]

B. Data Processing
It is very easy to reshape a data frame in R. Tasks like adding new columns, populating missing values, etc. can be done with just one line of code. Many newer packages like reshape2 allow users to manipulate data frames to fit the criteria set per requirement. [3]

C. Data Exploration
R is built by statisticians. For exploratory work it is easy for beginners; many models can be written with very few lines of code. With R, users will be able to build probability distributions and apply statistical methods for machine learning. For advanced work in analytics, optimization and analysis, users may have to rely on third-party packages. [3] Many popular packages like zoo (to work with time series) and caret (machine learning) represent the strength of R.

D. Data Visualization
Visualization is a strong forte of R. R was built to perform statistical analysis and demonstrate the results. By default, R allows you to make basic charts and plot graphs, which can be saved in a variety of formats like JPEG or PDF. With advanced packages like ggvis, lattice and ggplot2, users can extend the data visualization capabilities of an R program. [1][3]

Created by Guido van Rossum in 1991, Python is inspired by C, Modula-3 and, in particular, ABC. Python is a loosely typed programming language with a very wide user base. The Python Software Foundation (PSF) is the curator of the Python language. The current version is 3.4.3/2.7.9, released in February 2015/December 2014. Python has been a popular choice for programmers building web and multi-tier applications. In the context of data analytics, Python is mainly used by programmers to apply statistical techniques. Coding in Python is easy because of its clean syntax. [4] IPython Notebook and Anaconda are popular IDEs used for data analysis with Python.

A. Collection of Raw Data
In addition to Excel, CSV and text data, Python also supports JSON and semi-structured data formats like XML and YAML. Using certain libraries, users can import SQL tables into a Python program. [4] The Python Requests library facilitates web scraping, where users can get data from websites to analyse in depth. [2]

B. Data Processing
To uncover underlying information, the pandas library of Python comes in handy. Like R, data is held in DataFrames, which can be used and reused throughout a program without hampering performance. [2] Users can apply standard methods for cleaning data or processing data to fill out incomplete information, just like in R.

C. Data Exploration
pandas is a very powerful library. Users will be able to group data values and sort them according to time series.
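As an illustration of the pandas operations just described, here is a minimal sketch; the file name sales.csv, its columns, and the grouping keys are hypothetical examples, not taken from the paper:

    import pandas as pd

    # Collection: read a CSV file into a DataFrame (file name and columns are assumed).
    df = pd.read_csv("sales.csv", parse_dates=["date"])

    # Processing: drop a redundant column and fill missing numeric values.
    df = df.drop(columns=["internal_id"]).fillna({"amount": 0.0})

    # Exploration: group by a categorical column and summarise.
    by_region = df.groupby("region")["amount"].agg(["count", "mean", "sum"])

    # Time-series exploration: resample daily records to monthly totals.
    monthly = df.set_index("date")["amount"].resample("M").sum()

    print(by_region.head())
    print(monthly.head())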
Complex grouping clauses, such as time-series analysis down to seconds, can be performed on DataFrames in a Python program.

D. Data Visualization
Using the Matplotlib [2] library, users can plot basic graphs and charts from the available data points. For advanced visualization, Plot.ly, another Python library, can be used. Users can use powerful IDEs like Anaconda or IPython Notebook to create powerful visualizations and convert them into various formats like HTML. (A short plotting sketch illustrating this appears after the references below.)

In addition to their differences, there are a few common positives about both Python and R which make them so popular among data analysts and statisticians. R and Python are distributed under open licences, which makes them free to download and modify per users' needs, in contrast to other programming tools, like SAS and SPSS, which come with a hefty price tag. Being open source, many advancements in statistics come to Python and R first. [6] Both of them are widely loved and supported by a big community of statisticians and developers. [6] An IDE like IPython Notebook will consolidate your datasets in one file, thereby simplifying your workflow. [2] R has a rich ecosystem of cutting-edge packages to string your work together, which proves particularly useful for data analysis. [3] Python is more of a general-purpose language; it is easy and intuitive and therefore has a simpler learning curve. Python's testing framework guarantees the reusability and reliability of code. R is a language developed by statisticians for statisticians, while Python is an easier-to-learn, general-purpose programming language. [3]

Working through research in programming languages for data analytics, there are many other options, which are listed below.
Julia: Though not yet widely recognized, data hackers talk fondly of Julia. It is regarded as faster than R and more scalable than Python. [5]
Java: Although Java is not as capable as Python and R in terms of visualization, it can be the primary choice for building a prototype of a statistical system. [6]
Kafka: Developed by LinkedIn, Kafka is highly regarded for its real-time analytics capabilities. [6]
Storm: Storm is a framework written in Scala which has seen a recent tide of popularity in Silicon Valley.
MATLAB / Excel: Used by many statisticians before the rise of Python and R.

Special thanks to Prof. Oisin Creaner for presenting this opportunity to dig into the various options available for programming in data analytics.

References
[1] Ihaka, R. and Gentleman, R., 1996. R: a language for data analysis and graphics. Journal of Computational and Graphical Statistics, 5(3), pp.299-314.
[2] Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V. and Vanderplas, J., 2011. Scikit-learn: Machine learning in Python. The Journal of Machine Learning Research, 12, pp.2825-2830.
[3] Nasridinov, A. and Park, Y.H., 2013, September. Visual Analytics for Big Data Using R. In Cloud and Green Computing (CGC), 2013 Third International Conference on (pp. 564-565). IEEE.
[4] Sanner, M.F., 1999. Python: a programming language for software integration and development. J Mol Graph Model, 17(1), pp.57-61.
[5] Bezanson, J., Karpinski, S., Shah, V.B. and Edelman, A., 2012. Julia: A fast dynamic language for technical computing. arXiv preprint arXiv:1209.5145.
[6] Fan, W. and Bifet, A., 2013. Mining big data: current status, and forecast to the future. ACM SIGKDD Explorations Newsletter, 14(2), pp.1-5.
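As referenced in the Data Visualization section above, here is a minimal, hypothetical plotting sketch; the data values are made up for illustration, and the Plot.ly alternative is not shown:

    import matplotlib
    matplotlib.use("Agg")  # render without a display so the script can run headless
    import matplotlib.pyplot as plt

    # Hypothetical monthly totals; in practice these would come from a DataFrame.
    months = ["Jan", "Feb", "Mar", "Apr"]
    totals = [120.0, 135.5, 98.2, 150.7]

    fig, ax = plt.subplots()
    ax.bar(months, totals)
    ax.set_xlabel("Month")
    ax.set_ylabel("Total amount")
    ax.set_title("Monthly totals (illustrative data)")

    # Save the same chart in different formats, as discussed for both R and Python.
    fig.savefig("monthly_totals.png")
    fig.savefig("monthly_totals.pdf")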

Monday, August 19, 2019

The Perils of Obedience by Stanley Milgram

"The Perils of Obedience" was written by Stanley Milgram in 1974. In the essay he describes his experiments on obedience to authority. I feel as though this is a great psychology essay and will be used in Psychology 101 classes for generations to come. The essay describes how people are willing to do almost anything that they are told, no matter how immoral the action is or how much pain it may cause. This essay, even though it was written in 1974, is still used today because of its historical importance. The experiment attempts to figure out why the Nazis followed Hitler even though what he told them to do was morally wrong, and they did it anyway. If this essay can help figure out why Hitler was able to do what he did, then maybe psychologists can figure out how to prevent something like that from happening again. "The Perils of Obedience" is about an experiment that was made to test the obedience of ordinary people. There are two people who come and perform in the lab: one is the subject, or the teacher, and the other is an actor, or the learner. The teacher doesn't know that the learner is an actor. They are there to see how far someone would go in causing another person pain just because they were told to do so by the authority figure. The learner is given a list of word pairs and has to memorize them. Then he has to remember the second word of the pair when he hears the first word. If he is incorrect, the "teacher" will shock him until he gets it rig...

Sunday, August 18, 2019

The Vietnam War: The Tet Offensive

In the mid to late 1960s the Vietnam conflict was greatly controversial. This is mainly due to the fact that it was an undeclared war and was being fought with unclear objectives. It was fought mainly by Viet Cong guerrillas and the NVA from the North, and by the USA and ARVN from the South. Throughout the conflict it appeared as if the South was prevailing, up until one climactic battle that turned out to be a failure militarily; it is known as the Tet Offensive. The Tet Offensive started with diversionary attacks on Khe Sanh on January 21. It began with a concentrated artillery barrage and the entrenching of troops around the perimeter so that further assaults on Khe Sanh's defenses could be prepared. This caused the US to move their troops up in order to defend against the enemy intrusion. By causing the US to move their troops from their positions in the other major cities, the North had created an opportunity for an attack on all of those cities. The next step of the plan was to infiltrate the major cities of the South, like Saigon and Hue, with VC and NVA soldiers. It is amazing how effectively the VC and NVA snuck their soldiers into the cities, because only a small number of them actually got caught. They pulled this off by sending their men in slowly, mostly by twos or threes, disguised as refugees, peasants, workers, and ARVN soldiers on holiday. Their weapons were smuggled in separately in flower carts, coffins, and trucks that looked as if they were filled with food for the civilians. All in all, the number of troops in these cities equaled about 5 battalions. Once the North had accomplished its goals of distracting the United States soldiers and infiltrating its guarded cities, they decided to attack; they chose a day designated for truce, the Vietnamese New Year of Tet. On January 31st, in the early hours of the morning, the NVA and VC troops and commandos began the Tet Offensive by attacking virtually every major city and town, including most of the major bases and airfields. Most of the attacks came as a total surprise and caused a maelstrom of chaos among the US soldiers who attempted to defend their posts. An example of one of these attacks is the one launched against the US embassy in Saigon. In Saigon, nineteen VC commandos attempted to blast their way through the main doors of the US embassy and killed two of the 5 MPs on duty.

Baron Von Steuben

The Prussian Baron von Steuben, being a newcomer to the Revolutionary cause in America, was in a position to see many of the deficiencies in military discipline and their causes. The reasons for his unique insight may have been that he was distanced from the revolutionary ideals in America and, as a result, was able to better observe and understand them, and ultimately use them to shape his new and successful form of discipline in the Continental Army. Most of the commanders of the Continental Army, from the commander in chief to the lower officers, had subscribed to the traditional European method that relied on fear to achieve discipline. This method of fear was probably not essential, and had little if any effect in the early days of the war, because the soldiers were mostly fighting for their own ideologies. To the soldiers, the commanders were of little importance. The soldiers were going to fight their own fight and leave the battle when they felt it necessary. The soldier saw himself as a volunteer, a citizen fighting in a group of citizens, and as a result did not respond well to the traditional forms of discipline. The soldier knew it wasn't necessary for him to serve, and he knew that he would not be looked down upon by his fellow revolutionaries for not serving or for leaving the army. He had the freedom to choose how he wished to serve the revolution, and military service was not an obligation. One aspect of the traditional European system that Baron von Steuben felt needed change was the relationship between the officers and the soldiers. Officers in the Continental Army felt it was necessary to distance themselves from the common soldiers, as an officer had an obligation as a gentleman as well. This division was along social lines, and through separation the officers felt the common soldiers would show even greater respect. Royster describes this accurately by saying that the officers tried "to make themselves haughty objects of the soldiers' awe" (215). Steuben did several things to put the officers and the soldiers on common ground. First, sergeants were no longer to do the training and drilling of soldiers. Officers were encouraged to train, drill, and march with their soldiers. They were also encouraged to eat with the common soldiers whenever possible. The officers needed to show love of the soldiers to earn their respect, and in doing this the officers needed to set themselves as an example to the soldiers by overachieving, rather than distancing themselves and underachieving in the eyes of the soldier. Before Steuben arrived, the forms of drills, training, and discipline in the

Saturday, August 17, 2019

Achieving Fault-Tolerance in Operating Systems

Introduction
Fault-tolerant computing is the art and science of building computing systems that continue to operate satisfactorily in the presence of faults. A fault-tolerant system may be able to tolerate one or more fault types, including: i) transient, intermittent or permanent hardware faults; ii) software and hardware design errors; iii) operator errors; or iv) externally induced upsets or physical damage. An extensive methodology has been developed in this field over the past thirty years, and a number of fault-tolerant machines have been developed – most dealing with random hardware faults, while a smaller number deal with software, design and operator faults to varying degrees. A large amount of supporting research has been reported. Fault tolerance and dependable systems research covers a wide spectrum of applications ranging across embedded real-time systems, commercial transaction systems, transportation systems, and military/space systems – to name a few. The supporting research includes system architecture, design techniques, coding theory, testing, validation, proof of correctness, modelling, software reliability, operating systems, parallel processing, and real-time processing. These areas often involve widely diverse core expertise ranging from formal logic and the mathematics of stochastic modelling to graph theory, hardware design and software engineering. Recent developments include the adaptation of existing fault-tolerance techniques to RAID disks, where information is striped across several disks to improve bandwidth and a redundant disk is used to hold encoded information so that data can be reconstructed if a disk fails. Another area is the use of application-based fault-tolerance techniques to detect errors in high-performance parallel processors. Fault-tolerance techniques are expected to become increasingly important in deep sub-micron VLSI devices to combat increasing noise problems and improve yield by tolerating defects that are likely to occur on very large, complex chips. Fault-tolerant computing already plays a major role in process control, transportation, electronic commerce, space, communications and many other areas that impact our lives. Many of its next advances will occur when applied to new state-of-the-art systems such as massively parallel scalable computing, promising new unconventional architectures such as processor-in-memory or reconfigurable computing, mobile computing, and the other exciting new things that lie around the corner.

Basic Concepts
Hardware Fault-Tolerance – The majority of fault-tolerant designs have been directed toward building computers that automatically recover from random faults occurring in hardware components. The techniques employed to do this generally involve partitioning a computing system into modules that act as fault-containment regions. Each module is backed up with protective redundancy so that, if the module fails, others can assume its function. Special mechanisms are added to detect errors and implement recovery. Two general approaches to hardware fault recovery have been used: 1) fault masking, and 2) dynamic recovery. Fault masking is a structural redundancy technique that completely masks faults within a set of redundant modules. A number of identical modules execute the same functions, and their outputs are voted to remove errors created by a faulty module. Triple modular redundancy (TMR) is a commonly used form of fault masking in which the circuitry is triplicated and voted.
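A minimal sketch of the majority-voting idea behind TMR, with the three redundant modules represented as hypothetical functions (the specific computation and the injected fault are made up for illustration):

    from collections import Counter

    def majority_vote(outputs):
        """Return the value produced by at least two of the three redundant modules."""
        value, count = Counter(outputs).most_common(1)[0]
        if count < 2:
            raise RuntimeError("TMR vote failed: no two modules agree")
        return value

    # Three redundant "modules" computing the same function; module_b simulates a fault.
    def module_a(x): return x * x
    def module_b(x): return x * x + 1   # faulty result, will be out-voted
    def module_c(x): return x * x

    result = majority_vote([module_a(7), module_b(7), module_c(7)])
    print(result)  # 49: the single faulty module is masked by the vote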
The voting circuitry can also be triplicated so that individual voter failures can also be corrected by the voting process. A TMR system fails whenever two modules in a redundant triplet create errors, so that the vote is no longer valid. Hybrid redundancy is an extension of TMR in which the triplicated modules are backed up with additional spares, which are used to replace faulty modules – allowing more faults to be tolerated. Voted systems require more than three times as much hardware as non-redundant systems, but they have the advantage that computations can continue without interruption when a fault occurs, allowing existing operating systems to be used. Dynamic recovery is required when only one copy of a computation is running at a time (or in some cases two unchecked copies), and it involves automated self-repair. As in fault masking, the computing system is partitioned into modules backed up by spares as protective redundancy. In the case of dynamic recovery, however, special mechanisms are required to detect faults in the modules, switch out a faulty module, switch in a spare, and instigate those software actions (rollback, initialization, retry, and restart) necessary to restore and continue the computation. In single computers, special hardware is required along with software to do this, while in multicomputers the function is often managed by the other processors. Dynamic recovery is generally more hardware-efficient than voted systems, and it is therefore the approach of choice in resource-constrained (e.g., low-power) systems, and especially in high-performance scalable systems in which the amount of hardware resources devoted to active computing must be maximized. Its disadvantage is that computational delays occur during fault recovery, fault coverage is often lower, and specialized operating systems may be required.

Software Fault-Tolerance – Efforts to attain software that can tolerate software design faults (programming errors) have made use of static and dynamic redundancy approaches similar to those used for hardware faults. One such approach, N-version programming, uses static redundancy in the form of independently written programs (versions) that perform the same functions, and their outputs are voted at special checkpoints. Here, of course, the data being voted may not be exactly the same, and a criterion must be used to identify and reject faulty versions and to determine a consistent value (through inexact voting) that all good versions can use. An alternative dynamic approach is based on the concept of recovery blocks. Programs are partitioned into blocks and acceptance tests are executed after each block. If an acceptance test fails, a redundant code block is executed. An approach called design diversity combines hardware and software fault tolerance by implementing a fault-tolerant computer system using different hardware and software in redundant channels. Each channel is designed to provide the same function, and a method is provided to identify if one channel deviates unacceptably from the others. The goal is to tolerate both hardware and software design faults. This is a very expensive technique, but it is used in very critical aircraft control applications.

The key technologies that make software fault-tolerant
Software involves a system's conceptual model, which is easier than a physical model to engineer to test for things that violate basic concepts.
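A minimal sketch of the recovery-block pattern described above: a primary routine, an acceptance test, and an alternate routine that runs only if the test rejects the primary's result. The routines and the test are hypothetical stand-ins, not drawn from any particular system:

    def acceptance_test(x, result):
        # Hypothetical acceptance test: a square-root result, squared,
        # should reproduce the input within a small tolerance.
        return result >= 0 and abs(result * result - x) < 1e-6

    def primary_sqrt(x):
        # Primary block: fast but (for illustration) faulty implementation.
        return x ** 0.5 + 0.01

    def alternate_sqrt(x):
        # Alternate block: independently written fallback (Newton's method).
        guess = x if x > 1 else 1.0
        for _ in range(50):
            guess = 0.5 * (guess + x / guess)
        return guess

    def recovery_block(x):
        for block in (primary_sqrt, alternate_sqrt):
            result = block(x)
            if acceptance_test(x, result):
                return result
        raise RuntimeError("all blocks failed the acceptance test")

    print(recovery_block(2.0))  # primary fails the test; the alternate's result is accepted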
To the extent that a software system can evaluate its own performance and correctness, it can be made fault-tolerant – or at least error-aware; to the extent that a software system can check its responses before activating any physical components, a mechanism for improving error detection, fault tolerance, and safety exists. We can use three key technologies – design diversity, checkpointing, and exception handling – for software fault tolerance, depending on whether the current task should be continued or can be lost while avoiding error propagation (ensuring error containment and thus avoiding total system failure). Tolerating solid software faults for task continuity requires diversity, while checkpointing tolerates soft software faults for task continuity. Exception handling avoids system failure at the expense of current task loss. Runtime failure detection is often accomplished through an acceptance test or a comparison of results from a combination of "different" but functionally equivalent system alternates, components, versions, or variants. However, other techniques – ranging from mathematical consistency checking to error coding to data diversity – are also useful. There are many options for effective system recovery after a problem has been detected. They range from complete rejuvenation (for example, stopping with a full data and software reload and then restarting) to dynamic forward error correction to partial state rollback and restart.

The relationship between software fault tolerance and software safety
Both require good error detection, but the response to errors is what differentiates the two approaches. Fault tolerance implies that the software system can recover from – or in some way tolerate – the error and continue correct operation. Safety implies that the system either continues correct operation or fails in a safe manner. A safe failure is an inability to tolerate the fault. So, we can have low fault tolerance and high safety by safely shutting down a system in response to every detected error. It is certainly not a simple relationship. Software fault tolerance is related to reliability, and a system can certainly be reliable and unsafe, or unreliable and safe, as well as the more usual combinations. Safety is intimately associated with the system's capacity to do harm. Fault tolerance is a very different property. Fault tolerance is – together with fault prevention, fault removal, and fault forecasting – a means for ensuring that the system function is implemented so that the dependability attributes, which include safety and availability, satisfy the users' expectations and requirements. Safety involves the notion of controlled failures: if the system fails, the failure should have no catastrophic consequence – that is, the system should be fail-safe. Controlling failures always includes some form of fault tolerance – from error detection and halting to complete system recovery after component failure. The system function and environment dictate, through the requirements in terms of service continuity, the extent of fault tolerance required. You can have a safe system that has little fault tolerance in it. When the system specifications properly and adequately define safety, then a well-designed fault-tolerant system will also be safe. However, you can also have a system that is highly fault tolerant but that can fail in an unsafe way. Hence, fault tolerance and safety are not synonymous.
Safety is concerned with failures (of any nature) that can harm the user; fault tolerance is primarily concerned with runtime prevention of failures in any shape or form (including prevention of safety-critical failures). A fault-tolerant and safe system will minimize overall failures and ensure that when a failure occurs, it is a safe failure. Several standards for safety-critical applications recommend fault tolerance – for hardware as well as for software. For example, the IEC 61508 standard (which is generic and application-sector independent) recommends, among other techniques, "failure assertion programming, safety bag technique, diverse programming, backward and forward recovery." Also, the Defence Standard (MOD 00-55), the avionics standard (DO-178B), and the standard for space projects (ECSS-Q-40-A) list design diversity as a possible means of improving safety. Usually, the requirement is not so much for fault tolerance (by itself) as it is for high availability, reliability, and safety. Hence, IEEE, FAA, FCC, DOE, and other standards and regulations appropriate for reliable computer-based systems apply. We can achieve high availability, reliability, and safety in different ways. They involve a proper reliable and safe design, proper safeguards, and proper implementation. Fault tolerance is just one of the techniques that assure that a system's quality of service (in a broader sense) meets user needs (such as high safety).

History
The SAPO computer built in Prague, Czechoslovakia was probably the first fault-tolerant computer. It was built in 1950–1954 under the supervision of A. Svoboda, using relays and a magnetic drum memory. The processor used triplication and voting (TMR), and the memory implemented error detection with automatic retries when an error was detected. A second machine developed by the same group (EPOS) also contained comprehensive fault-tolerance features. The fault-tolerant features of these machines were motivated by the local unavailability of reliable components and a high probability of reprisals by the ruling authorities should the machine fail. Over the past 30 years, a number of fault-tolerant computers have been developed that fall into three general types: (1) long-life, unmaintainable computers, (2) ultra-dependable, real-time computers, and (3) high-availability computers.

Long-Life, Unmaintained Computers
Applications such as spacecraft require computers to operate for long periods of time without external repair. Typical requirements are a probability of 95% that the computer will operate correctly for 5–10 years. Machines of this type must use hardware in a very efficient fashion, and they are typically constrained to low power, weight, and volume. Therefore, it is not surprising that NASA was an early sponsor of fault-tolerant computing. In the 1960s, the first fault-tolerant machine to be developed and flown was the on-board computer for the Orbiting Astronomical Observatory (OAO), which used fault masking at the component (transistor) level. The JPL Self-Testing-and-Repairing (STAR) computer was the next fault-tolerant computer, developed by NASA in the late 1960s for a 10-year mission to the outer planets. The STAR computer, designed under the leadership of A. Avizienis, was the first computer to employ dynamic recovery throughout its design. Various modules of the computer were instrumented to detect internal faults and signal fault conditions to a special test and repair processor that effected reconfiguration and recovery.
An experimental version of the STAR was implemented in the laboratory and its fault-tolerance properties were verified by experimental testing. Perhaps the most successful long-life space application has been the JPL Voyager computers, which have now operated in space for 20 years. This system used dynamic redundancy in which pairs of redundant computers checked each other by exchanging messages, and if a computer failed, its partner could take over the computations. This type of design has been used on several subsequent spacecraft.

Ultra-dependable Real-Time Computers
These are computers for which an error or delay can prove to be catastrophic. They are designed for applications such as control of aircraft, mass transportation systems, and nuclear power plants. The applications justify massive investments in redundant hardware, software, and testing. One of the first operational machines of this type was the Saturn V guidance computer, developed in the 1960s. It contained a TMR processor and duplicated memories (each using internal error detection). Processor errors were masked by voting, and a memory error was circumvented by reading from the other memory. The next machine of this type was the Space Shuttle computer. It was a rather ad hoc design that used four computers that executed the same programs and were voted. A fifth, non-redundant computer was included with different programs in case a software error was encountered. During the 1970s, two influential fault-tolerant machines were developed by NASA for fuel-efficient aircraft that require continuous computer control in flight. They were designed to meet the most stringent reliability requirements of any computer to that time. Both machines employed hybrid redundancy. The first, designated Software Implemented Fault Tolerance (SIFT), was developed by SRI International. It used off-the-shelf computers and achieved voting and reconfiguration primarily through software. The second machine, the Fault-Tolerant Multiprocessor (FTMP), developed by the C. S. Draper Laboratory, used specialized hardware to effect error and fault recovery. A commercial company, August Systems, was a spin-off from the SIFT program. It has developed a TMR system intended for process control applications. The FTMP has evolved into the Fault-Tolerant Processor (FTP), used by Draper in several applications, and the Fault-Tolerant Parallel Processor (FTPP) – a parallel processor that allows processes to run in a single machine or in duplex, tripled or quadrupled groups of processors. This highly innovative design is fully Byzantine resilient and allows multiple groups of redundant processors to be interconnected to form scalable systems. The new generation of fly-by-wire aircraft exhibits a very high degree of fault tolerance in their real-time flight control computers. For example, the Airbus airliners use redundant channels with different processors and diverse software to protect against design errors as well as hardware faults. Other areas where fault tolerance is being used include control of public transportation systems and the distributed computer systems now being incorporated in automobiles.

High-Availability Computers
Many applications require very high availability but can tolerate an occasional error or very short delays (on the order of a few seconds) while error recovery is taking place. Hardware designs for these systems are often considerably less expensive than those used for ultra-dependable real-time computers.
Computers of this type often use duplex designs. Example applications are telephone switching and transaction processing. The most widely used fault-tolerant computer systems developed during the 1960s were in electronic switching systems (ESS), which are used in telephone switching offices throughout the country. The first of these AT&T machines, No. 1 ESS, had a goal of no more than two hours' downtime in 40 years. The computers are duplicated to detect errors, with some dedicated hardware and extensive software used to identify faults and effect replacement. These machines have since evolved over several generations to No. 5 ESS, which uses a distributed system controlled by the 3B20D fault-tolerant computer. The largest commercial success in fault-tolerant computing has been in the area of transaction processing for banks, airline reservations, etc. Tandem Computers, Inc. was the first major producer and is the current leader in this market. The design approach is a distributed system using a sophisticated form of duplication. For each running process, there is a backup process running on a different computer. The primary process is responsible for checkpointing its state to duplex disks. If it should fail, the backup process can restart from the last checkpoint. Stratus Computer has become another major producer of fault-tolerant machines for high-availability applications. Their approach uses duplex self-checking computers, where each computer of a duplex pair is itself internally duplicated and compared to provide high-coverage concurrent error detection. The duplex pair of self-checking computers is run synchronously so that if one fails, the other can continue the computations without delay. Finally, the venerable IBM mainframe series, which evolved from the S/360, has always used extensive fault-tolerance techniques of internal checking, instruction retries and automatic switching of redundant units to provide very high availability. The newest CMOS-VLSI version, G4, uses coding on registers and on-chip duplication for error detection, and it contains redundant processors, memories, I/O modules and power supplies to recover from hardware faults – providing very high levels of dependability. The server market represents a new and rapidly growing market for fault-tolerant machines, driven by the growth of the Internet and local networks and their need for uninterrupted service. Many major server manufacturers offer systems that contain redundant processors, disks and power supplies, and automatically switch to backups if a failure is detected. Examples are Sun's ft-SPARC and the HP/Stratus Continuum 400. Other vendors are working on fault-tolerant cluster technology, where other machines in a network can take over the tasks of a failed machine. An example is the Microsoft MSCS technology. Information on fault-tolerant servers can readily be found in the various manufacturers' web pages.

Conclusion
Fault tolerance is achieved by applying a set of analysis and design techniques to create systems with dramatically improved dependability. As new technologies are developed and new applications arise, new fault-tolerance approaches are also needed. In the early days of fault-tolerant computing, it was possible to craft specific hardware and software solutions from the ground up, but now chips contain complex, highly integrated functions, and hardware and software must be crafted to meet a variety of standards to be economically viable.
Thus a great deal of current research focuses on implementing fault tolerance using COTS (Commercial Off-The-Shelf) technology.

References
Avizienis, A., et al. (Ed.) (1987): Dependable Computing and Fault-Tolerant Systems Vol. 1: The Evolution of Fault-Tolerant Computing. Vienna: Springer-Verlag. (Though somewhat dated, the best historical reference available.)
Harper, R., Lala, J. and Deyst, J. (1988): "Fault-Tolerant Parallel Processor Architectural Overview," Proc. of the 18th International Symposium on Fault-Tolerant Computing FTCS-18, Tokyo, June 1988. (FTPP)
Computer (Special Issue on Fault-Tolerant Computing), 23, 7 (July 1990).
Lala, J., et al. (1991): The Draper Approach to Ultra Reliable Real-Time Systems. Computer, May 1991.
Jewett, D. (1991): A Fault-Tolerant Unix Platform. Proc. of the 21st International Symposium on Fault-Tolerant Computing FTCS-21, Montreal, June 1991. (Tandem Computers)
Webber, S. and Jeirne, J. (1991): The Stratus Architecture. Proc. of the 21st International Symposium on Fault-Tolerant Computing FTCS-21, Montreal, June 1991.
Briere, D. and Traverse, P. (1993): AIRBUS A320/A330/A340 Electrical Flight Controls: A Family of Fault-Tolerant Systems. Proc. of the 23rd International Symposium on Fault-Tolerant Computing FTCS-23, Toulouse, France, IEEE Press, June 1993.
Sanders, W. and Obal, W. D. II (1993): Dependability Evaluation using UltraSAN. Software demonstration in Proc. of the 23rd International Symposium on Fault-Tolerant Computing FTCS-23, Toulouse, France, IEEE Press, June 1993.
Beounes, C., et al. (1993): SURF-2: A Program for Dependability Evaluation of Complex Hardware and Software Systems. Proc. of the 23rd International Symposium on Fault-Tolerant Computing FTCS-23, Toulouse, France, IEEE Press, June 1993.
Blum, A., et al. (1994): Modeling and Analysis of System Dependability Using the System Availability Estimator. Proc. of the 24th International Symposium on Fault-Tolerant Computing FTCS-24, Austin, TX, June 1994. (SAVE)
Lala, J. H. and Harper, R. E. (1994): Architectural Principles for Safety-Critical Real-Time Applications. Proc. IEEE, 82(1), Jan 1994, pp. 25-40.
Jenn, E., Arlat, J., Rimen, M., Ohlsson, J. and Karlsson, J. (1994): Fault injection into VHDL models: the MEFISTO tool. Proc. of the 24th Annual International Symposium on Fault-Tolerant Computing (FTCS-24), Austin, Texas, June 1994.
Siewiorek, D. (ed.) (1995): Fault-Tolerant Computing Highlights from 25 Years. Special volume of the 25th International Symposium on Fault-Tolerant Computing FTCS-25, Pasadena, CA, June 1995. (Papers selected as especially significant in the first 25 years of fault-tolerant computing.)
Baker, W. E., Horst, R. W., Sonnier, D. P. and Watson, W. J. (1995): A Flexible ServerNet-Based Fault-Tolerant Architecture. Proc. of the 25th International Symposium on Fault-Tolerant Computing FTCS-25, Pasadena, CA, June 1995. (Tandem)
Tsai, Timothy K. and Iyer, Ravishankar K. (1996): "An Approach Towards Benchmarking of Fault-Tolerant Commercial Systems," Proc. of the 26th Symposium on Fault-Tolerant Computing FTCS-26, Sendai, Japan, June 1996. (FTAPE)
Kropp, Nathan P., Philip J. Koopman and Daniel P. Siewiorek (1998): Automated Robustness Testing of Off-the-Shelf Software Components. Proc. of the 28th International Symposium on Fault-Tolerant Computing FTCS-28, Munich, June 1998. (Ballista)
Spainhower, L. and Gregg, T. A. (1998): G4: A Fault-Tolerant CMOS Mainframe. Proc. of the 28th International Symposium on Fault-Tolerant Computing FTCS-28, Munich, June 1998. (IBM)
Kozyrakis, Christoforos E., and David Patterson, A New Direction for Computer Architecture Research, Computer, Vol. 31, No. 11, November 1998.