
Saturday, August 22, 2020

Fortino Tafoya Essays (991 words) - Learning, Education,

Fortino Tafoya CURR 119 Prof. Won-Jones 7/8/18

5 Ways to Use Music, Movement and Manipulatives in My High School ELA Class

1.) My preferred way to shake up the typical daily routine of direct instruction, and to avoid the long lecture, is to use music in the classroom. Whether as something to calm students' minds in the background while I talk or as the focal point of a lesson, music is a passion of mine that I would like to carry over into my classroom. My first example of using music in my ELA class is a rhyming-debate lesson plan in which students are given a simple rap beat and a rhyme structure to craft lines of verse representing information about a topic they have researched. This lets groups of students with varied skill sets complete the task, with a rap battle taking place instead of an ordinary debate. Students rhyme over music and present the sources for the information that built their arguments.

2.) In my experience, one of the most awkward challenges for many English teachers is finding ways to inject movement into their classroom. Simply taking a moment to stand up and stretch is fine, but I want to fold movement into games that help students review material and get to know one another better. This is a two-birds-with-one-stone approach to using games and movement in your classroom. My first example is called Name 6: we pass an object around the room, and a person in the middle claps to stop the passing. The person holding the object is given the topic before the round begins and must name 6 (or 4 or 5) things or examples of that topic. That person passes the object again and has until the object comes back around to them to give the examples or answers. Afterward they go into the center for the next round and topic. I love how this game uses teamwork, review, speaking, and energy to get students up and learning.

3.) As students get older, the use of manipulatives seems to drop off sharply; however, their effectiveness does not seem to fade, even as a student grows into high school. Obviously, manipulatives can be too simple for older students, but if the teacher is resourceful or inventive, they can make new ones that reflect students' learning interests or apply more directly to the real world. One example of a high-school-level manipulative is a game I recently found at a thrift store called Word Teasers. This game comes in several skill levels and with various topics. The idea of the game is to start a discussion around a language-arts topic. The version I bought was Clever Sayings, and it taught students common idioms in English, with a definition of each idiom as well as a suggestion or question to pose to the rest of the group. I like this game because it isn't only for review; it can actually introduce new concepts or make new connections between ideas for students. The game also doesn't keep score or have a winner; it is simply about creating a discussion among students based on a topic. I would love to find other versions of this game for more language-arts topics.

4.) Another way I plan to use music in my classrooms is to create songs that help students review complex concepts and vocabulary words. I would love to use my classroom as an outlet for my music, whether that means making ELA raps or songs. I really like the idea of language-arts-based raps, since rap is an expressive style of music and fits lots of words and meaningful phrases. I think this medium would be ideal for using music to lodge ideas in students' minds. Rap is the most popular genre of music in America, and I don't believe it would be hard to get students who are aspiring rappers to help with or contribute to the music as well. These songs will be available online for students to listen to at study time, or for other teachers to use as a resource. Just don't ask about my MC name yet.

Friday, August 21, 2020

Andersen's Case: Auditing and Assurance Services

Arthur Andersen was one of the five biggest accounting firms in the United States, and it is a specific example of a recent business collapse that is viewed as significant in American business history. The firm, practicing in more than 80 countries with tens of thousands of employees, lost its reputation as an audit and accounting firm. There are many different opinions and judgments about the situation in which Arthur Andersen failed in its task to detect fraudulent financial activities at Enron and several other companies.

The Enron Company filed for bankruptcy in December of 2001. Enron was well known as a major energy company headquartered in Houston, and its bankruptcy drew enormous attention from the business world. Questions were asked, many financial irregularities were uncovered, and Arthur Andersen, Enron's external auditing and accounting firm in Houston, was implicated. Stephens (2002) stated that Arthur Andersen, which was responsible for managing Enron's audits, had illegally destroyed paper and electronic records related to its representation of Enron (p. 4). The case soon went to the jury. Louwers et al. (2011) reported that the jury instructions were based on the wording of the statute, which makes it a crime to knowingly use intimidation or physical force, threats, or corrupt persuasion of another person. According to Oxford Dictionaries online, "corrupt" means having or showing a willingness to act dishonestly in return for money or personal gain. In this case, the word "corrupt" was not applied appropriately to the actions of Andersen. There was no evidence to prove that any individual was a corrupt persuader. Who was the one who acted dishonestly? The court also needed to identify the personal gain that a corrupt persuader could obtain from such an action. The court couldn't find answers to these questions, and the decision sealed the firm's fate.

Research by Stephens (2002) states that, to clarify Judge Harmon's instruction about "corruptly," the prosecution described the unlawful conduct of four corrupt persuaders: Duncan, Temple, Thomas Bauer, and Michael Odom. A few days later, the jury was told it was not required to agree unanimously on the identity of a single corrupt persuader (Stephens, 2002). As Louwers et al. (2011) stated, the issues that overturned the Andersen verdict rested on flawed jury instructions, not on whether Andersen was in fact guilty or innocent. It was not Andersen that violated the law; it was individuals within the organization who did. Hoxter (2005) argued that the instructions were flawed because they did not require proof that Andersen officials knew they were doing something wrong. Destroying the records was not, in itself, a crime; there must have been an individual with authority who delivered the instructions for Arthur Andersen employees to do so. As Stephens (2002) recounted from interviews with reporters after the verdict, four jurors identified Temple as that individual. It was appropriate for the Supreme Court to overturn the lower court's decision, since Judge Harmon used the word "corruptly" with a meaning different from the dictionary's, namely "improper purpose," such as an intent to impede or undermine the fact-finding ability of an official proceeding. The lower court failed to identify the specific individual who was the corrupt persuader.

According to Duska (2005), Arthur Andersen, once a venerated firm, prized its role as auditor because it fulfilled an important public function. As auditors, Andersen's employees should have had clear minds to ensure that the financial statements they audited reflected what was really happening in the company. However, the big money made in consulting meant that the responsibilities auditors had to shoulder grew heavier. The SEC and the Department of Justice should have tried to target the specific individuals who had engaged in the acts, since the outcome seems unfair to the rest of the firm. According to Cunningham (2005), "It is a shame that the actions of a few individuals at Andersen caused 28,000 employees in the U.S. to lose their jobs" (p. 6). Indeed, bankrupting the entire firm for the dishonest actions of a few people was the real sin. That Andersen's conviction was overturned does not mean its employees acted ethically.

McNamee and Palmeri (2002) provided information about Arthur Andersen's clean-up mission as the Enron case began to unravel. According to Andersen's internal documents, the Enron team was busy revising four key memos to adjust the record of its review of Enron's tangled and conflicted partnership deals (McNamee and Palmeri, 2002). Additionally, David Duncan overruled concerns on at least four occasions, siding instead with Enron on questionable accounting that helped conceal debt and inflate earnings. That is not all: there is also evidence that Duncan's team wrote memos falsely stating that PSG partners had approved Enron's creative accounting. Nancy Temple was the attorney for Arthur Andersen. She is the person who should have known the rules best, and she should not have been the one acting dishonestly. According to McNamee and Palmeri (2002), memos from Nancy Temple to David Duncan were found that can be considered the key evidence in Duncan's conviction for destroying the documents. Nancy Temple is arguably even more responsible for the Andersen saga.

The lawsuit against Andersen also named other entities, because those entities were also involved in Enron's case. These entities helped Enron conceal its problems because doing so earned them more profit. From Arthur Andersen's case, entities and accounting firms in the business world should learn lessons for their own good. If employees notice any concern about the legality of any records, they should contact the legal department right away. There is nothing wrong with discarding business papers that are not needed; it is unlawful only when retention is required by law or regulation, according to Stephens (2002).

References

Hoxter, C. J. (2005). Arthur Andersen conviction overturned. Caribbean Business, 33(25), 8.

Duska, R. (2005). The good auditor: Skeptic or wealth accumulator? Ethical lessons learned from the Arthur Andersen debacle. Journal of Business Ethics, 57(1), 17-29. doi:10.1007/s10551-004-3818-1

Stephens, D. O. (2002). Lies, corruption, and document destruction. Information Management Journal, 36(5), 23.

McNamee, M., Borrus, A., and Palmeri, C. (2002). Out of control at Andersen. Businessweek, (3777), 32-33.

Cunningham, C. (2005, July-August). Regretting Andersen's downfall and the loss of audit competition. Financial Executive, 21(6), 6. Retrieved from http://go.galegroup.com.glbvv001.enmu.edu/ps/I.do?id=GALE%7CA134300985=2.1=nm_a_enmu=r=EAIM=w

Corrupt. In Oxford Dictionaries online. Retrieved from http://oxforddictionaries.com/us/definition/american_english/corrupt?q=corrupt

Louwers, T. J., Ramsay, R. J., Sinason, D. H., Strawser, J. R., and Thibodeau, J. C. (2011). Auditing and Assurance Services (5th ed.). NY: McGraw-Hill Irwin.

Analysis Of The Astronomers Wife Essay Example For Students

Analysis of The Astronomer's Wife

In "The Astronomer's Wife" by Kay Boyle, something as simple as a conversation with a plumber about a stopped elbow is enough to trigger an awakening in Mrs. Katherine Ames. When Mrs. Ames realized that the plumber was talking about something she understood (the stopped elbow), she realized that her marital problems were not the result of a division between the sexes; instead, she realized that some men, like the plumber, are as practical as she is, and that other men, like her husband, scorn people like her because the men are intellectually inclined. Prior to this discovery, Katherine did not realize that there were different kinds of men, and therefore she did not realize that she and her husband were mismatched. Furthermore, in her awakening, Mrs. Ames also discovers that she, like the plumber, occupies as important a place in society as the astronomer, for she does the dirty work that frees people like her husband to have the time to think and to discover.

The scene in question occurs after Mrs. Ames has already noticed that the plumber has a few physical traits that match her own (such as blond hair), and she is talking to him as he descends into the earth. The scene begins just after the plumber says "I think something has stopped the elbow," because this phrase was one of the few things a man has ever said that Mrs. Ames has understood. Once the plumber has descended into the ground, Mrs. Ames is the only one left. She spends the entire duration of this scene sitting on the grass, quietly thinking and revealing her thoughts to the audience. During her course of thinking, Mrs. Ames makes the important discovery that there is a whole race of practical people like herself, men and women alike.

She knew that when her husband talked about height, having no sense of it, she could neither picture nor hear it; yet oddly enough, when another man who happened to be a plumber spoke of his work, madness in a daily shape, as elbow stopped, she saw clearly and well. Mrs. Ames finally recognized during these thoughts that these were two men with two different ways of life, and that perhaps her way of life matched the plumber's more than the astronomer's, in that she too could identify only with everyday concerns. The division between people in her mind was no longer just between men and women; it was now between the working and the thinking, those who had always gone up, and others who went down, like the corporeal being of the dead. She now perceived that there were both physical and spiritual people, she and the plumber being the former, and her husband being the latter.

The theme is revealed in the way these two classes of people, the toilers and the thinkers, react to the world. Those who work with their hands, when they see weeds springing up, do not move to eradicate them from life. In other words, people like Mrs. Ames, upon recognizing something that occupies a similar position in society to their own, such as the often ill-regarded weed, do not feel compelled to destroy it. Weeds, like the laborers, although considered ugly, are as necessary for nature to be in balance as the more beautiful flower is. But people like the astronomer could weigh and divide, cut away, destroy. This shows that people with lofty ambitions, like the astronomer, do not view common people as necessary for the world to run smoothly, and would rather destroy them.
The astronomer does not realize that by unclogging drains and performing other such tasks, those people have given him the freedom to think about large-scale problems. Interaction between the two kinds of people is necessary for the world to work, whether or not either of them realizes it.

"The Astronomer's Wife" is a great short story that brings out the often overlooked point that both the practical people and the ambitious dreamers are important for each other's survival. While Mrs. Ames perhaps could never have gotten along without her husband, it was no fault of her own that she didn't. She provided a comfortable existence for the astronomer so that he would be free to do his work, and the marriage would have been happier if Mr. Ames had acknowledged all that she had done and considered her way of life a legitimate one. Of course an understanding was never reached, because otherwise the author would not have been able to illustrate so well the similar conflicts that exist in today's society.

Monday, July 13, 2020

What Algorithms Can (and Cant) Tell Us About Gender and Literature

What Algorithms Can (and Can't) Tell Us About Gender and Literature

Last month, a story came out about five scholars who'd set up an algorithm to read 3.5 million books. The five co-authors were looking for adjectives and gender. They mapped those more commonly used to describe women, and those more commonly used to describe men. The resulting paper, "Unsupervised Discovery of Gendered Language through Latent-Variable Modeling," is hardly earth-shaking. In a nutshell, co-authors Alexander Hoyle, Lawrence Wolf-Sonkin, Hanna Wallach, Isabelle Augenstein, and Ryan Cotterell write: positive adjectives used to describe women are more often related to their bodies than adjectives used to describe men. Women are more likely to be called "pretty." Men are more likely to be called "interesting."

The study's authors looked for the most common adjectives across 3.5 million books. Among the words most commonly used to describe women: beautiful, lovely, chaste, fertile. Among those most commonly used to describe men: just, sound, righteous, and rational. Lest you think there is some rational basis for these choices, the authors note that even within 24 hours of birth, parents describe their daughters as beautiful, pretty, and cute far more often than their sons.

Fewer Women Characters?

Another recent paper to use large-scale data to look at gender in literature was "The Transformation of Gender in English-Language Fiction," published in the Journal of Cultural Analytics in 2018. These co-authors, Ted Underwood, Sabrina Lee, and David Bamman, set up a series of machine-learning models to look at 104,000 works of fiction written between 1700 and 2010. According to The Economist, this body of 104,000 books contains almost all classic novels, but only about half of the books that have been listed in Publishers Weekly. What did they find? For one thing, women apparently used to write more.
The percentage of books written by women fell from around 50% at the start of the 19th century to less than a quarter by the 1960s. It rebounded to around 40% in the 21st. They found that women characters also plummeted. Their model, which promises to guess the gender of characters by names and pronouns with more than 90% accuracy, saw that the share of the narrative given to fictional women declined over 150 years before recovering slightly.

The Limits of Algorithm Criticism

Of course, as an essay in Aeon notes, AI criticism is limited by its human trainers. And some number crunching seems ripe for overreach, as when Matthew Jockers declared there were only six (or sometimes seven) essential plots. In a critique in the Chronicle of Higher Education, Nan Z. Da comes down hard on these big-data studies of literature. Da accuses this branch of the digital humanities of generating bad literary criticism while also lacking rigor: its findings are either banal or, if interesting, not statistically robust. Da cites a 2019 book by Ted Underwood, Distant Horizons, that makes a range of assertions on the back of big data, including, according to Da, the claim that gender becomes more blurred after 1840. Da quite reasonably suggests that a blurring of gender descriptions might mean absolutely nothing. And indeed, it seems to contradict what the authors of "Unsupervised Discovery of Gendered Language through Latent-Variable Modeling" have found.

In general, algorithmic findings seem to work better, at least thus far, when they target clear-cut patterns (the number of women characters) rather than messier ones (essential plot types). In "Unsupervised Discovery of Gendered Language through Latent-Variable Modeling," the co-authors acknowledge that their study has several limitations. First, their search ignores the demographics of the speaker. Second, it ignores genre, so that romance and cowboy fiction are all thrown into the same basket.
And third, it ignores when a work was published: the authors looked through roughly 3.5 million books published between 1900 and 2008. Hopefully, books published in 2006 were at least edging away from chaste, barren, and vivacious as ways of describing women.

Insights with Google Ngrams

Unscientifically, it's always interesting to check the Google Ngram Viewer to see when words fall in and out of use in the Google Books corpus. Use of the word "girlish," for example, peaked around 1900. Use of the word "slut" zigzags: it peaks in 1900, goes down, heads up around 1930, takes off in 1980, and continues its rise after that. "Whore," on the other hand, was more popular in 1810, at least according to Google Ngram, than it is today. Interestingly, "rape" is something we didn't talk about much before 1970, and that goes double for "sexual assault."

Surely, there are things we can learn from algorithms about the way we talk about gender over time. But also, as with any bad use of data, much fluff and nonsense.
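The counting that underlies studies like these can be illustrated with a deliberately tiny sketch. Nothing here reflects the papers' actual code, corpus, or models: the toy sentences, the crude "she/he was" pattern, and all variable names are invented for illustration, and a real study would use millions of books and proper linguistic parsing rather than a regex.

```python
import re
from collections import Counter

# Toy corpus; the actual studies ran over ~3.5 million books.
corpus = [
    "She was beautiful and he was rational.",
    "He seemed interesting; she seemed pretty.",
    "She was pretty and lovely.",
]

# Naive pattern: the word following "she/he was|seemed".
pattern = re.compile(r"\b(she|he)\s+(?:was|seemed)\s+(\w+)", re.IGNORECASE)

# Tally adjectives separately per pronoun.
by_gender = {"she": Counter(), "he": Counter()}
for sentence in corpus:
    for pronoun, adjective in pattern.findall(sentence):
        by_gender[pronoun.lower()][adjective.lower()] += 1

print(by_gender["she"].most_common(3))  # [('pretty', 2), ('beautiful', 1)]
print(by_gender["he"].most_common(3))   # [('rational', 1), ('interesting', 1)]
```

Even this toy version shows why the published results are only as good as the extraction step: "and lovely" in the third sentence is silently missed, which is the kind of limitation the co-authors themselves flag.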

Tuesday, June 30, 2020

Transport layer security - Free Essay Example

The Transport Layer Security (TLS) protocol is used for communication between client-server applications across a network. TLS helps protect that communication from tampering, eavesdropping, and message forgery. TLS provides authentication at the endpoints and confidentiality over the network using cryptography; it also supports RSA keys of 1024- and 2048-bit strengths. In typical end-user/browser usage, TLS authentication is unilateral: only the server is authenticated (the client knows the server's identity), but not vice versa (the client remains unauthenticated or anonymous).

TLS uses a handshake protocol to set up communication over the internet. The following steps are involved in the TLS handshake protocol:

1. The client and server exchange hello messages to agree on algorithms, exchange random values, and check for session resumption.
2. The client and server exchange the necessary cryptographic parameters to agree on a premaster secret.
3. Certificates and cryptographic information are exchanged between client and server so that they can authenticate themselves.
4. A master secret is generated from the premaster secret and the exchanged random values.
5. Security parameters are provided to the record layer.
6. The client and server verify that their peer has calculated the same security parameters and that the handshake occurred without tampering by an attacker.

Note that higher layers should not be overly reliant on TLS always negotiating the strongest possible connection between two peers. There are a number of ways a man-in-the-middle attacker can attempt to make two entities drop down to the least secure method they support. The protocol has been designed to minimize this risk, but there are still attacks available: for example, an attacker could block access to the port a secure service runs on, or attempt to get the peers to negotiate an unauthenticated connection.
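The client side of this handshake can be sketched with Python's standard `ssl` module. This is a minimal configuration-only sketch: no connection is made here, and the commented `wrap_socket` call and the "example.com" hostname are placeholders for whatever server an application actually talks to.

```python
import ssl

# A default client context matches the unilateral authentication
# described above: the server's certificate is required and its
# identity is checked against the hostname; the client sends none.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname

# Refusing old protocol versions narrows the room a man-in-the-middle
# attacker has to force a downgrade to a weaker method.
context.minimum_version = ssl.TLSVersion.TLSv1_2

# context.wrap_socket(sock, server_hostname="example.com") would then
# run the full handshake over an open TCP socket: the hello messages,
# key exchange, certificate verification, and the final
# finished-message check, before any application data flows.
print(context.minimum_version)
```

The point of the sketch is that all six handshake steps happen inside the library; application code only chooses the policy (verification mode, minimum version) that the handshake must satisfy.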
The fundamental rule is that higher levels must be cognizant of what their security requirements are and never transmit information over a channel less secure than what they require. The TLS protocol is secure in that any cipher suite offers its promised level of security: if you negotiate 3DES with a 1024-bit RSA key exchange with a host whose certificate you have verified, you can expect to be that secure. The message that ends the handshake sends a hash of all the exchanged data seen by both parties. The pseudorandom function splits the input data into two halves and processes them with different hashing algorithms (MD5 and SHA), then XORs the results together. This way it protects itself in the event that one of these algorithms is found vulnerable.

The Windows Server 2003 operating system can use three related security protocols to provide authentication and secure communications over the Internet:

- Transport Layer Security version 1.0 (TLS 1.0)
- Secure Socket Layer version 3.0 (SSL 3.0)
- Secure Socket Layer version 2.0 (SSL 2.0)

IPSEC

IPsec is designed to provide interoperable, high-quality, cryptographically based security for IPv4 and IPv6. The set of security services offered includes access control, connectionless integrity, data origin authentication, protection against replays (a form of partial sequence integrity), confidentiality (encryption), and limited traffic flow confidentiality. These services are provided at the IP layer, offering protection for IP and/or upper-layer protocols. These objectives are met through the use of two traffic security protocols, the Authentication Header (AH) and the Encapsulating Security Payload (ESP), and through the use of cryptographic key management procedures and protocols. The set of IPsec protocols employed in any context, and the ways in which they are employed, will be determined by the security and system requirements of users, applications, and/or sites/organizations.
When these mechanisms are correctly implemented and deployed, they ought not to adversely affect users, hosts, and other Internet components that do not employ these security mechanisms for protection of their traffic. These mechanisms are also designed to be algorithm-independent. This modularity permits selection of different sets of algorithms without affecting the other parts of the implementation. For example, different user communities may select different sets of algorithms (creating cliques) if required. A standard set of default algorithms is specified to facilitate interoperability in the global Internet. The use of these algorithms, in conjunction with IPsec traffic protection and key management protocols, is intended to permit system and application developers to deploy high-quality, Internet-layer, cryptographic security technology.

The IPSec process

This topic provides an overview of IPSec concepts that are central to understanding the IPSec process, including IPSec policy configuration and the Internet Key Exchange (IKE) protocol. In addition, this topic describes how IPSec network traffic processing works, using two intranet computers as an example.

IPSec policy configuration

In Windows 2000, Windows XP, and the Windows Server 2003 family, IPSec is implemented primarily as an administrative tool that you can use to enforce security policies on IP network traffic. A security policy is a set of packet filters that define network traffic as it is recognized at the IP layer. A filter action defines the security requirements for the network traffic. A filter action can be configured to Permit, Block, or Negotiate security (negotiate IPSec). IPSec filters are inserted into the IP layer of the computer's TCP/IP networking protocol stack so that they can examine (filter) all inbound or outbound IP packets.
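The filter-list idea described above, an ordered set of packet filters whose matching rule yields a Permit, Block, or Negotiate action, can be modeled in a few lines. This is a hypothetical sketch, not the Windows API: the `FilterRule` type, field names, and addresses are all invented for illustration, and a real filter would match on ports, subnets, and direction as well.

```python
from dataclasses import dataclass

# The three filter actions named in the text.
PERMIT, BLOCK, NEGOTIATE = "permit", "block", "negotiate"

@dataclass
class FilterRule:
    src: str       # source address, or "*" for any
    dst: str       # destination address, or "*" for any
    protocol: str  # "tcp", "udp", or "*"
    action: str    # PERMIT, BLOCK, or NEGOTIATE

def matches(rule, src, dst, protocol):
    """A field matches if the rule says '*' or the values are equal."""
    return all(pat == "*" or pat == val
               for pat, val in ((rule.src, src),
                                (rule.dst, dst),
                                (rule.protocol, protocol)))

def filter_action(rules, src, dst, protocol, default=BLOCK):
    """First matching rule wins, mirroring an ordered filter list."""
    for rule in rules:
        if matches(rule, src, dst, protocol):
            return rule.action
    return default

# Illustrative policy: secure one intranet pair, let UDP through.
policy = [
    FilterRule("10.0.0.5", "10.0.0.9", "tcp", NEGOTIATE),
    FilterRule("*", "*", "udp", PERMIT),
]

print(filter_action(policy, "10.0.0.5", "10.0.0.9", "tcp"))  # negotiate
```

A Negotiate result is the hook into the rest of the process: it is what triggers the IKE negotiation described in the following sections, while Permit and Block are terminal decisions made entirely inside the filter.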
Except for a brief delay required to negotiate a security relationship between two computers, IPSec is transparent to end-user applications and operating system services. A collective set of IPSec security settings is known as an IPSec policy. Windows 2000, Windows XP, and the Windows Server 2003 family provide a graphical user interface and several command-line tools that you can use to configure an IPSec policy and then assign it to a computer. To ensure that IPSec communication is successful and that IPSec meets the security requirements of your organization, you must carefully design, configure, coordinate, and manage IPSec policies. In many organizations, one administrator might be responsible for configuring and managing IPSec policies for many, if not all, computers.

Internet Key Exchange (IKE) security associations

The IKE protocol is designed to securely establish a trust relationship between each computer, to negotiate security options, and to dynamically generate shared, secret cryptographic keying material. The agreement on security settings associated with keying material is called a security association, also known as an SA. These keys will provide authenticity, integrity, and optionally, encryption of IP packets that are sent using the security association. IKE negotiates two types of security associations:

- A main mode security association (the IKE security association that is used to protect the IKE negotiation itself).
- IPSec security associations (the security associations that are used to protect application traffic).

You can configure IPSec policy settings for both types of security associations. The IPSec service interprets an IPSec policy, expanding it into the components that it needs to control the IKE negotiation. The IPSec policy contains one definition of a packet filter.
The packet filter is interpreted in two ways: one uses only the address and identity information to allow IKE to establish a main mode SA (the IKE security association); the other allows IKE to establish the IPSec security associations (also known as quick mode security associations).

IPSec network traffic processing

The following illustration shows how IPSec works in terms of the IPSec components for two intranet computers. For simplicity, this example is of an intranet in which two computers have an active IPSec policy. Alice, using a data application on ComputerA, sends an application IP packet to Bob on ComputerB. The IPSec driver on ComputerA checks its outbound IP filter lists and determines that the packets should be secured. The action is to negotiate security, so the IPSec driver notifies IKE to begin negotiations. The IKE service on ComputerA completes a policy lookup, using its own IP address as the source and the IP address of ComputerB as the destination. The main mode filter match determines the main mode settings that ComputerA proposes to ComputerB. ComputerA sends the first IKE message in main mode, using UDP source port 500 and destination port 500. IKE packets receive special processing by the IPSec driver to bypass filters. ComputerB receives an IKE main mode message requesting secure negotiation. It uses the source IP address and the destination IP address of the UDP packet to perform a main mode policy lookup, to determine which security settings to agree to. ComputerB has a main mode filter that matches, and so replies to begin negotiation of the main mode SA. ComputerA and ComputerB now negotiate options, exchange identities, verify trust in those identities (authentication), and generate a shared master key. They have now established an IKE main mode SA; ComputerA and ComputerB must mutually trust each other. ComputerA then performs an IKE quick mode policy lookup, using the full filter to which the IPSec driver matched the outbound packet.
ComputerA selects the quick mode security settings and proposes them, and the quick mode filter, to ComputerB. ComputerB also performs an IKE quick mode policy lookup, using the filter description offered by ComputerA. ComputerB selects the security settings required by its policy and compares those settings to those offered by ComputerA. ComputerB accepts one set of options and completes the remainder of the IKE quick mode negotiation to create a pair of IPSec security associations: one inbound IPSec SA and one outbound IPSec SA. The IPSec SAs are identified by a Security Parameter Index (SPI), which is inserted into the IPSec header of each packet sent. The IPSec driver on ComputerA uses the outbound SA to sign and, if required, encrypt the packets. If the network adapter can perform hardware offload of IPSec cryptographic functions, the IPSec driver formats the packets but does not perform the IPSec cryptographic functions. The IPSec driver passes the packets to the network adapter driver, indicating whether the adapter must perform the IPSec cryptographic functions. The network adapter transmits the packets onto the network. The network adapter driver at ComputerB receives the encrypted packets from the network. The SPI is used by the receiver of an IPSec packet to find the corresponding IPSec security association, with the cryptographic keys required to verify and decrypt the packets. If the network adapter can decrypt the packets in hardware, it checks whether it recognizes the SPI. If it cannot decrypt the packets in hardware, or if it cannot recognize the SPI, it passes the packets up to the IPSec driver. The IPSec driver on ComputerB uses the inbound SA's SPI to retrieve the keys required to validate authentication and integrity and, if required, to decrypt the packets. The IPSec driver converts the packets from IPSec format back to standard IP packet format.
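The SPI-based lookup described above can be modeled as a simple table keyed by SPI. This is a toy sketch; the field names and key values are hypothetical, not a real SA database format.

```python
# Toy model of an inbound IPsec SA database: the receiver uses the SPI
# carried in each packet's IPsec header to find the keys and algorithms
# for that security association.
inbound_sa_db = {
    0x1A2B3C4D: {"auth_key": b"k1", "enc_key": b"k2", "cipher": "aes-cbc"},
    0x55AA55AA: {"auth_key": b"k3", "enc_key": None, "cipher": None},  # auth only
}

def lookup_sa(spi):
    """Return the SA for a given SPI; an unknown SPI means the packet is dropped."""
    sa = inbound_sa_db.get(spi)
    if sa is None:
        raise KeyError(f"no inbound SA for SPI {spi:#010x}")
    return sa

print(lookup_sa(0x1A2B3C4D)["cipher"])  # aes-cbc
```

The essential point is that the SPI in the packet header, not the packet contents, selects which keys are used for verification and decryption.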
It passes the validated and decrypted IP packets to the TCP/IP driver, which passes them to the receiving application on ComputerB. The IPSec SAs continue to provide strong, transparent protection for application data traffic. The IPSec SAs are automatically refreshed by an IKE quick mode negotiation for as long as the application sends and receives data. When the application stops sending and receiving data, the IPSec SAs become idle and are deleted. Typically, the IKE main mode SA is not deleted. By default, the main mode SA has a lifetime of 8 hours; you can configure the lifetime from as short as 5 minutes to a maximum of 48 hours. Whenever more traffic is sent, a new quick mode is negotiated automatically to create two new IPSec SAs to protect application traffic. This process is rapid because the main mode SA already exists. If a main mode SA expires, it is automatically renegotiated as needed.

Advantages of TLS

Encryption: Both request and response bodies are protected from intermediate prying eyes.

Server authentication: Clients that record the server's SSL certificate can monitor it to ensure it does not change over time (a change could indicate a man-in-the-middle attack). Using a certificate signed by a signing authority can also provide a similar level of assurance for the client application.

Easy setup: No additional coding is required; you just configure the web server.

The advantages of SSL VPN are that no client software is needed on the client computer; a web browser that supports the SSL protocol is enough. Because no client software is needed, there is no additional license cost for a client PC to connect to the host. It is also easy to use and set up, so the IT department does not need to worry about configuration for workers who want to use the VPN.
Advantages of IPsec

There are, however, advantages to doing it at the IP level instead of, or as well as, at other levels. IPsec is the most general way to provide these services for the Internet. Higher-level services protect a single protocol; for example, PGP protects mail. Lower-level services protect a single medium; for example, a pair of encryption boxes on the ends of a line makes wiretaps on that line useless unless the attacker is capable of breaking the encryption. IPsec, however, can protect any protocol running above IP and any medium which IP runs over. More to the point, it can protect a mixture of application protocols running over a complex combination of media. This is the normal situation for Internet communication; IPsec is the only general solution. IPsec can also provide some security services in the background, with no visible impact on users. To use PGP encryption and signatures on mail, for example, the user must at least remember his or her passphrase, keep it secure, and follow procedures to validate correspondents' keys. These systems can be designed so that the burden on users is not onerous, but any system will place some requirements on users. No such system can hope to be secure if users are sloppy about meeting those requirements.

TASK 2

The Internet Group Management Protocol (IGMP) is a communications protocol used to manage the membership of Internet Protocol multicast groups. IGMP is used by IP hosts and adjacent multicast routers to establish multicast group memberships. It is an integral part of the IP multicast specification, operating above the network layer, though it does not actually act as a transport protocol. It is analogous to ICMP for unicast connections. IGMP can be used for online streaming video and gaming, and allows more efficient use of resources when supporting these types of applications. IP multicast is a technique for one-to-many communication over an IP infrastructure in a network.
It scales to a larger receiver population by not requiring prior knowledge of who or how many receivers there are. Multicast uses network infrastructure efficiently by requiring the source to send a packet only once, even if it needs to be delivered to a large number of receivers. The nodes in the network take care of replicating the packet to reach multiple receivers only when necessary. The most common low-level protocol to use multicast addressing is the User Datagram Protocol (UDP). By its nature, UDP is not reliable: messages may be lost or delivered out of order. Reliable multicast protocols such as Pragmatic General Multicast (PGM) have been developed to add loss detection and retransmission on top of IP multicast. Key concepts in IP multicast include an IP multicast group address, a multicast distribution tree, and receiver-driven tree creation. An IP multicast group address is used by sources and receivers to send and receive content. Sources use the group address as the IP destination address in their data packets. Receivers use this group address to inform the network that they are interested in receiving packets sent to that group. For example, if some content is associated with group 239.1.1.1, the source will send data packets destined to 239.1.1.1. Receivers for that content will inform the network that they are interested in receiving data packets sent to the group 239.1.1.1; the receiver joins 239.1.1.1. The protocol used by receivers to join a group is the Internet Group Management Protocol (IGMP). Once the receivers join a particular IP multicast group, a multicast distribution tree is constructed for that group. The protocol most widely used for this is Protocol Independent Multicast (PIM). It sets up multicast distribution trees such that data packets from senders to a multicast group reach all receivers which have joined the group.
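The group address 239.1.1.1 used in the example falls in the IPv4 multicast (Class D) range, 224.0.0.0/4, which Python's standard ipaddress module can confirm:

```python
from ipaddress import ip_address, ip_network

group = ip_address("239.1.1.1")
print(group.is_multicast)                    # True
print(group in ip_network("224.0.0.0/4"))    # True: the Class D range
print(group in ip_network("239.0.0.0/8"))    # True: administratively scoped block
print(ip_address("192.0.2.1").is_multicast)  # False: an ordinary unicast address
```

Any address a receiver asks to join must fall in this range; addresses outside it identify individual hosts rather than groups.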
For example, all data packets sent to the group 239.1.1.1 are received by receivers who joined 239.1.1.1. There are many different variations of PIM implementations: Sparse Mode (SM), Dense Mode (DM), Source-Specific Mode (SSM), and Bidirectional Mode (Bidir, or Sparse-Dense Mode, SDM). Of these, PIM-SM is the most widely deployed as of 2006; SSM and Bidir are simpler and more scalable variations developed more recently that are gaining in popularity. IP multicast operation does not require a source sending to a given group to know about the receivers of the group. The multicast tree construction is initiated by network nodes which are close to the receivers, i.e., it is receiver-driven. This allows it to scale to a large receiver population. The IP multicast model has been described by Internet architect Dave Clark as follows: "You put packets in at one end, and the network conspires to deliver them to anyone who asks."

(Figure: multicast (top) compared with unicast (bottom); orange circles represent endpoints, and green circles represent routing points.)

IP multicast creates state information (state) per multicast distribution tree in the network, i.e., current IP multicast routing protocols do not aggregate state corresponding to multiple distribution trees. So if a router is part of 1000 multicast trees, it has 1000 multicast routing and forwarding entries. As a result, there are worries about scaling multicast to large numbers of distribution trees. However, because multicast state exists only along the distribution tree, it is unlikely that any single router in the Internet maintains state for all multicast trees. This is a common point of confusion when comparing multicast with unicast. A unicast router needs to know how to reach all other unicast addresses in the Internet, even if it does this using just a default route. For this reason, aggregation is key to scaling unicast routing.
Also, there are core routers that carry routes in the hundreds of thousands because they contain the Internet routing table. On the other hand, a multicast router does not need to know how to reach all other multicast trees in the Internet. It only needs to know about multicast trees for which it has downstream receivers. This is key to scaling multicast-addressed services. It is very unlikely that core Internet routers would need to keep state for all multicast distribution trees; they only need to keep state for trees with downstream membership. When this type of router joins a shared forwarding tree, it is referred to as a graft, and when it is removed, it is called a prune.

Multicast Process

Figure 2 illustrates the process whereby a client receives a video multicast from the server. The client sends an IGMP join message to its designated multicast router. The destination MAC address maps to the Class D address of the group being joined, rather than being the MAC address of the router. The body of the IGMP datagram also includes the Class D group address. The router logs the join message and uses PIM or another multicast routing protocol to add this segment to the multicast distribution tree. IP multicast traffic transmitted from the server is now distributed via the designated router to the client's subnet. The destination MAC address corresponds to the Class D address of the group. The switch receives the multicast packet and examines its forwarding table. If no entry exists for the MAC address, the packet will be flooded to all ports within the broadcast domain. If an entry does exist in the switch table, the packet will be forwarded only to the designated ports. With IGMP V2, the client can cease group membership by sending an IGMP leave message to the router. With IGMP V1, the client remains a member of the group until it fails to send a join message in response to a query from the router.
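The IP-to-MAC mapping mentioned above follows a fixed rule: the multicast MAC address begins with the OUI 01:00:5e and carries the low-order 23 bits of the Class D group address. A small sketch of that standard mapping:

```python
from ipaddress import IPv4Address

def multicast_mac(group: str) -> str:
    """Map an IPv4 multicast (Class D) group address to its Ethernet MAC.

    The MAC prefix 01:00:5e is followed by the low-order 23 bits of the
    group address; because the top bits of the address are discarded,
    several different groups share each multicast MAC address.
    """
    addr = int(IPv4Address(group))
    if addr >> 28 != 0b1110:  # Class D addresses start with binary 1110
        raise ValueError("not a Class D (multicast) address")
    low23 = addr & 0x7FFFFF
    return "01:00:5e:%02x:%02x:%02x" % (
        (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF)

print(multicast_mac("239.1.1.1"))  # 01:00:5e:01:01:01
print(multicast_mac("224.0.0.1"))  # 01:00:5e:00:00:01
```

This is why the IGMP join is addressed to the group's MAC rather than the router's: switches and NICs can recognize group traffic purely from the destination MAC.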
Multicast routers also periodically send an IGMP query to the all-hosts multicast group, or to a specific multicast group on the subnet, to determine which groups are still active within the subnet. Each host delays its response to a query by a small random period and will then respond only if no other host in the group has already reported. This mechanism prevents many hosts from congesting the network with simultaneous reports.

PIM

Protocol Independent Multicast (PIM) is a collection of multicast routing protocols, each optimized for a different environment. There are two main PIM protocols, PIM Sparse Mode and PIM Dense Mode. A third PIM protocol, Bidirectional PIM, is less widely used. Typically, either PIM Sparse Mode or PIM Dense Mode will be used throughout a multicast domain. However, they may also be used together within a single domain, using Sparse Mode for some groups and Dense Mode for others. This mixed-mode configuration is known as Sparse-Dense Mode. Similarly, Bidirectional PIM may be used on its own, or it may be used in conjunction with one or both of PIM Sparse Mode and PIM Dense Mode. All PIM protocols share a common control message format. PIM control messages are sent as raw IP datagrams (protocol number 103), either multicast to the link-local ALL-PIM-ROUTERS multicast group or unicast to a specific destination.

PIM Sparse Mode

PIM Sparse Mode (PIM-SM) is a multicast routing protocol designed on the assumption that recipients for any particular multicast group will be sparsely distributed throughout the network. In other words, it is assumed that most subnets in the network will not want any given multicast packet. In order to receive multicast data, routers must explicitly tell their upstream neighbors about their interest in particular groups and sources. Routers use PIM Join and Prune messages to join and leave multicast distribution trees.
PIM-SM by default uses shared trees, which are multicast distribution trees rooted at some selected node (in PIM, this router is called the Rendezvous Point, or RP) and used by all sources sending to the multicast group. To send to the RP, sources must encapsulate data in PIM control messages and send it by unicast to the RP. This is done by the source's Designated Router (DR), which is a router on the source's local network. A single DR is elected from all PIM routers on a network, so that unnecessary control messages are not sent. One of the important requirements of PIM Sparse Mode, and of Bidirectional PIM, is the ability to discover the address of an RP for a multicast group using a shared tree. Various RP discovery mechanisms are used, including static configuration, Bootstrap Router, Auto-RP, Anycast RP, and Embedded RP. PIM-SM also supports the use of source-based trees, in which a separate multicast distribution tree is built for each source sending data to a multicast group. Each tree is rooted at a router adjacent to the source, and sources send data directly to the root of the tree. Source-based trees enable the use of Source-Specific Multicast (SSM), which allows hosts to specify the source from which they wish to receive data, as well as the multicast group they wish to join. With SSM, a host identifies a multicast data stream by a source and group address pair (S,G), rather than by group address alone (*,G). PIM-SM may use source-based trees in the following circumstances: for SSM, a last-hop router will join a source-based tree from the outset; to avoid data sent to an RP having to be encapsulated, the RP may join a source-based tree; and to optimize the data path, a last-hop router may choose to switch from the shared tree to a source-based tree. PIM-SM is a soft-state protocol; that is, all state times out a while after receiving the control message that instantiated it. To keep the state alive, all PIM Join messages are periodically retransmitted.
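The (S,G) versus (*,G) distinction above can be modeled as a forwarding lookup that prefers source-specific state and falls back to the shared tree. This is an illustrative sketch; the interface names and table layout are hypothetical.

```python
# Toy multicast forwarding table: (source, group) entries represent
# source-based trees; ("*", group) entries represent the shared tree
# rooted at the RP.
mroutes = {
    ("10.0.0.5", "239.1.1.1"): ["eth1", "eth2"],  # (S,G): source-based tree
    ("*", "239.1.1.1"): ["eth3"],                 # (*,G): shared tree via the RP
}

def outgoing_interfaces(source, group):
    """Prefer (S,G) state, as a router does after switching to a
    source-based tree; otherwise fall back to the (*,G) shared tree."""
    return mroutes.get((source, group)) or mroutes.get(("*", group), [])

print(outgoing_interfaces("10.0.0.5", "239.1.1.1"))  # ['eth1', 'eth2']
print(outgoing_interfaces("10.9.9.9", "239.1.1.1"))  # ['eth3']
```

The fallback order mirrors PIM-SM behavior: traffic initially flows on the shared tree, and more specific (S,G) state takes precedence once it exists.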
Version 1 of PIM-SM was created in 1995 but was never standardized by the IETF. It is now considered obsolete, though it is still supported by Cisco and Juniper routers. Version 2 of PIM-SM was standardized in RFC 2117 (in 1997) and updated by RFC 2362 (in 1998). Version 2 is significantly different from, and incompatible with, version 1. However, there were a number of problems with RFC 2362, and a new specification of PIM-SM version 2 is currently being produced by the IETF. There have been many implementations of PIM-SM, and it is widely used.

PIM Dense Mode

PIM Dense Mode (PIM-DM) is a multicast routing protocol designed with the opposite assumption to PIM-SM, namely that the receivers for any multicast group are distributed densely throughout the network. That is, it is assumed that most (or at least many) subnets in the network will want any given multicast packet. Multicast data is initially sent to all hosts in the network. Routers that do not have any interested hosts then send PIM Prune messages to remove themselves from the tree. When a source first starts sending data, each router on the source's LAN receives the data and forwards it to all its PIM neighbors and to all links with directly attached receivers for the data. Each router that receives a forwarded packet also forwards it likewise, but only after checking that the packet arrived on its upstream interface. If not, the packet is dropped. This mechanism prevents forwarding loops from occurring. In this way, the data is flooded to all parts of the network. Some routers will have no need of the data, either for directly connected receivers or for other PIM neighbors. These routers respond to receipt of the data by sending a PIM Prune message upstream, which instantiates Prune state in the upstream router, causing it to stop forwarding the data to its downstream neighbor.
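The upstream-interface check described above is the reverse-path forwarding (RPF) rule: accept a multicast packet only if it arrived on the interface the router would itself use to reach the packet's source. A minimal sketch, with a hypothetical unicast routing table:

```python
from ipaddress import ip_address, ip_network

# Hypothetical unicast routing table: prefix -> interface used to reach it.
unicast_routes = {
    "10.0.0.0/8": "eth0",
    "192.168.1.0/24": "eth1",
}

def rpf_interface(source):
    """Interface this router would use to send unicast traffic back to source."""
    for prefix, iface in unicast_routes.items():
        if ip_address(source) in ip_network(prefix):
            return iface
    return None

def rpf_check(source, arrival_iface):
    """Accept only packets arriving on the reverse path toward the source;
    dropping the rest is what prevents loops during PIM-DM flooding."""
    return rpf_interface(source) == arrival_iface

print(rpf_check("10.1.2.3", "eth0"))  # True: accept and flood downstream
print(rpf_check("10.1.2.3", "eth1"))  # False: drop to avoid a forwarding loop
```

Because each router applies the same local rule, a flooded packet can circulate through the topology at most once.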
In turn, this may cause the upstream router to have no need of the data, triggering it to send a Prune message to its upstream neighbor. This broadcast-and-prune behavior means that eventually the data is only sent to those parts of the network that require it. Eventually, the Prune state at each router will time out, and data will begin to flow back into the parts of the network that were previously pruned. This will trigger further Prune messages to be sent, and the Prune state will be instantiated once more. PIM-DM only uses source-based trees. As a result, it does not use RPs, which makes it simpler than PIM-SM to implement and deploy. It is an efficient protocol when most receivers are interested in the multicast data, but it does not scale well across larger domains in which most receivers are not interested in the data. The development of PIM-DM has paralleled that of PIM-SM. Version 1 was created in 1995 but was never standardized. It is now considered obsolete, though it is still supported by Cisco and Juniper routers. Version 2 of PIM-DM is currently being standardized by the IETF. As with PIM-SM, version 2 of PIM-DM is significantly different from, and incompatible with, version 1. PIM Dense Mode (PIM-DM) is less common than PIM-SM and is mostly used for individual small domains.

The current version of the Internet Protocol, IPv4, was first developed in the 1970s, and the main protocol standard, RFC 791, that governs IPv4 functionality was published in 1981. With the unprecedented expansion of Internet usage in recent years, especially by population-dense countries like India and China, the impending shortage of address space was recognized by 1992 as a serious limiting factor to the continued usage of an Internet run on IPv4.
The following table shows how quickly the address space has been consumed in the years since 1981, when the IPv4 protocol was published. With admirable foresight, the Internet Engineering Task Force (IETF) initiated, as early as 1994, the design and development of a suite of protocols and standards now known as Internet Protocol version 6 (IPv6), as a worthy tool to phase out and supplant IPv4 over the coming years. There is an explosion of sorts in the number and range of IP-capable devices being released on the market, and in the usage of these devices by an increasingly tech-savvy global population. The new protocol aims to effectively support the ever-expanding Internet usage and functionality, and also to address security concerns. IPv6 uses a 128-bit address size, compared with the 32-bit system used in IPv4, and will allow for as many as 3.4×10^38 possible addresses, enough to cover every inhabitant on planet Earth several times over. The 128-bit system also provides for multiple levels of hierarchy and flexibility in hierarchical addressing and routing, a feature that is found wanting on the IPv4-based Internet. Internet Protocol version 6 (IPv6) is the next-generation Internet Protocol version designated as the successor to IPv4, the first implementation used in the Internet and still in dominant use currently. It is an Internet Layer protocol for packet-switched internetworks. The main driving force for the redesign of the Internet Protocol is the foreseeable IPv4 address exhaustion. IPv6 was defined in December 1998 by the Internet Engineering Task Force (IETF) with the publication of an Internet standard specification, RFC 2460. IPv6 has a vastly larger address space than IPv4. This results from the use of a 128-bit address, whereas IPv4 uses only 32 bits. The new address space thus supports 2^128 (about 3.4×10^38) addresses.
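The figures quoted above are easy to check directly; Python's arbitrary-precision integers make 2^128 trivial to compute:

```python
# Check the headline IPv6 numbers quoted in the text.
total = 2 ** 128
print(total)                  # 340282366920938463463374607431768211456
print(format(total, ".1e"))   # 3.4e+38

# Addresses per person for a ~6.5 billion world population (2006 figure):
print(format(total // 6_500_000_000, ".0e"))  # 5e+28

# A single IPv6 subnet (2^64 addresses) is the square of the whole IPv4 space:
assert (2 ** 32) ** 2 == 2 ** 64
```

The per-person figure matches the approximately 5×10^28 addresses cited later in the text.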
This expansion provides flexibility in allocating addresses and routing traffic and eliminates the primary need for network address translation (NAT), which gained widespread deployment as an effort to alleviate IPv4 address exhaustion. IPv6 also implements new features that simplify aspects of address assignment (stateless address autoconfiguration) and network renumbering (prefix and router announcements) when changing Internet connectivity providers. The IPv6 subnet size has been standardized by fixing the size of the host identifier portion of an address to 64 bits, to facilitate an automatic mechanism for forming the host identifier from Link Layer media addressing information (the MAC address). Network security is integrated into the design of the IPv6 architecture. Internet Protocol Security (IPsec) was originally developed for IPv6, but found widespread optional deployment first in IPv4 (into which it was back-engineered). The IPv6 specifications mandate IPsec implementation as a fundamental interoperability requirement. In December 2008, despite marking its 10th anniversary as a Standards Track protocol, IPv6 was only in its infancy in terms of general worldwide deployment. A 2008 study by Google Inc. indicated that penetration was still less than one percent of Internet-enabled hosts in any country. IPv6 has been implemented on all major operating systems in use in commercial, business, and home consumer environments.

IPv6 header format

The new IPv6 header is illustrated in the figure, while the IPv4 header is shown in Figure 2 to facilitate comparison between the two protocols. The IPv6 header fields are as follows:

Version (4 bits): Indicates the protocol version, and will thus contain the number 6.

DS byte (8 bits): Used by the source and routers to identify packets belonging to the same traffic class and thus distinguish between packets with different priorities.
Flow label (20 bits): Label for a data flow.

Payload length (16 bits): Indicates the length of the packet data field.

Next header (8 bits): Identifies the type of header immediately following the IPv6 header.

Hop limit (8 bits): Decremented by one by each node that forwards the packet. When the hop limit field reaches zero, the packet is discarded.

Source address (128 bits): The address of the originator of the packet.

Destination address (128 bits): The address of the intended recipient of the packet.

Compared to IPv4, the header format is simpler, which permits better performance. The decision to eliminate the checksum springs from the fact that it is already computed at layer 2, which is sufficient in view of the error rate of current networks. Better performance is thus achieved, as the routers no longer need to re-compute the checksum for each packet. On the debit side, eliminating the checksum means that there is no protection against errors routers can make in processing packets. However, these errors are not dangerous for the network, as they cause only the packet itself to be lost if there are fields with invalid values (e.g., nonexistent addresses). The hop limit field indicates the maximum number of nodes (hops) that a packet can cross before reaching its destination. In IPv4, this field is expressed in seconds (TTL: Time To Live), even though it has the same function. The change was made for two reasons. First, for the sake of simplicity: even in IPv4, the routers translate seconds into a number of hops, which is then translated back into seconds. Second, the change ensures freedom from physical network characteristics such as bandwidth. As the hop limit field consists of 8 bits, the maximum number of nodes that a packet can cross is 255.

The advantages IPv6 offers over IPv4:

Larger address space

The most important feature of IPv6 is a much larger address space than that of IPv4: addresses in IPv6 are 128 bits long, compared to 32-bit addresses in IPv4.
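The fixed 40-byte header layout listed earlier maps directly onto a binary format. A minimal parser sketch using Python's struct module (an illustration only; it assumes no extension headers follow):

```python
import struct

def parse_ipv6_header(data: bytes) -> dict:
    """Parse the fixed 40-byte IPv6 header (simplified sketch)."""
    if len(data) < 40:
        raise ValueError("IPv6 header is 40 bytes")
    # The first 4 bytes pack version (4 bits), traffic class / DS byte
    # (8 bits), and flow label (20 bits), big-endian; the next 4 bytes
    # hold payload length, next header, and hop limit.
    vtf, payload_len, next_header, hop_limit = struct.unpack("!IHBB", data[:8])
    return {
        "version": vtf >> 28,
        "traffic_class": (vtf >> 20) & 0xFF,
        "flow_label": vtf & 0xFFFFF,
        "payload_length": payload_len,
        "next_header": next_header,  # e.g. 6 = TCP, 17 = UDP, 58 = ICMPv6
        "hop_limit": hop_limit,
        "src": data[8:24].hex(),
        "dst": data[24:40].hex(),
    }

# A hand-built example: version 6, flow label 0x12345, 8-byte UDP payload,
# hop limit 64, all-zero addresses (for illustration only).
hdr = struct.pack("!IHBB", (6 << 28) | 0x12345, 8, 17, 64) + bytes(16) + bytes(16)
parsed = parse_ipv6_header(hdr)
print(parsed["version"], parsed["next_header"], parsed["hop_limit"])  # 6 17 64
```

Note how little work the parser does: with no header checksum and a fixed length, a router can extract everything it needs with a single fixed-offset unpack.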
(Figure: an illustration of an IPv6 address, in hexadecimal and binary.)

The very large IPv6 address space supports a total of 2^128 (about 3.4×10^38) addresses, or approximately 5×10^28 (roughly 2^95) addresses for each of the roughly 6.5 billion (6.5×10^9) people alive in 2006. From another perspective, this is the same number of IP addresses per person as the number of atoms in a metric ton of carbon. The size of a subnet in IPv6 is 2^64 addresses (a 64-bit subnet mask), the square of the size of the entire IPv4 Internet. Thus, actual address space utilization rates will likely be small in IPv6, but network management and routing will be more efficient because of the inherent design decisions of large subnet space and hierarchical route aggregation.

Stateless address autoconfiguration

IPv6 hosts can configure themselves automatically when connected to a routed IPv6 network using ICMPv6 router discovery messages. When first connected to a network, a host sends a link-local multicast router solicitation request for its configuration parameters; if configured suitably, routers respond to such a request with a router advertisement packet that contains network-layer configuration parameters.

Multicast

Multicast, the ability to send a single packet to multiple destinations, is part of the base specification in IPv6. This is unlike IPv4, where it is optional (although usually implemented). IPv6 does not implement broadcast, which is the ability to send a packet to all hosts on the attached link. The same effect can be achieved by sending a packet to the link-local all-hosts multicast group. IPv6 therefore lacks the notion of a broadcast address: the highest address in a subnet (the broadcast address for that subnet in IPv4) is considered a normal address in IPv6.

Mandatory network layer security

Internet Protocol Security (IPsec), the protocol for IP encryption and authentication, forms an integral part of the base protocol suite in IPv6.
IPsec support is mandatory in IPv6; this is unlike IPv4, where it is optional (but usually implemented). IPsec, however, is not widely used at present except for securing traffic between IPv6 Border Gateway Protocol routers.

Simplified processing by routers

A number of simplifications have been made to the packet header, and the process of packet forwarding has been simplified, in order to make packet processing by routers simpler and hence more efficient. Concretely, the packet header in IPv6 is simpler than that used in IPv4, with many rarely used fields moved to separate options; in effect, although the addresses in IPv6 are four times larger, the (option-less) IPv6 header is only twice the size of the (option-less) IPv4 header. IPv6 routers do not perform fragmentation; IPv6 hosts are required either to perform PMTU discovery, to perform end-to-end fragmentation, or to send packets smaller than the IPv6 minimum MTU size of 1280 octets. The Time-to-Live field of IPv4 has been renamed Hop Limit, reflecting the fact that routers are no longer expected to compute the time a packet has spent in a queue.

Mobility

Unlike mobile IPv4, Mobile IPv6 (MIPv6) avoids triangular routing and is therefore as efficient as normal IPv6. IPv6 routers may also support Network Mobility (NEMO) [RFC 3963], which allows entire subnets to move to a new router connection point without renumbering. However, since neither MIPv6, MIPv4, nor NEMO is widely deployed today, this advantage is mostly theoretical.

Options extensibility

IPv4 limits option parameters to a fixed size (40 octets). In IPv6, options are implemented as additional extension headers after the IPv6 header, which limits their size only by the size of an entire packet. The extension header mechanism allows IPv6 to be easily extended to support future services for QoS, security, mobility, etc. without a redesign of the basic protocol.

Jumbograms

IPv4 limits packets to 65535 (2^16 − 1) octets of payload.
IPv6 has optional support for packets over this limit, referred to as jumbograms, which can be as large as 4294967295 (2^32 − 1) octets. The use of jumbograms may improve performance over high-MTU links. The use of jumbograms is indicated by the Jumbo Payload Option header.

TASK 5

An intrusion detection system (IDS) is a device (or application) that monitors network and/or system activities for malicious activity or policy violations. Intrusion detection is the process of monitoring the events occurring in a computer system or network and analyzing them for signs of possible incidents, which are violations or imminent threats of violation of computer security policies, acceptable use policies, or standard security practices.[1] Intrusion prevention is the process of performing intrusion detection and attempting to stop detected possible incidents.[1] Intrusion detection and prevention systems (IDPS) are primarily focused on identifying possible incidents, logging information about them, attempting to stop them, and reporting them to security administrators.[1] In addition, organizations use IDPSs for other purposes, such as identifying problems with security policies, documenting existing threats, and deterring individuals from violating security policies.[1] IDPSs have become a necessary addition to the security infrastructure of nearly every organization. IDSes are classified in many different ways, including active and passive, network-based and host-based, and knowledge-based and behavior-based.

Active and passive IDS

An active IDS (now more commonly known as an intrusion prevention system, or IPS) is a system that is configured to automatically block suspected attacks in progress without any intervention required by an operator. An IPS has the advantage of providing real-time corrective action in response to an attack, but it has many disadvantages as well.
An IPS must be placed in-line along a network boundary; thus, the IPS itself is susceptible to attack. Also, if false alarms and legitimate traffic haven't been properly identified and filtered, authorized users and applications may be improperly denied access. Finally, the IPS itself may be used to effect a Denial of Service (DoS) attack: by intentionally flooding the system with alarms, an attacker can cause it to block connections until no connections or bandwidth are available. Intrusion prevention systems evolved in the late 1990s to resolve ambiguities in passive network monitoring by placing detection systems in-line. Early IPSes were IDSes that were able to issue prevention commands to firewalls and access control changes to routers. This technique fell short operationally, for it created a race condition between the IDS and the exploit as it passed through the control mechanism. Inline IPS can be seen as an improvement upon firewall technologies: an IPS can make access control decisions based on application content, rather than on IP addresses or ports as traditional firewalls had done. However, in order to improve performance and accuracy of classification mapping, most IPSes use the destination port in their signature format. As intrusion prevention systems were originally a literal extension of intrusion detection systems, they continue to be related. Intrusion prevention systems may also serve secondarily at the host level to deny potentially malicious activity. There are advantages and disadvantages to host-based IPS compared with network-based IPS, and in many cases the technologies are thought to be complementary. An intrusion prevention system must also be a very good intrusion detection system to maintain a low rate of false positives. Some IPSes can also prevent yet-to-be-discovered attacks, such as those caused by a buffer overflow.
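The in-line, content-based filtering idea described above can be sketched in a few lines of Python. This is a minimal illustration only: the signatures, payloads, and drop/forward decision are illustrative assumptions, not any particular product's rule set.

```python
# Minimal sketch of in-line (IPS-style) filtering: each packet's payload is
# checked against known attack signatures before it is forwarded.
# The signatures and sample payloads below are illustrative placeholders.

SIGNATURES = {
    b"/etc/passwd",        # path-traversal attempt
    b"\x90\x90\x90\x90",   # NOP-sled fragment (buffer-overflow shellcode)
}

def inspect(payload: bytes) -> str:
    """Return 'drop' if any known signature appears in the payload,
    otherwise 'forward' (the in-line device passes the packet on)."""
    for sig in SIGNATURES:
        if sig in payload:
            return "drop"
    return "forward"

if __name__ == "__main__":
    print(inspect(b"GET /index.html HTTP/1.1"))        # forward
    print(inspect(b"GET /../../etc/passwd HTTP/1.1"))  # drop
```

Because the device sits in the traffic path, a wrong "drop" decision here is exactly the false-alarm denial-of-access risk the paragraph above describes.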
A passive IDS is a system that's configured only to monitor and analyze network traffic activity and alert an operator to potential vulnerabilities and attacks. It isn't capable of performing any protective or corrective functions on its own. The major advantages of passive IDSes are that these systems can be easily and rapidly deployed and are not normally susceptible to attack themselves.

Network-based and host-based IDS

A Network Intrusion Detection System (NIDS) is an intrusion detection system that tries to detect malicious activity such as denial-of-service attacks, port scans, or even attempts to crack into computers by monitoring network traffic. A NIDS reads all the incoming packets and tries to find suspicious patterns known as signatures or rules. If, for example, a large number of TCP connection requests to a very large number of different ports are observed, one could assume that someone is conducting a port scan of some or all of the computers in the network. A NIDS also (mostly) tries to detect incoming shellcode in the same manner that an ordinary intrusion detection system does. A NIDS is not limited to inspecting incoming network traffic only: often, valuable information about an ongoing intrusion can be learned from outgoing or local traffic as well, and some attacks might even be staged from inside the monitored network or network segment, and are therefore not regarded as incoming traffic at all. A network-based IDS usually consists of a network appliance (or sensor) with a Network Interface Card (NIC) operating in promiscuous mode and a separate management interface. The IDS is placed along a network segment or boundary and monitors all traffic on that segment. A host-based IDS requires small programs (or agents) to be installed on the individual systems to be monitored. The agents monitor the operating system and write data to log files and/or trigger alarms.
A host-based IDS can only monitor the individual host systems on which the agents are installed; it doesn't monitor the entire network. A host-based IDS monitors all or parts of the dynamic behaviour and the state of a computer system. Much as a NIDS will dynamically inspect network packets, a HIDS might detect which program accesses what resources and discover that, for example, a word processor has suddenly and inexplicably started modifying the system password database. Similarly, a HIDS might look at the state of a system and its stored information, whether in RAM, in the file system, in log files or elsewhere, and check that the contents of these appear as expected. One can think of a HIDS as an agent that monitors whether anything or anyone, internal or external, has circumvented the system's security policy.

Monitoring dynamic behavior

Many computer users have encountered tools that monitor dynamic system behaviour in the form of anti-virus (AV) packages. While AV programs often also monitor system state, they do spend a lot of their time looking at who is doing what inside a computer and whether a given program should or should not have access to particular system resources. The lines become very blurred here, as many of the tools overlap in functionality.

Monitoring state

The principal operation of a HIDS depends on the fact that successful intruders (crackers) will generally leave a trace of their activities. In fact, such intruders often want to own the computer they have attacked, and will establish their ownership by installing software that grants them future access to carry out whatever activity (keystroke logging, identity theft, spamming, botnet activity, spyware usage, etc.) they envisage. In theory, a computer user has the ability to detect any such modifications, and the HIDS attempts to do just that and report its findings. Ideally a HIDS works in conjunction with a NIDS, such that the HIDS finds anything that slips past the NIDS.
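The state-monitoring approach above can be sketched as a tripwire-style baseline of file hashes that is later re-checked. The file names here are hypothetical placeholders; a real HIDS would cover system binaries, the password database, and so on.

```python
import hashlib
import os

def snapshot(paths):
    """Map each path to the SHA-256 hash of its current contents."""
    result = {}
    for path in paths:
        with open(path, "rb") as f:
            result[path] = hashlib.sha256(f.read()).hexdigest()
    return result

def changed(baseline, current):
    """Return the paths whose hash differs from the recorded baseline."""
    return [p for p, h in current.items() if baseline.get(p) != h]

# Demonstration with a temporary file standing in for a monitored system file.
with open("demo.conf", "w") as f:
    f.write("original contents\n")
baseline = snapshot(["demo.conf"])

with open("demo.conf", "w") as f:
    f.write("tampered contents\n")
print(changed(baseline, snapshot(["demo.conf"])))  # ['demo.conf']

os.remove("demo.conf")
```

In line with the paragraph above, this detects the intruder's modifications after the fact rather than preventing them, which is why the HIDS report is the valuable output.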
Ironically, most successful intruders, on entering a target machine, immediately apply best-practice security techniques to secure the system they have infiltrated, leaving only their own backdoor open so that other intruders cannot take over "their" computers.

Knowledge-based and behavior-based IDS

A knowledge-based (or signature-based) IDS references a database of previous attack profiles and known system vulnerabilities to identify active intrusion attempts. Knowledge-based IDS is currently more common than behavior-based IDS. Advantages of knowledge-based systems include the following: they have lower false alarm rates than behavior-based IDSes, and their alarms are more standardized and more easily understood. Disadvantages include these: the signature database must be continually updated and maintained, and new, unique, or original attacks may not be detected or may be improperly classified. A behavior-based (or statistical anomaly-based) IDS references a baseline or learned pattern of normal system activity to identify active intrusion attempts; deviations from this baseline or pattern cause an alarm to be triggered. Advantages of behavior-based systems are that they dynamically adapt to new, unique, or original attacks and are less dependent on identifying specific operating system vulnerabilities. Disadvantages include higher false alarm rates than knowledge-based IDSes, and usage patterns that may change often and may not be static enough to implement an effective behavior-based IDS.

BENEFITS OF AN IDS

In today's corporate market, the majority of businesses consider the Internet a major tool for communication with their customers, business partners and the corporate community. This mentality is here to stay; as a result, businesses need to consider the risks associated with using the Internet as a communication tool, and the methods available to them to mitigate these risks.
Many businesses are already aware of the types of risks that they are facing, and have implemented measures such as firewalls, virus detection software, access control mechanisms, etc. However, it is all too apparent that although these measures may deter the hobby hacker, the real danger and threat comes from the determined hacker. The determined hacker is just that: determined. They will find a way of penetrating your system, sometimes for malicious intent but mostly because they can and it is a test of skills. Whilst the above-mentioned tools are preventative measures, an IDS is more of an analysis tool that will give you the following information: the instance of attack, the method of attack, the source of attack, and the signature of attack. This type of information is becoming increasingly important when trying to design and implement the right security program for an organization. Although some of this information can be found in devices such as firewalls and access control systems, as they all contain log information on system activity, in these instances the onus is on the administrator to check the logs to determine whether an attempted attack has occurred, or, after the event, to find out when the attack occurred and the source of the attack. Usually, information pertaining to the method of the attack and the signature of the attack cannot be found in the logs. This is because devices such as firewalls are designed to check the IP packet header information and not the payload portion of the IP packet. An IDS will check the payload of the packet to determine whether the pattern of data held within matches that of a known attack signature. The benefits of the above information are as follows. Instance of attack: an IDS will alert when an attack is in progress; this gives you the benefit of counteracting the attack as it happens, without having to go through lengthy logs to find out when this particular attack occurred.
Method of attack: an IDS will let you know what area of your network, or which system on your network, is under attack and how it is being attacked. This enables you to react accordingly and hopefully limit the damage of the attack, for example by disabling communications to these systems. Source of attack: an IDS will let you know the source of an attack; it is then down to the administrator to determine whether it is a legitimate source. By determining the legitimacy of the source, the administrator is able to determine whether he/she can disable communications from this source. Signature of attack: an IDS will identify the nature and the pattern of the attack and alert accordingly. This information alerts the organization to the types of vulnerabilities that it is susceptible to and permits it to take precautions accordingly. The above information allows an organization to build a vulnerability profile of its network and the required precautions, plan its corporate defense strategy, and budget for security expenditure.

LIMITATIONS OF IDS

Network intrusion detection systems are unreliable enough that they should be considered only as secondary systems designed to back up the primary security systems. Primary systems such as firewalls, encryption, and authentication are rock solid: bugs or misconfiguration often lead to problems in these systems, but the underlying concepts are provably accurate. The underlying concepts behind NIDS, by contrast, are not absolutely accurate. Intrusion detection systems suffer from two problems: normal traffic causes many false positives (crying wolf), and careful hackers can evade or disable the intrusion detection systems. Indeed, there are many proofs that show how network intrusion detection systems will never be accurate.

Switched networks (inherent limitation)

Switched networks pose dramatic problems for network intrusion detection systems. There is no easy place to plug in a sensor in order to see all the traffic.
For example, somebody on the same switched fabric as the CEO has free rein to attack the CEO's machine all day long, such as with a password grinder targeting File and Print Sharing. There are some solutions to this problem, but not all of them are satisfactory.

Resource limitations

Network intrusion detection systems sit at centralized locations on the network. They must be able to keep up with, analyze, and store information generated by potentially thousands of machines. A NIDS must emulate the combined entity of all the machines sending traffic through its segment. Obviously, it cannot do this fully, and must take shortcuts.

Network traffic loads

Current NIDS have trouble keeping up with fully loaded segments. The average website has a frame size of around 180 bytes, which translates to about 50,000 packets per second on a 100 Mbps Ethernet. Most IDS units cannot keep up with this speed. Most customers have less traffic than this, but it can still occasionally be a concern.

TCP connections

An IDS must maintain connection state for a large number of TCP connections. This requires an extensive amount of memory. The problem is exacerbated by evasion techniques, often requiring the IDS to maintain connection information even after the client and server have closed the connection.

Reasons to Acquire IDSs

Intrusion detection capabilities are rapidly becoming necessary additions to every large organization's security infrastructure. The question for security professionals should not be whether to use intrusion detection, but which features and capabilities to use. However, one must still justify the purchase of an IDS. There are at least three good reasons to justify the acquisition of IDSs: to detect attacks and other security violations that cannot be prevented, to prevent attackers from probing a network, and to document the intrusion threat to an organization.

Detecting attacks that cannot be prevented

Attackers, using well-known techniques, can penetrate many networks.
This often happens when known vulnerabilities in the network cannot be fixed. For instance, in many legacy systems, the operating systems cannot be updated. In updateable systems, administrators may not have or take the time to install all the necessary patches on a large number of hosts. In addition, it is usually not possible to perfectly map an organization's computer use policy to its access control mechanisms, and thus authorized users often can perform unauthorized actions. Users may also demand network services and protocols that are known to be flawed and subject to attack. Although, ideally, we would fix all vulnerabilities, this is seldom possible. Therefore, an excellent approach for protecting a network may be to use an IDS to detect when an attacker has penetrated a system using an uncorrectable flaw. It is better at least to know that a system has been penetrated, so that administrators can perform damage control and recovery, than not to know that the system has been penetrated.

Preventing attackers from probing a network

A computer or network without an IDS may allow attackers to explore its weaknesses leisurely and without retribution. If a single known vulnerability exists in such a network, a determined attacker will eventually find and exploit it. The same network with an IDS installed is a much more formidable challenge to an attacker. Although the attacker may continue to probe the network for weaknesses, the IDS should detect these attempts, may block these attempts, and can alert security personnel who can take appropriate action.

Documenting the threat

It is important to verify that a network is under attack or likely to be attacked in order to justify spending money on securing the network. Furthermore, it is important to understand the frequency and characteristics of attacks in order to understand what security measures are appropriate for the network.
IDSs can itemize, characterize, and verify the threat from both outside and inside attacks, thereby providing a sound foundation for computer security expenditures. Using IDSs in this manner is important, since many people mistakenly believe that no one (outsiders or insiders) would be interested in breaking into their networks.

DISADVANTAGES

Implementations of IDS vary based on the security needs of the network or host on which the IDS is deployed. As we have seen, there isn't a universal IDS model that can provide the best intrusion detection monitoring in all environments. Complex architectures require complex IDS implementations, which in turn require a high degree of IDS expertise to deploy and maintain. However, even with the highest level of IDS expertise, intrusions cannot be fully shut out. The IDS techniques themselves do not offer a foolproof system to detect all the intrusions an attack can consist of. The information below details some of these shortcomings.

Anomaly Detection Disadvantages

Since anomaly detection operates by defining a normal model of system or network behavior, it usually suffers from a large number of false alarms due to the unpredictable behaviors of users and networks. These behaviors may not have malicious intent. In fact, an anomaly-based IDS that has a detection rate of 20 false alarms to 1 real intrusion detection is considered good. This is because normal system and network activity is, for the most part, very dynamic and very difficult to capture and predict. Anomaly detection approaches often require extensive training sets of network or system event records in order to characterize normal behavior patterns. These training sets can consist of various logs that capture the normal usage of the subject or object being monitored. Once the training sets are defined, they need to be fed into the anomaly detection engine to create a model of normal system usage.
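The training-set idea above can be sketched as fitting a simple statistical baseline from "normal" records and flagging large deviations. The metric (failed logins per hour) and the 3-sigma threshold are illustrative assumptions; real anomaly engines use far richer models.

```python
import statistics

# Sketch of baseline-based anomaly detection: fit a mean and standard
# deviation from a training set of normal activity, then flag later
# observations that deviate too far from that baseline.

def fit_baseline(training):
    """Return (mean, stdev) of the training observations."""
    return statistics.mean(training), statistics.stdev(training)

def is_anomalous(value, mean, stdev, sigmas=3.0):
    """Flag values more than `sigmas` standard deviations from the mean."""
    return abs(value - mean) > sigmas * stdev

# Illustrative training set: failed logins per hour during normal use.
train = [2, 3, 1, 4, 2, 3, 2, 3]
mean, stdev = fit_baseline(train)
print(is_anomalous(3, mean, stdev))    # False: within normal variation
print(is_anomalous(40, mean, stdev))   # True: e.g. a brute-force attempt
```

The false-alarm problem described above shows up directly here: any legitimate but unusual burst of activity would also exceed the threshold and trigger an alarm.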
Misuse Detection Disadvantages

Since misuse detection operates by comparing known intrusive signatures against the observed log, misuse detectors suffer from the limitation of only being able to detect attacks that are known. Therefore, they must constantly be updated with attack signatures that represent newly discovered attacks or modifications of existing attacks. They are also vulnerable to evasion: once a security hole has been discovered and a signature has been written to capture it, several iterations of copycat exploitations usually surface to take advantage of the same security hole. Since each such attack method is a variant of the original, it usually goes undetected by the original vulnerability signature, requiring the constant rewriting of signatures. Many misuse detectors are designed to use tightly defined signatures that prevent them from detecting variants of common attacks.

Host-Based IDS Disadvantages

The implementation of HIDS can get very complex in large networking environments. With several thousand possible endpoints in a large network, collecting and auditing the generated log files from each node can be a daunting task. Furthermore, if the IDS host is compromised, the host may cease to function, halting all logging activity; and if the host is compromised but logging continues to function, the trustworthiness of such log data is severely diminished.

Network-Based IDS Disadvantages

Network-based intrusion detection seems to offer the most detection coverage while minimizing IDS deployment and maintenance overhead. However, the main problem with implementing a NIDS with the techniques described in the previous sections is the high rate of false alarms. Modern enterprise network environments amplify this disadvantage due to the massive amounts of dynamic and diverse data that need to be analyzed. All the previously defined IDS techniques have their share of disadvantages.
There just isn't a single IDS model that offers 100% intrusion detection with a 0% false alarm rate and that can be applied in today's complex networking environments. However, incorporating multiple IDS techniques can, to a certain extent, minimize many of the disadvantages illustrated in the previous section.
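The layered approach suggested above can be sketched as running a signature engine and an anomaly engine side by side, acting immediately only on high-confidence signature hits and merely logging unconfirmed anomalies. All signatures, thresholds, and sample values here are illustrative assumptions.

```python
# Sketch of combining misuse (signature) detection with anomaly detection.
# A signature hit alone is high-confidence and is blocked; an anomaly alone
# is logged for human review, which trims the anomaly engine's false-alarm
# impact. Signatures, baseline, and inputs are illustrative placeholders.

def signature_hit(payload: bytes) -> bool:
    """Misuse detection: does the payload contain a known attack pattern?"""
    return b"/etc/passwd" in payload

def anomaly_hit(packets_per_sec: float, baseline: float = 500.0) -> bool:
    """Anomaly detection: is the traffic rate far above the learned baseline?"""
    return packets_per_sec > 3 * baseline

def verdict(payload: bytes, packets_per_sec: float) -> str:
    if signature_hit(payload):
        return "block"           # known attack: act immediately
    if anomaly_hit(packets_per_sec):
        return "log-for-review"  # unusual but unconfirmed
    return "allow"

print(verdict(b"GET /../etc/passwd", 100))  # block
print(verdict(b"GET /index.html", 5000))    # log-for-review
print(verdict(b"GET /index.html", 200))     # allow
```

This mirrors the section's conclusion: neither engine is sufficient alone, but layering them lets each compensate for the other's weaknesses.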

Thursday, May 21, 2020

Vaccines And The Health Of The Public

Vaccines have been used across America since the formation of the U.S. Vaccine Agency in 1812. Unlike medicines, vaccinations are used as an attempt to prevent health problems rather than treat or cure them. Because of their influence, many deadly diseases today are not as widespread. When a child is injected with a vaccine, they are actually being injected with a very weak version of a disease. This allows their immune system to become stronger against it as the child gets older, fighting against stronger versions of the virus in order to prevent being affected by them. However, some people still choose not to get vaccinated. This can potentially be harmful to those that have had their shots, because the unvaccinated child or adult increases the risk of disease. It is because of this great risk that vaccinations should be mandatory, in order to protect the general public. Despite it being known that vaccines do not always work 100% of the time, anti-vaxxers argue that they do not work at all. This is untrue, and has been proven wrong many times. One source says that "[...] in the 2007 Journal of the American Medical Association, [a study] concluded that [vaccines] have dramatically lowered the incidence of many severe illnesses [...]. Rubella [...] infected roughly 48,000 people a year in the mid-20th century; today, that number is less than two dozen" (Mandatory Vaccination). Other infections and viruses have been almost completely eliminated. While the fact that vaccines are not 100% effective still stands, it does not mean that they do not offer any protection at all. Some people cannot be vaccinated due to health issues, so they rely on others to be vaccinated in order to kill off diseases.
This is known as herd immunity. If someone is not vaccinated and does not have a health issue that precludes vaccination, they weaken this collective protection and risk causing an outbreak of disease. Not only should children be vaccinated, but adults should visit their doctors regularly to see whether they should be vaccinated as well. Each year, around 40,000 Americans die of diseases that could have been prevented by routine vaccine checkups (Missed shots: adult vaccines). Many adults are uninformed about getting vaccinations. The main cause of this is that people are focused only on child vaccinations, rather than on getting all people vaccinated. This is a problem not only in America but in other countries as well: one multi-country survey found that 60% of adults say they have not received government-provided information on the importance of adult vaccinations (Survey finds). A doctor named Harold C. Neu explains how easy it is to catch deadly diseases without vaccines (Barnhill). For example, a person could get tetanus just from cutting themselves on rusty metal. The risk of this is greatly decreased with vaccines: statistics from the Centers for Disease Control and Prevention show that a little less than half of Americans aged 60 and older do not even have the antibodies to defend against tetanus without vaccination (Frick). The laws in the U.S. have also contributed to endangering the public from these diseases. Today, all 50 U.S. states have laws requiring parents to get their children vaccinated. However, there are exemptions to these laws. As of 2016, 47 out of 50 states have religious exemptions to these laws and 17 have philosophical exemptions (Vaccination Rumors). Essentially, the country has people that are not being vaccinated for non-medical reasons. In some states, all that a child needs to be exempted from these vaccinations is a parent's signature (Mandatory Vaccination).
One person's choice should not be able to affect the lives of many others. There are many arguments made by people who oppose mandatory vaccines. Some people believe that vaccines can be harmful, due to a study from The Lancet in 1998 that claimed that vaccinations may cause autism and other mental or physical problems (Mandatory Vaccination). Despite this study having been discredited, some parents still do not vaccinate their children out of fear, thus increasing the risk of easily preventable diseases affecting their children. Other people believe that simply eating more healthily can make up for not getting vaccinations. While eating healthily is good, it cannot prevent pathogens from entering your body or defend against them. Many of these arguments have a huge lack of evidence to support them. In conclusion, vaccines are not meant to harm people, but rather to protect them. Misinformation is what affects the public's opinion so strongly, not science. Without vaccinations, we would not have nearly eradicated diseases like polio and diphtheria. State governments are here to serve the people, and they cannot do that without protecting them. By making vaccines mandatory for all people without health issues, they would be able to prevent deadly diseases from killing large populations of citizens.