Introduction
The majority of computer systems rely on a local computer to run applications and store data. Cloud computing is a relatively new term in the field of computer systems and, as such, it still lacks a clear and strict definition while being used in various contexts. Rapid technological change has been noted as a leading cause of the difficulty in defining the term. Nonetheless, cloud computing can be defined as the management and provision of applications, information and data on a consumption-based model. In cloud computing, applications and data reside on a remote server, and users access them via the internet or another network. According to a Sun white paper, what distinguishes cloud computing from previous models of computing is that it involves the use of information technology as a service over the network. In light of this, there is a need to evaluate what cloud computing is, how it works, where and how it is used, its effects on different areas, its benefits, its drawbacks and the issues that arise as a result of its usage.
Main Discussion
How does cloud computing work? The basic infrastructure of cloud computing consists of local personal computers, network infrastructure and remote servers. Clients’ PCs are connected to virtual servers via the network, and clients work on these local personal computers (Rittinghouse 39). The personal computer may have low specifications (RAM size, processor speed, etc.) since nothing is stored locally. Through these PCs, users access data and processing services from the remote servers over the internet. Cloud computing has a number of components, including virtualization, open source software and on-demand deployment (Sun 2).
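As a rough sketch of this thin-client arrangement (the endpoint URL and function names are hypothetical, invented purely for illustration), a local machine might fetch processed results from a remote server like this:

```python
# A thin local client: nothing is stored or processed locally; the PC simply
# requests data and processing results from a remote (cloud) server.
import json
from urllib.request import urlopen

CLOUD_ENDPOINT = "https://api.example-cloud.test/v1/report"  # hypothetical URL

def fetch_report() -> dict:
    """Retrieve already-processed data from the remote server over the network."""
    with urlopen(CLOUD_ENDPOINT, timeout=10) as response:
        return json.loads(response.read().decode("utf-8"))
```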
Cloud computing is used in various areas and for various purposes, and for this reason different users turn to it for different reasons (Vaquero, Caceres and Lindner 1108). For example, a firm may use cloud computing to obtain a faster and more powerful server for its data processing, to secure large quantities of data, or to clone applications in order to handle a sudden workload. Cloud computing affects enterprises, users and software firms in various ways. For instance, it can transform the way applications are designed, built and delivered. In the traditional model, users buy software and install it on the local machine; future updates of the software can then be downloaded and installed into the application. With cloud computing, however, the user does not install software; rather, the software is hosted on a remote virtual server (the cloud). The cloud vendor also provides additional services such as security, data recovery facilities and so on. Updating and maintenance of the software are performed directly on the remote installation, and the client is excluded from this process (Weiss 674). Cloud computing also affects data centers. The rationale behind the technology is that the processing and storage of data take place off the user's premises rather than on local hardware; such processing and storage are maintained over the internet by organisations in designated data centres.
An organization that uses cloud computing does not need to have a physical data centre within or outside its premises, although some still maintain data centres as a security measure. Cloud computing also affects the way in which data and software services are paid for. It is associated with the ‘Software as a Service’ (SaaS) model of software distribution, in which a service provider or vendor hosts the applications and makes them available to clients over the internet. The clients pay for the services, usually at a time-based or usage-based rate.
Just like any other form of technology, cloud computing has its own advantages and disadvantages. One advantage that cloud computing provides to its users is its low cost. Costs matter to businesses, non-profit organizations and individual consumers alike. Various companies are currently offering very attractive packages for their cloud-based processing systems. One example is Osprey, a UK company that offers information management software to law firms, among other products and services. The company is currently offering an information management system at a cost of £185 per month in a package that includes up to three user licences of the software product, engineering set-up, hosting of the software, real-time backup at multiple locations in the UK with automatic disaster recovery, software updates, support services, unlimited online user training, and system and data security (Buyya 18). Previously, the offline systems the company offered cost much more and included fewer services and licences. Another advantage of cloud computing is that it can solve problems posed by traditional data centres. Many applications cannot be run on a single set of standard platform configurations, and the operational requirements of most critical applications are quite demanding, so their operating environments call for specialised adjustments. Accommodating these different application behaviours results in higher costs and increased system management complexity, especially for critical applications. Through the use of new and evolving technologies, cloud computing can harness the power of distributed computing (Vogels 456). According to Tony Bishop, “The promise of cloud computing is providing significantly improved user experience, while balancing cost and efficiency – three critical pillars that must be satisfied for a business to succeed”. With a pay-as-you-go model, users are charged only for the amount of traffic, bandwidth, and memory used. Online businesses become more efficient by using only the storage and space they need, while also being assured of capacity for any increase in usage (Menken 231). Other advantages of cloud computing include increased run time and response time; an increased pace of innovation, since the low cost of entry to new markets levels the playing field and allows start-up companies to deploy new products promptly and cheaply; and increased compatibility, since all clients use the same version of the software (Bishop 4).
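To illustrate the pay-as-you-go idea, the sketch below computes a monthly bill purely from metered consumption; the unit rates and usage figures are assumptions made for the example, not any provider's real prices.

```python
# Hypothetical consumption-based billing: the user pays only for what was used.
RATES = {
    "bandwidth_gb": 0.09,    # assumed price per GB of traffic
    "storage_gb": 0.02,      # assumed price per GB stored for the month
    "memory_gb_hour": 0.005  # assumed price per GB-hour of memory
}

def monthly_bill(usage: dict) -> float:
    """Multiply each metered quantity by its unit rate and sum the charges."""
    return sum(RATES[item] * amount for item, amount in usage.items())

# Example: a small online shop's metered usage for one month.
print(monthly_bill({"bandwidth_gb": 120, "storage_gb": 50, "memory_gb_hour": 1440}))
```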
On the other hand, cloud computing comes with some drawbacks. An internet connection is mandatory for the system to be useful: the system cannot be used by firms without network connections, and if the network goes down, no work can be done. Another problem arises with the use of peripherals such as printers and scanners, particularly small ones. The implementation of cloud computing in an organisation also requires a third party to provide the service, which raises questions about information confidentiality and security: handling your own security gives a sense of confidence that is lost when the task is entrusted to a third party (Techsuperb 9). A further issue is that the specific location of data is unknown, personally identifiable information can be distorted, and any issues that arise may prove difficult to investigate, since customers share their hosting space. In addition, current legal systems have not been developed to handle the issues presented by cloud computing.
One major issue that arises with cloud computing is security. Web-based systems face much higher security threats than offline systems. The threats come from viruses, crackers (usually referred to as hackers), malware and insider threats, among others (Armbrust, Fox, Griffith, Anthony, Katz, Konwinski, Lee, Patterson, Rabkin, Stoica, and Zaharia 47). The security provided to systems and data in cloud computing is very high compared to locally hosted systems. Some analysts therefore insist that data and system security is better in cloud-based systems than in client-based systems. Others counter that this cannot be definitive, since cloud systems face a much more diverse set of security threats.
Conclusion
Cloud computing is a new technology that is currently viewed as one of the major developments in computing. It involves using information technology (data and software services) as a service over the network. Cloud computing has changed the way software is developed, deployed and updated; it has influenced how application and data services are paid for, how data centers are designed, the infrastructure on which applications run, and how employees work. Cloud computing is built on existing infrastructure and adds new technologies. It promises lower costs, faster response times and more innovation. It does, however, face challenges such as security and user acceptance.
Works Cited
Armbrust, Michael, Fox, Armando, Griffith, Rean, Anthony, Joseph, Katz, Randy, Konwinski, Andy, Lee, Gunho, Patterson, David, Rabkin, Ariel, Stoica, Ion and Zaharia, Matei. Above the Clouds: A Berkeley View of Cloud Computing. California: University of California, 2009. Print.
Bishop, Tony. How The Enterprise Cloud Computing Affects The Datacenter. Weblogic. 2009. Web.
Buyya, Rajkumar, Yeo, Chee S., Venugopal, Srikumar, Broberg, James, and Brandic, Ivona. Cloud Computing and Emerging IT Platforms: Vision, Hype, and Reality for Delivering Computing as the 5th Utility. Future Generation Computer Systems, 25.6 (2009).
Menken, Ivanka. Cloud Computing: The Complete Cornerstone Guide to Cloud Computing Best Practices: Concepts, Terms, and Techniques for Successfully Planning, Implementing and Managing. New York: Emereo Pty Ltd. 2009.
Rittinghouse, John. Cloud Computing: A Practical Approach. Boston: McGraw-Hill Publishers, 2009. Print.
Sun. Introduction to cloud computing architecture. White paper 1st edition. 2009. Web.
Techsuperb. Understanding Cloud computing – advantages and risks. 2009. Web.
Vaquero, Luis, M., Caceres, Juan, and Lindner, Maik. A Break in the Clouds: Towards a cloud Definition. Computer Communication Review, 39.1(2008).
Vogels, Werner. A Head in the Clouds—The Power of Infrastructure as a Service. In First Workshop on Cloud Computing and its Applications, 42.3 (2008).
Weiss, Aaron. Cloud Computing: PC Functions Move onto the web. Computing in the Clouds, 11.4(2007).
Genetic Screening And Testing
Introduction
Every expectant mother wants to be sure that her baby is healthy and well developed and that no complications threaten the pregnancy. Modern technologies allow pathologies to be detected at an early stage of pregnancy, which makes it possible to make the necessary decisions promptly. Genetic screening (prenatal screening) is safe for both mother and fetus and can accurately identify the threat of genetic diseases and the risks of pregnancy complications. For parents it is highly important to ensure health safety for themselves and their children, so early diagnosis should be regarded as a preventive tool that assists both parents and clinicians in safeguarding fetal health.
Concept Definition
This descriptive report explains how genetic screening and testing assists clinicians in detecting congenital disabilities in babies. The technique is better known as prenatal testing, which is highly important for pregnant women. Genetic screening (prenatal screening) is a set of diagnostic studies aimed at detecting anomalies and identifying fetal pathology. It combines ultrasound screening (performed at 11-13, 16-18, 21-22 and 30-32 weeks, as required) and biochemical screening (a double test at 11-13 weeks and a triple test at 16-18 weeks) (DeThorne and Ceman 61). The double test measures pregnancy-associated plasma protein A (PAPP-A) and chorionic gonadotropin. The triple test determines alpha-fetoprotein, chorionic gonadotropin and free estriol. Genetic screening is recommended for all expectant mothers.
In addition, pregnancy screening reveals the possibility of chromosomal abnormalities or congenital disabilities in the future baby. Prenatal testing is performed to rule out syndromes such as Down, Edwards and Patau syndromes, neural tube defects, and other anomalies, including those of the placenta. It should be noted that the most common chromosomal abnormality is Down syndrome.
Such screening is based on the difference between blood indicators in a pregnant woman carrying a fetus with chromosomal abnormalities and those of a woman carrying a healthy baby. The concentration of the markers depends on the duration of pregnancy and the condition of the fetus (Shaffer et al. 502). Consequently, screening is scheduled at fixed intervals, when the risks can be assessed as accurately as possible. Usually two such studies are required, in the first and second trimesters: a double and a triple test, respectively.
Concept Description
Genetic screening is recommended for pregnant women who fall into the following categories:
- Age over 35;
- Ultrasound of the fetus showed deviation from the norms of development;
- One parent is a carrier of the genetic disease;
- The family already has a child or close relative with a chromosomal disease or congenital disability;
- Before becoming aware of the pregnancy, the woman took potent drugs not recommended for pregnant women, underwent x-rays or other radiation exposure, or was under severe physical stress;
- The partners are blood relatives (e.g., cousins);
- The family wants to exclude the possibility of having a baby with developmental disorders or chromosomal diseases.
The genetic test allows clinicians to identify the following specific deviations:
- Trisomy of the 21st pair of chromosomes (Down syndrome);
- The risk of trisomy on the 13th chromosome (Patau syndrome);
- Trisomy on the 18th pair of chromosomes (Edwards syndrome);
- Cornelia de Lange Syndrome;
- Smith-Lemli-Opitz syndrome;
- Shereshevsky-Turner syndrome;
- Triploidy of maternal origin;
- Neural tube defects (anencephaly, spina bifida);
- Omphalocele (umbilical cord hernia).
It should be noted that most of these diseases have a significant impact on the quality of life of both the child and the whole family. Some of them can be corrected: spina bifida in its mildest forms may not require treatment at all, and some defects can be eliminated surgically, but in several cases negative consequences remain even after surgery (Shaffer et al. 503). It is a mistake to think that chromosomal abnormalities are rarities occurring only once in tens of thousands of newborns. Some anomalies are indeed infrequent, but Down syndrome occurs in roughly one in every 600-800 births. At the same time, even families who are not at any particular risk can lose in the genetic lottery.
Genetic Screening Technique
Prenatal screening is performed twice, in the first and second trimesters of pregnancy. As noted earlier, in addition to assessing the risks of genetic pathologies, it allows clinicians to predict possible complications of pregnancy, such as late toxicosis, placental insufficiency, intrauterine hypoxia and preterm delivery. The first screening should take place in the first trimester, at 11-13 weeks of pregnancy, as part of the initial genetic assessment. At this time the activity of the embryo is still low, but the placenta is already very active, so much information is provided by its indicators – free HCG (human chorionic gonadotropin) and PAPP-A (pregnancy-associated plasma protein A) (DeThorne and Ceman 65). An indicator that does not correspond to the gestational age points to a delay in intrauterine development or signals the risk of hypertensive conditions.
The first screening is combined with an ultrasound examination to assess whether fetal development complies with the standards. First-trimester screening should be performed between 11 and 13 weeks; clinicians suggest that 12 weeks is the most appropriate time for reliable results. It detects the likelihood of pathologies such as defects of the anterior abdominal wall, neural tube defects and specific genetic pathological changes. In addition, the “double test” indicates the threat of miscarriage and fetoplacental insufficiency. The conclusion of the first-trimester screening is based on ultrasound and biochemical blood analysis. Free β-HCG and PAPP-A are also measured. At 10-12 weeks of pregnancy the HCG level reaches its peak and then decreases. The PAPP-A study should be performed at week 12 (Franceschini et al. 573); after 14 weeks it is no longer informative. It is also necessary to measure the nasal bones and the blood flow in the venous duct, and to exclude regurgitation on the tricuspid valve. First-trimester biochemical screening, in combination with ultrasound markers, gives a 90% chance of detecting Down syndrome. If necessary, it is recommended to calculate the individual risk of having a baby with chromosomal abnormalities.
The second prenatal screening is conducted from 14 to 20 weeks, preferably at 16-18 weeks, since many pathologies form during this period. Screening of the second trimester consists of a detailed ultrasound and a biochemical analysis of blood from a vein (HCG, AFP and free estriol), the so-called “triple test.” Ultrasound examination in the second trimester confirms the development and determines the size of the fetus, excludes anomalies of the major organs and systems, and evaluates the amniotic fluid and the length of the nasal bone, thigh and shoulder bones. Any rise or fall in HCG is recorded, as in the first trimester. AFP is most informative at 17-18 weeks (Légaré et al.). The level of free estriol (E3) demonstrates the functioning of the fetoplacental system; a fall of more than 40% indicates a threat of miscarriage. A combination of indicators is estimated – placental HCG, fetal AFP (alpha-fetoprotein) and free estriol – which together characterise the state of the placenta, the fetus and the woman's body. The second screening thus provides detailed information on the operation of the fetoplacental complex.
Not only does screening reveal the risk of chromosomal abnormalities in a baby, it also allows clinicians to diagnose and prescribe appropriate therapy for various pregnancy complications. Screening conclusions are provided as a report containing the test data and the generally accepted medical standards. The results express the probable risk of trisomy as a ratio such as 1:14000, that is, one case per 14,000 or more pregnancies (Shaffer et al. 505). Given all the data and their interpretation, the gynecologist may recommend that the woman consult a geneticist and undergo an advanced independent analysis.
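To make the ratio format concrete, a worked arithmetic example (the figure is illustrative, not taken from any specific report): a quoted risk of 1:14000 corresponds to a probability of

\[ P = \frac{1}{14\,000} \approx 0.00007 \approx 0.007\%, \]

that is, roughly seven affected pregnancies per 100,000.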
Conclusion
Addressing the risks identified through screening helps prevent many complications of the last trimester of pregnancy and may save the life and health of the baby. It is important to remember that belonging to a risk group is not a diagnosis. Pregnant women whose screening has excluded the risk of pathologies do not require further studies; however, each woman decides at her own discretion whether to have the screenings or not. The most accurate method of diagnosing chromosomal pathology is the analysis of fetal chromosomes, which gives a definitive diagnosis. To confirm (or refute) a screening result, additional tests are needed – amniocentesis or chorionic villus biopsy.
Works Cited
DeThorne, Laura S., and Stephanie Ceman. “Genetic Testing and Autism: Tutorial for Communication Sciences and Disorders”. Journal of Communication Disorders, vol 74, 2018, pp. 61-73. Elsevier BV.
Franceschini, Nora et al. “Genetic Testing in Clinical Settings”. American Journal of Kidney Diseases, vol 72, no. 4, 2018, pp. 569-581. Elsevier BV.
Légaré, France et al. “Improving Decision Making About Genetic Testing in The Clinic: An Overview of Effective Knowledge Translation Interventions”. PLOS ONE, vol 11, no. 3, 2016, p. e0150123. Public Library of Science (Plos).
Shaffer, Lisa G. et al. “Quality Assurance Checklist and Additional Considerations for Canine Clinical Genetic Testing Laboratories: A Follow-Up to The Published Standards and Guidelines”. Human Genetics, vol 138, no. 5, 2019, pp. 501-508. Springer Science and Business Media LLC.
Management Solution Needed For The Metropolitan Police Service
A dedicated “solution” is needed for MPS staff who frequently have multiple identities associated with their different job roles
MPS stands for Metropolitan Police Service, the organization responsible for keeping law and order in London, United Kingdom. “Today, the Metropolitan Police Service employs 31,000 officers, 14,000 police staff, 414 traffic wardens and 4,000 Police Community Support Officers (PCSOs) as well as being supported by over 2,500 volunteer police officers in the Metropolitan Special Constabulary (MSC) and its Employer Supported Policing (ESP) program.” (MPS publication scheme n.d.).
The organization is a vast one, with a huge body of officers, so managing all of its personnel has become a challenge for the MPS. Since the information handled within the organization is highly confidential, it is necessary to choose a strongly authenticated method for accessing that secure data. Not all employees are allowed to access all data; there are limits for each employee, and these limits can be enforced through the concept of identity.
A single identity check filters out only a limited amount of data misuse, whereas multiple identity checks on data access are acceptable in every case. The MPS uses four identity checks to secure the confidential information within the organization. The identities of each officer must be managed well and with zero errors, so a dedicated solution for managing identities is needed. The solution should take care of the utmost security of the data and manage all the identities of each employee. Managing identities requires the solution to issue identities to new employees, refresh the identities of existing employees and cancel the identities of employees who have left the organization. “Identity management is a discipline which encompasses all of the tasks required to create, manage, and delete user identities in a computing environment.” (What is identity management? 2009).
Handling the protected records of an organization always requires a highly efficient method. In the MPS, a management tool is needed that can work with all four identities, along with the biological identity, in an effective manner and with minimal risk to the security of the organization.
When a new employee joins the wing, the identity management tool should provide an account for that person and attach the necessary privileges to it; the account is also restricted to the areas its holder is authorized to access. An existing employee's account has to undergo a few steps to keep it secure: forgotten passwords are handled by resetting them, and regular password changes are essential for maximizing system security. Upon dismissal of an employee, all of that person's access should be revoked across the entire system, and the full details of the account and credentials, along with the person's records, should be cleared out and replaced by the newer entries in the domain system.
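The lifecycle just described (provision an account, reset or rotate passwords, revoke on dismissal) could be sketched roughly as follows; the class, its in-memory store and the role names are hypothetical, not the MPS's actual tooling.

```python
import secrets

class IdentityManager:
    """Minimal sketch of an identity lifecycle: create, maintain, delete."""

    def __init__(self):
        self._accounts = {}  # in-memory store, for illustration only

    def provision(self, user_id: str, roles: set) -> str:
        """Create an account for a new employee with its authorised roles."""
        password = secrets.token_urlsafe(12)
        self._accounts[user_id] = {"roles": roles, "password": password}
        return password

    def reset_password(self, user_id: str) -> str:
        """Handle a forgotten password or a routine rotation."""
        password = secrets.token_urlsafe(12)
        self._accounts[user_id]["password"] = password
        return password

    def revoke(self, user_id: str) -> None:
        """On dismissal, remove the account and all of its access."""
        self._accounts.pop(user_id, None)

idm = IdentityManager()
idm.provision("officer_0427", {"case_files", "duty_roster"})  # hypothetical roles
idm.reset_password("officer_0427")
idm.revoke("officer_0427")
```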
What role might biometric techniques play in strengthening user authentication?
As a step towards executing the solution, an efficient biometric technique for the MPS to adopt must be identified. “One of the most dangerous security threats is impersonation, in which somebody claims to be somebody else. The security services that counter this threat are identification and authentication.” (Polemi 2000).
The identity must be disclosed, with proper proof, to the machine protecting the confidential data so that the user is allowed to access it. Before any analysis, the term biometrics should be defined. “The term biometrics applies to a broad range of electronic techniques that employ the physical characteristics of human beings as a means of authentication.” (What are biometric techniques? 2009).
The field of biometrics is significant because, in the contemporary world of competition and pace, the security of communication and data is of the highest priority. In the case of the MPS, only the intended people themselves should access the data, so biometric techniques are adopted to ensure that the right person is accessing the right data. A person's physical characteristics cannot be disguised to cheat the sensors.
The efficiency of various biometric techniques varies considerably; so the selection of the technique for a particular application is done after proper analysis of various methods available. “Biometric accuracy is measured in two ways; the rate of false acceptance (an impostor is accepted as a match – Type 1 error) and the rate of false rejects (a legitimate match is denied – Type 2 error).” (Ruggles 1996).
The MPS should carry out this accuracy analysis to avoid selecting the wrong biometric techniques. The system that wrongly accepts the fewest unauthorized people and wrongly rejects the fewest authorized ones is the most efficient. However, a single biometric technique will never reach maximum efficiency on its own, so more than one technique is used at a time in the systems at MPS.
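The two error rates quoted above can be estimated from trial counts, as in this small sketch; the figures used in the example are invented, not measurements of any MPS system.

```python
def false_accept_rate(impostors_accepted: int, impostor_attempts: int) -> float:
    """Type 1 error: share of impostor attempts that were wrongly accepted."""
    return impostors_accepted / impostor_attempts

def false_reject_rate(genuine_rejected: int, genuine_attempts: int) -> float:
    """Type 2 error: share of legitimate attempts that were wrongly rejected."""
    return genuine_rejected / genuine_attempts

# Illustrative trial: 2 of 1000 impostors accepted, 15 of 1000 genuine users rejected.
print(false_accept_rate(2, 1000), false_reject_rate(15, 1000))  # 0.002 0.015
```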
The MPS uses different biometric techniques to recognize authorized access to secure data. “Recognition is defined as a process involving perception and associating the resulting information with one or a combination of more than one of its memory contents.” (Jain 1999, p.3).
The sensors sense the required physical or physiological factor and compare it with the pre-recorded data. There may be one or more matches, and the use of more than one biometric analysis helps in finding the right result: a mismatch in any one of them denies the person access.
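A rough sketch of that combination, where access requires every modality to match and a mismatch in any one denies entry, might look like this; the modalities, thresholds and scores are invented for illustration.

```python
# Each modality yields a similarity score against the enrolled template;
# access is granted only if every score clears its threshold (logical AND).
THRESHOLDS = {"fingerprint": 0.80, "face": 0.75, "voice": 0.70}  # assumed values

def grant_access(scores: dict) -> bool:
    """Deny access if any single biometric check falls below its threshold."""
    return all(scores.get(modality, 0.0) >= t for modality, t in THRESHOLDS.items())

print(grant_access({"fingerprint": 0.91, "face": 0.82, "voice": 0.64}))  # False: voice mismatch
```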
There are many biometric techniques available under different categories, namely physiological, behavioral and new biometric techniques.
Fingerprint, face and voice recognition are the techniques most commonly used in the authentication process. Which methods are adopted is always determined by the requirements of the organization. In the case of the MPS, forced access to secure data must be eliminated effectively, so methods that are sensitive to signs of stress should be installed; speech recognition and face recognition can be efficient in such cases.
The use of speaker verification relies on the fact that the method is less prone to cheating. “Speaker verification can make a security system less vulnerable to violation and more easily accessible from remote sites.” (Polemi 1997, p.28).
It is regarded as secure because imitations are easily detected by the sensors, and even a forced or stressful attempt at access will be denied effectively. An organization like the MPS can always use this method to its advantage. Another method of authentication that can be used for secure data access is face recognition. “Face recognition is a very complex form of pattern recognition. It consists of classifying highly ambiguous input signals, with multiple dimensions and matching them with the know ’signals’. Classifying a pattern with high dimensions requires a restrictively large number of training samples. This is known as the ‘Curse of Dimensionality.” (Tamma 2002, p.1).
Overcoming this curse of dimensionality is what makes effective recognition of a person by their face possible. This suits the MPS well, as no change in the face is tolerated for access: any minute change will make the system deny the person access to the data.
Distinguish between a biological identity and multiple digital identities
Computer crime is now recognized worldwide. In the words of Janet Williams, “Electronic crime is a growing phenomenon of the twenty-first century and has the potential to affect us all. This Unit will provide a law enforcement solution and work towards limiting the impact of this crime on society.” (Williams n.d.). This statement underlines the prevalence of computer crime. Janet Williams is the Deputy Assistant Commissioner of a subordinate organization of the MPS named the Specialist Crime Directorate (SCD).
Biological identity is a unique characteristic of living organisms: “even individuals belonging to the same species have many different characteristics; for example, nose or ear shape, hair color, eye color, etc. Thus we can say that each organism has a biological identity of its own.” (The biological identity of living organism n.d.).
Biological identity cannot be imitated at all, since it is determined by the genes inside the cells of the organism, which cannot be altered by any means of disguise. DNA pattern verification, identification of sweat pores, recognition of the shape of the ears and detection of body odour are all biological identity analysis techniques. “Because useful biometric data ought to remain fixed during a person’s lifetime, such information may have to be considered as personal property in the legal sense.” (Huth n.d., p.8).
Identities are checked by cross-checking them against the personal properties defined earlier. Digital identities can often be confused because of wrong results when comparing sensed and stored data, so multiple digital identities are used to eliminate the confusion caused by false assessments. Comparing biological and digital identity shows that the biological method is truly secure, but it is not yet practical: testing the genes of a person who wants to access data is impossible at present, although using genetics to identify criminals is of great significance. Though digital identification is prone to errors, using multiple identities reduces the rate of faults, and such checks turn out to be completely practical.
Should the Police “trust” cryptographic techniques (such as RSA) that are used as an integral part of PKI?
The fact that secure messaging within the MPS is critical makes the exploration for an encrypted secure system inevitable. As an example, the analysis of the SSL becomes significant. “SSL (Secure Socket Layer) is a protocol layer that exists between the Network Layer and Application layer.” (Implementing and using SSL to secure HTTP traffic 2008, Para. 2).
SSL turns out to be an efficient layer for the secure transfer of data between the network and application layers, and the protocol is applicable to almost all types of data transfer, including HTTP, IMAP, POP, LDAP and so on.
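As a brief sketch of using such a layer in practice, the snippet below opens a TLS (SSL) protected connection with Python's standard ssl module and sends a plain HTTP request through it; the host name is a placeholder, and this is only an illustration, not the MPS's configuration.

```python
import socket
import ssl

HOST = "example.org"  # placeholder host, for illustration only

# Wrap an ordinary TCP socket in a TLS layer that sits between the network
# layer and the application protocol (here, plain HTTP).
context = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("negotiated:", tls_sock.version())  # e.g. TLSv1.3
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200))
```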
The SSL protocol, combined with encryption technology, will greatly strengthen the security of data communication for the MPS.
The data communication between the people within the organization of MPS should be secure and inaccessible by a person other than the sender and the receiver. The most widely used technique called encryption can be used effectively along with SSL applications. “Encryption is the conversion of data into a form, called a ciphertext that cannot be easily understood by unauthorized people. Decryption is the process of converting encrypted data back into its original form, so it can be understood.” (Encryption 2009).
The entire conversation sent over the communication line is converted into another form using a key, so that a person who has no access to that key cannot decrypt the transmitted ciphertext to tap the information. The security of the data therefore depends on the key used for the encryption process.
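As a small illustration of encrypting with a shared key, the sketch below uses the Fernet recipe from the third-party Python cryptography package; the message is invented and this is not presented as the MPS's actual scheme.

```python
from cryptography.fernet import Fernet  # requires the 'cryptography' package

key = Fernet.generate_key()   # the secret key both parties must hold
cipher = Fernet(key)

token = cipher.encrypt(b"Patrol schedule for sector 4")  # ciphertext
print(cipher.decrypt(token))  # only a holder of the key can recover the plaintext
```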
The keys used for encryption are of two types, namely secret key and public key. The encryption carried out is named after the key used for the encryption process. “In secret-key encryption, also referred to as symmetric encryption, the same key is used for both encryption and decryption. In public-key encryption, also referred to as asymmetric encryption, each user has a public key and a private key. Encryption is performed with the public key while decryption is done with the private key.” (Encryption keys: basic concepts 2008).
Secret-key encryption requires both the sender and the receiver to know the key used for encryption, i.e. the key has to be transmitted, which is not secure for an organization like the MPS. Public-key encryption is different: the public key used for encryption is openly known, while the decryption key is known only to the receiver. Each party therefore holds two keys, using the public key for encryption and the private key for decryption.
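A minimal public-key sketch with the same cryptography package, assuming RSA with OAEP padding, is shown below: anyone holding the public key can encrypt, but only the private-key holder can decrypt.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The receiver generates a key pair and publishes only the public half.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"incident report 17", oaep)  # sender side
plaintext = private_key.decrypt(ciphertext, oaep)             # receiver side
assert plaintext == b"incident report 17"
```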
Public key encryption is widely used due to the fact that the key for decryption is kept confidential and is not needed to be sent over transmission lines. “Public-key cryptography supports security mechanisms such as confidentiality, integrity, authentication, and non-repudiation.” (Public key management 2009).
The advantages of public-key cryptography make it the more acceptable and prominent method of encryption. Its reliability, the degree of security it offers for transmitted data, and its support for non-repudiation all add to its demand in the field of secure data transmission.
The implementation of public-key encryption should be backed by an infrastructure that guides the entire process of encryption. “A public key infrastructure (PKI) is a foundation on which other applications, system, and network security components are built.” (Public key management 2009).
The development of efficient infrastructure is the only factor contributing to the success of the encryption process. The entire security system depends on the public key infrastructure formulated. “A PKI is an essential component of an overall security strategy that must work in concert with other security mechanisms, business practices, and risk management efforts.” (Public key management 2009).
The encryption carried out should be based on the infrastructure using the public key. The receiver side will identify the ciphertext and decrypt using the private key at that end thereby retrieving the original data sent.
The usage of PKI is widely accepted due to the authentication it provides. “The business model used by CAs in PKI ensures that many servers will never have registered certificates — servers that may still be as trustworthy as any other, and for which secure encrypted transactions may be just as critical to the day to day online activities of thousands of people as those that can afford to buy into the CA con game.” (Perrin 2009).
The quote above shows that, in theory, PKI appears unbreakable, but practice may or may not match the theory. Nevertheless, given how widely the PKI system is relied upon, the MPS can trust it without much hesitation: if a fully sound PKI is deployed, confidence that the encrypted traffic cannot be broken is well founded.
RSA is an algorithm used to generate the keys for encryption and decryption in public-key cryptography. Its security rests on the difficulty of factorizing large values. The algorithm can be summarized as:
- “n = pq, where p and q are distinct primes.
- φ = (p - 1)(q - 1).
- e < n such that gcd(e, φ) = 1.
- d = e^(-1) mod φ.
- c = m^e mod n, where 1 < m < n.
- m = c^d mod n.” (RSA algorithm: summary of RSA 2009).
This algorithm is executed with very large integers, which leaves the least chance of intrusion. It is therefore reasonable for the MPS to trust the algorithm and use it in its data communication systems.
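A toy worked example of the summary above, using the small textbook primes p = 61 and q = 53 purely for illustration (real deployments use primes hundreds of digits long), can be run as follows.

```python
from math import gcd

p, q = 61, 53                 # distinct primes (toy values)
n = p * q                     # modulus: 3233
phi = (p - 1) * (q - 1)       # 3120

e = 17                        # public exponent with gcd(e, phi) == 1
assert gcd(e, phi) == 1
d = pow(e, -1, phi)           # private exponent: modular inverse of e mod phi

m = 65                        # plaintext encoded as an integer, 1 < m < n
c = pow(m, e, n)              # encryption: c = m^e mod n
assert pow(c, d, n) == m      # decryption: m = c^d mod n recovers the message
print(n, e, d, c)
```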
Reference List
Encryption 2009, Search Security.com Definitions.
Encryption keys: basic concepts 2008, Fine Crypt: professional Encryption Tool. Web.
Huth, MRA n.d., Secure communicating systems: design, analysis, and implementation: chapter1: secure communication in modern information societies, Cambridge University Press.
Implementing and using SSL to secure HTTP traffic 2008, Linux online. Web.
Jain, LC 1999, Intelligent biometric techniques in fingerprint and face recognition, CRC Press.
MPS publication scheme n.d., Metropolitan Police: Working Together for a Safer London. Web.
Perrin, C 2009, IT security: encryption: the TLS/SSL certifying authority system is a scam, Tech Republic. Web.
Polemi, D 1997, Final report: “biometric techniques: review and evaluation of biometric techniques for identification and authentication, including an appraisal of the areas where they are most applicable.” Web.
Polemi, D 2000, Review and evaluation of biometric techniques for identification and authentication-final report: summary of report, Cordis Archive. Web.
Public key management: solution 2009, Select. Web.
RSA algorithm: summary of RSA 2009, DI Management.
Ruggles, T 1996, Comparison of biometric techniques: biometric accuracy. Web.
Tamma, S 2002, Face recognition techniques: introduction. Web.
The biological identity of living organism: what do we call “biological identity” n.d., Think Quest. Web.
What are biometric techniques? 2009, RSA Laboratories. Web.
What is identity management? 2009, Tech FAQ. Web.
Williams, J n.d., PCeU: police central e-crime unit, Metropolitan Police: Working Together for a Safer London. Web.