African Americans’ Issues In The Healthcare Sector Essay Example

Brief History of African Americans In the United States

Africans accompanied early Portuguese and Spanish expeditions to the Americas, some of which reached the Mississippi region. African American history in the English colonies began in 1619, when the first Africans arrived and were put to work as servants for white employers. Portuguese slave traders kidnapped Black people and used them for forced labor because they held low status in society and were easier to exploit than other groups. The majority of these people came from the western coast of Africa. Even so, enslaved Africans were recognized as having a right to life, and some whites were hanged for taking it. In general, African Americans formed their own communities, customs, and family structures with minimal interference from their masters, whose chief concern was farm output.

Behavioral Risk Factors and Common Diseases that African Americans Experience

Lack of physical exercise is one of the most significant behavioral risk factors among African Americans. A lack of exercise may lead to cardiac arrest, because regular exercise helps blood circulation and thus reduces the heart conditions that can end in heart failure. Some African Americans suffer from obesity brought about by their daily lifestyle, and obese people face a significant risk of stroke, which can result in disability or even death (Carnethon 394). In addition, stress can affect a person to the point of causing high blood pressure; one-third of African Americans suffer from hypertension, a relatively large share of the population.

Why There Is a General Mistrust by African Americans in the Medical System

Many African Americans fear that their community could be used to test new medicines before they reach the market, without their consent. They do not believe that their doctors would give them a detailed explanation of the importance of participating in clinical studies (Lee 17). This was witnessed recently during the Covid-19 vaccine trials, when potential test subjects were afraid to volunteer. Confidence in the health sector therefore needs to be built in order to promote a healthy population.

Works Cited

Carnethon, Mercedes R., et al. “Cardiovascular Health in African Americans: A Scientific Statement from the American Heart Association.” Circulation, vol. 136, no. 21, 2017, pp. 393-423. Web.

Lee, Marvin J. H., et al. “Overcoming the Legacy of Mistrust: African Americans’ Mistrust of Medical Profession.” Journal of Healthcare Ethics and Administration, vol. 4, no. 1, 2018, pp. 16-40. Web.

Modern Operating System: Concept And Design

Introduction

Modern general-purpose computers run an operating system such as UNIX, Linux, Microsoft Windows, or Mac OS, which in turn runs other application software or programs. An operating system (OS) manages all input and output functionality within a computer system. The OS controls multiple users, networking, printing, and memory and file management by conveying data to the screen, the printer, and the other hardware devices connected to the computer (Laudon & Laudon, 1997, p.34). Because computers are constructed differently, the input and output commands of each system vary. An OS consists of many compact programs controlled by its core, the kernel. The kernel is the smallest unit of an operating system and allows users to access the application programs and other parts of the computer. Besides the kernel, operating systems include additional tools that display programs, manage the user interface, and provide utility programs for file management and configuration of the OS.

Essentially, most operating systems use multiprogramming schemes that manage different jobs for maximum performance of the computer system. At any given time, the operating system kernel manages many processes, including user processes such as applications and system processes such as accounting. An OS kernel is thus responsible for multiple functions, among them “inter-process communication, scheduling the processes within the CPU, creating, and deleting processes” (Milenkovic, 1987, p.157).
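
To make this concrete, the short C sketch below (hypothetical, not drawn from the cited sources) asks the kernel to create a few processes with fork(), lets the kernel schedule them, and then reaps them with wait(), touching each of the kernel responsibilities quoted above.

```c
/* A minimal sketch (POSIX C) of the kernel services the text describes:
 * creating processes, scheduling them, and deleting them when they exit. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    for (int i = 0; i < 3; i++) {
        pid_t pid = fork();          /* ask the kernel to create a new process */
        if (pid == 0) {              /* child: scheduled by the kernel like any process */
            printf("child %d running as pid %d\n", i, (int)getpid());
            _exit(0);                /* child terminates; the kernel reclaims it */
        } else if (pid < 0) {
            perror("fork");
            return 1;
        }
    }
    while (wait(NULL) > 0)           /* parent reaps children: process deletion */
        ;
    return 0;
}
```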

Among contemporary operating systems is the Linux operating system, which is based on UNIX standards. It comprises three major bodies of code: the kernel, the system libraries, and the system utilities. It runs most efficiently on PC hardware and is compatible with many computer systems. Compared to other operating systems, Linux offers multiple choices; it is configurable and reliable, and it supports networking and internet use.

History of Linux Operating System

An operating system (OS) is a program that controls the hardware and software of a computer system by managing the computer's memory, the input and output devices connected to it, its files, the processing of instructions, and networking (Milenkovic, 1987, p.162). Early computers lacked operating systems for managing software and hardware. By the 1960s, pressured by the need to maximize CPU performance, commercial vendors including UNIVAC and Control Data Corporation provided tools such as batch processing systems for scheduling, allocating resources, and executing multiple jobs (Weizer, 1981, p.119). These batch systems involved developing ad hoc programs for each machine model (Silberschatz, Galvin, & Gagne, 2005, p.213). Several key concepts from the 1960s contributed to the development of modern operating systems. The IBM System/360 line was designed around a single OS, OS/360, intended to replace the batch systems. Among the important features of OS/360 was its hard disk storage device, the DASD, which allowed easy file management.

Another major development that contributed to modern operating systems was the concept of time-sharing, which let several users have virtual access to the machine at once, since computer resources were expensive at the time. This led to the development of a time-sharing system, Multics, which formed the basis for later operating systems, particularly UNIX (Ritchie, 1984, p.1577). Early microcomputers lacked the capacity for an elaborate operating system, but CP/M was one notable operating system designed for microcomputers, and it largely served as the model for MS-DOS, which was later supplied for the IBM PC.

In the 1980s, Apple developed the Mac OS for its Macintosh computers, while Microsoft later developed Windows NT, which formed the basis for its subsequent operating systems. Apple released Mac OS X, an OS rebuilt on a UNIX core, in 2001. Because of the increasing complexity of the devices incorporated into computers, many of these devices now run embedded operating systems. Some modern operating systems provide a command-line interface (CLI) that takes its input from the keyboard; others rely on the mouse for input, although support depends on the CPU architecture (Laudon & Laudon, 1997, p.36). Linux and the BSD operating systems, however, can run on most CPUs. Since the 1990s, Microsoft Windows and UNIX-like operating systems such as Linux and Mac OS X have been the systems of choice for personal computers.

Linux is a modern operating system first designed in 1991 as a self-contained kernel by Linus Torvalds, based on UNIX standards. Its development involved the collaboration of many developers from all over the world who communicated over the internet. The kernel is the major component of the Linux operating system and is compatible with existing UNIX software (Ritchie, 1984, p.1581). The Linux kernel of 1991 was compatible with most Intel processors but offered limited device support. Linux version 1.0, released in 1994, had improved features, including BSD compatibility, improved file management, support for TCP/IP networking, and support for SCSI controllers that allowed quick access to the disk. In 1995, version 1.2 added support for a wider range of PC hardware, while version 2.0, released in 1996, brought two distinctive features: support for multiple processors and support for additional architectures such as the Alpha port. It also improved file and memory management and networking. The Linux networking tools are based on BSD code, such as that of FreeBSD, and allow improved networking compared with earlier versions.

The Components of the Linux Operating System

The design of the Linux operating system, commonly used on servers and PCs, is based on UNIX standards. It is intended to let multiple users share a computer system, to allow multitasking based on the earlier time-sharing configuration, and to be portable. UNIX systems share several distinctive concepts: device files that allow communication with hardware and between users, data stored as plain text, and a hierarchical file system (Silberschatz, Galvin, & Gagne, 2005, p.192). As a result, the Linux file system follows UNIX conventions, with the aim of improving the efficiency and speed of the system. The Linux design complies with SVR4 UNIX semantics, BSD behavior, and POSIX requirements. The four major user-level components of a Linux system are system management programs, user utility programs, compilers, and user processes. These components sit on top of the shared system libraries, which in turn rest on the Linux kernel and its modules, as illustrated below.

Figure: The layered structure of the Linux system. User processes, system management programs, user utility programs, and compilers sit at the top; shared system libraries lie beneath them; the Linux operating system kernel is below that; and kernel modules form the lowest layer.

All UNIX implementations are composed of three main bodies of code: the kernel, the shared system libraries, and the system utilities. Linux, which is based on UNIX design principles, has the same three bodies of code. The kernel provides the core operating-system abstractions, and the kernel code gives user programs controlled access to the computer's hardware and software; this code, together with the OS data structures, resides in a single address space. The system libraries, in turn, provide the standard set of functions through which applications interact with the kernel, and they are independent of the kernel code. The system utilities perform particular management tasks as required by the various user applications or programs.
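
The split between the kernel and the system libraries can be illustrated with a small, assumed C example: printf() does its formatting and buffering inside the shared C library before entering the kernel, whereas write() goes almost directly to the kernel's system-call interface.

```c
/* Sketch of the library/kernel split described above: printf() is a C
 * library routine that formats and buffers data in user space, then enters
 * the kernel through the write() system call; calling write() directly
 * skips the library conveniences and uses the kernel interface itself.     */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Library route: formatting and buffering happen in the shared C library. */
    printf("hello via the system library\n");

    /* System-call route: the kernel copies the buffer to file descriptor 1.   */
    const char msg[] = "hello via the write() system call\n";
    write(STDOUT_FILENO, msg, strlen(msg));
    return 0;
}
```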

Kernel modules are sections of kernel code that can be loaded and unloaded independently of the rest of the kernel. A module can implement a networking protocol, a file system, or a device driver, which allows file systems and device drivers to be distributed separately from the kernel itself. Module support therefore lets a Linux system be set up with a minimal kernel, with additional device drivers loaded only when they are needed.
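
As an illustration only, the sketch below follows the standard Linux module API: it does nothing except log a message when loaded and another when unloaded, which is enough to show the load/unload cycle. Such a module would typically be built against the kernel headers, inserted with insmod, and removed with rmmod.

```c
/* hello_mod.c -- a minimal loadable kernel module sketch.  It only logs a
 * message when loaded (insmod) and another when unloaded (rmmod).          */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

static int __init hello_init(void)
{
    pr_info("hello_mod: loaded into the running kernel\n");
    return 0;                        /* 0 means the module loaded successfully */
}

static void __exit hello_exit(void)
{
    pr_info("hello_mod: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal illustrative module");
```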

Linux module support comprises three main components: module management, driver registration, and conflict resolution, which together allow modules to be loaded into the kernel. Module management handles loading module code into kernel memory and manages the symbols that modules reference in the kernel. A module requester manages requests to load modules, confers with the kernel about whether a loaded module is still in use, and unloads it when it is no longer needed. Driver registration allows a module to tell the rest of the kernel that a new driver has become available. The kernel maintains registration tables of all available drivers and provides routines for adding drivers to, or removing them from, these tables; the tables cover file systems, binary formats, networking protocols, and drivers for various devices.
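
The following sketch, again purely illustrative, shows driver registration in practice: on load the module adds a character-device driver named demo_dev (a made-up name) to the kernel's registration tables with register_chrdev(), and on unload it removes the entry.

```c
/* Sketch of driver registration: a module that registers a character-device
 * driver with the kernel's registration tables on load and removes it on
 * unload.  The device name and the empty fops table are illustrative only. */
#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/module.h>

static int major;                             /* major number assigned by the kernel */
static const struct file_operations demo_fops = {
    .owner = THIS_MODULE,                     /* no read/write handlers: registration only */
};

static int __init demo_register(void)
{
    major = register_chrdev(0, "demo_dev", &demo_fops);   /* add driver to the table */
    if (major < 0)
        return major;
    pr_info("demo_dev registered with major number %d\n", major);
    return 0;
}

static void __exit demo_unregister(void)
{
    unregister_chrdev(major, "demo_dev");                  /* remove driver from the table */
}

module_init(demo_register);
module_exit(demo_unregister);
MODULE_LICENSE("GPL");
```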

Conflict resolution is a mechanism that lets drivers reserve particular hardware resources and protects those resources from accidental use by another driver. It therefore gives all drivers fair access to hardware resources and prevents a newly loaded device driver from interfering with drivers that are already present.

In the Linux operating system, process management separates the creation of a process from the running of a new program. The fork system call is responsible for creating a new process, while the execve call is responsible for running a new program in it. Under Linux, the properties of a process fall into three categories: the process identity, the process environment, and the process context. The process identity includes the process identifier (PID), which names the process when an application or the OS refers to it; every process also carries a personality identifier used for compatibility with different UNIX variants (Silberschatz, Galvin, & Gagne, 2005, p.209). The process environment consists of the argument and environment vectors, which allow a program's behavior to be customized when it starts. The process context comprises the scheduling context, the file table used for I/O, the virtual-memory context, and the signal-handler table, and it represents the state of the running program at any given time.
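
A minimal user-space sketch of this separation (the choice of /bin/ls is only an example) creates a process with fork() and then replaces the child's image with a new program through execve(), while the parent waits on the child's PID.

```c
/* Sketch of the fork/exec split described above: fork() creates the new
 * process, and execve() replaces its image with a different program.       */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                       /* create a new process */
    if (pid == 0) {
        char *argv[] = { "ls", "-l", NULL };
        char *envp[] = { NULL };
        execve("/bin/ls", argv, envp);        /* run a new program in the child */
        perror("execve");                     /* reached only if execve fails */
        _exit(1);
    }
    waitpid(pid, NULL, 0);                    /* parent waits for the child's PID */
    return 0;
}
```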

Inter-process communication in Linux is achieved partly through signals, which are limited in number. Processes running in kernel mode, however, do not use signals; they communicate through scheduling states and wait-queue structures instead. Shared memory offers a fast way to communicate and transfer data between many processes, but to provide synchronization it must be used together with another inter-process communication mechanism. A shared-memory object acts as the backing store for shared-memory regions, and a distinctive feature of such objects is that they can retain their contents even when no process is currently mapping them into its virtual memory.
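
The sketch below illustrates the shared-memory idea in user space, under the assumption that a parent and child share one anonymous page; it uses wait() as a crude stand-in for the separate synchronization mechanism that, as noted above, shared memory requires.

```c
/* Minimal sketch of shared-memory IPC: parent and child share one page of
 * memory; the child writes a message and the parent reads it after waiting
 * for the child (a real program would synchronize with a semaphore or a
 * similar primitive).                                                       */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    if (fork() == 0) {                        /* child writes into shared memory */
        strcpy(shared, "message left in shared memory");
        _exit(0);
    }
    wait(NULL);                               /* crude synchronization: wait for child */
    printf("parent read: %s\n", shared);
    munmap(shared, 4096);
    return 0;
}
```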

The Linux file system is based on UNIX semantics. The kernel manages the various file systems through an abstraction layer called the virtual file system (VFS), which is composed of two parts: a set of definitions that specify what a file object looks like, including the inode object that represents an individual file and the file-system object that represents an entire file system, and a layer of software that manipulates these objects.
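
From user space, the per-file information held in the VFS inode object can be observed with stat(), as in this small illustrative program (the default path is just an example).

```c
/* Sketch of the inode view exposed through the VFS: stat() returns per-file
 * metadata (inode number, size, permission bits) maintained by the kernel's
 * inode object, regardless of which file system holds the file.            */
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char *argv[])
{
    const char *path = (argc > 1) ? argv[1] : "/etc/hostname";  /* example path */
    struct stat st;

    if (stat(path, &st) != 0) {
        perror("stat");
        return 1;
    }
    printf("%s: inode %lu, size %ld bytes, mode %o\n",
           path, (unsigned long)st.st_ino, (long)st.st_size,
           (unsigned)(st.st_mode & 07777));
    return 0;
}
```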

Linux Operating System and Security

The Linux operating system uses two techniques to protect the kernel and keep stored files and information secure. First, unlike some other systems, ordinary kernel code is not preemptible: if a scheduling interrupt occurs while a process is executing kernel code, the need_resched flag is set so that the scheduler runs once control returns to user mode. Second, in critical sections of the kernel, the kernel uses the processor's interrupt-control hardware to disable interrupts, which guarantees that an interrupt service routine cannot run and access the shared data at the same time.
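
A kernel-side sketch of the second technique, with illustrative names only, uses spin_lock_irqsave() to disable local interrupts while shared data is updated, so that an interrupt service routine cannot touch the data mid-update.

```c
/* Sketch (kernel-side C) of the interrupt-disabling idea described above:
 * spin_lock_irqsave() takes a lock and disables local interrupts, so an
 * interrupt service routine cannot touch the shared counter mid-update.
 * The lock and counter names are illustrative only.                        */
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);
static unsigned long demo_counter;

void demo_update(void)
{
    unsigned long flags;

    spin_lock_irqsave(&demo_lock, flags);       /* enter critical section, IRQs off locally */
    demo_counter++;                             /* shared data updated safely */
    spin_unlock_irqrestore(&demo_lock, flags);  /* leave critical section, restore IRQs */
}
```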

Synchronization of the Linux kernel prevents performance failures and ensures reliability by allowing each critical kernel section to run without interference from other critical sections. The synchronization architecture separates interrupt service routines into a top half and a bottom half, so that the two halves can be scheduled independently. In addition, Linux provides pluggable authentication modules (PAM) for authenticating users; PAM relies on the UNIX shared-library concept and can be used by any program on the system that needs authentication. The uid and gid identifiers used in all UNIX systems, including Linux, control access for the owner of a file and for a group of users. Linux extends this protection model in two ways: it follows the POSIX requirement for a saved user-id alongside the set-uid mechanism, and it allows a process to hand a single open file to another process, such as a server, without giving that process access to any other files.
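
The uid and gid identifiers are visible to any program. The short sketch below prints a process's real and effective IDs and shows how a set-uid program might drop back to the real user ID, which is one common use of the saved-id mechanism.

```c
/* Sketch of the uid/gid access-control identifiers mentioned above: a
 * process can inspect its real and effective user IDs, and a set-uid
 * program can drop privileges by switching back to the real user ID.       */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("real uid: %d, effective uid: %d, real gid: %d\n",
           (int)getuid(), (int)geteuid(), (int)getgid());

    /* In a set-uid program the effective uid differs from the real uid;
     * dropping privileges restores the real identity.                      */
    if (geteuid() != getuid() && seteuid(getuid()) != 0)
        perror("seteuid");
    return 0;
}
```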

Linux and Network Support

Networking support is among the key functions of Linux. Linux supports the standard internet protocols following UNIX specifications, as well as protocols commonly used on PC networks such as IPX and AppleTalk (Tanenbaum, 2001, p.72). Network support in the Linux operating system involves three layers of software: the protocol drivers, the socket interface, and the network device drivers (Salus, 1994, p.116). The most important protocol family for networking in Linux is the internet protocol suite, which handles routing between different networks and includes the IP, UDP, ICMP, and TCP protocols.
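
The socket interface that sits above these protocol drivers can be sketched with a small TCP client; the address 127.0.0.1 and port 7 are placeholders for illustration only.

```c
/* Sketch of the socket interface on top of the Linux protocol drivers:
 * open a TCP socket, connect to a server, and send a few bytes.            */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);      /* TCP socket from the inet family */
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(7);                      /* echo port, for illustration */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
        const char msg[] = "hello over TCP/IP\n";
        write(fd, msg, strlen(msg));               /* data handed to the TCP/IP stack */
    } else {
        perror("connect");
    }
    close(fd);
    return 0;
}
```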

Linux and Systems Compatibility

The development of the Linux operating system has generated interest because of its high flexibility, its low cost relative to other operating systems, and its compatibility with a wide range of applications. Linux provides a UNIX operating system whose interfaces comply with the POSIX standards, so other POSIX-compliant applications can run on Linux with little or no change (Salus, 1994, p.112). The Linux Standard Base (LSB) specifications are consistent with the POSIX specifications, which allows software designed to those specifications to be used on Linux alongside Windows operating systems.

Potential Applications of Linux

Most businesses, particularly small businesses, would find the Linux operating system convenient compared with other systems because of its relatively low cost for web hosting and its high reliability. Businesses prefer a server operating environment that is secure, reliable, compatible with their hardware and software, and inexpensive, and these are the top priorities of small businesses. The portfolio of applications provided for Linux, including a growing and improving range of Linux server operating environments and business applications, makes Linux a strong choice for small businesses. The increase in the number of business applications on the Linux platform points to growing adoption of Linux in the commercial sector.

On the client side, the features that increase ease of use and the security that the Linux architecture provides suggest that it could emerge as the operating system of choice for many people (Salus, 1994, p.109). Because of its high compatibility, Linux is well suited to integrating patches from different vendors, reducing costs and providing an excellent modular architecture. Linux therefore has the potential to become the preferred operating system for personal computers, although the complexity of the system could slow its adoption. Additionally, because of its administrative and development tools, its security, and its support for many essential applications and networks, the Linux operating system is likely to find wide application in government.

Competing Technologies

The Windows operating system is the major competitor to Linux. It is the leading choice for the majority of PCs and for servers such as x86-server hardware. Microsoft Windows is easier to use, and most people are familiar with its applications. It also offers a large number of software programs, games, and utilities to its users. Microsoft Windows has many variants, including Windows NT, Windows 2000, Windows XP, and later releases, which are easy to use and come with online help documentation (Tanenbaum, 2001, p.70). Comparatively, Windows is easier to use than Linux, but it requires frequent rebooting and is therefore less reliable. Windows has a larger number of compatible applications because of its large user base and its broader support for device drivers, so more manufacturers support their software products on Windows than on Linux.

Other competing technologies include the various versions of the UNIX operating system, such as UNIX System V, PWB/UNIX, AIX, and the IS/1 operating system. Newer UNIX derivatives descended from System V Release 4, such as BeleniX and MartUX, are available as open source, posing a further challenge to Linux.

Analysis and Conclusion

Modern operating systems, in contrast to the earlier batch systems, control the input and output functions of a computer system and thereby improve the performance of hardware and software programs. The kernel, the lowest layer of operating system software, gives user application programs access to services such as file management and to disks and other hardware devices.

An ideal operating system allows multitasking and provides a time-sharing configuration, both of which the batch systems lacked. Linux, which is based on UNIX standards, supports many applications through its kernel configuration. It consists of the kernel, the small core whose code gives access to the computer's hardware and software; the shared system libraries, which define the standard functions through which applications interact with the kernel; and the system utilities, which perform program-specific tasks.

Linux has an excellent modular architecture that keeps shared information secure. It also supports a large number of device drivers and is therefore compatible with many programs, including server protocol drivers. These features, together with the open-source nature of many Linux applications, make Linux an appropriate system for small businesses, personal computers, and governments.

Reference List

Laudon, K., & Laudon, J. (1997). Information Systems: A Problem-Solving Approach. Fort Worth, TX: The Dryden Press.

Milenkovic, M. (1987). Operating Systems: Concept and Design. New York: McGraw-Hill.

Ritchie, D. (1984). The Evolution of the UNIX Timesharing System. AT&T Bell Laboratories Technical Journal, 6(2), 1577-1593.

Salus, P. (1994). A Quarter Century of UNIX. Reading, MA: Addison-Wesley.

Silberschatz, A., Galvin, P., & Gagne, G. (2005). Operating Systems Concepts. New Jersey: John Wiley & Sons Co.

Tanenbaum, A. (2001). Modern Operating Systems. Upper Saddle River, NJ: Prentice-Hall, Inc.

Weizer, N. (1981). A History of Operating Systems. Datamation, 6, 119-23.

Issue Of The Sharing Of Finances

Before the invention of mobile transfer services in Kenya, sharing money between workplaces and among relatives was an economic inefficiency. M-PESA has resolved this challenge by enhancing the circulation of money. The mobile money service offers a safe, secure, and cheap way to transfer funds; for example, migrant workers in urban areas can send money to their relatives in rural places. The service is efficient and trustworthy compared with earlier means, such as entrusting cash to traveling passengers to deliver to people in rural areas. Moreover, M-PESA has partly eased unemployment, given that by 2016, 100,744 agents were earning directly from the service. Another economic inefficiency it has addressed is the condition of low-income people in the region: M-PESA increased general consumption and lifted 2% of Kenyan households (194,000 families) out of poverty. M-PESA has therefore resolved several economic inefficiencies by improving livelihoods, making financial transfers safer, and creating employment.

Since the introduction of mobile money transfer in Kenya, M-PESA has been widely used, and many people choose it over the standard banking system. The service shows typical price elasticity, meaning that a change in charges affects the number of subscribers. M-PESA applies this principle, together with the income elasticity of demand, by setting higher prices on the categories of service that customers use most. For example, the fee for withdrawing money through an agent is often higher than the fee for transferring money from one user to another. The enterprise also charges registered users less and unregistered users more.

The business process needs continual improvement if the company is to remain competitive and relevant to people's needs. One strategy for Safaricom is to lower transaction charges to attract more customers from poor families; the cost of adding more users is low because the fixed costs are already scaled to accommodate many users at a time, so both fixed and marginal charges can be reduced to make the service more affordable. Another strategy is to bundle different services, including voice and data, to increase revenue while attracting new market segments such as low-income earners. The company can also consider mergers and acquisitions to enhance its economies of scale; for example, M-PESA could merge with key competitors that have similar missions and visions, such as Airtel Money, which also operates in Kenya and neighboring countries. An acquisition would increase the number of customers and, as a result, revenues. The company can thus exploit economies of scale by lowering its charges, diversifying its customer base, and acquiring network connections.

In places where the Safaricom network signal is weak, people opt for other banking services, since a stable connection makes transactions efficient and reliable for the customer. In Kenya, Safaricom holds 80% of the market share thanks to its robust technology; a sound network also allows the company to maximize its profit by remaining accessible even in remote regions. Equity Bank offers customers both investment consultancy and mobile transfer and has the highest market share in the Kenyan banking sector, which means it competes for the same customer segment and thus lowers the demand for M-PESA. Given this intense rivalry, M-PESA must adopt a cost leadership strategy to maximize its profit, lowering its transaction fees below those of Equity. Strong leadership is also key to maximizing profit.

The development of the e-float was a strategic move suited to the Kenyan market, where the majority of people are low-income earners who do not save through standard banking systems. The Central Bank of Kenya allowed M-PESA to operate its model only under restrictions designed to limit fraud. For instance, M-PESA must give all the interest it earns on deposit balances to a non-profit organization, and there is a limit on the amount of money a person may transact in a day. The central bank also monitors M-PESA for money-laundering complaints; so far, the company has managed to avoid significant fraud. Another safeguard is the use of external auditors to review all financial details, and M-PESA reports its finances openly to build trust with major stakeholders. It is in the company's interest to avoid breaches of trust so that it maintains a strong brand. Safaricom's ability to build trust, participate in NGO initiatives, and avoid fraud controls the information asymmetries in its financial transactions.

Unlike Kenya, the United States is a developed country with better alternatives for transferring money; even in rural areas, local banks are easily accessible. The education level in the United States is also higher, and most adults have the knowledge needed to open a bank account, unlike in Kenya, where many people have little formal education. In Kenya, wages are often paid by hand, usually without a contract, so many Kenyans do not need bank accounts, and some pawnshops handle retail transactions at a low commission. Before the invention of M-PESA, Kenyans used informal bus transport to send money to their relatives; in some cases money was sent through the post office and took more than seven days to arrive. The United States, by contrast, has options such as Google Wallet, which are far more efficient.
