Sample Undergraduate Cloud Computing Essay
Cloud Computing Platforms for Machine Learning
In cloud computing, ‘cloud’ primarily refers to the Internet. Thus, by the simplest definition, cloud computing is the storage and access of data over the Internet instead of on a computer’s hard drive. This is what distinguishes cloud computing from conventional computing: local hard drives are not used, and data is stored and programs run directly on the cloud.
Although the computer industry has run on hard-drive storage for a long time, and some would argue that local storage is still superior to cloud computing in terms of speed, cloud computing has its own valuable advantages. Dedicated network-attached storage (NAS) does not fall into the category of cloud either: storing data on a home or office network does not count as utilizing the cloud.
For computing to count as ‘cloud computing,’ an individual or business must access their data or programs over the Internet, or at a minimum have that data synchronized with other information over the Web.
Cloud computing serves both individual end-users and large businesses. In a large setup, business users may know a good deal about the other side of the connection, whereas an individual user never knows what kind of massive data processing is being done on the other end (Buyya et al., 2009).
In the simplest terms, cloud computing, or the cloud, is the on-demand provision of computing resources, from applications to data storage, on a pay-for-use basis over the Internet.
The broader categories of cloud computing services are cloud-based applications known as Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), and public, private, and hybrid clouds (AuYoung et al., 2004).
The benefits of SaaS include easy access to innovative business apps via a simple sign-up; applications and data can be reached from any connected computer; and because the data is stored in the cloud, its security does not depend on any one piece of hardware (Yeo and Buyya, 2007).
PaaS, in turn, allows applications to be developed faster for the market, lets web applications be deployed to the cloud in no time, and reduces complexity by offering middleware as a service. IaaS saves on hardware investment, supports dynamic workloads through on-demand infrastructure scaling, and ensures on-demand availability of flexible and innovative services.
A public cloud is usually owned and operated by a company that offers speedy access to inexpensive computing resources over a public network. The provider owns and manages the hardware, software, and supporting infrastructure, so users do not need to purchase them.
The public cloud supports state-of-the-art SaaS business apps for applications ranging from resource management to data analysis. Scalable and flexible IaaS for computing and storage can be provisioned through the public cloud on short notice. For cloud-based application development and deployment environments, the public cloud offers a potent PaaS.
A cloud provided and operated for a single organization is called a private cloud. It can be managed internally or by a third party, and hosted internally or externally. Private clouds offer more control over resources while still taking advantage of cloud efficiencies.
In a private cloud, governance and security are designed to the company’s specifications. It provides highly automated control of resource pools for everything from compute capacity to storage, analytics, and middleware. The combination of a private cloud foundation with public cloud services is termed a hybrid cloud.
Generally, a private cloud rarely works in isolation from the company’s public cloud and IT resources. Thus, private clouds, data centres, and public clouds combine into a hybrid cloud, permitting corporations to retain critical applications and sensitive data in a traditional data centre environment or private cloud.
The hybrid cloud offers convenient access to data, apps, and services, along with more choices of organizational model. A few of the salient features of economical cloud computing are affordability, scalability, security, and fully virtualized behaviour (Fox et al., 2009).
Implications, Research & Developments
The advances in information and communication technology (ICT) over the past few decades point to a broader vision in which computing becomes the fifth utility, joining the four established basic utilities: electricity, gas, water, and telephony (Buyya et al., 2009).
Cloud computing is thus the latest of all the versions of computing introduced in this domain. Presently, it is commonplace to access data independently, without reference to the underlying hosting infrastructure. The facilities are provided through dedicated data centres, which are maintained and monitored round the clock by content providers.
Providers such as Google, IBM, Amazon, Microsoft, Salesforce, and Sun Microsystems have started to build new data centres for hosting cloud computing applications in numerous locations worldwide, to provide redundancy and ensure reliability in case of site failures (Yeo et al., 2010).
The developments in microprocessor and software technologies have opened new avenues for running applications on commodity hardware inside virtual machines (VMs). The apps within the VMs remain isolated both from other VMs and from the underlying hardware (Casola et al., 2012).
Some famous commercial examples include Amazon EC2 (Elastic Compute Cloud), Microsoft Windows Azure platform, Google App Engine, Sun network.com (Sun Grid), and Aneka. Amazon EC2 allows Linux-based applications to be run on the cloud through a virtual computing environment.
The user can create an Amazon Machine Image (AMI) containing their libraries, applications, data, and associated configuration settings, or use the built-in global libraries.
The user then uploads the selected or custom-built AMIs to Amazon Simple Storage Service (S3). Amazon S3 charges for all data transfers, both uploads and downloads, while EC2 charges per instance for as long as it stays alive. Google App Engine supports the Python programming language for designing web applications served to users online.
Application Programming Interfaces (APIs) are also supported for the datastore, URL fetch, image manipulation, Google Accounts, and email services. For now, Google App Engine is free to use within quotas of 500 MB of storage and around 5 million page views per month.
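As a concrete illustration, the kind of Python web application such a platform hosts can be sketched with the standard WSGI interface. The handler below is a hypothetical hello-world written for this essay, not App Engine’s actual API; the route and greeting text are invented for illustration.

```python
from wsgiref.util import setup_testing_defaults

# A minimal WSGI application -- a sketch of the kind of Python web
# handler that platforms such as Google App Engine host. The greeting
# text and handler logic here are illustrative, not App Engine's API.
def application(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    body = ("Hello from the cloud! You requested %s" % path).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

Locally, such an application can be served with the standard library’s `wsgiref.simple_server`; on a hosted platform, the provider’s infrastructure routes incoming requests to the handler.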
The primary objective of Microsoft Azure is to provide users with an integrated development, hosting, and management environment for cloud computing, helping software developers conveniently create, host, manage, and scale both web and non-web applications through Microsoft data centres.
Microsoft Azure supports a wide variety of proprietary development protocols and tools, including Microsoft .NET Services, Live Services, Microsoft SharePoint Services, Microsoft SQL Services, and Microsoft Dynamics CRM Services. Azure also assists software developers by providing SOAP and REST web APIs to interface between Microsoft and non-Microsoft tools and technologies. Sun network.com (Sun Grid) allows consumers to run applications based on Java, Solaris OS, FORTRAN, and C/C++.
Initially, the user builds and debugs the applications and runtime scripts in a local development environment constructed to resemble the Sun Grid. Next, a bundled zip archive comprising all the relevant libraries, executable binaries, scripts, and input data is built and uploaded to the Sun Grid.
Lastly, applications can be executed and monitored through the Sun Grid web portal or API. Once the application completes, the user can download the execution results back to the local development environment. Aneka is a service-oriented resource management platform developed in .NET and commercialized through Manjrasoft.
Aneka provides numerous application models, security solutions, and communication protocols, with persistence. The preferred selection can be changed at any time without affecting the existing Aneka ecosystem. Because Aneka provides SLA support, users can specify QoS requirements and thus customize budgets and deadlines (Brandic et al., 2008).
The user can access the Aneka Cloud remotely through the Gridbus broker, which also empowers the user to negotiate and agree on the QoS that the provider is to deliver.
Machine learning is a class of artificial intelligence that makes computers capable of learning without being explicitly programmed. Its objective is to build systems that can teach themselves to grow and adapt when exposed to new data.
Data centres and the cloud play a major part in providing the ‘big data’ that improves such intelligent programs’ accuracy, though their role is certainly not limited to that. MapReduce has been the pioneering computing model in which clusters of unreliable machines execute data-parallel computations, with the system automatically providing locality-aware scheduling, load balancing, and fault tolerance (Dean and Ghemawat, 2008).
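The map–shuffle–reduce structure described by Dean and Ghemawat (2008) can be sketched in a few lines of plain Python. This single-process toy only illustrates the three phases with the classic word-count example; a real framework such as Hadoop distributes these phases across a cluster and supplies the scheduling and fault tolerance automatically.

```python
from collections import defaultdict

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in the document."""
    return [(word, 1) for word in document.split()]

def shuffle_phase(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: aggregate the values for one key -- here, a simple sum."""
    return key, sum(values)

def word_count(documents):
    # Run the three phases in sequence on a single machine.
    pairs = [pair for doc in documents for pair in map_phase(doc)]
    groups = shuffle_phase(pairs)
    return dict(reduce_phase(k, v) for k, v in groups.items())

counts = word_count(["the cloud stores data", "the cloud runs programs"])
print(counts["the"])    # prints 2
print(counts["cloud"])  # prints 2
```

The framework, not the programmer, decides which machine runs each map or reduce task, which is exactly what makes the model tolerant of unreliable hardware.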
Later came systems such as Dryad and Map-Reduce-Merge, which generalize the supported data types (Isard et al., 2007; Yang et al., 2007). These schemes attain fault tolerance and scalability by providing a programming model in which the operator builds acyclic data flow graphs that pass input data through a set of operators.
The underlying systems are thus empowered to manage scheduling independently and handle faults without user intervention. For machine learning applications, the cloud supports iterative jobs and interactive analytics (Fox et al., 2009).
Various SQL-based interfaces, such as Hive and Pig, are used for this purpose. Generally, it is more convenient for users to load the datasets of interest into memory across many machines. In Hadoop, each query is processed as an independent MapReduce job that reads data directly from disk and therefore incurs significant latency. Zaharia et al. (2010) have proposed a new platform called Spark, which retains the fault tolerance and scalability of MapReduce for the case where data sets are reused multiple times in parallel applications.
Spark’s distinctive feature is the resilient distributed dataset (RDD). Users can cache an RDD in memory across machines, rebuild it if a partition is lost, and reuse it in multiple MapReduce-like parallel operations. Spark is implemented in Scala, a statically typed, high-level programming language for the Java virtual machine.
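The RDD idea can be illustrated with a deliberately simplified, single-process Python class. `ToyRDD` and its methods are invented for this sketch: it captures only the essentials of Zaharia et al.’s (2010) design, namely that a dataset is defined by a deterministic recomputation function (its lineage), is cached in memory after first use, and can be rebuilt from lineage if the cache is lost. Real RDDs are partitioned across a cluster.

```python
class ToyRDD:
    """A toy, single-machine stand-in for a resilient distributed dataset."""

    def __init__(self, compute):
        self._compute = compute   # lineage: how to (re)build the data
        self._cache = None        # in-memory cache, empty until first use

    def collect(self):
        if self._cache is None:            # first use, or cache was lost:
            self._cache = self._compute()  # recompute from lineage
        return self._cache

    def evict(self):
        """Simulate losing the cached data (e.g. a failed partition)."""
        self._cache = None

    def map(self, f):
        # Transformations are lazy: they only extend the lineage;
        # nothing is computed until collect() is called.
        return ToyRDD(lambda: [f(x) for x in self.collect()])

numbers = ToyRDD(lambda: list(range(5)))
squares = numbers.map(lambda x: x * x)
print(squares.collect())   # prints [0, 1, 4, 9, 16]
squares.evict()            # lose the cache ...
print(squares.collect())   # ... and rebuild it from lineage
```

Because the lineage is deterministic, recomputation after a loss yields exactly the same data, which is how Spark keeps fault tolerance without replicating every dataset.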
Some of the new and fundamental application classes enabled by cloud computing are mobile interactive applications, parallel batch processing, the rise of analytics, the extension of compute-intensive desktop applications, and “Earthbound” applications (Casola et al., 2012). Cloud computing is providing promising solutions across computing paradigms.
Clouds are typically designed to serve external users, so they are expected to share their resources and capabilities. One resulting challenge is that, with higher-level virtualization, the cloud billing units can run high for end-users.
The difficulties for business and the intelligent market include automated service provisioning, data lock-in, data auditability and confidentiality, unsolved data transfer bottlenecks, performance unpredictability, scalable storage, bugs in large-scale distributed systems, quick scaling, reputation fate-sharing, and software licensing (Barham et al., 2003).
Virtual machine migration, energy management, server consolidation, traffic analysis and management, data management and storage technologies, novel cloud architectures, and software frameworks pose additional challenges in this area (IBM, 2008).
Although cloud computing has flourished in no time despite these challenges, current application domains and technologies still leave a margin for deeper integration with cloud computing. The vision of trading services through a global cloud exchange is still quite broad, with many opportunities left to explore (Casola et al., 2012).
Machine learning algorithms can deliver more accurate results when given larger data sets, and such data sets are difficult to handle without cloud storage and services (Buyya et al., 2009). For companies like Google, where data must be evaluated and analyzed at run time and the amount arriving every minute is very large, cloud computing provides a solution for extensive data handling.
These resources add cost, particularly when platforms like Amazon EC2, the Microsoft Windows Azure platform, Google App Engine, Sun network.com (Sun Grid), and Aneka are used; nonetheless, they are gaining popularity for computing applications.
AuYoung, A., Chun, B., Snoeren, A. and Vahdat, A., 2004, October. Resource allocation in federated distributed computing infrastructures. In Proceedings of the 1st Workshop on Operating System and Architectural Support for the On-demand IT InfraStructure (Vol. 9).
Barham, P., Dragovic, B., Fraser, K., Hand, S., Harris, T., Ho, A., Neugebauer, R., Pratt, I. and Warfield, A., 2003, October. Xen and the art of virtualization. In ACM SIGOPS Operating Systems Review (Vol. 37, No. 5, pp. 164-177). ACM.
Brandic, I., Pllana, S. and Benkner, S., 2008. Specification, planning, and execution of QoS‐aware Grid workflows within the Amadeus environment. Concurrency and Computation: Practice and Experience, 20(4), pp.331-345.
Buyya, R., Yeo, C.S., Venugopal, S., Broberg, J. and Brandic, I., 2009. Cloud computing and emerging I.T. platforms: Vision, hype, and reality for delivering computing as the 5th utility. Future generation computer systems, 25(6), pp. 599-616.
Casola, V., Cuomo, A., Villano, U. and Rak, M., 2012. Access control in federated clouds: The cloud grid case study. Achieving Federated and Self-Manageable Cloud Infrastructures: Theory and Practice. IGI Global, pp.395-417.
Dean, J. and Ghemawat, S., 2008. MapReduce: simplified data processing on large clusters. Communications of the ACM, 51(1), pp. 107-113.
Fox, A., Griffith, R., Joseph, A., Katz, R., Konwinski, A., Lee, G., Patterson, D., Rabkin, A. and Stoica, I., 2009. Above the clouds: A Berkeley view of cloud computing. Dept. Electrical Eng. and Computer Sciences, University of California, Berkeley, Rep. UCB/EECS-2009-28.
IBM, 2008. IBM, E.U. launch RESERVOIR research initiative for cloud computing. I.T. News Online, 7 February 2008.
Isard, M., Budiu, M., Yu, Y., Birrell, A., and Fetterly, D., 2007, March. Dryad: distributed data-parallel programs from sequential building blocks. In ACM SIGOPS Operating Systems Review (Vol. 41, No. 3, pp. 59-72). ACM.
Jin, C. and Buyya, R., 2009, August. MapReduce programming model for .NET-based cloud computing. In European Conference on Parallel Processing (pp. 417-428). Springer Berlin Heidelberg.
Low, Y., Bickson, D., Gonzalez, J., Guestrin, C., Kyrola, A., and Hellerstein, J.M., 2012. Distributed GraphLab: a framework for machine learning and data mining in the cloud. Proceedings of the VLDB Endowment, 5(8), pp. 716-727.
Yang, H.C., Dasdan, A., Hsiao, R.L. and Parker, D.S., 2007, June. Map-reduce-merge: simplified relational data processing on large clusters. In Proceedings of the 2007 ACM SIGMOD international conference on Management of data (pp. 1029-1040). ACM.
Yeo, C.S. and Buyya, R., 2007, March. Integrated risk analysis for a commercial computing service. In 2007 IEEE International Parallel and Distributed Processing Symposium (pp. 1-10). IEEE.
Yeo, C.S., Venugopal, S., Chu, X. and Buyya, R., 2010. Autonomic metered pricing for a utility computing service. Future Generation Computer Systems, 26(8), pp.1368-1380.
Zaharia, M., Chowdhury, M., Franklin, M.J., Shenker, S. and Stoica, I., 2010. Spark: cluster computing with working sets. HotCloud, 10, pp.10-10.
Frequently Asked Questions
Tips for writing an excellent undergraduate essay:
- Understand the prompt fully.
- Conduct thorough research.
- Create a clear thesis statement.
- Organise with an introduction, body, and conclusion.
- Provide evidence and examples.
- Revise for clarity and coherence.
- Proofread for errors.