UNIT III. LATEST TRENDS AND ISSUES IN INFORMATION TECHNOLOGY (10 Hours)

INTRODUCTION
Advances in information technology offer unprecedented opportunities as well as new challenges in the international exchange of scientific data. Rapid improvements have led to ever greater computational speed, communication bandwidth, and storage capacity at costs within reach of even small-scale users, a trend that appears likely to continue well into the future. Moreover, technical advances in satellites, sensors, robotics, and fiber-optic and wireless telecommunications are extending the range of technologies affecting the acquisition, refinement, analysis, transmission, and sharing of scientific data. In this module, you will examine some of the concerns that rapid changes and growing reliance on information technology have raised with respect to the exchange of scientific data.

TOPIC 3.1 ISSUES AND TRENDS IN ICT

LEARNING OBJECTIVES
At the end of the session, you will be able to:
1. Discuss the different emerging trends and issues in ICT;
2. Explain the status of current trends and issues in ICT;
3. Demonstrate a definite realization of the issues and trends in ICT.

ACTIVATING PRIOR LEARNING
From the image below, identify specific actions that must be taken to ensure the safety of your data.

PRESENTATION OF CONTENT
SOCIAL ISSUES IN INFORMATION TECHNOLOGY
The growth in the availability of affordable computing technology has caused a number of major shifts in the way that society operates. The majority of these have been for the better, with home computers and the internet providing unlimited access to all of the information ever created and discovered by humanity. There are, however, some less positive social issues generated as a direct result of technological advances. In the interests of balance, it is important to analyze these and assess the severity of their impact so that steps can be taken to better understand and combat the negative effects.

Communication Breakdown
Socializing within a family unit has always been important, as it strengthens the bonds between us and ensures cohesion within the group. But with more and more households owning several computers and numerous portable devices granting access to information and entertainment, some argue that this is leading to a lack of family communication. If each member is engrossed in their laptop, smartphone or tablet each evening, even communal activities like watching television are compromised. Meanwhile, you can see whole families who are out to dinner and still staring into a touchscreen rather than talking to one another. And if you're the one driving to that family dinner and texting while driving, you're a distracted driver, increasing your risk of crashing and potentially causing death and injury. Increase your digital wellbeing by allowing technology to improve your life rather than letting it become a distraction to your life and the lives of others. Your life and the lives of others are more important than technology.

Defamation of Character
Prior to the advent of digital communication, the only means of getting in touch with major corporations or famous people in the public eye was via a stiffly written letter. This was, of course, accessible only to the intended recipient and thus a very private way for the disgruntled to vent their spleen. But first message boards and now social media services like Facebook and Twitter are being used to defame people and businesses in an intrinsically public manner.
This has led to arrests, lawsuits and the threat of placing stricter controls over what can and cannot be posted to such services. It has also caused heartache and woe for many individuals, helping to perpetuate a massive, international rumor mill which pays little heed to facts or the threat of legal action.

Identity Theft
Fraud is another unscrupulous activity that has been able to evolve in the wake of easily accessible computers and the internet. Perhaps the most problematic and prevalent of the various fraudulent activities is identity theft, in which the personal details of innocent people are harvested by a third party so that they can be used for malicious purposes. This includes carrying out illicit online transactions and other damaging activities that can have serious ramifications.

Cyber Bullying
As with the defamation of public figures, the internet and computers have also made it easier for spiteful people to attack people they know personally, as well as perfect strangers, via the anonymous platforms that are available to them. This has led to serious incidents of cyber bullying involving both children and adults, sometimes with tragic consequences. The problem with these attacks is that they tend to go under the radar to an even greater degree than traditional bullying, which makes them harder to detect and correct.

Gaming Addiction
Whilst computers and the internet have made it easier for gambling addicts to get their fix, a new type of addiction has also arisen in the form of addiction to videogames. This is something that can affect people of all ages and leads inevitably to a number of problems, from the social to the financial. Professionals are beginning to take gaming addiction seriously and to combat it in the same way as other diseases.

Privacy
Whilst high-profile cases of online identity theft and fraud should have caused people to become more careful about how they use their personal information, issues of privacy and a lack of appreciation of the risks are still widespread. This extends beyond simply giving away private data via chat rooms, message boards and ecommerce sites, and into the compromising world of social media. Employers are now combing Facebook and Twitter to effectively carry out background checks on potential employees, paying particular attention to those who have not chosen to use privacy settings to prevent anyone from getting a look at their details.

Education
The educational properties of computers are well known and universally lauded, but having all the information in existence on tap has its own issues. In particular, the practice of plagiarism has become a major problem, as students can simply copy and paste whole chunks of text from online sources without attributing the work to anyone else. This has become the bane of educational institutions, which tend to come down hard on detected plagiarists in order to discourage similar activities from others.

Terrorism & Crime
Computers have been a positive force in allowing for the creation of global movements and righteous activism in a number of forms. However, the other side of the coin is that terrorists and organized criminals also exploit the web for their own nefarious purposes. Businesses, governments and individuals are all at risk of cyber-attack, and the perpetrators can often act anonymously from a country with no extradition agreements.
EMERGING TRENDS IN INFORMATION TECHNOLOGY FOR 2020
Technology is an ever-changing playing field, and those wanting to remain at the helm of innovation have to adapt. The consumer journey is charting a new course, and customers and companies alike are embracing emerging technologies. As IT industry trends such as cloud computing and SaaS become more pervasive, the world will look to brands that can deliver with accuracy and real-time efficiency. To help meet the demands of a technology-enabled consumer base, businesses and solution providers must also turn toward the latest trends and possibilities provided by emerging innovations to realize their full potential. But where to begin? These are the emerging trends businesses need to keep their eyes on in 2020.

ARTIFICIAL INTELLIGENCE (AI)
Computers may have changed the way we work, play, interact with each other, go to war, or pretty much any other aspect of life, but traditional computers haven't really been that smart. For example, for a computer to complete the simple task of finding cat photos in an image search, you would first have to teach the computer what a cat is by giving it lots of pictures with cats in them, so that it could recognize similar pictures. Now, artificial intelligence (AI) has advanced to such a level that computers are capable of teaching themselves what a cat is – and other, slightly more valuable, activities. In fact, systems such as IBM's cognitive computing platform Watson can carry out an ever-growing range of tasks without being taught how to do them. Computers now have the ability to learn, in much the same way as a human brain does, and this has been fueled by the massive increase in data and computing power. Quite simply, AI would be nothing without data. Even though AI technologies have existed for several decades, it's the incredible explosion in data that has allowed them to advance so quickly over the last couple of years. Siri, for example, would have only a rudimentary understanding of our requests without the billions of hours of audio data that helped it learn our language. Therefore, it's the mass of data that we have available that accelerates an AI system's learning curve. The more data it has, the more it learns and, ultimately, the more accurate it becomes. What all this means in practice is that AI is helping computers undertake more and more human tasks. Thanks to AI, computers can see (think of Facebook's facial recognition software), read (for example, analyzing tweets both for content and sentiment), listen ('Alexa, what's the capital of the Philippines?'), speak ('The capital of the Philippines is Manila') and gauge our emotions (affective computing – more on this later). In this chapter we explore how AI works and how this trend will be massively influential in our world.

What exactly is AI?
AI can be defined as using computers to simulate the capacity for abstract, creative, deductive thought – particularly the ability to learn. Therefore, at the core of AI lies a vision of building machines that are capable of thinking like us humans.

Introducing machine learning and deep learning
The terms 'AI', 'machine learning' and 'deep learning' are often used interchangeably, and I use them interchangeably for ease in this chapter. However, they're not quite the same thing. In fact, it's machine learning and deep learning that allow machines to learn for themselves. Essentially, machine learning is a subset of AI – or rather, it's the very cutting edge of AI.
If AI is the broader concept of machines becoming intelligent, machine learning is a specific application of that concept, whereby machines solve specific real-world problems by processing data via neural networks that mimic how a human brain functions. Now, we've progressed from machine learning into deep learning. Deep learning focuses even more narrowly on a subset of machine learning tools and techniques. Remember I said machine learning is the cutting edge of AI? Well, deep learning is the cutting edge of the cutting edge. Deep learning is essentially machine learning that uses deep neural networks, built by layering many neural networks on top of one another. Data is passed along networks of nodes, through a tangled web of algorithms, and these networks adapt according to whatever data they are processing as it moves from node to node. This way, the neural networks can more efficiently process the next bit of data that comes along, based on the data that came before it – thus enabling a more complex simulation of human learning. This ability to 'learn' from data – the ability of a system to effectively teach itself – is what makes deep learning so powerful.

How AI and machine learning are being used in practice
As machines become increasingly smarter, they're able to perform tasks that were previously the domain of us humans. This has led to many predictions of humans losing their jobs to robots. We'll get to that in the next chapter. For now, let's look at some of the amazing things AI is capable of.

AI in healthcare
Healthcare has become a key industry for AI and machine learning. Not only have major players such as IBM and Microsoft jumped into their own AI healthcare projects, but several start-ups and smaller organizations have begun their own efforts to create tools to aid healthcare. Much of the AI work done thus far in healthcare is focused on disease identification and diagnosis. From Sophia Genetics, which is using AI to diagnose illnesses, to smartphone apps that can detect a concussion and monitor other concerns such as jaundice in newborns, disease and health monitoring is at the forefront of machine learning efforts. And machines are now learning how to read CT scans and other imaging diagnostic tests to identify abnormalities. All this is possible because computers and the algorithms they run can work through colossal amounts of data – much faster and more accurately than human scientists or medical professionals – to unearth patterns and predictions that enhance disease diagnosis.

AI is also transforming data-heavy industries like insurance. Because machines can process lots of data at a fast speed, they can uncover insights and patterns much more quickly and accurately than humans. For example, chatbots, which are driven by AI technology, are being used in messaging apps to help resolve claims and answer simple customer service queries. AI is also being used to identify possible fraudulent claims, based on patterns from other fraudulent claims, and highlight fishy cases for further investigation by a human.

Natural language processing and natural language generation
Natural language processing (where computers understand human speech) and natural language generation (where computers generate speech) are particularly interesting subsets of AI. This is what enables Alexa to understand your five-year-old when he or she asks to hear their new favorite song for the 50th time that day, and enables Alexa to talk back.
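To make the "reading" and "learning from data" ideas above more concrete, here is a minimal, illustrative sketch of a small, layered neural network that learns to label short texts as positive or negative. It is not taken from this module: it assumes the scikit-learn library is installed, and the example sentences and labels are invented for illustration only.

    # Illustrative sketch only: a tiny "sentiment" classifier that learns from
    # example data, echoing the layered-neural-network idea described above.
    # Assumes scikit-learn is installed; the sentences below are made up.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    texts = ["I love this product", "great service and fast delivery",
             "terrible experience", "the worst support I have ever had"]
    labels = ["positive", "positive", "negative", "negative"]

    # Bag-of-words features feed a small neural network with two hidden layers.
    model = make_pipeline(
        CountVectorizer(),
        MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0),
    )
    model.fit(texts, labels)  # the system "teaches itself" from the examples
    print(model.predict(["fast delivery, great experience"]))  # expected: ['positive']

With only four training sentences this is a toy, but the shape of the workflow (example data in, learned model out, prediction on new text) is the same one that powers the real-world systems described in this section.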
Natural language generation (NLG) is where the really exciting stuff is happening – such as news stories being written by computers. In the US, the Associated Press is already publishing corporate earnings stories that are written by NLG engines. The Washington Post has Heliograf, a journalistic bot that generates automated content with an impressively strong editorial voice. And in the UK, the news agency Press Association, working with news automation specialists Urbs Media, is using an automated chatbot-driven platform to write as many as 30,000 local news stories each month.

Empathetic machines
'Affective computing' is another exciting area of AI, which involves machines being able to read our emotions and adjust their behavior accordingly – effectively making them emotionally intelligent. Programs are being developed that can analyze facial expressions, posture, gestures, tone of voice, speech, and other factors to register changes in a user's emotional state. AI developer Affectiva's Emotion AI technology, for instance, is already being used by 1,400 brands to judge the emotional effect of adverts on viewers.

What this trend means for you
AI technology may seem beyond the reach of the average business, but platforms like IBM's Watson are opening up AI and machine learning to a much wider audience. In fact, there are many start-ups applying this technology to a wide range of industries and applications. Whatever industry you're in, it's likely that AI will have some impact in the coming years. While there are genuine concerns around what this may mean for people's jobs, it's important to keep an open mind about the incredible opportunities AI brings.

3D PRINTING IS CHANGING THE WAY WE PRODUCE THINGS
Huge, smart factories and intelligent machines are one side of automation. The other side is a lot humbler. I'm talking about the 3D printer. This one invention is disrupting manufacturing, and other industries, in many positive ways. In pharma, for example, the first 3D-printed drug was approved by the FDA in 2015. Human tissue has also been successfully recreated with 3D printing. As 3D printing technology improves, the scope of applications will extend to many more industries. For example, even if a traditional manufacturing assembly line isn't replaced with 3D printers, the technology could still be used to quickly print and replace spare parts for machinery. Inventors will be able to create models and mockups of their ideas quickly and easily. Outside the world of work, 3D printing could alter many aspects of everyday life – from the products we buy (and maybe even make ourselves at home), to the houses we live in, to the food we eat. Even chocolate is being manufactured with 3D printing technology. That's right, printers exist that can print chocolate. Before you run off to buy one of these incredible chocolate-printing gadgets, let's take a look at what 3D printing involves, and explore some of the fascinating ways 3D printing technology is being applied.

What is 3D printing and how does it work?
3D printing (also known as additive manufacturing) is a means of creating 3D objects from a digital file using an additive process. It's the opposite of traditional (subtractive) manufacturing, whereby an object is cut out or hollowed out of its material, e.g. plastic or metal, using a cutting tool or something like a milling machine. In 3D printing, the object is created by laying down, or adding, layers upon layers of material, building up until you have the finished object.
Slice that object open, and you'd be able to see each of these thin layers, much like the rings of a tree trunk. This innovative layering approach means that far more complex shapes can be created than in traditional manufacturing – and using less material, too. The materials used in 3D printing can be pretty much anything: plastic, metal, concrete, liquid, powder, even chocolate or human tissue.

How 3D printing will change manufacturing
3D printers don't cut out, drill or mill a product from its source material. Instead, they start from nothing and build up the product from there. This means 3D printing uses far less material than traditional manufacturing methods, and it means that one-off items can be made quickly and easily, without needing to worry about economies of scale. Not only is this better for our environment, it will also lead to significant cost savings for manufacturers. And those savings can extend to infrastructure costs as well as materials.

What this trend means for you
If your business involves manufacturing products or components of any kind, you'd do well to consider how 3D printing could enhance your operations. While it's fair to say that 3D printing is a long way from being ubiquitous, examples like Adidas and GE show us how the technology is advancing to a point where it can challenge traditional methods of mass production. Of course, people thought that print-on-demand would put bookshops out of business, and that hasn't been the case. So perhaps 3D printing will remain a specialist process. This is one trend where time will tell. But what's particularly exciting about 3D printing is the opportunities it offers for customization of products and designs to suit one-off requests and orders. In this age of online platforms anticipating our every wish, and making personalized recommendations on what we might like to buy, read, watch or listen to next, consumers are getting very used to highly personalized services. Businesses like Amazon and Netflix have done extremely well for themselves by figuring out exactly what their customers want, and then giving it to them. 3D printing provides yet more scope for personalization and customization, and I think that might be the key to its success.

THE FUTURE IS ALREADY HERE: VIRTUAL REALITY (VR) AND AUGMENTED REALITY (AR)
VR and AR represent the next huge leap in interface innovation. More than just sci-fi, VR and AR are already finding very real applications in our world, and are likely to change the way we interact with technology. But what is VR exactly? And how does AR differ? In a nutshell, the term VR refers to the use of computer technology to fully immerse the user in a simulated 3D environment, to the extent that the user feels like they are physically in that environment. AR, on the other hand, is rooted very much in the real world, not a simulated environment. With AR, information or objects are overlaid onto what the user is seeing in the real world. Both technologies typically work using special headsets or glasses, like Oculus Rift or Google Glass, but apps are available to offer VR and AR experiences through a smartphone (Google Cardboard being one example).

How brands are already tapping into VR technology
There's quite a lot of hype around the potential uses of VR, and with good reason. Research has shown that consumers are more likely to buy from a brand that uses VR, meaning the technology has the potential to absolutely transform marketing.
Here are some ways big brands are already harnessing VR to create a better experience for consumers:
Mercedes created a virtual experience of driving the latest SL model down California's beautiful Pacific Coast Highway.
Oreo enticed cookie lovers with an animated virtual land, complete with chocolate canyons, to promote a new cookie flavor.
Footwear manufacturer Toms is known for its philanthropic efforts, giving a pair of shoes to someone in need for every pair the company sells. The company recently created an emotive immersive experience, taking users on a giving trip to a remote Peruvian village.

AR in action
AR technology may be in its infancy, but that hasn't stopped businesses making good use of it. AR creates a 'mixed environment', blending virtual objects or data with the real-world environment – an approach that's proving particularly useful in the manufacturing sector. Modern manufacturing may involve putting together many, many complex components, each of them different. Using an AR device, you can have instructions or schematics available at a glance, right in front of your eyes, while you're looking at the component in question. This can also be incredibly useful in a maintenance setting, and potentially save companies a lot of time and money on training. Mitsubishi Electric, for example, is developing AR-driven maintenance support, using a 3D model that shows the user the correct order in which to inspect a piece of equipment. Maintenance staff can also log inspection results with their voice. But the applications of AR don't end in the manufacturing world. All sorts of organizations are developing AR experiences:
PepsiCo recently took over a London bus shelter to create an incredible AR-enabled display that tricked commuters by overlaying images onto the real-life street in front of them. These images included a meteor crashing into the ground, a tiger padding towards them, and a large tentacle popping up from underneath the paving slabs!
The US Army is harnessing AR to improve soldiers' situational awareness, using an eyepiece that helps them precisely locate their position, locate others around them, and identify whether those others are friend or foe.

What this trend means for you
At the most basic level, this trend requires all businesses to deliver a good mobile experience, and that means having a website that works seamlessly on phones and tablets. This may sound obvious, but, according to one survey, almost a fifth of small businesses still don't have a mobile-friendly website. Going beyond mobile, businesses must be ready to offer their customers an AI-enabled chatbot experience, whether that means integrating Alexa (or other AI assistant) technology into your products, or putting chatbots to work in your customer service, marketing and sales functions. And looking even further ahead, the way we interact with technology is going to become more and more immersive. (Even more immersive than having a virtual assistant with you 24/7, ready to respond to your every whim.) Companies that can begin developing VR and AR experiences are very likely to reap the rewards in the longer term.

APPLICATION
1. How will artificial intelligence change the future? Justify your answer.
2. Scenario: A man in North Cagayan was reportedly using stolen information for online orders, even using a driver's license with the incorrect name on it, according to local authorities once they apprehended him. The name that the thief was using belonged to someone who used to live at the address used.
The UPS delivery driver responding to the orders was the one who noticed the discrepancy and contacted the authorities as a result.
Analyze the scenario above and answer the following questions:
a. To which social issue does the situation belong?
b. Based on your answer in (a), why do you say that this is the social issue? Explain and defend your answer.
c. As a future IT expert, how will you avoid or prevent such a situation?

FEEDBACK
Write your answer on the space provided.
1. What do ICT social issues mean?
___________________________________________________________
___________________________________________________________
2. What is the difference between augmented reality and virtual reality?
___________________________________________________________
___________________________________________________________
___________________________________________________________
3. _______________ is presenting someone else's work or ideas as your own, with or without their consent, by incorporating it into your work without full acknowledgement. All published and unpublished material, whether in manuscript, printed or electronic form, is covered under this definition.
4. Thinking about games all or a lot of the time is a symptom of addiction. True or false. _______________
5. Give an example of cyber-bullying.
________________________________________________________
________________________________________________________
6. Which of these would be considered an AI?
a. Mobile Phone
b. Laptop
c. Alexa
d. Television
___________________________________
7. It is a process of making three-dimensional solid objects from a digital file. ___________________________________
8. Enumerate and describe the ICT social issues.
1. _______________________________
2. _______________________________
3. _______________________________
9. What are the emerging trends in ICT that will help the future?
1. _______________________________
2. _______________________________
3. _______________________________
10. Give at least 3 advantages of emerging trends in ICT.
_________________________________________
_________________________________________
_________________________________________

TOPIC 3.2 COMPUTER ETHICS AND UNDERSTANDING OF THE ACM REQUIREMENTS

LEARNING OBJECTIVES
At the end of the session, you will be able to:
1. Identify ethical issues in different enterprise computing settings;
2. Review real-life ethical cases and develop ethical resolutions and policies;
3. Understand laws and regulations related to ethics;
4. Apply the ACM requirements in conducting IT research.

ACTIVATING PRIOR LEARNING
When you hear/read the word "ETHICS", what comes to your mind?

PRESENTATION OF CONTENT
WHAT IS ETHICS?
Ethics are a structure of standards and practices that influence how people lead their lives. Following these standards is not strictly enforced; rather, it is basically for the benefit of everyone that we do so. Ethics are unlike laws, which legally mandate what is right or wrong. Ethics illustrate society's views about what is right and what is wrong.

Computer Ethics
Computer ethics are a set of moral standards that govern the use of computers. They reflect society's views about the use of computers, both hardware and software. Privacy concerns, intellectual property rights and effects on society are some of the common issues of computer ethics.

Privacy Concerns
Hacking – is unlawful intrusion into a computer or a network.
A hacker can intrude through the security levels of a computer system or network and can acquire unauthorized access to other computers.
Malware – means malicious software, which is created to impair a computer system. Common malware includes viruses, spyware, worms and trojan horses. A virus can delete files from a hard drive, while spyware can collect data from a computer.
Data Protection – also known as information privacy or data privacy, is the process of safeguarding data, which aims to strike a balance between individual privacy rights and the need to use data for business purposes.
Anonymity – is a way of keeping a user's identity masked through various applications.

Intellectual Property Rights
Copyright – is a form of intellectual property that gives proprietary publication, distribution and usage rights to the author. This means that whatever idea the author created cannot be employed or disseminated by anyone else without the permission of the author.
Plagiarism – is an act of copying and publishing another person's work without proper citation. It is like stealing someone else's work and releasing it as your own.
Cracking – is a way of breaking into a system by getting past its security features. It is a way of skipping the registration and authentication steps when installing software.
Software License – allows the use of digital material by following the license agreement. Ownership remains with the original copyright owner; users are just granted licenses to use the material based on the agreement.

Effects on Society
Jobs – Some jobs have been abolished while others have become simpler as computers have taken over companies and businesses. Things can now be done in just one click, whereas before it took multiple steps to perform a task. This change may be considered unethical as it limits the skills of employees. There are also ethical concerns about the health and safety of employees who get sick from constant sitting, staring at computer screens and typing on the keyboard or clicking on the mouse.
Environmental Impact – The environment has been affected by computers and the internet, since so much time spent using computers increases energy usage, which in turn increases the emission of greenhouse gases. There are ways we can save energy, such as limiting computer time and turning the computer off or putting it on sleep mode when not in use. Buying energy-efficient computers with an Energy Star label can also help save the environment.
Social Impact – Computers and the internet help people stay in touch with family and friends. Social media has become very popular nowadays. Computer gaming has influenced society both positively and negatively. Positive effects are improved hand-eye coordination, stress relief and improved strategic thinking. Negative effects are addiction of gamers, isolation from the real world and exposure to violence. Computer technology helps the government improve services to its citizens. Advanced databases can hold the huge amounts of data being collected and analyzed by the government. Computer technology aids businesses by automating processes, reports and analysis.

UNDERSTANDING THE ACM REQUIREMENTS
What is ACM?
ACM, the Association for Computing Machinery, is the world's largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field's challenges.
ACM strengthens the profession's collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

Origins
The Association for Computing Machinery was founded as the Eastern Association for Computing Machinery at a meeting at Columbia University in New York on September 15, 1947. Its creation was the logical outgrowth of increasing interest in computers as evidenced by several events, including a January 1947 symposium at Harvard University on large-scale digital calculating machinery; the six-meeting series in 1946-47 on digital and analog computing machinery conducted by the New York Chapter of the American Institute of Electrical Engineers; and the six-meeting series in March and April 1947 on electronic computing machinery conducted by the Department of Electrical Engineering at the Massachusetts Institute of Technology. In January 1948, the word "Eastern" was dropped from the name of the Association. In September 1949, a constitution was instituted by membership approval.

Scope
The original notice for the September 15, 1947, organization meeting stated in part: "The purpose of this organization would be to advance the science, development, construction, and application of the new machinery for computing, reasoning, and other handling of information." The first and subsequent constitutions for the Association have elaborated on this statement, although the essential content remains. The present constitution states: "The Association is an international scientific and educational organization dedicated to advancing the art, science, engineering, and application of information technology, serving both professional and public interests by fostering the open interchange of information and by promoting the highest professional and ethical standards."

Publications
ACM publishes, distributes and archives original research and firsthand perspectives from the world's leading thinkers in computing and information technologies, helping computing professionals negotiate the strategic challenges and operating problems of the day. ACM publishes journals, newsletters and annual conference proceedings. ACM is also recognized worldwide for its published curricula recommendations, both for colleges and universities and for secondary schools that are increasingly concerned with preparing students for advanced education in the information sciences and technologies.
Communications of the ACM keeps information technology professionals up to date with articles spanning the full spectrum of information technologies in all fields of interest. Communications also carries case studies, practitioner-oriented articles, and regular columns and blogs. The monthly magazine is distributed to all ACM members.
ACM Queue is a monthly magazine created by computing professionals for computing professionals that sets out to define future problems with the sort of detail and intelligence that readers in turn can use to sharpen their own thinking. Visit the ACM Digital Library for a complete list of ACM publications.
ACM also provides the ACM Digital Library, the definitive online resource for computing professionals. The DL provides access to ACM's collection of publications and bibliographic citations from the universe of published IT literature.
With its personalized online services and extensive search capabilities, the Digital Library represents ACM's vision of an all-electronic publishing program. The Digital Library contains the citations and full text of articles representing all of ACM's journals, newsletters, and proceedings. Each citation contains links to other works by the same author; clickable references to their original sources; links to similar articles and critical reviews, if available; and digital object identifiers (DOIs) to easily manage electronic linkages to vendors. In addition, the DL consists of a bibliographic database of more than a million citations from a broad range of information technology publications and publishers. Many of these citations contain abstracts and/or reference sections as well.

FORMAT/TEMPLATE OF ACM RESEARCH PUBLICATIONS

APPLICATION
3. State at least two privacy concerns with scenarios and explain how you will prevent those situations.

FEEDBACK
Write your answer on the space provided.
1. What is copyright?
___________________________________________________________
___________________________________________________________
2. It refers to activities that seek to compromise digital devices, such as computers, smartphones, tablets, and even entire networks.
___________________________________________________________
3. What is the difference between hacking and cracking?
___________________________________________________________
___________________________________________________________
___________________________________________________________
4. Why do you need to study the ACM?
___________________________________________________________
___________________________________________________________
___________________________________________________________
___________________________________________________________
5. Give at least 3 main points of ACM.
_________________________________________
_________________________________________
_________________________________________

TOPIC 3.3 HARDWARE AND SOFTWARE TECH TRENDS

LEARNING OBJECTIVES
At the end of the session, you will be able to:
1. Identify the latest innovations in hardware and software;
2. Understand why and how innovation is important;
3. Demonstrate and apply the use of the latest tech trends.

ACTIVATING PRIOR LEARNING
Based on the picture below, what is your idea?

PRESENTATION OF CONTENT
HARDWARE TECH TRENDS
Information technology is revolutionizing products. Once composed solely of mechanical and electrical parts, products have become complex systems that combine hardware, sensors, data storage, microprocessors, software, and connectivity in myriad ways. These "smart, connected products"—made possible by vast improvements in processing power and device miniaturization and by the network benefits of ubiquitous wireless connectivity—have unleashed a new era of competition. Smart, connected products offer exponentially expanding opportunities for new functionality, far greater reliability, much higher product utilization, and capabilities that cut across and transcend traditional product boundaries. The changing nature of products is also disrupting value chains, forcing companies to rethink and retool nearly everything they do internally. These new types of products alter industry structure and the nature of competition, exposing companies to new competitive opportunities and threats. They are reshaping industry boundaries and creating entirely new industries.
In many companies, smart, connected products will force the fundamental question, "What business am I in?" Smart, connected products raise a new set of strategic choices related to how value is created and captured, how the prodigious amount of new (and sensitive) data they generate is utilized and managed, how relationships with traditional business partners such as channels are redefined, and what role companies should play as industry boundaries are expanded.
The phrase "internet of things" has arisen to reflect the growing number of smart, connected products and highlight the new opportunities they can represent. Yet this phrase is not very helpful in understanding the phenomenon or its implications. The internet, whether involving people or things, is simply a mechanism for transmitting information. What makes smart, connected products fundamentally different is not the internet, but the changing nature of the "things." It is the expanded capabilities of smart, connected products and the data they generate that are ushering in a new era of competition.

What Are Smart, Connected Products?
Smart, connected products have three core elements: physical components, "smart" components, and connectivity components. Smart components amplify the capabilities and value of the physical components, while connectivity amplifies the capabilities and value of the smart components and enables some of them to exist outside the physical product itself. The result is a virtuous cycle of value improvement. Some have suggested that the internet of things "changes everything," but that is a dangerous oversimplification.
Physical components comprise the product's mechanical and electrical parts. In a car, for example, these include the engine block, tires, and batteries.
Smart components comprise the sensors, microprocessors, data storage, controls, software, and, typically, an embedded operating system and enhanced user interface. In a car, for example, smart components include the engine control unit, antilock braking system, rain-sensing windshields with automated wipers, and touch screen displays. In many products, software replaces some hardware components or enables a single physical device to perform at a variety of levels.
Connectivity components comprise the ports, antennae, and protocols enabling wired or wireless connections with the product. Connectivity takes three forms, which can be present together:
One-to-one: An individual product connects to the user, the manufacturer, or another product through a port or other interface—for example, when a car is hooked up to a diagnostic machine.
One-to-many: A central system is continuously or intermittently connected to many products simultaneously. For example, many Tesla automobiles are connected to a single manufacturer system that monitors performance and accomplishes remote service and upgrades.
Many-to-many: Multiple products connect to many other types of products and often also to external data sources. An array of types of farm equipment are connected to one another, and to geolocation data, to coordinate and optimize the farm system. For example, automated tillers inject nitrogen fertilizer at precise depths and intervals, and seeders follow, placing corn seeds directly in the fertilized soil.

What Is Smart Home Technology?
What if all the devices in your life could connect to the internet?
Not just computers and smartphones, but everything: clocks, speakers, lights, doorbells, cameras, windows, window blinds, hot water heaters, appliances, cooking utensils, you name it. And what if those devices could all communicate, send you information, and take your commands? It's not science fiction; it's the Internet of Things (IoT), and it's a key component of home automation and smart homes. Home automation is exactly what it sounds like: automating the ability to control items around the house—from window shades to pet feeders—with a simple push of a button (or a voice command). Some activities, like setting up a lamp to turn on and off at your whim, are simple and relatively inexpensive. Others, like advanced surveillance cameras, may require a more serious investment of time and money. There are many smart home product categories, so you can control everything from lights and temperature to locks and security in your home.

Smart Home Hubs and Controllers
The Echo is a Bluetooth speaker powered by Alexa, Amazon's handy voice assistant. Alexa works with a number of smart home devices directly, as well as with If This Then That (IFTTT) to control plenty of others via "recipes" you can create yourself. It'll take some work, but you can use Alexa to control most of the gadgets in your house by the sound of your voice. If you already have a favorite speaker, the inexpensive Echo Dot can connect to it and add Alexa functionality.

Brilliant Control
The Brilliant Control is a unique wall switch that uses Wi-Fi to connect to and control various smart devices in your home. It has a 5-inch color touch screen with user-friendly button controls that let you play music, control lighting, set thermostat temperatures, and see who is at your door, among other things. It works with many popular smart home platforms including Ecobee, Nest, Philips Hue, Ring, and Sonos, and it has built-in Amazon Alexa voice support that allows it to do almost everything an Echo device can do. It's fairly pricey and requires wiring knowledge to install, but it's a smart addition to a high-tech home.

Smart Home Surveillance Cameras
Arlo Ultra
The Arlo Ultra raises the bar for all outdoor cameras. It's the first model we've seen that streams and records video in true 4K, or Ultra High Definition (UHD). At $400, it's also one of the most expensive cameras out there, but it's loaded with cool tech including automatic zooming, motion tracking, color night vision, an integrated spotlight and siren, one-click 911 connectivity, a 180-degree field of view, and more. It's also completely wireless and a snap to install.

Software Platform Trends and Emerging Technologies
There are five major themes in contemporary software platform evolution:
1. Linux and open-source software
2. Java
3. Enterprise software
4. Web services and service-oriented architecture
5. Software outsourcing

Open-source software is software produced by a community of several hundred thousand programmers around the world, and is available free of charge to be modified by users, with minimal restrictions. The premise that open-source software is superior to commercial software is based on the ability of thousands of programmers to modify and improve the software at a much faster rate. In return for their work, programmers receive prestige and access to a network of other programmers, as well as additional for-pay work opportunities. The process of improving open-source software is monitored by self-organized, professional programming communities.
Thousands of open-source programs, ranging from operating systems to office suites, are available from hundreds of Web sites. Linux, an operating system related to Unix, is one of the best-known pieces of open-source software, and is the world's fastest-growing client and server operating system, along with related Linux applications. The rise of open-source software, particularly Linux and the applications it supports, has profound implications for corporate software platforms: cost reduction, reliability and resilience, and integration, because Linux works on all the major hardware platforms from mainframes to servers to clients. Because of its reliability, low cost, and integration features, Linux has the potential to break Microsoft's monopoly of the desktop.
Java, an operating system-independent, object-oriented programming language, has become the leading programming environment for the Web, and its use has migrated into cellular phones, cars, music players, and more. For each of the computing environments in which Java is used, Sun has created a Java Virtual Machine that interprets Java programming code for that machine. In this manner, the code is written once and can be used on any machine for which there exists a Java Virtual Machine. A Macintosh PC, an IBM PC running Windows, a Sun server running Unix, and even a smart cellular phone or personal digital assistant can share the same Java application. Java is typically used to create small Web programs called applets, but it is also a very robust language designed to handle text, data, graphics, sound, and video. Java enables PC users to manipulate data on networked systems using Web browsers, reducing the need to write specialized software. A Web browser is an easy-to-use software tool with a graphical user interface for displaying Web pages and for accessing the Web and other Internet resources.
Software for enterprise integration is one of the most urgent software priorities today for U.S. firms that need to integrate existing legacy software with newer technology. Replacing isolated systems that cannot communicate with enterprise software is one solution; however, many companies cannot simply jettison essential legacy mainframe applications. Some integration can be achieved by middleware, software that creates an interface or bridge between two different systems. Firms increasingly purchase enterprise application integration (EAI) software that enables multiple systems to exchange data through a single software hub.

FIGURE 1. ENTERPRISE APPLICATION INTEGRATION (EAI) SOFTWARE VERSUS TRADITIONAL INTEGRATION
EAI software (a) uses special middleware that creates a common platform with which all applications can freely communicate with each other. EAI requires much less programming than traditional point-to-point integration (b).

Web services, loosely coupled software components that use Web communication standards, can exchange information between different systems regardless of operating system or programming language. Web services technology is founded on Extensible Markup Language (XML). XML was developed as a more powerful markup language than Hypertext Markup Language (HTML), a page description language specifying how content appears on Web pages. By marking data with XML tags, computers can interpret, manipulate, and exchange data from different systems.
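As a small, hedged illustration of what "marking data with XML tags" looks like in practice, the Python sketch below builds and then parses a made-up invoice message using only the standard library. The tag names (invoice, customer, amount) are invented for illustration and are not a real web-services standard.

    # Illustrative only: a made-up XML "invoice" message, built and read back
    # with Python's standard xml.etree.ElementTree module.
    import xml.etree.ElementTree as ET

    # One system marks up its data with tags...
    invoice = ET.Element("invoice", id="12345")
    ET.SubElement(invoice, "customer").text = "Dollar Rent A Car"
    ET.SubElement(invoice, "amount", currency="USD").text = "199.95"
    message = ET.tostring(invoice, encoding="unicode")
    print(message)

    # ...and any other system can parse the same message, regardless of the
    # operating system or programming language it was produced with.
    parsed = ET.fromstring(message)
    print(parsed.find("customer").text, parsed.find("amount").text)

Because both sides agree only on the tagged message format, not on each other's internal software, this is the same basic idea that SOAP-style web services (described next) build on.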
Web services communicate through XML messages over standard Web protocols, such as:
SOAP (Simple Object Access Protocol) is a set of rules for structuring messages that enables applications to pass data and instructions to one another.
WSDL (Web Services Description Language) is a common framework for describing the tasks performed by a Web service and the commands and data it will accept so that it can be used by other applications.
UDDI (Universal Description, Discovery, and Integration) enables a Web service to be listed in a directory of Web services so that it can be easily located.
Using these protocols, a software application can connect freely to other applications without custom programming for each different application with which it wants to communicate. The collection of Web services used to build a firm's software systems constitutes a service-oriented architecture (SOA). SOA is an entirely new way of developing software for a firm. In the past, separate applications were written for different divisions and tasks and could not communicate with each other. In an SOA environment, a single application can be used and reused as a "service" by other services. For example, an "invoice service" can be written that is the only program in the firm responsible for calculating invoice information and reports. Virtually all major software vendors provide tools and entire platforms for building and integrating software applications using Web services.

FIGURE 2. HOW DOLLAR RENT A CAR USES WEB SERVICES
Dollar Rent A Car uses Web services to provide a standard intermediate layer of software to "talk" to other companies' information systems. Dollar Rent A Car can use this set of Web services to link to other companies' information systems without having to build a separate link to each firm's systems.

Other software trends include:
Ajax (Asynchronous JavaScript and XML): Ajax, and a related set of techniques called RIA ("rich Internet applications"), use JavaScript or Macromedia Flash programs downloaded to your client to maintain a near-continuous conversation with the server you are using. While making the life of consumers much easier, Ajax and RIA are even more important for another new software development: Web-based applications.
Web-based applications: Software firms are delivering software services over the Web to client computers and their customers' sites. Google's Google Apps for Your Domain is a Web-based suite of productivity tools, including online spreadsheets, word processing, and calendars, aimed at small businesses.
Mashups: Part of a movement called Web 2.0, and in the spirit of musical mashups, Web mashups combine the capabilities of two or more online applications to create a kind of hybrid that provides more customer value than the original sources alone. For example, housingmaps.com can display real estate listings in local areas from Craigslist.com overlaid on Google Maps, with pushpins showing the location of each listing.
The result of these techniques is that instead of the Web being a collection of pages, it becomes a collection of capabilities, a platform where thousands of programmers can create new services quickly and inexpensively. Web 2.0 refers to "the new Web applications" like those above and is also the name of an annual conference. Web 2.0 can also be described as an expression of all the changes above, plus changes in the way people and businesses use the Web and think about human interaction on the Web.
These changes include seeing Web applications as services rather than packaged software, seeing users as co-developers, harnessing collective intelligence, and adopting lightweight user interfaces, development models, and business models.
Although traditionally businesses developed unique software themselves, today most new software is purchased from external sources. There are three external sources for software:
Commercial software packages
Software services from an application service provider (ASP)
Outsourcing application development to an outside software firm

FIGURE 3. THE CHANGING SOURCES OF SOFTWARE
U.S. firms will spend nearly $340 billion on software in 2006. Over 30 percent of that software will come from outsourcing its development and operation to outside firms, and another 15 percent will come from purchasing the service from application service providers, either on the Web or through traditional channels. Sources: Authors' estimates; Bureau of Economic Analysis, 2006; IT Spending and Trends, eMarketer, 2004; IT Spending and Trends, eMarketer, 2005; SEC 10-K statements, various firms.

A commercial software package is a prewritten set of software programs for certain functions, eliminating the need for a firm to write its own software program. Enterprise systems are so complex that few corporations have the expertise to develop these in house; instead, they rely on enterprise software packages from vendors such as SAP and PeopleSoft.
An application service provider (ASP) is a business that delivers and manages applications and computer services from remote computer centers to multiple users over the Internet or a private network. The software is typically paid for on a per-user, subscription, or per-transaction basis. Renting enterprise software avoids the expense and difficulty of installing, operating, and maintaining the hardware and software needed for complex systems. Large and medium-sized businesses are using ASPs for enterprise systems, sales force automation, or financial management, and small businesses are using them for functions such as invoicing, tax calculations, electronic calendars, and accounting. Application service providers also enable small and medium-sized companies to use applications that they otherwise could not afford.
In outsourcing, a firm contracts custom software development or maintenance to outside firms, frequently firms operating in low-wage areas of the world. With the growing sophistication and experience of offshore firms, more and more new-program development is outsourced.

APPLICATION
1. Are smart homes a good idea? Justify your answer.
2. Should you build a smart home? Explain your answer.
3. Explain the flow of an ASP based on the picture below.

FEEDBACK
Write your answer on the space provided.
1. What is a smart home?
___________________________________________________________
___________________________________________________________
2. It is software or a program that is designed and developed for licensing or sale to end users. It was once considered to be proprietary software.
___________________________________________________________
3. What is AJAX?
___________________________________________________________
___________________________________________________________
___________________________________________________________
4. Give 5 examples of Web software applications.
___________________________________________________________
___________________________________________________________
___________________________________________________________
___________________________________________________________
5. Give at least 2 hardware tech trends and their uses.
_________________________________________
_________________________________________
_________________________________________

TOPIC 3.4 INTERNET OF THINGS

LEARNING OBJECTIVES
At the end of the session, you will be able to:
1. Discuss the Internet of Things (IoT);
2. Explain the importance of the Internet of Things (IoT);
3. Manifest appreciation of the applications of the Internet of Things (IoT).

ACTIVATING PRIOR LEARNING
From the image below, define the Internet of Things.

PRESENTATION OF CONTENT
What is the Internet of Things (IoT)?
The term Internet of Things generally refers to scenarios where network connectivity and computing capability extend to objects, sensors and everyday items not normally considered computers, allowing these devices to generate, exchange and consume data with minimal human intervention. There is, however, no single, universal definition. In simple words, the Internet of Things (IoT) is an ecosystem of connected physical objects that are accessible through the Internet. It is also referred to as Machine-to-Machine (M2M), Skynet or the Internet of Everything.

Components of IoT
Smart systems and the Internet of Things are driven by a combination of:
1. Sensors
2. Connectivity
3. People and Processes

Why IoT?
Dynamic control of industry and daily life
Improves the resource utilization ratio
Integrates human society and physical systems
Flexible configuration
Acts as a technology integrator
Universal inter-networking

How can IoT help?
1. IoT platforms can help organizations reduce cost through improved process efficiency, asset utilization and productivity.
2. The growth and convergence of data, processes and things on the internet will make such connections more relevant and important, creating more opportunities for people, businesses and industries.

Applications of IoT
A NEST LEARNING THERMOSTAT reporting on energy usage and local weather.
A RING DOORBELL connected to the Internet.
An AUGUST HOME SMART LOCK connected to the Internet.
The extensive set of applications for IoT devices is often divided into consumer, commercial, industrial, and infrastructure spaces.

CONSUMER APPLICATIONS
A growing portion of IoT devices are created for consumer use, including connected vehicles, home automation, wearable technology, connected health, and appliances with remote monitoring capabilities.
1. Smart home
A smart home or automated home could be based on a platform or hub that controls smart devices and appliances. For instance, using Apple's HomeKit, manufacturers can have their home products and accessories controlled by an application in iOS devices such as the iPhone and the Apple Watch.
2. Elder care
One key application of a smart home is to provide assistance for those with disabilities and elderly individuals. These home systems use assistive technology to accommodate an owner's specific disabilities. Voice control can assist users with sight and mobility limitations, while alert systems can be connected directly to cochlear implants worn by hearing-impaired users. They can also be equipped with additional safety features. These features can include sensors that monitor for medical emergencies such as falls or seizures.
ORGANISATIONAL APPLICATIONS
1. Medical and healthcare
The Internet of Medical Things (IoMT) is an application of the IoT for medical and health-related purposes, data collection and analysis for research, and monitoring. The IoMT has been referred to as "Smart Healthcare", as the technology for creating a digitized healthcare system, connecting available medical resources and healthcare services.
2. Transportation
The IoT can assist in the integration of communications, control, and information processing across various transportation systems. Application of the IoT extends to all aspects of transportation systems. Dynamic interaction between these components of a transport system enables inter- and intra-vehicular communication, smart traffic control, smart parking, electronic toll collection systems, logistics and fleet management, vehicle control, safety, and road assistance. (Figure: a digital variable speed-limit sign.)
3. V2X communications
In vehicular communication systems, vehicle-to-everything communication (V2X) consists of three main components: vehicle-to-vehicle communication (V2V), vehicle-to-infrastructure communication (V2I) and vehicle-to-pedestrian communication (V2P). V2X is the first step toward autonomous driving and connected road infrastructure.
4. Building and home automation
IoT devices can be used to monitor and control the mechanical, electrical and electronic systems used in various types of buildings (e.g., public and private, industrial, institutional, or residential) in home automation and building automation systems. In this context, three main areas are covered in the literature:
The integration of the Internet with building energy management systems in order to create energy-efficient and IoT-driven "smart buildings".
The possible means of real-time monitoring for reducing energy consumption and monitoring occupant behaviors.
The integration of smart devices in the built environment and how they might be used in future applications.

INDUSTRIAL APPLICATIONS
Also known as IIoT, industrial IoT devices acquire and analyze data from connected equipment, operational technology (OT), locations and people. Combined with operational technology (OT) monitoring devices, IIoT helps regulate and monitor industrial systems.
1. Manufacturing
The IoT can realize the seamless integration of various manufacturing devices equipped with sensing, identification, processing, communication, actuation, and networking capabilities. Based on such a highly integrated smart cyber-physical space, it opens the door to whole new business and market opportunities for manufacturing.
2. Agriculture
There are numerous IoT applications in farming, such as collecting data on temperature, rainfall, humidity, wind speed, pest infestation, and soil content. This data can be used to automate farming techniques, make informed decisions to improve quality and quantity, minimize risk and waste, and reduce the effort required to manage crops.

INFRASTRUCTURE APPLICATIONS
Monitoring and controlling operations of sustainable urban and rural infrastructures like bridges, railway tracks and on- and offshore wind farms is a key application of the IoT. The IoT infrastructure can be used for monitoring any events or changes in structural conditions that can compromise safety and increase risk.
1. Metropolitan scale deployments
There are several planned or ongoing large-scale deployments of the IoT to enable better management of cities and systems. For example, Songdo, South Korea, the first fully equipped and wired smart city of its kind, is gradually being built, with approximately 70 percent of the business district completed as of June 2018. Much of the city is planned to be wired and automated, with little or no human intervention.
2. Energy management
Significant numbers of energy-consuming devices (e.g., lamps, household appliances, motors, pumps, etc.) already integrate Internet connectivity, which can allow them to communicate with utilities not only to balance power generation but also to optimize energy consumption as a whole.
3. Environmental monitoring
Environmental monitoring applications of the IoT typically use sensors to assist in environmental protection by monitoring air or water quality, atmospheric or soil conditions, and can even include areas like monitoring the movements of wildlife and their habitats.
4. Living Lab
Another example of integrating the IoT is the Living Lab, which integrates and combines research and innovation processes, established within a public-private-people partnership.

MILITARY APPLICATIONS
The Internet of Military Things (IoMT) is the application of IoT technologies in the military domain for the purposes of reconnaissance, surveillance, and other combat-related objectives. It is heavily influenced by the future prospects of warfare in an urban environment and involves the use of sensors, munitions, vehicles, robots, human-wearable biometrics, and other smart technology that is relevant on the battlefield.
1. Internet of Battlefield Things
The Internet of Battlefield Things (IoBT) is a project initiated and executed by the U.S. Army Research Laboratory (ARL) that focuses on the basic science related to the IoT that enhances the capabilities of Army soldiers. In 2017, ARL launched the Internet of Battlefield Things Collaborative Research Alliance (IoBT-CRA), establishing a working collaboration between industry, university, and Army researchers to advance the theoretical foundations of IoT technologies and their applications to Army operations.
2. Ocean of Things
The Ocean of Things project is a DARPA-led program designed to establish an Internet of Things across large ocean areas for the purposes of collecting, monitoring, and analyzing environmental and vessel activity data. The project entails the deployment of about 50,000 floats that house a passive sensor suite that autonomously detects and tracks military and commercial vessels as part of a cloud-based network.

PRODUCT DIGITISATION
There are several applications of smart or active packaging in which a QR code or NFC tag is affixed to a product or its packaging. The tag itself is passive; however, it contains a unique identifier (typically a URL) which enables a user to access digital content about the product via a smartphone. Strictly speaking, such passive items are not part of the Internet of Things, but they can be seen as enablers of digital interactions. The term "Internet of Packaging" has been coined to describe applications in which unique identifiers are used to automate supply chains and are scanned on a large scale by consumers to access digital content.
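As a small illustration of the product digitisation idea above, the sketch below generates a QR code that encodes a product URL, which a smartphone camera could then scan to open digital content about the product. It is illustrative only: the URL is made up, and it assumes the third-party qrcode package (which uses Pillow) is installed.

    import qrcode

    # Hypothetical product page; in practice each item would carry a unique identifier
    product_url = "https://example.com/products/12345"

    img = qrcode.make(product_url)    # build the QR code image
    img.save("product_12345_qr.png")  # this image could be printed on the packaging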
Challenges Faced by Internet of Things (IoT)
At present, IoT is faced with many challenges, such as:
1. Scalability
2. Security
3. Technical Requirements
4. Technological Standardization
5. Software Complexity

Solutions to the Challenges
Several solutions have been proposed to overcome these problems. Some are:
Overcoming compatibility issues is a significant IoT hurdle, but emerging companies are starting to enable increased interoperability through open-source development.
Governments and industry bodies need to set standards and regulations for the various industries to ensure that data is not misused.
IoT needs strong authentication methods, encrypted data and a platform that can track irregularities on a network.

APPLICATION
1. How is the Internet of Things (IoT) being applied in Education?
2. What are the possible drawbacks of the Internet of Things (IoT) in Education?

FEEDBACK
Write your answers in the space provided.
1. What does the term "Internet of Things" mean?
a. All of the things in your home are internet-enabled
b. Traditional internet-enabled devices we use to connect
c. The list of different things you can find on the internet
d. Everyday objects with internet communication capabilities
___________________________________
2. Who or what do IoT devices talk or connect with?
a. Us and other devices
b. Themselves only
c. Nothing
d. Our employers
___________________________________
3. It is the process of inserting smart tags in the products you already sell, so that when someone taps on that product with their smartphone they spark a digital interaction on their device.
___________________________________
4. It is the application of IoT technologies in the military domain for the purposes of reconnaissance, surveillance, and other combat-related objectives.
___________________________________
5. It is an application of the IoT for medical and health-related purposes, data collection and analysis for research, and monitoring.
___________________________________
6. Which of these would be considered an IoT device?
a. A laptop computer
b. A smartphone with applications
c. A wifi-enabled thermostat
d. A mobile device like a tablet
___________________________________
7. It is one in which the various electric and electronic appliances are wired up to a central computer control system so they can be switched on and off at certain times or when certain events happen.
___________________________________
8. Enumerate the components of the Internet of Things (IoT).
1. _______________________________
2. _______________________________
3. _______________________________
9. Enumerate the challenges that the Internet of Things (IoT) faces.
1. _______________________________
2. _______________________________
3. _______________________________
4. _______________________________
5. _______________________________

TOPIC 3.5 MOBILE APPLICATION

LEARNING OBJECTIVES
At the end of the session, you will be able to:
1. Discuss Mobile Applications;
2. Explain the importance of Mobile Applications;
3. Manifest appreciation of the applications of Mobile Applications.

ACTIVATING PRIOR LEARNING
From the image below, define Mobile Application.

PRESENTATION OF CONTENT
What is a Mobile Application?
A mobile application, most commonly referred to as an app, is a type of application software designed to run on a mobile device, such as a smartphone or tablet computer. Mobile applications frequently serve to provide users with similar services to those accessed on PCs. Apps are generally small, individual software units with limited function. This use of app software was originally popularized by Apple Inc. and its App Store, which offers thousands of applications for the iPhone, iPad and iPod Touch.
A mobile application may also be known as an app, web app, online app, iPhone app or smartphone app.

Mobile Application Components
There are four different types of app components (as defined for the Android platform):
1. Activities
An activity is the entry point for interacting with the user. It represents a single screen with a user interface.
2. Services
A service is a general-purpose entry point for keeping an app running in the background for all kinds of reasons. It is a component that runs in the background to perform long-running operations or to perform work for remote processes. A service does not provide a user interface.
3. Broadcast receivers
A broadcast receiver is a component that enables the system to deliver events to the app outside of a regular user flow, allowing the app to respond to system-wide broadcast announcements. Because broadcast receivers are another well-defined entry into the app, the system can deliver broadcasts even to apps that aren't currently running.
4. Content providers
A content provider manages a shared set of app data that you can store in the file system, in a SQLite database, on the web, or in any other persistent storage location that your app can access. Through the content provider, other apps can query or modify the data if the content provider allows it.

What is Mobile App Architecture?
Application architecture is a set of technologies and models for the development of fully structured mobile programs based on industry and vendor-specific standards. As you develop the architecture of your app, you also consider programs that work on wireless devices such as smartphones and tablets. Mobile app architecture design usually consists of multiple layers, including:
Presentation Layer - contains UI components as well as the components processing them.
Business Layer - composed of workflows, business entities and components.
Data Layer - comprises data utilities, data access components and service agents.
(A small illustrative sketch of these three layers appears after the descriptions of the app types below.)

Types of Mobile Apps
There are three main types of mobile apps: native apps, web-based mobile apps and hybrid apps.
1. Native Mobile Apps
Native apps are developed for a certain mobile device operating system, like Windows Phone or Android. Therefore, they are native to a certain device or platform. Apps built for Android, Windows Phone, BlackBerry or Symbian cannot be used on any platform except their own. Therefore, a mobile app designed for Android can only be used on an Android device.
Advantages: good user experience; high performance; puts no limits on app usage; accessible from app stores.
Disadvantage: higher costs in comparison to other types of mobile apps.
2. Hybrid Mobile Apps
Hybrid mobile apps are built using multi-platform web technologies like JavaScript and HTML5. Hybrid apps are web applications packaged in a native wrapper, which means they use elements of both native and web-based apps.
Advantage: easy to develop, since a single code base keeps maintenance costs low.
Disadvantages: lower speed, performance and overall optimization; inability to look the same way on different platforms.
3. Web-Based Apps
Web-based applications behave in a very similar fashion to native mobile apps. Web apps run in a browser and are commonly written in CSS, JavaScript or HTML5. Web apps redirect users to a URL and offer an install option by creating a bookmark in the browser.
Advantages: require a minimum of device memory; users can access web apps from any device that is connected to the Internet.
Disadvantages: using web applications over a poor internet connection commonly results in a very bad user experience; access to only a limited set of device APIs (geolocation and several others); the performance of web-based apps is inextricably linked to the network connection and the browser.
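To tie the presentation, business and data layers described above to something concrete, here is a small sketch (written in Python purely for illustration; the class and function names are invented and do not belong to any real mobile framework). Each layer only talks to the layer directly below it.

    # Data layer: data access component (here, just an in-memory store)
    class NoteStore:
        def __init__(self):
            self._notes = []

        def save(self, text):
            self._notes.append(text)

        def all(self):
            return list(self._notes)

    # Business layer: workflow and business rules
    class NoteService:
        def __init__(self, store):
            self.store = store

        def add_note(self, text):
            text = text.strip()
            if not text:
                raise ValueError("A note cannot be empty")  # business rule
            self.store.save(text)

    # Presentation layer: what a UI event handler would call
    def on_save_button_tapped(service, user_input):
        service.add_note(user_input)
        return "Saved!"  # message shown back to the user

    service = NoteService(NoteStore())
    print(on_save_button_tapped(service, "Buy milk"))

Keeping the layers separate in this way is what allows the same business and data code to be reused behind different user interfaces.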
Importance of Mobile Apps
Today, one of the greatest developments in technology is the invention of mobile applications. If you are a smartphone user, you must be familiar with mobile apps, and you must have different kinds of apps on your phone. One does not need any kind of professional training to use an app. Once you start using an app, you will automatically learn how to use it.
1. Social Media Sites
The youth of the 21st century is very attached and glued to social networking sites. It is a kind of emotion for them. They cannot even spend a day without social media platforms. Social media platforms are a way to share pictures and videos. They are a great platform to share opinions and conversations; you can make a video call as well.
2. Ordering Food Online
If you're too lazy to go out and have some delicious food, then online food apps have got your back. In this modern world, where you have access to almost everything at the tip of your fingers, you can avail of this facility as well.
3. Taxi Services
Now, you don't need to go out and search for a taxi in the scorching heat, because you've got the facility of online taxi services. Just book your taxi online; they will pick you up from your place and take you to your destination.
4. Booking Tickets
With the help of apps, you can book tickets for buses, trains, and airplanes as well. You don't need to stand and wait in long queues for your tickets to be booked. So basically, you just have to assign your fingers some work and you can relax at home.
5. Entertainment
Everyone wants entertainment, right? Well, just like everything else, you can avail of this facility on your mobile phone as well. If you want to watch a movie, apps are there to provide you that. If you want to watch an online series, apps are there to help too. You just need to download the particular app of your choice and you are ready to bring some rock and roll into your boring life.
Reference: https://www.shapemyapp.com/blogs/what-is-the-importance-of-mobile-apps-in-daily-life

Benefits of Mobile Apps
1. Convenience
Convenience is one of the most critical aspects of a mobile app. The app is there to make things easier for the customer, not harder.
2. Interactivity
Mobile apps have their own interfaces that give users a two-way, immersive experience.
3. Personalization
Users love content highly tailored to their preferences. It is like offering them tailored communication in the language they speak and understand. User-centric personalization is critical in making their experience delightful.
4. Productivity
Mobile apps increase employee engagement. Employees become more engaged when they're able to use modern tools that make their jobs easier and allow them to accomplish more.
5. Speed
Mobile apps provide a much faster alternative to mobile web browsing. Web browsing requires a user to launch a web browser, enter a URL and wait for the site to load, whereas it only takes a second to launch a mobile app because the majority of the information is stored in the application itself, making it possible to function offline.

Factors to Consider in Building Mobile Apps
1. Multiple Platforms and Devices
Traditional desktop and laptop PCs are typically Windows-based, with a standard screen size, features and form factor. The mobile landscape is much more fragmented, with four main platforms (Android, iOS, Windows Phone and BlackBerry) that are continually evolving.
2. Screen Size
Applications designed for a desktop or laptop client work with a screen size that far exceeds that of mobile devices. Designing for a device that fits in your pocket requires simplification and a rethink of navigation.
3. User Interaction
Instead of a mouse and keyboard, there is a quite different mode of user input: touch. Even a single touch can involve a variety of interactions, including single-tap, double-tap, long touch, move and fling. All these actions have to be captured.
4. Screen Density
Devices available from different manufacturers vary from 120 dpi for the lower-end HTC Tattoo/Wildfire to 240 dpi for the higher-end Droid series, a difference of 100% in screen density. This means that using hardcoded pixel values and a single set of images will lead to one of two things on a higher-end phone: either your UI will be up-scaled and fuzzy, or the controls will be too small to allow comfortable targeting with a finger.
5. Integration with Phone Functions
Smartphones are sophisticated communication devices. Making phone calls is their most basic function. While mobile platforms place many limitations on design and content, they also open up new opportunities that traditional desktops cannot provide.
6. Limited CPU/Memory/Battery Resources
Mobile devices lack the computing power and memory capacity of most desktop and server systems. Developers need to write algorithms and perform code optimization to work within the capacity of the mobile device.

APPLICATION
What is the impact/importance of mobile applications in our daily life?

FEEDBACK
Answer the following. Write your answers in the space provided.
1. An operating system (OS) built exclusively for mobile devices such as smartphones, tablets, PDAs, etc. It is similar to a standard OS but is relatively simple and light.
________________________________
2. An open-source OS created by Google, which is one of the most commonly installed OSs on mobile devices.
________________________________
For items 3-7, identify which benefit of Mobile Applications is being defined.
3. A mobile app user can access and share information anytime or anywhere. An Internet connection is not required for most apps.
______________________________
4. Modern user input goes beyond clicking and typing. Mobile apps are touch-based, allowing users to control the interface through touch gestures and drag-and-drop actions.
______________________________
5. A user can change the settings of the mobile app based on his/her preferences.
______________________________
6. The need to wait for information to load over a slow Internet connection is eliminated since information is stored within the mobile application.
______________________________
7. Users can write, read, and present their reports using only their mobile phones. They can also manage their multimedia files and share them with friends through social sites.
______________________________
8. A type of application development that creates applications for a specific platform or device. It interacts with and/or takes advantage of features and resources that are generally available on its parent platform.
______________________________
9. These are specifically built using different multi-platform web technologies like JavaScript and HTML5.
______________________________
10. Which of the following is a factor to consider in designing a Mobile Application?
a. Platforms and Device Compatibility
b. Screen Size
c. User Interaction
d. All of the choices
______________________________

TOPIC 3.6 ARTIFICIAL INTELLIGENCE

LEARNING OBJECTIVES
At the end of the session, you will be able to:
1. Discuss Artificial Intelligence;
2. Explain the importance of Artificial Intelligence;
3. Manifest appreciation of the applications of Artificial Intelligence.

ACTIVATING PRIOR LEARNING
1. What is Artificial Intelligence (AI)?
2. Cite an application of Artificial Intelligence (AI).

PRESENTATION OF CONTENT
What is Artificial Intelligence (AI)?
Artificial intelligence (AI) is the ability of a computer program or a machine to think and learn. It is also a field of study which tries to make computers "smart". As machines become increasingly capable, mental faculties once thought to require intelligence are removed from the definition. Artificial intelligence (AI) also refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind, such as learning and problem-solving. The ideal characteristic of artificial intelligence is its ability to rationalize and take actions that have the best chance of achieving a specific goal.

Categorization of Artificial Intelligence
Artificial intelligence can be divided into two different categories:
1. Weak artificial intelligence embodies a system designed to carry out one particular job. Weak AI systems include video games such as chess programs and personal assistants such as Amazon's Alexa and Apple's Siri. You ask the assistant a question, and it answers it for you.
2. Strong artificial intelligence systems are systems that carry out tasks considered to be human-like. These tend to be more complex and complicated systems. They are programmed to handle situations in which they may be required to solve problems without having a person intervene. These kinds of systems can be found in applications like self-driving cars or hospital operating rooms.

Advantages of Artificial Intelligence
Artificial intelligence is one of the emerging technologies which tries to simulate human reasoning in AI systems. John McCarthy coined the term Artificial Intelligence in the mid-1950s.
Advantages:
1. Reduction in Human Error: The phrase "human error" was born because humans make mistakes from time to time. Computers, however, do not make these mistakes if they are programmed properly. With artificial intelligence, decisions are made from previously gathered information by applying a certain set of algorithms. So errors are reduced, and reaching a greater degree of accuracy and precision becomes a possibility. Example: AI-assisted weather forecasting has removed much of the human error involved.
2. Takes Risks Instead of Humans: This is one of the biggest advantages of artificial intelligence. We can overcome many of the risky limitations of humans by developing an AI robot which in turn can do the risky things for us. Whether it is going to Mars, defusing a bomb, exploring the deepest parts of the ocean, or mining for coal and oil, AI robots can be used effectively in any kind of natural or man-made disaster. Example: Have you heard about the Chernobyl nuclear power plant explosion in Ukraine?
At that time there were no AI-powered robots that could help minimize the effect of the radiation by controlling the fire in its early stages, as any human who went close to the core was dead in a matter of minutes. They eventually poured sand and boron from helicopters at a distance. AI robots can be used in such situations, where human intervention can be hazardous.
3. Available 24x7: An average human will work for 4–6 hours a day, excluding breaks. Humans are built in such a way that they need time to refresh themselves and get ready for a new day of work, and they even have weekly days off to keep their work life and personal life intact. But using AI we can make machines work 24x7 without any breaks, and they don't even get bored, unlike humans. Example: Educational institutes and helpline centers receive many queries and issues which can be handled effectively using AI.
4. Helping in Repetitive Jobs: In our day-to-day work, we perform many repetitive tasks, like sending a thank-you email, verifying certain documents for errors and many more things. Using artificial intelligence we can productively automate these mundane tasks and can even remove "boring" tasks for humans, freeing them up to be increasingly creative. Example: In banks, we often see many document verifications required to get a loan, which is a repetitive task for the bank. Using AI cognitive automation, the bank can speed up the process of verifying the documents, benefiting both the customers and the bank.
5. Digital Assistance: Some highly advanced organizations use digital assistants to interact with users, which saves on human resources. Digital assistants are also used on many websites to provide what users want. We can chat with them about what we are looking for. Some chatbots are designed in such a way that it becomes hard to determine whether we are chatting with a chatbot or a human being. Example: We all know that organizations have a customer support team that needs to clarify the doubts and queries of their customers. Using AI, organizations can set up a voice bot or chatbot which can help customers with all their queries. We can see that many organizations have already started using them on their websites and mobile applications.
6. Faster Decisions: Using AI alongside other technologies, we can make machines take decisions and carry out actions faster than a human. While making a decision, a human will analyze many factors both emotionally and practically, but an AI-powered machine works on what it is programmed to do and delivers results faster. Example: We have all played chess games on Windows. It is nearly impossible to beat the CPU in hard mode because of the AI behind the game. It will take the best possible step in a very short time according to the algorithms used behind it.
7. Daily Applications: Daily applications such as Apple's Siri, Windows' Cortana and Google's OK Google are frequently used in our daily routine, whether for searching for a location, taking a selfie, making a phone call, replying to an email and much more. Example: Around 20 years ago, when we were planning to go somewhere, we would ask a person who had already been there for directions. But now all we have to do is say "OK Google, where is Visakhapatnam?" It will show you Visakhapatnam's location on Google Maps and the best route between you and Visakhapatnam.
8. New Inventions: AI is powering many inventions in almost every domain, which will help humans solve the majority of complex problems. Example: Doctors can now predict breast cancer in women at earlier stages using advanced AI-based technologies.

Disadvantages:
1. High Costs of Creation: As AI is updated every day, the hardware and software need to be updated with time to meet the latest requirements. Machines need repair and maintenance, which involve plenty of cost. Their creation requires huge costs, as they are very complex machines.
2. Making Humans Lazy: AI is making humans lazy with its applications automating the majority of the work. Humans tend to get addicted to these inventions, which can cause a problem for future generations.
3. Unemployment: As AI is replacing the majority of repetitive tasks and other work with robots, human involvement is decreasing, which will cause a major problem for employment. Every organization is looking to replace minimally qualified individuals with AI robots which can do similar work with more efficiency.
4. No Emotions: There is no doubt that machines are much better when it comes to working efficiently, but they cannot replace the human connection that makes a team. Machines cannot develop a bond with humans, which is an essential attribute when it comes to team management.
5. Lacking Out-of-the-Box Thinking: Machines can perform only those tasks which they are designed or programmed to do; anything outside of that and they tend to crash or give irrelevant outputs, which can be a major drawback.

Applications of Artificial Intelligence
1. Healthcare
AI in healthcare is often used for classification, whether to automate the initial evaluation of a CT scan or EKG or to identify high-risk patients for population health. The breadth of applications is rapidly increasing.
2. Automotive
Advancements in AI have contributed to the growth of the automotive industry through the creation and evolution of self-driving vehicles.
3. Finance and economics
Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation.
4. Government
Artificial intelligence in government consists of applications and regulation. Artificial intelligence paired with facial recognition systems may be used for mass surveillance.
5. Law-related professions
Artificial intelligence (AI) is becoming a mainstay component of law-related professions. In some circumstances, this analytics-crunching technology is using algorithms and machine learning to do work that was previously done by entry-level lawyers.
6. Video games
In video games, artificial intelligence is routinely used to generate dynamic, purposeful behavior in non-player characters (NPCs).
7. Military
The United States and other nations are developing AI applications for a range of military functions. The main military applications of Artificial Intelligence and Machine Learning are to enhance C2, communications, sensors, integration and interoperability.
8. Hospitality
In the hospitality industry, Artificial Intelligence-based solutions are used to reduce staff load and increase efficiency by cutting the frequency of repetitive tasks, analyzing trends, handling guest interaction, and predicting customer needs.
9. Audit
For financial statement audits, AI makes continuous auditing possible. AI tools can analyze many sets of different information immediately.
10. Advertising
It is possible to use AI to predict or generalize the behavior of customers from their digital footprints in order to target them with personalized promotions or build customer personas automatically.
11. Art
Artificial Intelligence has inspired numerous creative applications, including its use to produce visual art.
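The "weak AI" systems and the fraud-detection use case above mostly come down to training a model for one narrow job. The sketch below shows the idea with a simple classifier that flags unusual transactions for human review; it is illustrative only, the data and features are made up, and it assumes the scikit-learn library is installed.

    from sklearn.neighbors import KNeighborsClassifier

    # Made-up training data: [amount_in_dollars, hour_of_day]
    X_train = [[20, 10], [35, 14], [50, 16], [40, 12], [900, 3], [1200, 2]]
    y_train = [0, 0, 0, 0, 1, 1]  # 0 = looks normal, 1 = flag for human review

    model = KNeighborsClassifier(n_neighbors=3)
    model.fit(X_train, y_train)

    # A new transaction: a large amount at an unusual hour is likely to be flagged
    print(model.predict([[1000, 4]]))  # expected output: [1]

Everything the model "knows" comes from the examples it was trained on, which is also why such a system cannot step outside its one narrow task.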
APPLICATION
How can Artificial Intelligence help the following sectors?
a. Education
b. Agriculture

FEEDBACK
Write a 200-word essay on Artificial Intelligence.

TOPIC 3.7 DATA SCIENCE

LEARNING OBJECTIVES
At the end of the session, you will be able to:
1. Discuss Data Science;
2. Reflect on the importance and advantages of Data Science;
3. Manifest appreciation of the applications of Data Science.

ACTIVATING PRIOR LEARNING
From the image below, define data science.

PRESENTATION OF CONTENT
What is Data Science?
In today's world, data is being generated at an alarming rate. Every day, lots of data is generated by the users of Facebook or any other social media site, from the calls that one makes, or from the data generated by different organizations as well. Hector Garcia-Molina has remarked that "Data Science" is a new term only in the same sense that the continent Columbus reached was a "new" continent: the territory was already there.

Data Science is a multi-disciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from structured and unstructured data. It is a "concept to unify statistics, data analysis, machine learning and their related methods" in order to "understand and analyze actual phenomena" with data. It also employs techniques and theories drawn from many fields within the context of mathematics, statistics, computer science, and information science.

Fourth Paradigm of Science
Thousands of years ago – Empirical science
The last few hundred years – Theoretical science
The last fifty years – Computational science ("query the world")
The last twenty years – eScience (Data Science)

Importance of Data Science
1. Data science helps brands to understand their customers in a much enhanced and empowered manner.
2. It allows brands to communicate their story in an engaging and powerful manner.
3. Big Data is a new field that is constantly growing and evolving.
4. Its findings and results can be applied to almost any sector, like travel, healthcare and education, among others.
5. Data science is accessible to almost all sectors.

Advantages and Disadvantages of Data Science
The field of Data Science is massive and has its own fair share of advantages and limitations.
Advantages:
1. It's in Demand - Data Science is greatly in demand. Prospective job seekers have numerous opportunities.
2. Abundance of Positions - Data Science is a vastly abundant field and has a lot of opportunities.
3. A Highly Paid Career - Data Science is one of the most highly paid jobs.
4. Data Science is Versatile - There are numerous applications of Data Science. It is widely used in the health-care, banking, consultancy services, and e-commerce industries.
5. Data Science Makes Data Better - Data Science deals with enriching data and making it better.
6. Data Scientists are Highly Prestigious - Companies rely on Data Scientists and use their expertise to provide better results to their clients. This gives Data Scientists an important position in the company.
7. No More Boring Tasks - Data Science has helped various industries to automate redundant tasks. Companies are using historical data to train machines to perform repetitive tasks.
8. Data Science Makes Products Smarter - Data Science involves the usage of Machine Learning, which has enabled industries to create better products tailored specifically for customer experiences.
9. Data Science Can Save Lives - Many health-care industries are using Data Science to help their clients.
10. Data Science Can Make You a Better Person - Data Science will not only give you a great career but will also help you in personal growth.

Disadvantages:
While Data Science is a very lucrative career option, there are also various disadvantages to this field.
1. Data Science is a Blurry Term - Data Science is a very general term and does not have a definite definition.
2. Mastering Data Science is Nearly Impossible - While many online courses have been trying to fill the skill gap that the data science industry is facing, it is still not possible to be proficient at everything, considering the immensity of the field.
3. Large Amount of Domain Knowledge Required - Even a person with a considerable background in statistics and computer science will find it difficult to solve a data science problem without knowledge of the problem domain.
4. Arbitrary Data May Yield Unexpected Results - A Data Scientist analyzes the data and makes careful predictions in order to facilitate the decision-making process. Many times, the data provided is arbitrary and does not yield the expected results.
5. Problem of Data Privacy - The personal data of clients is visible to the parent company and may at times cause data leaks due to a lapse in security.

Applications of Data Science
Data science helps in many applications, such as:
1. Health-care
In the health-care industry, data science is making great leaps. The various areas of health-care making use of data science are:
Medical Image Analysis
Genetics and Genomics
Drug Discovery
Predictive Modeling for Diagnosis
Health bots or virtual assistants
2. Banking
Banking is one of the biggest applications of Data Science. Big Data and Data Science have enabled banks to keep up with the competition. With Data Science, banks can manage their resources efficiently; furthermore, banks can make smarter decisions through fraud detection, management of customer data, risk modeling, real-time predictive analytics, customer segmentation, etc.
3. E-commerce
E-commerce and retail industries have benefitted hugely from data science. Some of the ways in which data science has transformed the e-commerce industries are:
Data science is being heavily utilized for identifying a potential customer base.
Predictive analytics is used for forecasting goods and services.
Data Science is also used for identifying styles of popular products and predicting their trends.
With data science, companies are optimizing their pricing structures for their consumers.
4. Finance
Data Science has played a key role in automating various financial tasks. Just as banks have automated risk analytics, finance industries have also used data science for this task.
5. Manufacturing
In the 21st century, Data Scientists are the new factory workers. That means that data scientists have acquired a key position in the manufacturing industries. Data Science is being extensively used in manufacturing industries for optimizing production, reducing costs and boosting profits.
6. Transport
In the transportation sector, Data Science is actively making its mark in creating safer driving environments for drivers. It is also playing a key role in optimizing vehicle performance and adding greater autonomy to driving.
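Much of the day-to-day work behind the applications above starts with simple descriptive analysis of tabular data. The sketch below, which assumes the pandas library is installed and uses made-up sales figures, shows the kind of question-and-answer loop involved.

    import pandas as pd

    # Made-up sales records (illustrative only)
    sales = pd.DataFrame({
        "region":  ["North", "North", "South", "South", "East"],
        "product": ["A", "B", "A", "B", "A"],
        "revenue": [1200, 800, 950, 1100, 700],
    })

    # Which region generates the most revenue on average?
    print(sales.groupby("region")["revenue"].mean().sort_values(ascending=False))

    # A simple "insight" a brand might act on: the top-selling product overall
    print(sales.groupby("product")["revenue"].sum().idxmax())

In practice the same steps (load, group, summarize) scale up to the customer, risk, and production data described in the banking, e-commerce, and manufacturing examples above.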
APPLICATION
How would Data Science change the way we learn?

FEEDBACK
Write a 200-word essay on Data Science.