Sunday, November 24, 2013

Artificial Intelligence: The many ways it will affect our lives

Robotics is a field that has boomed since the start of the 21st century. Modern robots can mimic many human actions: walking over uneven surfaces, running, climbing stairs, dancing and so on. The next step researchers are planning to achieve is to make robots think like humans. Artificial Intelligence (AI) is the field that is trying to make machines think analytically. If AI gives robots such a capability, it will have a profound impact on our lives. Let's look at a few examples:
Driverless cars: Most trains and airplanes today are already controlled largely by computers. If trains can be driverless, then why not cars? Driverless cars can make our journeys safer and make the correct decision in an emergency, as a computer's reaction time is much faster than a human's. They would also help older people and physically challenged people tremendously in their commute.
Financial implications: AI software can study patterns in the stock market and help investors tremendously. It will also be able to spot changes in spending or credit card use and detect fraud with ease.
Medicine: Intelligent devices today can already differentiate between life-saving medications and stale ones. If robots are made to think like humans, they could act as assistants to doctors, not only passing the correct tools but also keeping track of each doctor's preferences. Maybe one day in the future, machines will be capable of performing life-saving operations.
Transhumanism: This is one of the most extreme applications of AI to human life. It is a cultural and intellectual movement that believes we can use advanced technologies to improve human life. Some of its most important goals include eliminating disabilities and diseases, and even extending life. Though it sounds next to impossible, a time may come when human life expectancy increases to 150 years.

Sunday, November 17, 2013

History of Computer Science: Von Neumann Architecture

Computers developed in the earlier era had fixed programs. Such devices are not completely obsolete; people still use them for their simplicity and for training students. Take a simple calculator as an example: all it does is basic mathematical operations. Can it do text processing? Absolutely not! How would you feel if you had to reprogram your device every time your requirements changed, or use different devices for different purposes? Tedious, right? This led to the invention of the stored-program computer.

The von Neumann model, described in 1945, is the basis of the stored-program computer, in which data and instructions reside in the same electronic memory. This distinguishes it from the Harvard model, which stores data and instructions in separate memories. On a large scale, this treatment of instructions as data is what makes compilers, assemblers and other automated programming tools possible. (ENIAC, discussed in an earlier post, was not originally a stored-program machine; it had to be physically rewired for each new problem.) Though the von Neumann model was developed as early as World War II, it remains one of the most popular architectures even today. It has three main components: a memory that stores both instructions and data; a central processing unit, made up of a control unit and an arithmetic and logic unit (ALU), which moves data and instructions in and out of memory and executes them; and a bus through which data flows between the other components.
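
To make the idea concrete, here is a minimal sketch in Python (using an invented toy instruction set, not any historical one) of a von Neumann machine: a single memory list holds both the program and its data, and a fetch-decode-execute loop does the rest.

    def run(memory):
        acc = 0   # accumulator register (a one-register ALU)
        pc = 0    # program counter (the control unit's state)
        while True:
            op, arg = memory[pc]   # fetch the next instruction from the same
            pc += 1                # memory that also holds the data below
            if op == "LOAD":       # decode and execute
                acc = memory[arg]
            elif op == "ADD":
                acc += memory[arg]
            elif op == "STORE":
                memory[arg] = acc
            elif op == "HALT":
                return memory

    # Addresses 0-3 hold the program; addresses 4-6 hold the data.
    memory = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", None), 2, 3, 0]
    print(run(memory)[6])   # prints 5: the machine computed 2 + 3

Because the program lives in ordinary memory, another program could read or even rewrite it, which is exactly what makes compilers possible, and also what makes the second drawback below possible.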

The von Neumann model has its drawbacks; a few of them are:
  • It performs inefficiently under modern pipelined architectures, since instructions and data compete for the same memory pathway (the von Neumann bottleneck).
  • One program may overwrite another program's instructions or data, including the operating system's, and even crash the system.

Program modification, by design or by accident, can thus be a serious hazard in this architecture, and the shared memory pathway is a serious bottleneck. However, most of these problems can be alleviated by branch prediction logic and by a hybrid (modified Harvard) design that uses separate caches for instructions and data. This architecture is one of the major milestones in computer science history, and its simplicity has maintained its popularity all along.

History of Computer Science: Electronic Numerical Integrator and Computer

The first milestone in the history of computer science was the invention of the abacus about 2,000 years ago; moving its beads accomplished simple mathematical operations. Blaise Pascal is usually credited with building the first digital calculator, in 1642; it performed additions to help his father, who was a tax collector. The world's first commercially successful calculator that could add, subtract, multiply and divide was built by Charles Xavier Thomas almost two centuries later. Around the same time, Charles Babbage proposed the first general mechanical computer, called the Analytical Engine. It contained an arithmetic logic unit (ALU), basic flow control and integrated memory. Thus began the evolution of computer science. Computers in those days looked very different from those we have today. Let's take a look at one of them.

In 1945, the University of Pennsylvania came up with the first general-purpose electronic computer, called the Electronic Numerical Integrator and Computer (ENIAC). Can you imagine a computer occupying more than 1,000 square feet? Yes, ENIAC was that big, with several fans to prevent the device from overheating. It was programmed by setting switches and plugging cables, with punched cards used for input and output. Though ENIAC could perform about 5,000 additions per second, its thousands of vacuum tubes failed frequently, and it consumed 150 kW of power, leading to a rumour that the lights dimmed in Philadelphia whenever it was switched on. It could also perform 385 multiplications, forty divisions or three square-root operations per second.

Were engineers crazy to construct such huge computers? In fact, they were extremely smart; it was their thoughts and ideas that shaped computer science as we know it today. Gradual progress in hardware and software was then accelerated not just by demand but by innate human curiosity. Being a computer scientist myself, and a part of this evolution, I can't wait to contribute and advance this science even further!


Sunday, November 10, 2013

File Sharing: Cloud-based file sharing

We regularly back up our photos, music and data. Once backed up, it is incredibly easy and convenient to access these files remotely from our laptops, tablets and smartphones. Well-known products that offer such services include Google Drive, Microsoft SkyDrive, Dropbox, Box, SugarSync and many others. Almost all of them offer file storage free of cost up to a certain storage limit. When individuals rely so much on these services, it is natural for corporate IT departments to bring them into the workplace. This, of course, has its pros and cons.

Let us look at the pros first. Sharing files with multiple people now takes a click of a button; it especially reduces the hassle of sharing image-heavy presentations and videos. It cuts the costs of setting up a Virtual Private Network (VPN) and of managing file storage servers in an internal data centre. It also helps with business continuity: a coffee spill on a laptop, or any natural catastrophe, no longer causes loss of data. Since a shared file has a single copy in the cloud, updating is easy and everyone is guaranteed to see the latest version. More importantly, companies need not train their employees to use these services, as they already use the same ones in their personal lives.

However, managing files outside the internal data centre adds complexity. Security is the greatest concern: a malicious hacker can gain access to the entire database at once, and a disgruntled employee can copy all the data into a personal cloud or corrupt the central database. If companies rely on a third-party cloud, is their data really secure? The third party has complete access to all of it. Downtime of the cloud due to a virus, severe weather or power outages is another serious concern.

Fortunately, it is possible to work around these negatives with well-drafted and enforced policies. First, rely on cloud providers with security certifications. Put access restrictions in place, so that not everyone has access to every file. Create a culture in the company that instils a sense of responsibility in every employee for confidential corporate data.

Finally, cloud file sharing is a boon whose advantages far outweigh its disadvantages, and the disadvantages can be overcome with proper workarounds. The onus is on every one of us.


Monday, November 4, 2013

Data Structures: AVL trees

We know that a data structure in computer science refers to the way we organize and store data so that it can be used efficiently. You must have heard of stacks, queues, linked lists and trees; all of these are data structures, each with its own advantages, that handle the basic operations of insertion, deletion and search at varying degrees of complexity. A tree is an abstract data structure, and many kinds of tree data structures exist: to name a few, we have binary search trees (BSTs), red-black trees, B-trees, B+ trees, AVL trees and so on.

What are AVL trees?
Also known as self-balancing binary search trees, AVL trees were the first data structure of this kind to be invented. The BST is the simplest tree data structure; however, some insertion orders can make the tree unbalanced (in the worst case it degenerates into a linked list), which increases the complexity of the basic operations. A balanced tree is one in which, at every node, the difference between the heights of the left and right sub-trees is not greater than one.

AVL trees follow the same ordering rules as the binary search tree, so they are simple too. However, AVL trees require some additional operations, called tree rotations, to keep them balanced. The acceptable height difference between the left and right sub-trees at any node of an AVL tree is -1, 0 or 1. Hence, after every insertion and deletion we may need to perform rotations to keep the tree balanced, as the sketch below shows for insertion.
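
Here is a minimal sketch of AVL insertion in Python; the names (Node, insert, rotate_left, rotate_right) are illustrative choices, not from any particular library. Each insertion walks back up the tree, updates heights, and applies one of the four standard rotations whenever a node's balance factor leaves {-1, 0, 1}.

    class Node:
        def __init__(self, key):
            self.key = key
            self.left = None
            self.right = None
            self.height = 1  # height of the subtree rooted here (leaf = 1)

    def height(node):
        return node.height if node else 0

    def balance(node):
        return height(node.left) - height(node.right)

    def update(node):
        node.height = 1 + max(height(node.left), height(node.right))

    def rotate_right(y):
        x = y.left
        y.left, x.right = x.right, y
        update(y); update(x)
        return x  # x is the new subtree root

    def rotate_left(x):
        y = x.right
        x.right, y.left = y.left, x
        update(x); update(y)
        return y  # y is the new subtree root

    def insert(node, key):
        if node is None:
            return Node(key)
        if key < node.key:
            node.left = insert(node.left, key)
        else:
            node.right = insert(node.right, key)
        update(node)
        b = balance(node)
        if b > 1 and key < node.left.key:      # left-left case
            return rotate_right(node)
        if b > 1:                              # left-right case
            node.left = rotate_left(node.left)
            return rotate_right(node)
        if b < -1 and key >= node.right.key:   # right-right case
            return rotate_left(node)
        if b < -1:                             # right-left case
            node.right = rotate_right(node.right)
            return rotate_left(node)
        return node

    # Inserting sorted keys would turn a plain BST into a list of height 7;
    # the rotations keep this tree perfectly balanced instead.
    root = None
    for k in [1, 2, 3, 4, 5, 6, 7]:
        root = insert(root, k)
    print(root.key, root.height)  # prints "4 3": balanced, height 3

The four cases (left-left, left-right, right-right, right-left) cover every way a single insertion can unbalance a node, and each is repaired with at most two rotations.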

Why do people prefer these trees?
The time complexity of AVL insertion, deletion and search is O(log n), which makes it one of the most efficient trees for retrieving stored data.
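
To see that logarithmic height in practice, here is a quick check that reuses the insert function from the sketch above (n = 1023 is just an arbitrary test size):

    import math

    # Insert n keys in ascending order -- the worst case for a plain BST,
    # which would degenerate into a list of height n -- and confirm that the
    # AVL tree's height stays near log2(n) instead.
    root = None
    n = 1023
    for k in range(n):
        root = insert(root, k)
    print(root.height, math.ceil(math.log2(n + 1)))   # both close to 10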
