Sunday, December 15, 2013

Scientific Computing: Bioinformatics and Computational Biology

Bioinformatics is an interdisciplinary field with biology, computer science, statistics and information technology at its core.

A colossal amount of data is generated by biological experiments. When biologists need tools to analyse, store and interpret this data, bioinformatics comes to the rescue. The job of a bioinformatician entails writing algorithms, creating user interfaces, building databases and helping molecular biologists interpret the computed data.
Nussinov's Algorithm - Traceback Step
In my basic and advanced bioinformatics courses, we were taught dynamic programming algorithms, namely Nussinov's algorithm for RNA secondary structure prediction, the Viterbi algorithm for decoding hidden Markov models (HMMs) and UPGMA for phylogenetic tree construction. Though these algorithms are exact, they are very slow for large data sets. Heuristic algorithms are often used instead of dynamic programming, as they give approximate but fast solutions.
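To make the dynamic programming idea concrete, here is a minimal Python sketch of the fill step of Nussinov's algorithm; it computes only the maximum number of base pairs, and the traceback step that recovers the actual structure is omitted for brevity. The function name and the allowed-pair set (Watson-Crick plus G-U wobble) are my own choices for illustration.

```python
def nussinov(seq):
    """Fill step of Nussinov's DP: maximum number of non-crossing base pairs."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    if n == 0:
        return 0
    N = [[0] * n for _ in range(n)]
    for span in range(1, n):               # fill shorter subsequences first
        for i in range(n - span):
            j = i + span
            best = N[i + 1][j]             # case 1: seq[i] left unpaired
            for k in range(i + 1, j + 1):  # case 2: seq[i] pairs with seq[k]
                if (seq[i], seq[k]) in pairs:
                    inner = N[i + 1][k - 1] if k > i + 1 else 0
                    outer = N[k + 1][j] if k < j else 0
                    best = max(best, inner + outer + 1)
            N[i][j] = best
    return N[0][n - 1]
```

For example, `nussinov("GGGAAACCC")` returns 3, the three nested G-C pairs. The triple loop makes this O(n^3), which is exactly why heuristics take over on large data sets.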

Protein Structure
There are tons of algorithms to solve almost any problem in other fields. However, it is extremely difficult to solve a biology problem, because the living system is so complex and divergent that it becomes almost impossible to devise a perfect algorithm. For example, finding the actual representation of a protein's structure and all its functions is one of the biggest challenges faced by bioinformaticians today.


A myriad of packages is available for predicting genes (e.g. GenScan), aligning sequences (e.g. BLAST, ClustalW and Clustal Omega) and constructing phylogenetic trees, but none of them gives an accurate solution for all types of problems. Though researchers face new problems every day, they are being helped immensely by computer scientists. It will certainly take a number of years to solve the most complicated problems, but we must remain dedicated to this field, since it offers great hope against diseases like HIV/AIDS, cancer and Alzheimer's that plague society today.

References:
Dr. Sami Khuri – Bioinformatics CS123A and CS223

Sunday, December 8, 2013

Computer Graphics: Computer-generated imagery (CGI)

CGI is the application of computer graphics to create or contribute to images in art, printed media, video games, films, commercials, etc. The visual scenes may be static or dynamic, 2D or 3D. Think of it this way: CGI attempts to do with numbers what a camera does with light. Our world is bound by many physical constraints, but with CGI we can create things that are impossible in reality. One may wonder what the advantages of such a tool would be; let's go over some of them.
Medical Visualizations

In medicine, computer-generated anatomical models are used for both instructional and operational purposes. Using CGI, a three-dimensional image can be created from a series of single-slice X-rays, which helps in a speedy and accurate diagnosis. In modern medical applications, patient-specific models are constructed for "computer-assisted surgery". For example, in a total knee replacement, such a model can be used to carefully plan the surgery.

Conceptual rendering of a Planetary Resources spacecraft
preparing to capture asteroid
CGI is also used in courtrooms to help judges and juries better visualize a sequence of events, evidence or hypothesis. It is used in product advertisements, where it can produce images free of greasy fingerprints and unmarred by dust, perfect in every sense. CGI is also used in scientific visualization to present meteorological data, medical imaging, architecture and technology.

Avatar pushed CGI to a whole new level
CGI is used in movies to create an artificial reality which is very close to, and in many cases better than, actual reality. This certainly helps in cutting down expenses. For example, in the movie Titanic, a miniature model of the ship was created and digitally extrapolated to produce a real-world effect.

Thus, CGI has opened a new world full of possibilities and it is up to us to explore and reap the benefits.

References:
http://blogs.voanews.com/science-world/2012/04/25/titanic-director-backs-venture-to-mine-platinum-from-asteroids/
http://computerstories.net/a-computer-generated-imagery-cgi-history/

Sunday, December 1, 2013

Communication and Security: Transport Layer Security (TLS)

Transport Layer Security is a cryptographic protocol, based on Secure Sockets Layer (SSLv3), that secures traffic at the transport layer. It makes use of a Public Key Infrastructure (PKI) for authentication, and the encrypted session it negotiates provides confidentiality. This protocol mainly prevents packet sniffing, forgery and tampering.

Communication between client and server
Since TLS uses a PKI, it provides two types of authentication: mutual authentication and server authentication. If highly secure communication is required, mutual authentication comes in handy, even though it is computationally expensive due to public key encryption. Server authentication is the one we commonly see these days, as in HTTPS. Though this type of authentication provides mid-level security, it is preferred for normal systems, since it reduces the computational cost involved with PKI.
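As a small illustration using Python's standard ssl module, the default client-side context implements exactly this server-authentication model: the server's certificate is verified, but the client presents none unless one is explicitly loaded (the file names below are hypothetical).

```python
import ssl

# Default client context for server authentication, as in ordinary HTTPS:
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # the server's certificate chain must validate
print(ctx.check_hostname)                    # and its hostname must match the certificate

# For mutual authentication, the client would additionally load its own
# certificate and private key (hypothetical file names):
# ctx.load_cert_chain(certfile="client.crt", keyfile="client.key")
```

Both prints show True: verification of the server is mandatory by default, while the extra cost of client-side PKI is opt-in.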

What are some of the pros and cons of using TLS?
Pros:
  • It is a recommended security mechanism specified by the IETF.
  • TLS supports network address translation (NAT) traversal at the protocol layer.
  • It ensures privacy.
  • It supports user authentication which is very much preferred in e-commerce solutions such as online banking.
  • Easier porting to multiple hardware architectures since TLS is implemented at the application level and not at the kernel level.

Cons:
  • Mutual and server side authentications require PKI operations. Using PKI makes a system very complex.
  • PKI is computationally costly.
  • Only one side is authenticated in server side authentication.
  • TLS can’t guarantee security for Voice over IP RTP media streams.
  • TLS runs over TCP only, not UDP (datagram traffic needs its sibling protocol, DTLS).


When should we use it?
The main bottleneck of TLS is the cost of its public key infrastructure operations. TLS is the best choice when a system requires a highly secure authentication mechanism, even at the cost of slower sessions and additional complexity. Systems with these requirements are typically found in online banking and e-commerce.

References:

Sunday, November 24, 2013

Artificial Intelligence: Multiple ways in which it will affect our lives.

Robotics is a field that has boomed since the start of the 21st century. Modern robots can mimic many human actions, for example walking over an uneven surface, running, climbing stairs, dancing, etc. The next step researchers plan to achieve is to make robots think like humans. Artificial Intelligence is the field trying to make machines think analytically. If AI provides such a capability to robots, it will have a very big impact on humans. Let's look at a few examples:
Driverless cars: Most trains and airplanes these days are almost entirely controlled by computers. If trains can be driverless, then why not cars? Driverless cars can make our journeys safer and take the correct decision in an emergency, as the reaction time of a computer is much faster than a human's. They would also help older and physically challenged people tremendously in their commute.
Financial implications: AI software can study patterns in a stock market and can help investors tremendously. It will also be able to spot spending changes or credit card use and detect frauds with ease.
Medicine: Intelligent devices can already differentiate between life-saving medications and stale ones. If robots are made to think like humans, they could act as assistants to doctors, not only passing the correct tools but also keeping track of doctors' preferences. Maybe one day in the future, machines will be capable of performing life-saving operations.
Transhumanism: This is one of the more extreme applications of AI to human life. It is a cultural and intellectual movement that believes we can use advanced technologies to improve human life. Some of its most important goals include eliminating disabilities and diseases, and even extending life. Though it sounds next to impossible, a time may come when human life expectancy increases to 150 years.
References:

Sunday, November 17, 2013

History of Computer Science: Von Neumann Architecture

Computers developed in the earlier era had fixed programs. These computers and devices are not completely obsolete; people still use them for their simplicity and for training students. Take the example of a simple calculator: all it does is basic mathematical operations. Can it do text processing? Absolutely not! How would you feel if you had to reprogram your device every time your requirements changed, or use different devices for different purposes? Tedious, right? This led to the invention of the stored-program computer.

The von Neumann model, proposed in 1945, is the basis of the stored-program computer, in which data and instructions live in the same electronic memory. This distinguishes it from the Harvard model, which stores data and instructions in separate memories. On a large scale, this treatment of instructions as data is what makes compilers, assemblers and other automated programming tools possible. ENIAC, discussed in an earlier post, was originally not a stored-program computer; it had to be rewired for each new program, though it was later converted into a primitive stored-program machine. Though the von Neumann model was developed as early as World War II, it remains one of the most popular architectures even today. It has three main components: a memory that stores instructions and data; a control unit and arithmetic logic unit that move data and programs in and out of memory and execute the instructions; and a bus through which data flows between the other components.
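To see what "data and instructions in the same memory" means in practice, here is a toy stored-program machine in Python; the instruction set and memory layout are invented for this sketch, not taken from any real machine.

```python
def run(memory):
    """Fetch-decode-execute loop; program and data share one `memory` list."""
    acc, pc = 0, 0                 # accumulator and program counter
    while True:
        op, arg = memory[pc]       # fetch the instruction at address pc
        pc += 1
        if op == "LOAD":           # decode and execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc      # writes can hit data *or* instructions
        elif op == "HALT":
            return acc

program = [
    ("LOAD", 4),    # addresses 0-3 hold instructions...
    ("ADD", 5),
    ("STORE", 5),
    ("HALT", None),
    2,              # ...and addresses 4-5 hold data, in the same memory
    3,
]
print(run(program))   # 5
```

Because the program and its data share one address space, a STORE aimed at address 0 could just as easily overwrite an instruction, which is exactly the self-modification risk this architecture carries.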

The von Neumann model has its drawbacks; a few are:
  • It performs inefficiently under modern pipelined architectures, since instructions and data share one memory path (the "von Neumann bottleneck").
  • One program may corrupt other programs, including the operating system, and even crash the system.

Program modification, by design or by accident, can cause serious problems for this architecture. However, most of these problems can be alleviated by using branch prediction logic or a hybrid design with separate instruction and data caches. This architecture is one of the major milestones in computer science history, and its simplicity has maintained its popularity all along.
References:                                                                                        

History of Computer Science: Electronic Numerical Integrator and Computer

The first milestone in the history of computer science was the invention of the abacus about 2000 years ago. Moving its beads accomplished several simple mathematical operations. Blaise Pascal is usually credited with building the first digital calculator in 1642; it performed addition to help his father, who was a tax collector. The world's first commercially successful calculator that could add, subtract, divide and multiply was built by Charles Xavier Thomas almost two centuries later. Around the same time, Charles Babbage proposed the first general mechanical computer, called the Analytical Engine. It contained an Arithmetic Logic Unit (ALU), basic flow control and integrated memory. Thus began the evolution of computer science. Computers in those days looked very different from those we have today. Let's take a look at one of them.

In 1945, the University of Pennsylvania came up with the first general-purpose electronic computer, called the Electronic Numerical Integrator and Computer (ENIAC). Can you imagine a computer occupying more than 1000 square feet? ENIAC was that big, with several fans to prevent the device from overheating. Programmers used punch cards or tape to feed in program instructions. Though ENIAC could compute 5000 operations per second, it failed frequently, and it consumed 150 kW of power, leading to a rumour that the lights in Philadelphia dimmed whenever it was switched on. It performed 385 multiplications per second, forty divisions per second, or three square root operations per second.

ENIAC
Were engineers crazy to construct such huge computers? In fact, they were extremely smart; it was their thoughts and ideas that shaped computer science as we know it today. Gradual progress in hardware and software was accelerated not just by demand but by innate human curiosity. Being a computer scientist myself, and a part of this evolution, I can't wait to contribute and advance this science even further!

References:

Sunday, November 10, 2013

File Sharing: Cloud based file sharing

We regularly back up our photos, music and data. Once backed up, it is incredibly easy and convenient to access these files remotely from our laptops, tablets and smartphones. Well-known products offering such services include Google Drive, Microsoft SkyDrive, Dropbox, Box, SugarSync and many others. Almost all of them offer file storage free of cost up to a certain limit. When individuals rely so much on these services, it is natural for corporate IT departments to bring them into the workplace. Well, that has its pros and cons.

Let us look at the pros first. Sharing files with multiple people is now a click of a button; it especially reduces the hassle of sharing image-heavy presentations and videos. It reduces the cost of setting up a Virtual Private Network (VPN) and managing file storage servers in an internal data centre. It also helps with business continuity: a coffee spill on a laptop or a natural catastrophe no longer causes any loss of data. Since a shared file has a single copy, updating is really easy and everyone is guaranteed to get the latest version. More importantly, companies need not train their employees on these services, as they already use them in their personal lives.

However, there is increased complexity in managing files outside the internal data centre. Security is the greatest concern: a malicious hacker can gain access to the entire database at once, and a disgruntled employee can copy all the data into a personal cloud or corrupt the central database. If companies rely on a third-party cloud, is their data really secure? The third party has complete access to all the data. Downtime of the cloud due to a virus, severe weather or power outages is another serious concern.

Fortunately, it is possible to work around the negatives with well-drafted and enforced policies. First, rely on cloud providers with security certifications. Second, have access restrictions in place, so that not everyone has access to every file. Finally, create a culture in the company that instils a sense of responsibility in every employee about confidential corporate data.

In short, cloud file sharing is a boon whose advantages far outweigh its disadvantages, and the disadvantages can be overcome with proper workarounds. The onus is on every one of us.

References:

Monday, November 4, 2013

Data Structures: AVL trees

We know that a data structure in computer science refers to the way we organize and store data so that it can be used efficiently. You must have heard of stacks, queues, linked lists and trees for organizing data. All of these are data structures with their own advantages, and they handle the basic operations (insert, delete and search) at varying degrees of complexity. A tree is an abstract data structure, and many kinds of tree data structures exist. To name a few, we have binary search trees (BSTs), red-black trees, B-trees, B+ trees, AVL trees and so on.

What are AVL trees?
Also known as self-balancing binary search trees, AVL trees were the first data structure of this kind to be invented. The BST is the simplest tree data structure; however, some data sets can make the tree unbalanced, which increases the complexity of the basic operations. A balanced tree is one in which the difference between the heights of the left and right sub-trees is not greater than one.

AVL trees follow the same rules as the binary search tree, so they are simple too. However, AVL trees require some additional operations, called tree rotations, to keep them balanced. The acceptable height difference between the left and right sub-trees at any node in an AVL tree is -1, 0 or 1. Hence, after every insertion and deletion we may need to perform rotations to keep the tree balanced.

Why do people prefer these trees?
The time complexity of AVL tree insertion, deletion and search is O(log n), which makes it one of the most efficient trees for retrieving stored data.
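For the curious, here is a compact Python sketch of AVL insertion with the four rebalancing rotations; the helper names are my own, and deletion (which rebalances the same way) is left out for brevity.

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right, self.height = key, None, None, 1

def height(n):
    return n.height if n else 0

def update(n):
    n.height = 1 + max(height(n.left), height(n.right))

def balance(n):
    return height(n.left) - height(n.right)

def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    update(y); update(x)
    return x

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    update(x); update(y)
    return y

def insert(node, key):
    if node is None:
        return Node(key)
    if key < node.key:
        node.left = insert(node.left, key)
    else:
        node.right = insert(node.right, key)
    update(node)
    b = balance(node)
    if b > 1 and key < node.left.key:       # left-left: one right rotation
        return rotate_right(node)
    if b < -1 and key >= node.right.key:    # right-right: one left rotation
        return rotate_left(node)
    if b > 1:                               # left-right: two rotations
        node.left = rotate_left(node.left)
        return rotate_right(node)
    if b < -1:                              # right-left: two rotations
        node.right = rotate_right(node.right)
        return rotate_left(node)
    return node

root = None
for k in [1, 2, 3, 4, 5, 6]:   # sorted input degenerates a plain BST into a chain
    root = insert(root, k)
print(root.key, root.height)   # rotations keep the height at 3, not 6
```

Sorted input is the classic worst case for a plain BST; here the rotations keep the height logarithmic, which is what preserves the O(log n) bounds above.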

References:

Sunday, October 27, 2013

Hacking: Who are hackers?

When we hear the word 'hacker', what is the first thing that comes to mind? They are bad people! We think hackers are the ones who attack computers, steal information and so on. To some extent that is true; there are such people. But they make up a very small portion of all hackers. There are hackers with good intentions, called ethical hackers. They find issues in software and let the owner know about the problem before an unethical hacker exploits it.

Hackers look at computers very differently from the rest of us. It is their thinking ability and vision that leads to real invention. In fact, companies like Google, Microsoft and Facebook give bounties in the tens of thousands of dollars to ethical hackers who find critical vulnerabilities.

Many security and network protocols maintain open standards, so that researchers and other ethical hackers can find problems with the protocol. A prime example is public key encryption. This class of protocols is security-critical and is used to protect our banking, business and e-commerce data. The underlying algorithm is public, but it is still hard to crack, as doing so requires immense computational power. To date, people have not found any backdoor in this security scheme, and millions are being spent to make sure none exists, or that if one exists, the good guys find it first.
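"The underlying algorithm is public" is worth seeing in miniature. Below is a toy RSA round trip in Python using the textbook-sized primes 61 and 53; real deployments use primes hundreds of digits long plus padding schemes, so this illustrates the math only and offers no security. (The modular inverse via `pow(e, -1, phi)` needs Python 3.8+.)

```python
# Toy RSA: the whole public-key recipe in a few lines (illustration only).
p, q = 61, 53                # two secret primes; real ones are enormous
n = p * q                    # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent: modular inverse of e

message = 65
cipher = pow(message, e, n)  # anyone can encrypt with the public pair (e, n)
plain = pow(cipher, d, n)    # only the holder of d can decrypt
print(plain == message)      # True
```

Cracking this means recovering p and q from n, trivial for 3233 but computationally infeasible at real key sizes; that is the "immense computational power" the openness of the algorithm relies on.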

Of course, there are malicious hackers too. Take for example the computer virus CryptoLocker, recently in the news, which encrypts all the data on your computer and hands over the decryption key only if you pay a ransom.

In reality, most hackers want to learn the intricacies of the computer world and solve genuinely complex problems. I hope such hackers grow in number.

Sources:
http://www.forbes.com/sites/jameslyne/2013/10/22/computer-virus-spreading-that-means-you-never-get-to-see-your-files-again/

Sunday, October 13, 2013

Open Source: Vim

We are all familiar with the popular Windows editor, Notepad. Similarly, Vi is a popular editor in the Linux/Unix world. Vi is a basic text editor with powerful, no-frills features. "Vim" is an acronym for "Vi IMproved". Vim has even more features than Vi, including mouse support, graphical versions, visual mode and many new editing commands. Vim has become one of the most commonly used text editors among programmers; I am one of them too! It was written by Bram Moolenaar and made publicly available in 1991. Vim is free and open source software.

There is a built-in tutorial for beginners, which can be accessed with the vimtutor command. A user manual is also available, and the help feature will walk you through the commands. Vim basically has two modes: insert mode and command mode. It has a vast array of commands and features.
Anyone can customize Vim, from the user interface and macros to user-defined functions. Any functionality can be added or extended using Vimscript, Vim's internal scripting language. Vim also supports other scripting languages like Perl, Python, Ruby, Lua, Tcl and Racket.

Vim has definitely improved a lot over Vi; it is largely, though not 100%, compatible with it. A few of Vim's enhancements include auto-completion, file comparison and revision control of files (yes, I have tried these!). It has rich plugin support. Various compression formats, such as gzip, bzip2, zip and tar, are supported. It also supports remote editing over network protocols like HTTP and SSH, session state preservation, split (horizontal and vertical) and tabbed windows, multi-language support (Unicode), visual mode and much more.

Lastly, the platforms Vim runs on include UNIX, Linux, BSD, Mac OS, IBM OS/2, various Microsoft Windows versions, etc. One thing you should really appreciate about Vim is that, apart from providing rich and powerful features, it also encourages users to consider donating to children in Uganda, as it is released as charityware.

Source:

Sunday, October 6, 2013

Agile: Various Methodologies for Software Development

There are various agile methodologies for software development. Most of them share the same philosophy, characteristics and practices, but each one differs from an implementation point of view. Here are a few of the main methodologies, which I find interesting.

Scrum - Here, the Product Owner works closely with the team to identify and slice up the system functionality that forms the "Product Backlog". Features, bug fixes, non-functional requirements and so on live in the Product Backlog. Each slice is a customer deliverable, and slices are prioritized as "potentially shippable increments" of software. Cross-functional teams estimate and sign up to deliver a slice, which can be divided into tasks, in each sprint. The sprint length can be defined by the team. The next sprint takes up the next set of prioritized tasks, and so the cycle continues.

Feature Driven Development - FDD is again a model-driven, short-iteration process. The features are small and useful, and their delivery follows eight practices. Unlike other agile methodologies, FDD has short, specific phases of work for each feature: Domain Walkthrough, Design, Design Review, Code, Code Review and Promote to Build.

These were a few of the main methodologies, but there are many more, like Lean, Kanban, Extreme Programming, Crystal, the Dynamic Systems Development Method and so on. Usually, a team does not follow the strict rules of an agile method; there is always some flexibility involved. This helps the team perform effectively and have the deliverable ready even before the customer expects it.

Sources:

Friday, September 20, 2013

LinkedIn and Branding: Market yourself

Well, we all know about LinkedIn, don't we? That is why we sit and update it so carefully when we start looking for jobs. But is that the only time we should update it? Why not do it on a regular basis, like we update our status on Facebook or Twitter? Granted, you do not need to update it quite that frequently, but you do need to refurbish your profile regularly, as your profile is what speaks for you. Will updating my profile really help? Yes, it sure will, but the results are more profound when you advertise yourself in a creative and innovative manner.

Diving deeper, how does LinkedIn help in marketing yourself? It is not just an online resume. It is the place where all the corporate recruiters can see your accomplishments. It is basically a marketing campaign in which you are the product. Your profile is your portfolio.

What if I don't have a portfolio? It is never too late to create one. A PowerPoint presentation including all your accomplishments is a starter. Get ideas! What is the internet for? Ask experienced people. You need to create a respectable brand for yourself.

How do you create your own brand? Well, it is simple: showcase everything you have got. Include the types of challenges you faced and the way you approached and solved them. Add links to any articles or research papers you have written, or presentations you have given. LinkedIn creates an impression on anyone looking at your profile – make sure it is the best!

What boosts your brand? Recommendation letters from the right people are a big YES. Recommenders are people who have actually worked closely with you and have gauged your potential. If they write a few good words about you and your strengths, that can add a lot of value. There is also a section in which people can endorse you for specific skills; make sure your strengths have good endorsements.

A resume is not the only way to show your achievements. You can do a lot more using LinkedIn. It has become a great source for employers seeking talented candidates. Make sure you have a profile that stands out!

Sources:

Friday, September 13, 2013

QR codes: Past, Present and Future.


What is a QR code?

This is the first question that comes to the mind of anyone who has never heard of it. A QR code? Is that a type of barcode? What is the difference between a barcode and a QR code?

QR stands for "quick response". Standard barcodes are mainly used to keep track of inventory. A QR code, on the other hand, is used to convey more information about a product. Unlike barcodes, which represent numerical data, QR codes can encode numeric, alphanumeric, byte/binary and Kanji data; a single code can hold up to a few kilobytes. They are used extensively for marketing and advertisement. These codes can be scanned easily using scanner apps available on smartphones; whenever a code is scanned, a URL or phone number appears immediately. Though the standard look of a QR code is black and white, customized codes are becoming more popular these days.

Past:
QR codes are relatively new. The code system was invented in 1994 by Denso Wave, a subsidiary of Toyota, which used the codes to track vehicle parts during manufacturing. Initially they were used mostly in Japan, but they have since gained popularity in other countries.

Present:
Due to their ability to store large amounts of data in different forms, and the speed at which they can be read, QR codes have flourished beyond the automobile industry. Modern smartphones have scanner apps that read a code and instantly display the information linked to it.

Future:
QR codes are already used widely, but in a few years they may be used everywhere as a way to provide more information to consumers. By then, a large portion of the population will have learned about QR codes and will use them to obtain the information they need. Eventually, this may further drive the rise of smartphones and technology-driven sales. People believe that the use of QR codes will push technological limits far beyond the extremely technical world we live in today.

We discussed the past, present and future and understood the importance of QR codes. But no system is perfect, and SECURITY is always a major issue. How can security be an issue with QR codes? We have seen that QR codes can be used everywhere. Say you see an advertisement for a new restaurant with a QR code that will take you to its website once scanned. An attacker can easily paste a different sticker on top of the poster; when you scan that code, it will take you straight to the attacker's website and might download malicious files without you even realizing it. The consequences of such a breach can be really bad. So, what can you do to be safe? One simple solution is to make sure your QR code reader shows you where it is taking you and gives you the choice of whether or not to visit the site. There are many more code readers that address different security issues; get the one that is most secure yet serves the purpose.

Sources:

Friday, September 6, 2013

Social Media in Business: Brand and Security

Businesses these days depend heavily on information flow. Employees are more productive when they can make better decisions and save time, which is possible when they have the much-needed feedback about their product. Social networks have recreated the bond between companies, employees, customers and suppliers, shortening the feedback process from months to a few hours. Social media is often best used by companies to fortify their brands, gain the loyalty of their customers and potentially augment their market share.

How to attain social media success in your business?
Walmart’s director gives five steps that your business should follow:
  1. Determine your value: Brands need to think about social media as a way to deliver value rather than as a mere tool. Rather than using social media for product promotion alone, Walmart communicated about its sustainability efforts, hence delivering value, through its Twitter accounts @WalmartGreen, @WalmartHealthy, @WalmartGiving and @WalmartAction, among others.
  2. Audience comes first. Know them well: Use marketing tools to understand people better, profile them and see what interests them and how committed and active they are with those interests.
  3. Deliver good content: Share content you think people will like, and then find out whether they actually like it. For example, launch a community-based Twitter account for a few months to advertise the product; the feedback tells you whether you are delivering good content. The popularity of the product among people will itself earn you a larger audience, more than extra promotion would.
  4. Find clever metrics: It is tempting for a marketing department to evaluate a social media campaign with simple numbers, but you need to get creative here and track more meaningful metrics, such as how often a post is retweeted or favorited.
  5. Use your data well: Use the data you collect to know your customers better. This will help in building a meaningful relationship with them.

There are many models that suit different businesses. Choose the one that best suits you wisely and follow it, but be ready to be innovative, because change is constant.


Social media generates many opportunities, but it also gives rise to many challenges, the most likely being data security, privacy concerns, and brand and reputation damage. So what are the anticipated risks? Security and privacy concerns include identity theft, data retention issues and technical exploits such as malware, viruses and worms, all of which can lead to brand damage. How exactly does brand damage happen? It can happen when hostile remarks or classified information are posted on a public site, through defamation, or through violation of rights secured by copyright.

What about brand promotion?
Making consumers aware is the best way to promote your brand. New social networking sites come up every other day; learn about the best ones to keep your consumers informed. You can promote your brand by being more engaged. How exactly can that be done? One way is to interact more with customers. On Facebook, like and comment on others' pages; the more you do it, the more they will comment on yours. Reciprocation is the real secret to building a social network, which in turn helps you promote your brand. Facebook is only one medium: you could tweet about your products, use visual appeal via Instagram, and so on. Find the right media for it, but be aware of the information shared on your pages and make sure it does not hurt your brand. In other words, take privacy concerns seriously.

So, how to handle these privacy concerns?
One possibility is to use closed social networks. These inspire employees to work more openly while supporting the need for privacy. Private messages can be targeted to a specific audience, and private groups can be created for sensitive, ongoing conversations. In this way, the social network stays open, with the option of a private conversation as needed.

Sources:
http://www.entrepreneur.com/article/226753
http://www.ey.com/Publication/vwLUAssets/Protecting_and_strengthening_your_brand_Social_media_governance_and_strategy/$FILE/Insights_on_IT_risk_Social_media.pdf
http://www.isaca.org/chapters2/kampala/newsandannouncements/Documents/Social_media_UTAMU_2.pdf
http://www.cio.com/article/735777/5_Secrets_to_Corporate_Social_Media_Success?page=2&taxonomyId=3004
http://sparkandhustle.com/takeaway-tips/using-social-media-to-promote-your-brand-grow-your-network-generate-revenue/
http://www.wisegeek.com/what-is-brand-promotion.htm 
http://blog.bufferapp.com/10-surprising-social-media-statistics-that-will-make-you-rethink-your-strategy

Friday, August 30, 2013

Pilot


Welcome to Santrupti's world! In my blog, you will find some interesting things about algorithms and data structures and their use in the field of biology. Yes, you heard it right: welcome to bioinformatics.

Why Bioinformatics ?

I took a bioinformatics course out of curiosity, to try something other than programming. Did you know that a single cell stores a huge amount of data that defines a person? Quite perplexing, right? That is how important DNA is. Working with a biologist, I learnt about the humongous amounts of data generated by sequencing. Mining and analyzing this data is a highly complex problem; it requires intelligent software that can effectively manage memory and processing power. Even then, it takes hours and sometimes days for a program to generate the required analysis. This is where my strengths come in handy. I have been programming in C for quite some time now, and I played with other languages like C++, Java, C# and Perl during my undergraduate days. Bioinformatics is one course where you get to learn computer science, statistics and biology, a rare combination!


So what is it that I want to do?


Looking at the real challenges in bioinformatics, the one that intrigues me the most is integrating the data generated and developing models of complex systems: basically, simulation, modelling and prediction. Prediction of what? The susceptibility of a newborn baby to a particular disease later in life. Well, that's the goal. But as an amateur computer scientist and bioinformatician, I want to start by predicting proteins and other important sites in a gene, so as to further the research of fellow biologists. Hopefully we will be able to find cures for some of the chronic diseases we face today.