Thursday, November 6, 2008

Web Servers

A web server is a computer program that accepts HTTP requests from web clients (web browsers) and serves them HTTP responses, usually along with optional data content such as web pages (HTML documents) and linked objects (images, etc.).


Common features

Although web server programs differ in detail, they all share some basic common features.
1. HTTP: every web server program operates by accepting HTTP requests from the client and providing an HTTP response to the client. The HTTP response usually consists of an HTML document, but can also be a raw file, an image, or some other type of document (defined by MIME types). If some error is found in the client request, or while trying to serve it, the web server has to send an error response, which may include a custom HTML or text message to better explain the problem to end users.
2. Logging: web servers usually also have the capability of logging detailed information about client requests and server responses to log files; this allows the webmaster to collect statistics by running log analyzers on the log files.
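Both of these basic features can be sketched in a few lines with Python's standard library. This is only an illustrative sketch, not a production server; the handler class and log format are made up for the example:

```python
import http.server
import threading
import urllib.request

class HelloHandler(http.server.BaseHTTPRequestHandler):
    """Minimal handler: returns a small HTML body for any GET request."""
    def do_GET(self):
        body = b"<html><body>Hello</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        # Each request line is logged, mirroring the logging feature above.
        print("LOG:", fmt % args)

# Bind to an ephemeral port and serve from a background thread.
server = http.server.HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen("http://127.0.0.1:%d/" % server.server_port) as resp:
    status = resp.status
    data = resp.read()
server.shutdown()
print(status, data)
```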

In practice, many web servers also implement the following features:

1. Authentication: optionally requesting authorization (a user name and password) before allowing access to some or all kinds of resources.
2. Handling of static content (file content recorded in server's filesystem(s)) and dynamic content by supporting one or more related interfaces (SSI, CGI, SCGI, FastCGI, JSP, PHP, ASP, ASP.NET, Server API such as NSAPI, ISAPI, etc.).
3. HTTPS support (by SSL or TLS) to allow secure (encrypted) connections to the server on the standard port 443 instead of usual port 80.
4. Content compression (i.e. by gzip encoding) to reduce the size of the responses (to lower bandwidth usage, etc.).
5. Virtual hosting to serve many web sites using one IP address.
6. Large file support to be able to serve files whose size is greater than 2 GB on a 32-bit OS.
7. Bandwidth throttling to limit the speed of responses in order not to saturate the network and to be able to serve more clients.
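As a small illustration of point 4, here is how a server might gzip a response body before sending it, using Python's standard gzip module (the sample body is invented and padded to be highly compressible):

```python
import gzip

# An invented, highly compressible response body.
body = b"<html>" + b"x" * 10_000 + b"</html>"
compressed = gzip.compress(body)

# The server would send `compressed` with a "Content-Encoding: gzip"
# header; the browser transparently decompresses it on receipt.
ratio = len(compressed) / len(body)
print(f"{len(body)} -> {len(compressed)} bytes ({ratio:.1%})")
```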

Origin of returned content

The origin of the content sent by server is called:
• static if it comes from an existing file lying on a filesystem;
• dynamic if it is dynamically generated by some other program or script or Application Programming Interface called by the web server.
Serving static content is usually much faster (from 2 to 100 times) than serving dynamic content, especially if the latter involves data pulled from a database.

Path translation

Web servers are able to map the path component of a Uniform Resource Locator (URL) into:
• a local file system resource (for static requests);
• an internal or external program name (for dynamic requests).
For a static request the URL path specified by the client is relative to the Web server's root directory.
Consider the following URL as it would be requested by a client:
http://www.example.com/path/file.html
The client's web browser will translate it into a connection to www.example.com with the following HTTP 1.1 request:
GET /path/file.html HTTP/1.1
Host: www.example.com
The web server on www.example.com will append the given path to the path of its root directory. On Unix machines, this is commonly /var/www/htdocs. The result is the local file system resource:
/var/www/htdocs/path/file.html
The web server will then read the file, if it exists, and send a response to the client's web browser. The response will describe the content of the file and contain the file itself.
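A minimal sketch of this path translation in Python, assuming the Unix document root mentioned above. The normpath check guards against ".." components escaping the root; real servers perform considerably more validation:

```python
import posixpath

DOCUMENT_ROOT = "/var/www/htdocs"  # assumed Unix document root, as above

def translate_path(url_path):
    """Map a URL path onto the document root, rejecting '..' escapes."""
    # normpath collapses "." and ".." components before the check.
    candidate = posixpath.normpath(
        posixpath.join(DOCUMENT_ROOT, url_path.lstrip("/")))
    if candidate != DOCUMENT_ROOT and not candidate.startswith(DOCUMENT_ROOT + "/"):
        raise ValueError("path escapes the document root")
    return candidate

print(translate_path("/path/file.html"))  # /var/www/htdocs/path/file.html
```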

Load limits
A web server (program) has defined load limits, because it can handle only a limited number of concurrent client connections (usually between 2 and 60,000, by default between 500 and 1,000) per IP address (and IP port) and it can serve only a certain maximum number of requests per second depending on:
• its own settings;
• the HTTP request type;
• content origin (static or dynamic);
• the fact that the served content is or is not cached;
• the hardware and software limits of the OS where it is working.
When a web server is near to or over its limits, it becomes overloaded and thus unresponsive.

Overload causes
At any time web servers can be overloaded because of:
• Too much legitimate web traffic (e.g. thousands or even millions of clients hitting the web site in a short interval of time, as in the Slashdot effect);
• DDoS (Distributed Denial of Service) attacks;
• Computer worms that sometimes cause abnormal traffic because of millions of infected computers (not coordinated among them);
• XSS viruses can cause high traffic because of millions of infected browsers and/or web servers;
• Internet web robots traffic not filtered/limited on large web sites with very few resources (bandwidth, etc.);
• Internet (network) slowdowns, so that client requests are served more slowly and the number of connections increases so much that server limits are reached;
• Partial unavailability of web servers (computers); this can happen because of required or urgent maintenance or upgrades, HW or SW failures, back-end (e.g. DB) failures, etc.; in these cases the remaining web servers get too much traffic and become overloaded.

Overload symptoms
The symptoms of an overloaded web server are:
• requests are served with (possibly long) delays (from 1 second to a few hundred seconds);
• 500, 502, 503, 504 HTTP errors are returned to clients (sometimes also unrelated 404 error or even 408 error may be returned);
• TCP connections are refused or reset (interrupted) before any content is sent to clients;
• in very rare cases, only partial contents are sent (but this behavior may well be considered a bug, even if it usually depends on unavailable system resources).

Anti-overload techniques
To partially overcome these load limits and to prevent overload, most popular web sites use common techniques like:
• managing network traffic, by using:
o Firewalls to block unwanted traffic coming from bad IP sources or having bad patterns;
o HTTP traffic managers to drop, redirect or rewrite requests having bad HTTP patterns;
o Bandwidth management and traffic shaping, in order to smooth down peaks in network usage;
• deploying web cache techniques;
• using different domain names to serve different (static and dynamic) content by separate Web servers, i.e.:
o http://images.example.com
o http://www.example.com
• using different domain names and/or computers to separate big files from small and medium sized files; the idea is to be able to fully cache small and medium sized files and to efficiently serve big or huge (over 10 - 1000 MB) files by using different settings;
• using many Web servers (programs) per computer, each one bound to its own network card and IP address;
• using many Web servers (computers) that are grouped together so that they act or are seen as one big Web server, see also: Load balancer;
• adding more hardware resources (i.e. RAM, disks) to each computer;
• tuning OS parameters for hardware capabilities and usage;
• using more efficient computer programs for web servers, etc.;
• using other workarounds, especially if dynamic content is involved.
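As a tiny illustration of the load-balancing idea from the list above, a round-robin dispatcher can be sketched in a few lines of Python (the back-end host names are made up):

```python
import itertools

# A minimal round-robin dispatcher: incoming requests are spread across
# a pool of back-end servers so that no single server is saturated.
backends = ["web1.example.com", "web2.example.com", "web3.example.com"]
next_backend = itertools.cycle(backends).__next__

# Six consecutive requests cycle through the pool twice.
assignments = [next_backend() for _ in range(6)]
print(assignments)
```

Real load balancers add health checks, session affinity, and weighting, but the core dispatch loop is no more than this.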



Historical notes

In 1989 Tim Berners-Lee proposed to his employer CERN (European Organization for Nuclear Research) a new project, which had the goal of easing the exchange of information between scientists by using a hypertext system. As a result of the implementation of this project, in 1990 Berners-Lee wrote two programs:
• a browser called WorldWideWeb;
• the world's first web server, later known as CERN HTTPd, which ran on NeXTSTEP.
Between 1991 and 1994 the simplicity and effectiveness of early technologies used to surf and exchange data through the World Wide Web helped to port them to many different operating systems and spread their use among lots of different social groups of people, first in scientific organizations, then in universities and finally in industry.
In 1994 Tim Berners-Lee decided to constitute the World Wide Web Consortium (W3C) to regulate the further development of the many technologies involved (HTTP, HTML, etc.) through a standardization process.


source : Wikipedia

Sunday, October 12, 2008

IPv6

Internet Protocol version 6 (IPv6) is an Internet Layer protocol for packet-switched internetworks. IPv4 is currently the dominant Internet Protocol version, and was the first to receive widespread use. The Internet Engineering Task Force (IETF) has designated IPv6 as the successor to version 4 for general use on the Internet.
IPv6 has a much larger address space than IPv4, which provides flexibility in allocating addresses and routing traffic. The extended address length (128 bits) is intended to eliminate the need for network address translation to avoid address exhaustion, and also simplifies aspects of address assignment and renumbering, when changing Internet connectivity providers.
The very large IPv6 address space supports 2^128 (about 3.4×10^38) addresses, or approximately 5×10^28 (roughly 2^95) addresses for each of the roughly 6.5 billion (6.5×10^9) people alive today. In a different perspective, this is 2^52 addresses for every observable star in the known universe, and more than ten billion billion billion times as many addresses as IPv4 (2^32) supported.
While these numbers are impressive, it was not the intent of the designers of the IPv6 address space to assure geographical saturation with usable addresses. Rather, the large number allows a better, systematic, hierarchical allocation of addresses and efficient route aggregation. With IPv4, complex Classless Inter-Domain Routing (CIDR) techniques were developed to make the best use of the small address space. Renumbering an existing network for a new connectivity provider with different routing prefixes is a major effort with IPv4, as discussed in RFC 2071 and RFC 2072. With IPv6, however, changing the prefix in a few routers can renumber an entire network ad hoc, because the host identifiers (the least-significant 64 bits of an address) are decoupled from the subnet identifiers and the network provider's routing prefix. The size of each subnet in IPv6 is 2^64 addresses (64 bits): the square of the size of the entire IPv4 Internet. Thus, actual address space utilization rates will likely be small in IPv6, but network management and routing will be more efficient.
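These numbers are easy to verify with Python's ipaddress module (2001:db8::/64 is the standard documentation prefix, used here only as an example):

```python
import ipaddress

# Total IPv6 address space: 2**128 addresses.
total = 2 ** 128
print(f"{total:.3e}")  # about 3.4e38

# A single /64 subnet holds 2**64 addresses -- (2**32)**2, i.e. the
# square of the size of the entire IPv4 address space.
net = ipaddress.ip_network("2001:db8::/64")
print(net.num_addresses)

# The host identifier occupies the low 64 bits of an address,
# the routing/subnet prefix the high 64 bits.
addr = ipaddress.ip_address("2001:db8::1")
host_bits = int(addr) & (2 ** 64 - 1)
print(host_bits)  # 1
```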

Motivation for IPv6

The first publicly-used version of the Internet Protocol, Version 4 (IPv4), provides an addressing capability of about 4 billion addresses (2^32). This was deemed sufficient in the design stages of the early Internet when the explosive growth and worldwide distribution of networks were not anticipated.
During the first decade of operation of the TCP/IP-based Internet, by the late 1980s, it became apparent that methods had to be developed to conserve address space. In the early 1990s, even after the introduction of classless network redesign, it was clear that this was not enough to prevent IPv4 address exhaustion and that further changes to the Internet infrastructure were needed. By the beginning of 1992, several proposed systems were being circulated, and by the end of 1992, the IETF announced a call for white papers (RFC 1550) and the creation of the "IP Next Generation" (IPng) area of working groups.
The Internet Engineering Task Force adopted IPng on July 25, 1994, with the formation of several IPng working groups. By 1996, a series of RFCs was released defining Internet Protocol Version 6 (IPv6), starting with RFC 1883 (later obsoleted by RFC 2460).
Incidentally, the IPng architects could not use version number 5 as a successor to IPv4, because it had been assigned to an experimental flow-oriented streaming protocol (Internet Stream Protocol), similar to IPv4, intended to support video and audio.
It is widely expected that IPv4 will be supported alongside IPv6 for the foreseeable future. IPv4-only nodes are not able to communicate directly with IPv6 nodes, and will need assistance from an intermediary.

Features and differences from IPv4

To a great extent, IPv6 is a conservative extension of IPv4. Most transport- and application-layer protocols need little or no change to work over IPv6; exceptions are application protocols that embed network-layer addresses (such as FTP or NTPv3).
IPv6 specifies a new packet format, designed to minimize packet-header processing. Since the headers of IPv4 and IPv6 are significantly different, the two protocols are not interoperable.

Larger address space
IPv6 features a larger address space than that of IPv4: addresses in IPv6 are 128 bits long versus 32 bits in IPv4.
Address scopes
IPv6 introduces the concept of address scopes. An address scope defines the "region" or "span" where an address can be defined as a unique identifier of an interface. These spans are the local link, the site network, and the global network, corresponding to link-local, site-local or unique local unicast, and global addresses, as defined in RFC 3513 and RFC 4193.
Interfaces configured for IPv6 almost always have more than one address, usually one for the local link (the link-local address), and additional ones for site-local or global addressing. Link-local addresses are often used in network address autoconfiguration where no external source of network addressing information is available.
In addition to address scopes, IPv6 introduces the concept of "scope zones". Each address can only belong to one zone corresponding to its scope. A "link zone" (link-local zone) consists of all network interfaces connected on one link. Addresses maintain their uniqueness only inside a given scope zone. Zones are indicated by a suffix (zone index) to an address. For example, fe80::211:d800:97:c915%eth0 (link-local address) and fec0:0:0:ffff::1%4 (site-local address) show the additional suffix indicated by the percent (%) character.
Stateless address autoconfiguration
IPv6 hosts can configure themselves automatically when connected to a routed IPv6 network using ICMPv6 router discovery messages. When first connected to a network, a host sends a link-local multicast router solicitation request for its configuration parameters; if configured suitably, routers respond to such a request with a router advertisement packet that contains network-layer configuration parameters.
If IPv6 stateless address autoconfiguration (SLAAC) proves unsuitable, a host can use stateful configuration (DHCPv6) or be configured manually. In particular, stateless autoconfiguration is not used by routers; these must be configured manually or by other means.
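One classic way SLAAC builds the 64-bit host identifier is the modified EUI-64 transform of RFC 4291: the interface's MAC address is split in half, ff:fe is inserted in the middle, and the universal/local bit is flipped. A sketch in Python (the MAC address below is invented):

```python
import ipaddress

def eui64_interface_id(mac: str) -> int:
    """Modified EUI-64: insert ff:fe in the middle of the 48-bit MAC
    and flip the universal/local bit (RFC 4291, Appendix A)."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    return int.from_bytes(bytes(eui), "big")

def link_local_from_mac(mac: str) -> ipaddress.IPv6Address:
    # Link-local prefix fe80::/64 plus the 64-bit interface identifier.
    return ipaddress.IPv6Address((0xFE80 << 112) | eui64_interface_id(mac))

print(link_local_from_mac("00:11:22:33:44:55"))  # fe80::211:22ff:fe33:4455
```

Note that many modern stacks instead use randomized "privacy" identifiers (RFC 4941) rather than deriving them from the MAC.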
Multicast
Multicast, the ability to send a single packet to multiple destinations, is part of the base specification in IPv6. This is unlike IPv4, where it is optional (but usually implemented).
IPv6 does not implement broadcast, the ability to send a packet to all hosts on the attached link. The same effect can be achieved by sending a packet to the link-local all hosts multicast group.
Most environments, however, do not currently have their network infrastructures configured to route multicast packets; multicasting on a single subnet will work, but global multicasting might not.
Mandatory network layer security
Internet Protocol Security (IPsec), the protocol for IP encryption and authentication, forms an integral part of the base protocol suite in IPv6. IP packet header support is mandatory in IPv6; this is unlike IPv4, where it is optional (but usually implemented). IPsec, however, is not widely used at present except for securing traffic between IPv6 Border Gateway Protocol routers.
Simplified processing by routers
The format of the IPv6 packet header aims to minimize header processing at intermediate routers. Although the addresses in IPv6 are four times larger, the default headers are only twice the size of the default IPv4 header.

source : Wikipedia

Wednesday, October 8, 2008

POP3 Protocol

The Post Office Protocol version 3 (POP3) is an application-layer Internet standard protocol used to retrieve e-mail from a remote server over a TCP/IP connection. POP3 and IMAP4 (Internet Message Access Protocol) are the two most prevalent Internet standard protocols for e-mail retrieval, and virtually all modern e-mail clients and servers support both.

Overview

The design of POP3 and its procedures support end-users with intermittent connections (such as dial-up connections), allowing these users to retrieve e-mail when connected and then to view and manipulate the retrieved messages without needing to stay connected. Although most clients have an option to leave mail on the server, e-mail clients using POP3 generally connect, retrieve all messages, store them on the user's PC as new messages, delete them from the server, and then disconnect. In contrast, the newer, more capable Internet Message Access Protocol (IMAP) supports both connected (online) and disconnected (offline) modes of operation. E-mail clients using IMAP generally leave messages on the server until the user explicitly deletes them. This and other aspects of IMAP operation allow multiple clients to access the same mailbox. Most e-mail clients support either POP3 or IMAP to retrieve messages; however, fewer Internet Service Providers (ISPs) support IMAP.

The fundamental difference between POP3 and IMAP4 is that POP3 offers access to a mail drop; the mail exists on the server until it is collected by the client. Even if the client leaves some or all messages on the server, the client's message store is considered authoritative. In contrast, IMAP4 offers access to the mail store; the client may store local copies of the messages, but these are considered to be a temporary cache, and the server's store is authoritative.
Clients with a leave mail on server option generally use the POP3 UIDL (Unique IDentification Listing) command. Most POP3 commands identify specific messages by their ordinal number on the mail server. This creates a problem for a client intending to leave messages on the server, since these message numbers may change from one connection to the server to another. For example if a mailbox contains five messages at last connect, and a different client then deletes message #3, the next connecting user will find the last two messages' numbers decremented by one. UIDL provides a mechanism to avoid these numbering issues. The server assigns a string of characters as a permanent and unique ID for the message. When a POP3-compatible e-mail client connects to the server, it can use the UIDL command to get the current mapping from these message IDs to the ordinal message numbers. The client can then use this mapping to determine which messages it has yet to download, which saves time when downloading. IMAP has a similar mechanism, a 32-bit unique identifier (UID) that must be assigned to messages in ascending (although not necessarily consecutive) order as they are received. Because IMAP UIDs are assigned in this manner, to retrieve new messages an IMAP client need only request the UIDs greater than the highest UID among all previously-retrieved messages, whereas a POP client must fetch the entire UIDL map. For large mailboxes, this difference can be significant.
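The bookkeeping a leave-mail-on-server client performs with UIDL can be sketched as pure logic. A real client would obtain the UIDL map over the wire (e.g. with Python's poplib); the unique IDs below are invented for illustration:

```python
def messages_to_fetch(server_uidl, already_seen):
    """Given the server's UIDL listing as (ordinal, unique ID) pairs and
    the set of IDs already downloaded, return the ordinals still needed.
    Ordinals may change between sessions; the unique IDs never do."""
    return [num for num, uid in server_uidl if uid not in already_seen]

# Hypothetical UIDL response: (message number, server-assigned unique ID).
uidl = [(1, "whqtswO00WBw418f9t5JxYwZ"),
        (2, "QhdPYR:00WBw1Ph7x7"),
        (3, "k3vX9mQ2")]
seen = {"whqtswO00WBw418f9t5JxYwZ"}  # downloaded in an earlier session

print(messages_to_fetch(uidl, seen))  # [2, 3]
```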
Whether using POP3 or IMAP to retrieve messages, e-mail clients typically use the SMTP_Submit profile of the Simple Mail Transfer Protocol (SMTP) to send messages. E-mail clients are commonly categorized as either POP or IMAP clients, but in both cases the clients also use SMTP. There are extensions to POP3 that allow some clients to transmit outbound mail via POP3 - these are known as "XTND XMIT" extensions.
MIME serves as the standard for attachments and non-ASCII text in e-mail. Although neither POP3 nor SMTP require MIME-formatted e-mail, essentially all Internet e-mail comes MIME-formatted, so POP clients must also understand and use MIME. IMAP, by design, assumes MIME-formatted e-mail.
Like many other older Internet protocols, POP3 originally supported only an unencrypted login mechanism. Although plain text transmission of passwords in POP3 still commonly occurs, POP3 currently supports several authentication methods to provide varying levels of protection against illegitimate access to a user's e-mail. One such method, APOP, uses the MD5 hash function in an attempt to avoid replay attacks and disclosure of the shared secret. Clients implementing APOP include Mozilla Thunderbird, Opera, Eudora, KMail, Novell Evolution, Windows Live Mail, Power Mail, etc.
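The APOP digest itself is simple to compute: an MD5 hash over the server's timestamp banner concatenated with the shared secret, so the secret never crosses the wire. The worked example below uses the timestamp and password from RFC 1939:

```python
import hashlib

def apop_digest(timestamp: str, secret: str) -> str:
    """APOP response: MD5 over the server's timestamp banner followed
    by the shared secret, as hex digits (RFC 1939)."""
    return hashlib.md5((timestamp + secret).encode()).hexdigest()

# Worked example from RFC 1939: the client would then send
# "APOP mrose <digest>".
digest = apop_digest("<1896.697170952@dbc.mtview.ca.us>", "tanstaaf")
print(digest)  # c4c9334bac560ecc979e58001b3e22fb
```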
POP3 works over a TCP/IP connection using TCP on network port 110. E-mail clients can encrypt POP3 traffic using TLS or SSL; a TLS or SSL connection is negotiated using the STLS command. Some clients and servers, like Google Gmail, instead use the deprecated alternate-port method, which uses TCP port 995.

source: wikipedia

Tuesday, October 7, 2008

Pull Technology

Pull technology or client pull is a style of network communication where the initial request for data originates from the client, and then is responded to by the server. The reverse is known as push technology, where the server pushes data to clients.
Pull requests form the foundation of network computing, where many clients request data from centralized servers. Pull is used extensively on the Internet for HTTP page requests from websites.
A push can also be simulated using multiple pulls within a short amount of time. For example, when pulling POP3 email messages from a server, a client can make regular pull requests every few minutes. To the user, the email then appears to be pushed, as emails appear to arrive close to real-time. The tradeoff is this places a heavier load on both the server and network in order to function correctly.
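One pull cycle of such a polling client can be sketched as follows. The fetch callable stands in for the actual network request (a POP3 UIDL, an HTTP GET, ...), and a real client would repeat this on a timer, e.g. every few minutes:

```python
def poll(fetch, last_seen_id):
    """One pull cycle: ask the source for items newer than last_seen_id.
    `fetch` stands in for a network call returning item IDs."""
    return [item for item in fetch() if item > last_seen_id]

# Simulated inbox that grows between polls.
inbox = [1, 2]
first = poll(lambda: inbox, 0)   # first poll sees everything
inbox.append(3)                  # a "pushed" message arrives server-side
second = poll(lambda: inbox, 2)  # next poll picks up only the new item

print(first, second)  # [1, 2] [3]
```

The shorter the polling interval, the more push-like the experience, and the more load each idle client places on the server.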
Most web feeds, such as RSS are technically pulled by the client. With RSS, the user's RSS reader polls the server periodically for new content; the server does not send information to the client unrequested. This continual polling is inefficient and has contributed to the shutdown or reduction of several popular RSS feeds that could not handle the bandwidth.


source : wikipedia

Push Technology

Push technology, or server push, is a style of Internet-based communication where the request for a given transaction originates with the publisher or the server. It is in contrast with pull technology, where the request for the transmission of information originates with the receiver or the client.

General use
Push services are often based on information preferences expressed in advance. This is known as a publish/subscribe model. A client might "subscribe" to various information "channels". Whenever new content is available on one of those channels, the server would push that information out to the user.
Synchronous conferencing and instant messaging are typical examples of push services. Chat messages and sometimes files are pushed to the user as soon as they are received by the messaging service. Both decentralized peer-to-peer programs and centralized programs allow pushing files; this means the sender initiates the data transfer rather than the recipient.
Email is also a push system: the SMTP protocol on which it is based is a push protocol. However, the last step—from mail server to desktop computer—typically uses a pull protocol like POP3 or IMAP. Modern e-mail clients make this step seem instantaneous by repeatedly polling the mail server, frequently checking it for new mail. The original BlackBerry was the first popular example of push technology in a wireless context.
Another popular type of Internet push technology was PointCast Network, which gained popularity in the 1990s. It delivered news and stock market data. Other uses are push enabled web applications including market data distribution (stock tickers), online chat/messaging systems (web chat), auctions, online betting and gaming, sport results, monitoring consoles and sensor network monitoring.

source : wikipedia

Wednesday, September 24, 2008

Web Security Basics

There are several key concerns related to web security. How secure are the systems that control the exchange of information on the web? How secure is the information stored on the numerous computers across the web? It is a known fact that what can be used can also be misused. We should always remember that if organizational information is hacked, either through the network or through other means, it could incur heavy costs to the company. A failure in network security could also cost the organization in terms of goodwill and reputation: no other organization would be interested in doing business with an organization that cannot protect its information and security systems.
The following points need to be kept in mind:
1>Login Page Should Be Encrypted :
The number of times I have seen Web sites that only use SSL (with https: URL schemes) after user authentication is accomplished is really dismaying. Encrypting the session after login may be useful — like locking the barn door so the horses don’t get out — but failing to encrypt logins is a bit like leaving the key in the lock when you’re done locking the barn door. Even if your login form POSTs to an encrypted resource, in many cases this can be circumvented by a malicious security cracker who crafts his own login form to access the same resource and give him access to sensitive data.
2>Data Validation Should Be Done Server Side:
Many Web forms include some JavaScript data validation. If this validation includes anything meant to provide improved security, that validation means almost nothing. A malicious security cracker can craft a form of his own that accesses the resource at the other end of the Web page's form action without any validation at all. Worse yet, many cases of JavaScript form validation can be circumvented simply by deactivating JavaScript in the browser or using a Web browser that doesn't support JavaScript at all. In some cases, I've even seen login pages where the password validation is done client-side, which either exposes the passwords to the end user via the ability to view page source or, at best, allows the end user to alter the form so that it always reports successful validation. Don't let your Web site security be a victim of client-side data validation. Server-side validation does not fall prey to the shortcomings of client-side validation, because a malicious security cracker must already have gained access to the server to be able to compromise it.
3>Manage your Web site via encrypted connections:
Using unencrypted connections (or even connections using only weak encryption), such as unencrypted FTP or HTTP for Web site or Web server management, opens you up to man-in-the-middle attacks and login/password sniffing. Always use encrypted protocols such as SSH to access secure resources, using verifiably secure tools such as OpenSSH. Once someone has intercepted your login and password information, that person can do anything you could have done.
4>Use strong, cross-platform compatible encryption:
Believe it or not, SSL is not the top-of-the-line technology for Web site encryption any longer. Look into TLS, which stands for Transport Layer Security — the successor to Secure Socket Layer encryption. Make sure any encryption solution you choose doesn’t unnecessarily limit your user base, the way proprietary platform-specific technologies might, as this can lead to resistance to use of secure encryption for Web site access. The same principles also apply to back-end management, where cross-platform-compatible strong encryption such as SSH is usually preferable to platform-specific, weaker encryption tools such as Windows Remote Desktop.
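In Python, for example, a TLS client context with certificate verification and a modern minimum protocol version takes only a few lines. This is a sketch of the setup, not a complete client:

```python
import ssl

# Client-side TLS context with sane defaults: certificate verification
# on and hostname checking enabled.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# Refuse legacy protocol versions explicitly (Python 3.7+).
context.minimum_version = ssl.TLSVersion.TLSv1_2

# The context would then wrap a socket, e.g.:
#   with socket.create_connection((host, 443)) as sock:
#       with context.wrap_socket(sock, server_hostname=host) as tls:
#           ...
```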
5>Connect from a secured network:
Avoid connecting from networks with unknown or uncertain security characteristics or from those with known poor security such as open wireless access points in coffee shops. This is especially important whenever you must log in to the server or Web site for administrative purposes or otherwise access secure resources. If you must access the Web site or Web server when connected to an unsecured network, use a secure proxy so that your connection to the secure resource comes from a proxy on a secured network. In previous articles, I have addressed how to set up a quick and easy secure proxy using either an OpenSSH secure proxy or a PuTTY secure proxy.

6>Prefer key-based authentication over password authentication:Password authentication is more easily cracked than cryptographic key-based authentication. The purpose of a password is to make it easier to remember the login credentials needed to access a secure resource — but if you use key-based authentication and only copy the key to predefined, authorized systems (or better yet, to separate media kept apart from the authorized system until it’s needed), you will use a stronger authentication credential that’s more difficult to crack.

7>Maintain a secure workstation:
If you connect to a secure resource from a client system that you can’t guarantee with complete confidence is secure, you cannot guarantee someone isn’t “listening in” on everything you’re doing. Keyloggers, compromised network encryption clients, and other tricks of the malicious security cracker’s trade can all allow someone unauthorized access to sensitive data regardless of all the secured networks, encrypted communications, and other networking protections you employ. Integrity auditing may be the only way to be sure, with any certainty, that your workstation has not been compromised.

In order for anyone to attack your web site, there has to be a way in - an unguarded doorway into your server.
There are only three places where these doorways exist:
1. On your local computer that you use to upload your web pages.
2. Through any form field that you use on your web site to collect information from your visitors.
3. At the physical location where your server is located.

With virus and worm attacks there also has to be a doorway in, but the doorway exists primarily in the software your web site visitors use to view your site (Microsoft Internet Explorer), not in the site itself.
You can guard against the virus replicating itself on your mail server or the worm being downloaded to your local network or computer by using firewall software, but responding to virus and worm attacks consists mainly of educating yourself and your co-workers to prevent more infections and waiting for Microsoft to patch their software.

On the other hand, if you're running your own server, you need to stay on top of potential new security threats from viruses and worms daily to close any potential doorways into your server (covered in server security). The good news is that if you're just programming a web site (the majority of police webmasters), there are many straight forward and relatively simple methods you can use to protect your site and close the doorways in.
Source :http://blogs.techrepublic.com.com/security/?p=424

Thursday, September 11, 2008

How Important are backlinks?

First, let me introduce the concept of backlinks.

Backlinks are links that are directed towards a webpage, popularly called inbound links. The number of backlinks to a webpage acts as a vote for that page and adds to its popularity.

Google's first published algorithm is based on PageRank. According to the PageRank algorithm, inbound links are counted as votes for a page, and outbound links as votes which that page gives to other pages. Since backlinks are such a large part of the PageRank algorithm, they are important from a search engine optimizer's point of view.
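The voting idea behind PageRank can be sketched with a small power-iteration loop in Python. The three-page web below is invented for illustration, and this toy version omits many refinements of the published algorithm:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Iterative PageRank over a dict of page -> list of outbound links.
    Each inbound link acts as a weighted 'vote' for the target page."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outbound in links.items():
            if outbound:
                # A page's vote is split evenly among its outbound links.
                share = damping * rank[page] / len(outbound)
                for target in outbound:
                    new[target] += share
            else:
                # Dangling page: spread its rank evenly over all pages.
                for target in pages:
                    new[target] += damping * rank[page] / n
        rank = new
    return rank

# Hypothetical three-page web: A and C both link ("vote") for B.
ranks = pagerank({"A": ["B"], "B": ["A", "C"], "C": ["B"]})
print(max(ranks, key=ranks.get))  # B
```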

Backlinks need to be quality backlinks for better ranking of web page

According to search engines, a backlink is a quality backlink if it is relevant to the keyword for which it has been created, and if the theme of the voting website is similar to the theme of the voted website. Thus we cannot be satisfied with merely getting inbound links; it is the quality of the inbound link that matters. An inbound link becomes more relevant if it comes from a site whose content is related to the site it points to. If inbound links are found on sites with unrelated content, they are considered less relevant. The higher the relevance of inbound links, the greater their quality.


Quality backlinks can only be built up over time. While it is fairly easy to manipulate links on a web page to try to achieve a higher ranking, it is very difficult to influence a search engine with backlinks from other websites. This is also a reason why backlinks factor so heavily into a search engine's algorithm. Lately, however, search engines' criteria for quality inbound links have gotten even tougher, thanks to unscrupulous webmasters trying to achieve these inbound links by deceptive or sneaky techniques, such as hidden links or automatically generated pages whose sole purpose is to provide inbound links to websites. These pages are called link farms, and they are not only disregarded by search engines, but linking to a link farm could get your site banned entirely.

Tips for Quality Backlinks

1. Reciprocal linking:

Many times webmasters agree to reciprocal link exchanges in order to boost website rankings. This is a kind of link exchange where a webmaster places a link on his website that points to another webmaster's website, and vice versa. It is possible that such links are not relevant. Major search engines like Google strongly oppose such irrelevant linking and update their algorithms to filter it out.


2. Keep track of backlinks:
While running your backlink building campaign, it is important to keep track of your backlinks and of how the anchor text of each backlink incorporates keywords relating to your site.

Recommended resource