Residential IP Proxies

Proxy servers have, of course, been around for a long time. Over 14 years ago I spent a whole summer installing Microsoft ISA Server in a variety of businesses as the corporate world slowly decided that having internet access was worth the risk. It sounds incredible nowadays that the question would ever arise; after all, what did we all do in our lunch breaks? There are lots of different types of proxies, and you'll likely be using one at work or college if you have any web access.


The proxy in a corporate network is usually there to act as a central gateway for internet access. Not only does it apply some control, it's also easier to protect a single connected device from internet baddies than thousands of directly connected clients. Nowadays, though, proxies have other roles as well: millions of people use them to provide anonymity and to bypass the geo-blocks that exist all over the web.

The privacy side is fairly straightforward: if you route your web request through a proxy, there's no record of your address on the web server itself. To bypass the various geo-blocks, all you usually need is a proxy server in the same country as the resource you're trying to access. The concept is actually quite simple. To watch something on BBC iPlayer, for example, you need a UK IP address. Normally you'd have to be in the UK to have one of these, but if you route your connection through a proxy server, the website will see the address of the proxy rather than your real one. So as long as the proxy is in the UK, you'll get complete access.
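
To make that concrete, here's a minimal sketch in Python using the requests library. The proxy address and credentials are placeholders; any UK-based proxy you subscribe to would slot in the same way.

```python
import requests

# Hypothetical UK proxy endpoint - swap in your provider's details.
proxies = {
    "http": "http://user:pass@uk-proxy.example.com:8080",
    "https": "http://user:pass@uk-proxy.example.com:8080",
}

# The BBC's servers see the proxy's UK address rather than our own.
response = requests.get("https://www.bbc.co.uk/iplayer",
                        proxies=proxies, timeout=10)
print(response.status_code)
```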

Considering the BBC alone has about ten high-quality free-to-air channels, you can see why people pay a few pounds to receive this sort of service. In fact most people subscribe to one of the VPN services, which are more secure than proxies and usually a little faster.

So What About Residential IP Proxies?

Now, for watching a TV show online or accessing a blocked YouTube video, a single connection to a single server is normally sufficient. You don't need a vast list of addresses available to you, and as long as the VPN servers are not overloaded you should be fine.

However, many people require much more than this, primarily to run a variety of automated software. There are computer programs which do online research, post multiple adverts or social media posts, buy goods for resale and so on. People generally use these to make money, but there's an issue: all of them simulate multiple users and so require multiple connections to run properly.

Take for example a program called a sneaker bot. These are programs which attempt to buy multiple pairs of limited-availability sneakers. The shoes are very difficult to obtain online, so these programs run attempt after attempt until they obtain a pair, then start again. However, if you do this from a single internet connection, the website will detect the bot and ban the address instantly. So to run properly the bot needs a selection of IP addresses it can rotate through to look natural; you'll sometimes see these referred to as sneaker proxies.
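
As an illustration, a rotation loop like the one below is roughly what these bots do under the hood. It's a simple sketch assuming a pre-bought pool of proxy endpoints (the addresses here are invented); commercial sneaker bots are obviously far more sophisticated.

```python
import itertools
import requests

# Hypothetical pool of residential proxy endpoints.
PROXY_POOL = [
    "http://user:pass@proxy1.example.com:8000",
    "http://user:pass@proxy2.example.com:8000",
    "http://user:pass@proxy3.example.com:8000",
]

def fetch_with_rotation(url, attempts=9):
    """Try the URL through a different proxy each time, cycling the pool."""
    for proxy in itertools.islice(itertools.cycle(PROXY_POOL), attempts):
        try:
            return requests.get(url,
                                proxies={"http": proxy, "https": proxy},
                                timeout=5)
        except requests.RequestException:
            continue  # address banned or proxy down - rotate to the next one
    return None
```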

There is another slight complication in that many websites now block any IP address that doesn't come from a home user, i.e. they block commercially classified IP addresses. So these addresses need to be classified as residential rather than commercial too. Residential addresses are much less likely to get blocked, hence most of these programs need multiple proxies with residential IP addresses to work properly.

The main issue is that residential IP addresses are much harder to obtain than commercial ones. They are normally only assigned by ISPs (internet service providers) to their home customers, and even then only individually. So as you can imagine, obtaining large numbers of these is very difficult, and it's therefore equally hard to buy residential IPs too. It is likely that over the next few years they will become increasingly valuable as more and more websites block access based on IP classification.

If you're looking for a decent source of residential proxies, there are a few companies who offer the service. Be careful of one option, the Luminati network, as these are actually addresses from home users who have installed a free VPN program on their computers. Most people are completely unaware that other people's traffic is being relayed through their internet connections, as the details are buried in the fine print of their agreements (which few read).

One of the oldest established companies offering residential IPs on their own dedicated hardware is Storm Proxies; you can find their site via the link below. They offer a wide range of options, including dedicated rotating proxies and residential backconnect proxies which allow access to thousands of different addresses.

Try the 48 Hour Trial of Storm Proxies 

Link to Storm Proxies

 

Introducing ARP – Address Resolution Protocol

One of the most important lower-layer protocols is ARP, the Address Resolution Protocol. It's a protocol you'll need some knowledge of when troubleshooting all sorts of network issues, from identifying latency problems to application issues affecting the network. It's an essential part of learning to understand your network at the packet level and being able to spot abnormal traffic.

The issue you can have with troubleshooting any network is identifying what's causing the problem and which devices are involved. For example, if you're investigating the network of a residential IP provider, you can focus on particular protocols and specific areas of the network. Invariably, central proxies can be difficult to troubleshoot, as most will carry (if not understand) all sorts of traffic and protocols. In addition, the servers will be creating a communication channel between completely different devices and even networks.

Both logical and physical addresses are used for communication on a network. Logical addresses permit communication among multiple networks and indirectly connected devices. Physical addresses handle communication on a single network segment between devices directly linked to each other through a switch. These two types of addressing must work together for communication to occur.

Consider a situation where you want to interact with a device on your network. This device might be a server of some kind or simply another workstation you need to share files with. The application you are using to initiate the communication is already aware of the IP address of the remote host (by means of DNS, addressed elsewhere), meaning the system has all it needs to build the layer 3 through 7 portions of the packet it wants to transmit.

The only piece of information it still requires is the layer 2 data link data: the MAC address of the intended host. MAC addresses are required because a switch interconnecting devices on a network uses a Content Addressable Memory (CAM) table, which lists the MAC addresses of the devices connected to each of its ports. When the switch receives traffic destined for a particular MAC address, it uses this table to know through which port to deliver the traffic. If the destination MAC address is unknown, the transmitting device first checks its cache; if the address is not there, it must be resolved through further communication on the network.

The resolution technique that TCP/IP networking (with IPv4) uses to resolve an IP address to a MAC address is called the Address Resolution Protocol (ARP), which is defined in RFC 826. The ARP resolution process uses only two packets: an ARP request and an ARP response.
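
If you want to watch this two-packet exchange yourself, the scapy packet library makes it easy to craft an ARP request and read the response. A rough sketch (the target IP is an example, and you'll typically need root privileges to send raw frames):

```python
# Requires scapy (pip install scapy) and usually root privileges.
from scapy.all import ARP, Ether, srp

target_ip = "192.168.1.10"  # example host on the local segment

# ARP request: broadcast "who has 192.168.1.10?" to every port on the switch.
packet = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=target_ip)

# ARP response: the owner of that IP replies with its MAC address.
answered, _ = srp(packet, timeout=2, verbose=False)
for _, reply in answered:
    print(f"{reply.psrc} is at {reply.hwsrc}")
```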

Protecting Your Network from DoS Attacks

Most network administrators who run web-facing servers spend a lot of their time defending, protecting and patching against network attacks. These can be extremely time-consuming to combat, and some of the worst to deal with are denial of service attacks. Although usually relatively primitive, these attacks are easy to orchestrate and very difficult to trace back to the originator. One of the biggest problems is that the attacker rarely needs a valid connection to its victim, which makes finding the source very difficult indeed.

A Denial of Service (DoS) attack is any type of attack which disrupts the operation of a computer so that genuine users can no longer access it. DoS attacks are achievable on most network equipment, including switches, hosting servers, firewalls, remote access servers, and just about every other network resource. A DoS attack can be specific to a service, as in an FTP attack, or target an entire machine. The forms of DoS are varied and wide-ranging, but they can be split into two distinct classes relevant to intrusion detection: resource depletion and malicious packet attacks.

Malicious packet DoS attacks work by sending abnormal traffic to a host in order to cause the service, or the host itself, to crash. Crafted packet DoS attacks occur when software is not correctly coded to handle uncommon or unusual traffic. Often out-of-specification traffic can cause software to behave unexpectedly and crash. Attackers can use crafted packet DoS attacks to bring down IDSs, even Snort. A specially crafted ICMP packet with a size of 1 was discovered to cause Snort v1.8.3 to core dump. That version of Snort did not properly define the minimum ICMP header size, which made the DoS possible.

These attacks will commonly be launched from hijacked computers; it's relatively easy to build up a large network of compromised machines, and there are also networks available for hire. These computers can obviously be traced, but the owners are usually unaware of the role their servers or PCs have played. Additionally, skilled attackers will use a network of proxies and VPNs hidden behind residential IP address providers such as those described in this post.

Along with out-of-spec traffic, malicious packets can contain payloads which cause a system to crash. A packet's payload is taken as input into a service; if the input is not properly validated, the application can be DoSed. The Microsoft FTP DoS attack demonstrates the wide variety of DoS attacks available to black hats in the wild. The first step in the attack is to open a genuine FTP connection. The attacker would then issue a command containing a wildcard sequence (such as * or ?). Within the FTP server, the feature that handles wildcard patterns in FTP commands does not allocate sufficient memory when performing pattern matching, so a command incorporating a wildcard pattern can cause the FTP service to crash. This DoS and the Snort ICMP DoS are two illustrations of the many thousands of possible DoS attacks out there.

The other way to deny service is resource depletion. A resource depletion DoS attack works by saturating a service with so much normal traffic that legitimate users cannot access it. An attacker flooding a service with regular traffic can exhaust finite resources such as bandwidth, memory, and processor cycles. A classic memory exhaustion DoS is a SYN flood, which takes advantage of the TCP three-way handshake. The handshake starts with the client sending a TCP SYN packet. The host then sends a SYN ACK in response. The handshake is completed when the client responds with an ACK. If the host never receives that final ACK, it sits idle and waits with the session open, and each open session consumes a certain amount of memory. If enough three-way handshakes are started, the host consumes all available memory waiting for ACKs. The traffic generated by a SYN flood is normal in appearance, so most servers today are configured to hold only a limited number of half-open TCP connections. Another classic resource exhaustion attack is the Smurf attack.
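
One cheap way to spot a SYN flood in progress on a Linux host is to count sockets stuck waiting for that final ACK. Here's a rough sketch that reads /proc/net/tcp directly; the threshold is purely illustrative, and proper monitoring belongs in your IDS.

```python
SYN_RECV = "03"  # TCP state code for a half-open connection in /proc/net/tcp

def half_open_count(path="/proc/net/tcp"):
    """Count sockets waiting on the final ACK of the three-way handshake."""
    with open(path) as f:
        next(f)  # skip the header row
        return sum(1 for line in f if line.split()[3] == SYN_RECV)

count = half_open_count()
if count > 100:  # arbitrary example threshold
    print(f"Warning: {count} half-open connections - possible SYN flood")
else:
    print(f"{count} half-open connections - looks normal")
```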

A Smurf attack works by exploiting open network broadcast addresses. A broadcast address forwards all packets to every host on the destination subnet, and every host on that subnet responds to the source address specified in the traffic. An attacker transmits a stream of ICMP echo requests (pings) to a broadcast address, which can amplify a single ICMP echo request up to 250 times. In addition, the attacker spoofs the source address so that the target receives all the ICMP echo reply traffic. An attacker with a 128 Kb/s DSL connection can therefore produce a flood of roughly 128 Kb/s × 250 = 32,000 Kb/s, i.e. a 32 Mb/s Smurf flood. DoS attacks commonly take advantage of spoofed IP addresses because the attack succeeds even if the reply is misdirected. The attacker requires no reply, and in cases like the Smurf attack wants at all costs to avoid one. This can make DoS attacks difficult to defend against, and even tougher to trace.

Further Reference: http://bbciplayerabroad.co.uk/how-do-i-get-bbc-iplayer-in-france/

Proxy – Access Control Methods

When you think initially about access control on a standard proxy, one of the most obvious options is the traditional username and password. Indeed, access control by user authentication is one of the most popular methods, if only because it's generally one of the simplest to implement. Not only does it use readily available information for authentication, it also fits neatly into most corporate networks, which generally run on Windows or Linux platforms. All common operating systems support user authentication as standard, normally via a variety of protocols.

Access control based on the username and group is a commonly deployed feature of proxies. It requires users to authenticate themselves to the proxy server before allowing the request to pass. This way, the proxy can associate a user identity with the request and apply different restrictions based on the user. The proxy will also log the username in its access log, allowing logs to be analyzed for user-specific statistics, such as how much bandwidth was consumed by each user. This can be vital in the world of high-traffic multimedia applications, where a few users treating your remote access server as a handy BBC VPN service can bring a network to its knees.

Authentication

There are several methods of authentication. With HTTP, web servers support Basic authentication, and sometimes also Digest authentication (see HTTP Authentication on page 54). With HTTPS, or rather with any SSL-enhanced protocol, certificate-based authentication is also possible. However, current proxy servers and clients do not yet support HTTPS communication to proxies and are therefore unable to perform certificate-based authentication.

This shortcoming will surely be resolved soon.

Groups

Most proxy servers provide a feature for grouping a set of users under a single group name. This allows easy administration of large numbers of users through logical groups such as admin, engineering, marketing, sales, and so on. It is also useful in multinational organisations where individuals may need to authenticate in different countries using global user accounts and groups. So if a UK-based salesman were travelling in continental Europe, he could use his UK account to access a French proxy and use local resources.

ACCESS CONTROL BY CLIENT HOST ADDRESS

An almost universally used access control feature is limiting requests based on the source host address. This restriction may be applied by the IP address of the incoming request, or by the name of the requesting host. IP address restrictions can often be specified with wildcards covering entire network subnets, such as 112.113.123.* Similarly, wildcards can be used to specify entire domains: *.yourwebsite.com

Access control based on the requesting host address should always be performed to limit the source of requests to the intended user base.
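
Putting the two controls together, the logic inside a proxy looks something like the sketch below. The wildcard patterns mirror the examples above, while the user table and addresses are invented for illustration; a real proxy would check credentials against a directory service rather than a dictionary.

```python
from fnmatch import fnmatch

ALLOWED_PATTERNS = ["112.113.123.*", "*.yourwebsite.com"]
USERS = {"alice": "s3cret"}  # illustrative only - use your directory service

def host_allowed(client):
    """Match the client IP or hostname against the configured wildcards."""
    return any(fnmatch(client, pattern) for pattern in ALLOWED_PATTERNS)

def authenticated(username, password):
    return USERS.get(username) == password

def allow_request(client, username, password):
    # Host filtering runs first; authentication only matters for allowed hosts.
    return host_allowed(client) and authenticated(username, password)

print(allow_request("112.113.123.45", "alice", "s3cret"))  # True
print(allow_request("10.0.0.9", "alice", "s3cret"))        # False
```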

So what’s a Digital Certificate?

We've probably all seen those simple diagrams where an electronic signature authenticates the key pair used to create the signature. For electronic commerce, authenticating a key pair might not be adequate. For business transactions, each key pair needs to be closely bound to the consumer that owns it. A digital certificate is a credential that binds a key pair to the entity that owns the key pair. Digital certificates are issued by certification authorities, and therefore we trust the binding asserted by the certificate.

A digital signature is fine for verifying e-mail, but stronger verification methods are needed to associate an individual (as in the demonstration in our earlier post, where we used one to allow access to an app for watching BBC News abroad) with the binary bits on the network that purport to "belong" to Tom Smith. For electronic commerce to work, the association has to be of a strength that is legally binding. When Tom Smith has a digital certificate to advertise to the world at large, he is in possession of something which can carry more trust than the "seal" made by his own digital signature.

You might trust his digital signature, but what if some other trusted authority also vouched for Tom Smith? Wouldn't you then trust Tom Smith a little more? A digital certificate is issued by an organization that has a reputation to defend. This organization, known as the certificate authority (CA), may be Tom's employer, an independent organization, or the government. The CA will take measures to establish some facts about Tom Smith before issuing a certificate for him.

The certificate will normally hold Tom’s name, his public key number, the serial number of the certificate itself, and validity dates (issue and expiry). It’ll also bear the name of the issuing CA. The whole certificate is digitally signed by the CA’s own private key.
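
You can see all of those fields for yourself with Python's cryptography package. A quick sketch, assuming a PEM-encoded certificate file (the filename is a placeholder):

```python
# Requires the 'cryptography' package (pip install cryptography).
from cryptography import x509

# Load a PEM-encoded certificate from disk (placeholder filename).
with open("tom_smith.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject:", cert.subject.rfc4514_string())   # Tom's name
print("Issuer: ", cert.issuer.rfc4514_string())    # the CA that signed it
print("Serial: ", cert.serial_number)
print("Valid:  ", cert.not_valid_before, "to", cert.not_valid_after)
```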

Finally we've achieved a mechanism which can be used to allow individuals who have no previous relationship to establish each other's identity and participate in the legal transactions of electronic commerce. It's certainly more efficient and secure than something like geo-location, which simply infers identity from your location. For example, a website might determine nationality from your network address, e.g. a British IP address being needed to access the BBC online.

Certificates, if delivered correctly, inspire trust among Internet traders. It's not, however, as easy as it might sound. Certificates expire, go missing, are issued to the wrong person, or have to be revoked because the detail held on the certificate is wrong (perhaps the public key was compromised), and this leads to a large certificate management effort, or even a campaign.

The X.509 v3 certificate format is a standard used for public key certificates and is broadly used by Internet security protocols (like S-HTTP). Based on X.509 v3, digital certificates are increasingly used as electronic credentials for identification, non-repudiation, and even authorization when making payments and conducting other business transactions on the Internet or corporate intranets.

Just as in our credit card system of today, where millions of credit card numbers issued by any bank in the world are electronically confirmed, so the use of digital certificates will demand a clearing-house network for certificate confirmation of a comparable scale.

Single proxies or proxy arrays?

If you're working in a small business or network then this issue will probably never arise. However, with the growth of the internet and web-enabled devices and clients, it's an issue that will almost certainly affect most network administrators: do we just keep adding an extra proxy to expand capacity and bandwidth, or should we install an array?

The solution can depend on a variety of external factors. For example, if the corporation is concentrated in a single location, a single level of proxies is a better solution. This reduces latency, as there's only one additional hop added by the proxies, as opposed to two or more with tree-structured proxy hierarchies.

Although the general rule is to have one proxy server for every 5000 (possible, not simultaneous) users, it doesn't automatically follow that a company with 9000 users should have three departmental proxies chained to a main proxy.

Instead, the three proxies might be installed in parallel, using the Cache Array Routing Protocol (CARP) or another hash-based proxy selection mechanism. Larger corporations with in-house programming skills may have the resources to create custom solutions tailored to a specific environment, perhaps one incorporating remote VPN access to the network too. For example, many larger environments have different levels of security in place and various zones which need to be isolated; generic 'serve all' proxies can be a significant security issue in these environments.

This approach can also combine multiple physical proxy caches into a single logical one. In general, such clustering of proxies is recommended, as it increases the effective cache size and eliminates redundancy between individual proxy caches. Three proxies, each with a 4 gigabyte cache, would give an effective 12 gigabytes of cache when set up in parallel, as opposed to only about 4 GB if used individually.
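
A hash-based selection scheme like CARP is simple to sketch: score each array member against the requested URL and always pick the highest, so a given URL consistently lands on the same cache. This is a simplified illustration rather than the exact CARP algorithm, and the proxy names are placeholders.

```python
import hashlib

PROXY_ARRAY = ["proxy1.example.com", "proxy2.example.com", "proxy3.example.com"]

def select_proxy(url):
    """Pick the array member whose combined hash with the URL scores highest."""
    def score(proxy):
        digest = hashlib.md5(f"{proxy}:{url}".encode()).digest()
        return int.from_bytes(digest[:8], "big")
    return max(PROXY_ARRAY, key=score)

# The same URL always maps to the same cache, so the three 4 GB caches
# behave as one effective 12 GB cache with no duplicated objects.
print(select_proxy("http://example.com/video.mp4"))
```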

Generally, some degree of parallelization of proxies into arrays is always desirable. Nevertheless, the network layout might dictate that departmental proxies be used. That is, it may not be feasible to have all of the traffic originating from the entire company go through one array of proxies: it can turn the entire array into an I/O bottleneck, even when the individual proxies of the array are in separate subnets. The load created by the users can be so high that the subnets leading to the proxies may choke. To alleviate this, some departmental proxies need to be deployed closer to the end users, so that a portion of the traffic they create never reaches the main proxy array.

Failover? Since proxies are a centralized point of traffic, it's vitally important that there is a system in place for failover. If a proxy goes down, users instantly lose their access to the internet. What's more, many important applications may rely on permanent internet access to keep running; they might need access to central database systems or perhaps frequent updates and security patches. In any case, internet access is often much more crucial than simply the admin office being able to use Amazon, surf UK TV abroad or check the TV schedules online.

Failover might be achieved in various ways. There are (relatively expensive) hardware solutions which transparently switch to a hot standby system if the primary system goes down. You can usually choose between configuration and restore scenarios, and there's the option of investing in residential IP proxies too.

Nevertheless, proxy autoconfiguration and CARP provide more cost-effective failover support. At the time of this writing, there are a couple of areas in client failover support which could be improved. Users tend to notice an intermediate proxy server going down through fairly long delays, and possibly error messages. A proper proxy backup system should be virtually seamless and provide similar levels of speed and bandwidth to the primary system.
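
At its crudest, client-side failover is just "try the primary, then the standby". A minimal Python sketch with invented endpoints shows the idea; notice the user still eats a timeout delay before the switch happens, which is exactly the weakness described above.

```python
import requests

PROXIES = [
    "http://primary-proxy.example.com:3128",   # hot primary
    "http://standby-proxy.example.com:3128",   # standby
]

def fetch(url):
    """Try each proxy in order, falling through to the standby on failure."""
    last_error = None
    for proxy in PROXIES:
        try:
            return requests.get(url, proxies={"http": proxy, "https": proxy},
                                timeout=5)
        except requests.RequestException as err:
            last_error = err  # proxy unreachable - fall through to the next
    raise ConnectionError(f"all proxies failed: {last_error}")
```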

Code Signing – How it Works

How can users and computers trust all this random software which appears on large public networks? I am of course referring to the internet and the requirement most of us have to download and run software or apps on a routine basis. How can we trust that this is legitimate software and not some shell of a program just designed to infect our PC or steal our data? After all, even if we avoid most software, everyone needs to install driver updates and security patches.

The solution generally involves something called code signing, which allows companies to assure the quality and content of any file released over the internet. The software is signed with a certificate, and as long as you trust the certificate and its issuer, you should be happy to install the associated software. Code signing is used by most major distributors to ensure the quality of software released online.

Code Signing – the Basics
Code signing simply adds a small digital signature to a program: an executable file, an ActiveX control, a DLL (dynamic link library), or even a simple script or Java applet. The crucial fact is that this signature seeks to protect the user of the software in two ways:

The digital signature identifies the publisher, ensuring you know exactly who wrote the program before you install it.

The digital signature allows you to determine whether the code you are about to install is the same as what was released. It also helps to identify what changes, if any, have been made subsequently.
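
Under the hood, both checks come down to verifying a signature over the file with the publisher's public key. Here's a hedged sketch using Python's cryptography package; the filenames and the RSA/SHA-256 choice are illustrative rather than what any particular vendor uses.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Publisher's public key and a detached signature file (placeholder names).
with open("publisher_pub.pem", "rb") as f:
    public_key = serialization.load_pem_public_key(f.read())
with open("installer.exe", "rb") as f:
    program = f.read()
with open("installer.exe.sig", "rb") as f:
    signature = f.read()

try:
    # Raises InvalidSignature if the file differs from what was signed.
    public_key.verify(signature, program, padding.PKCS1v15(), hashes.SHA256())
    print("Signature valid: file matches what the publisher released")
except InvalidSignature:
    print("Signature FAILED: file altered or not from this publisher")
```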

Obviously, if the application is aware of code signing, this makes it even simpler to use and more secure. These programs can be configured to interact with signed or unsigned software depending on particular circumstances. One simple example of this is the security zones defined in Internet Explorer. They can be configured to control how each application interacts depending on which zone it is in. There can be different rules for 'signed' and 'unsigned' applications, with obviously more rights assigned to the 'signed' applications.

In secure environments you can assume that any ‘unsigned’ application is potentially dangerous and apply restrictions accordingly. Most web browsers have the ability to determine the difference between these applications and assign security rights depending on the status. It should be noted that these will be applied through any sort of connection or access, even a connection from a live VPN to watch the BBC!

This is not restricted to applications that operate through a browser; you can assign and control the activity of signed and unsigned applications in other areas too. Take for instance device drivers: it is arguably even more important that these are validated before being installed. You can define specific GPO settings in a Windows environment to control the operation and installation of a device driver based on these criteria. These can also be filtered under a few conditions, for example specifying the proxy relating to residential IP addresses.

As well as installation, it can control how Windows interacts with these drivers too, although generally for most networks you should not allow installation of an unsigned driver. This is not always possible though; sometimes an application or specialised hardware will need device drivers where the company hasn't been able to sign the code satisfactorily. In these instances you should think carefully before installing, and consider the source too. For example, a download from a reputable site using a high-anonymity proxy to protect your identity might be safer than a random download from an insecure site, but there is still a risk.

What Is a VPN?

The remote server would accept the request, then authenticate the client through something like a username and password. The tunnel would then be established and used to transfer data between the client and server.

If you want to emulate a point-to-point link, the data must be wrapped with a header; this is normally called encapsulation. The header provides the essential routing information which enables the data to traverse the public network and reach its intended endpoint; without it, the data would never arrive. To keep the link private on this open network, all the data is normally encrypted, ensuring confidentiality: packets intercepted on the shared or public network are indecipherable without the encryption keys. The link in which the private data is encapsulated and encrypted is known as a VPN connection.
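
Encapsulation is easy to picture in code: the tunnel simply prepends its own header to each (already encrypted) payload before sending it across the public network. The header layout below is invented purely for illustration; real tunnelling protocols such as PPTP or L2TP define their own formats.

```python
import struct

def encapsulate(payload, session_id, seq):
    """Prepend an invented 8-byte tunnel header: session ID + sequence number."""
    header = struct.pack("!II", session_id, seq)
    return header + payload

def decapsulate(packet):
    """Strip the header off again at the far end of the tunnel."""
    session_id, seq = struct.unpack("!II", packet[:8])
    return session_id, seq, packet[8:]

frame = encapsulate(b"encrypted application data", session_id=42, seq=1)
print(decapsulate(frame))
```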

One of the most important uses of remote access VPN connections is that they allow workers to connect back to their office or home using the shared infrastructure of a public network such as the internet. From the user's point of view, the VPN establishes an invisible connection between the client and the organisation's servers. There is normally no need to specify any aspects of the shared network as long as it is capable of transporting traffic; the VPN tunnel handles everything else. This does mean it's very difficult to block these VPN connections, as the BBC is discovering.

Site-to-site connections, by contrast, are also known as router-to-router connections, established between two fixed points. They are normally set up between distinct offices, again using the public network of the internet. The link operates in a similar way to a dedicated wide area network link, but at a fraction of the cost of a dedicated line, which is why many companies increasingly use them to establish fixed connections without the expense of WAN links. It should be noted that these VPN connections operate over the data link layer of the OSI model.

One of the problems many network administrators find is that users on their networks can set up their own VPN connections. These can be very difficult to detect and allow direct tunnels into a corporate network, especially as they are often used for trivial purposes such as obtaining an IP address for Netflix. Needless to say, having users stream encrypted video to their desktops is not good for network performance or security.

Remember, a site-to-site connection establishes a link between two distinct private networks. The VPN server ensures that a reliable route is always available between the two VPN endpoints. One of the routers takes the role of the VPN client by requesting the connection; the second server authenticates and then reciprocates the request so that the tunnel is authenticated at each end. In these site-to-site connections, the packets sent across the routers will typically not be created on the routers themselves but by clients connected to these respective devices.

 

Using Reverse Proxies in your Environment

Many IT administrators use proxies extensively in their networks; however, the concept of reverse proxying is slightly less common. So what is a reverse proxy? Well, it refers to a setup where a proxy server like this is run in such a way that it appears to clients just like a normal web server.

Specifically, the client connects directly to the proxy, considering it to be the final destination, i.e. the web server itself; clients are not aware that their requests may be relayed further to another server, possibly even an additional proxy server. These 'reverse proxy servers' are also often referred to as gateways, although that term can have different meanings too, so to avoid confusion we'll avoid it in this article.

In reality the word 'reverse' refers to the backward role of the proxy server. In a standard configuration, the server acts as a proxy for the client: any request made by the proxy is made on behalf of a received client request. This is not the case in the 'reverse' scenario, because there the server acts as a proxy for the web server and not the client. The distinction can look quite confusing, as in effect the proxy forwards and receives requests from both client and server, but it is important. You can read RFC 3040 for further information on this branch of internet replication and caching.

A standard proxy is pretty much dedicated to the client's needs: all configured clients forward their requests for web pages to the proxy server. In a standard network architecture the proxies normally sit fairly close to the clients in order to reduce latency and network traffic. These proxies are also normally run by the organisations themselves, although some ISPs will offer the service to larger clients.

A reverse proxy, by contrast, represents one or a small number of origin servers. You cannot normally access arbitrary servers through a reverse proxy, because it has to be configured to access specific web servers. Often these servers need to be highly available, and the caching aspect is important; a large organisation like Netflix would probably have specific IP addresses (read this) pointing at reverse proxies. The list of servers that are accessible should always be available from the reverse proxy server itself. A reverse proxy will normally be used by all clients to access certain web resources; indeed, access may be completely blocked by any other route.

Obviously in this scenario it is usual for the reverse proxy to be both controlled and administered by the owner of the origin web server. This is because these servers are used for two primary purposes: to replicate content across a wide geographic area, and to replicate content for load balancing. In some scenarios they are also used to add an extra layer of security and authentication in front of a secure web server.
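
To make the 'backward role' concrete, here is a toy reverse proxy in Python. Clients talk to it as if it were the web server, while it quietly relays each request to a fixed origin (a placeholder address here). Production deployments would use something like nginx or Squid in accelerator mode instead.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

ORIGIN = "http://origin-server.example.com"  # the only reachable origin

class ReverseProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Relay the client's path to the fixed origin and return the answer;
        # the client never learns where the content really came from.
        with urllib.request.urlopen(ORIGIN + self.path) as upstream:
            body = upstream.read()
            status = upstream.status
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), ReverseProxy).serve_forever()
```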

 

Do I Need a Residential IP or a Datacenter Address?

For many people there is a very strong requirement to mask their true identity and location online. It might be for privacy reasons, perhaps to keep safe, or you simply don't want anyone logging everything you do online. There are other reasons too: using multiple accounts on websites, IP bans for whatever reason, or simple region locking (no, you can't watch Hulu on your holidays in Europe). The solution usually now revolves around hiding your IP address using a VPN or proxy as a minimum.

Residential IP Address

Yet the choice doesn't end there; plain proxies are pretty much useless now for privacy and security. They're easily detected when you log on and, to be honest, of very little use anymore. VPN services are much better, yet even here it's becoming more complicated to access media sites, for example. The problem is that it's no longer the technology that is the issue but the originating IP address. Addresses are classified into two distinct groups, residential and commercial, which can both be detected by most websites.

 


A residential IP address is one that appears to come from a domestic account assigned by an ISP. It's by far the most discreet and secure type of address to use if you want to keep completely private. Unfortunately these IP addresses are difficult to obtain in any numbers and also tend to be very expensive. The bottom line is that the majority of people hiding their true IP address, for whatever reason, do it using commercial addresses and not residential ones. It is possible to buy residential IPs though, and indeed there are a few companies with residential proxies for sale.

Most security systems can easily detect whether you are using a commercial or residential VPN service; how they use that information is a little less clear. At the bottom of the pile for security and privacy are the datacentre proxy servers, which add no encryption layer and are tagged with commercial IP addresses.

Do I really need a residential VPN IP address? Well, that depends on what you are trying to achieve. For running multiple accounts on things like Craigslist and iTunes, residential is best. If you want to access the US version of Netflix like this, then you'll definitely need a residential address: Netflix last year filtered out all commercial addresses, which means that very few of the VPNs work anymore, and you can't watch Netflix at work either.

If you just want to mask your real IP address, a commercial VPN is normally enough. The security is fine and no one can detect your true location, although they can determine you're not a home user if they check. People who need to switch IPs for multiple accounts and dedicated tools will probably be best advised to investigate the residential IP options.

There are a few companies who supply cheap residential proxies which are relatively easy to access. However, the cost of residential IP addresses can get quite high if you require any significant number. The solution is to use something called residential backconnect proxies, which can rotate through thousands of residential IP addresses automatically.
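
From the client's side a backconnect proxy is refreshingly simple: you keep a single gateway endpoint and the provider rotates the residential exit address behind it. A quick sketch with an invented gateway, using a public IP-echo service to show the rotation:

```python
import requests

# One gateway endpoint; the provider rotates the exit IP behind it.
GATEWAY = "http://user:pass@backconnect.example.com:10000"
proxies = {"http": GATEWAY, "https": GATEWAY}

for i in range(3):
    # Each request can surface from a different residential address.
    ip = requests.get("https://api.ipify.org", proxies=proxies, timeout=10).text
    print(f"request {i + 1} appeared from {ip}")
```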