Specialist Proxy Roles

A few years ago there was little variety in proxy servers. Mostly they sat in server rooms, caching and relaying information for corporate and educational networks. The development of the proxy has run fairly parallel with the expansion of the internet: in those early days it was the primary gateway for accessing the web, the only device allowed through the corporate firewall to connect clients with the internet. In the last decade, however, this role has expanded considerably, and there are now proxies all over the internet performing all sorts of specialist functions. This article discusses one of those specialist roles: the oddly named sneaker proxies.

Now to 99% of the population this concept will sound a little unusual, but it does highlight the importance of proxies today. The term sneaker proxies does not refer to some incredibly stealthy proxy setup; it refers to the function these proxies perform. Before we discuss what they actually are, we need a little background.

This is all about current fashion, and more particularly the latest sneakers (referred to as trainers outside the USA). In my day, if you wanted the trendiest trainers you'd wait for their release and pop down to the sports store to buy them. Life is much more complex nowadays, and there is now a whole category of limited-edition sneakers that are in huge demand but extremely difficult to acquire. The manufacturer releases a limited quantity, and does so in a very particular way to maintain demand:

  • Manufacturer releases limited-edition sneakers to retailers
  • Middlemen usually snap them up
  • These are sold online to customers

This sounds simple, but unfortunately demand is incredibly high worldwide and the manufacturers release only a very small number of the sneakers. It's a crazy market, and it's extremely hard to get even a single pair if you play the game by the book. Even if you wait for notification and then immediately go to one of these sneaker websites, you'd have to be incredibly lucky to get a single pair. It's so challenging that an entire sub-market, with supporting technology, has grown up around it. Here's what you need, and why sneaker proxies are an important part of this battle.
If you just play the game straight, it's pretty unlikely you're going to get any of these rare sneaker releases. If you're desperate for the latest fashion, or simply want to make a few bucks selling them on at a profit, there are ways to significantly improve your chances of getting several pairs. These releases are normally sold online by various specialist sneaker retailers, but simply waiting to click and buy isn't going to work.
So what do you need? How can you get a few, or even lots, of these shoes? Ideally there are three components which together all but guarantee at least a couple of pairs.

A dedicated server: if you're just after a few pairs for yourself, this step is probably not essential; if you're in it as a business and want to maximise your return, it's a smart investment. Sneaker servers are simply dedicated web servers, ideally located close to the datacentres of the companies like Nike, Supreme, Footsites and Shopify who supply these sneakers. You use them to host the next stage, the bots and automated software described below.

Sneaker bots: there are a lot of these, and it's best to do your research on what's working best at any point in time. Some of the bots work best with particular websites, but they all operate in a similar way. They are automated programs which keep requesting specified sneakers without a human needing to sit there for hours pressing the buy button. You can configure the software to simulate human behaviour with boundless patience, requesting the sneakers day and night when they're released. You can run these bots on a PC or laptop with a fast connection, although they're more effective on dedicated servers.
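As a rough illustration of the polling behaviour described above, here is a minimal sketch in Python. The `check_stock` callable and the retry parameters are assumptions for illustration only; a real bot would talk to a retailer's site and mimic a browser far more carefully.

```python
import random
import time

def poll_for_stock(check_stock, max_attempts=5, base_delay=1.0):
    """Repeatedly call a stock-checking function until it reports
    availability. Returns the attempt number on success, or None
    if stock never appeared within max_attempts."""
    for attempt in range(1, max_attempts + 1):
        if check_stock():
            return attempt
        # Exponential backoff plus jitter, so the requests are not
        # perfectly periodic and look slightly less machine-like.
        time.sleep(base_delay * (2 ** (attempt - 1)) * random.uniform(0.8, 1.2))
    return None
```

In practice the delay, jitter and attempt limits are exactly the kind of settings the commercial bots let you tune per site.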

Sneaker Proxies
Now this is probably the most important, and most frequently forgotten, step if you're aiming to become a sneaker baron. Automated software is great for patiently trying to fill shopping baskets with the latest sneakers, but try it unprotected and the bots get banned pretty quickly. What happens is that the retail websites quickly identify these multiple applications because they all originate from the same IP address, either your server's or your computer's. As soon as that happens, and it will happen very quickly, they block the IP address and any request from it will be ignored. Game over, I'm afraid.

The Proxy is the Secret

If you don't get the proxy stage right, all the rest will be pointless expense and effort. So what makes a proper sneaker proxy? There are obviously tons of free proxies around on the internet, and free is always tempting. However, it's pointless using them, and indeed extremely dangerous.
Free proxies are a mixture of misconfigured servers, accidentally left open for people to jump on and use, and hacked or compromised servers deliberately exposed so that identity thieves can use them to steal usernames, accounts and passwords. Since at some point you will need to pay for these sneakers with a credit or debit card, routing your financial details through a free proxy is utter madness. Don't do it.

Even if you do happen to pick a safe proxy which some careless network administrator has left open, there's still little point. Free proxies are going to be slow, which means that however fast your computer or sneaker server is, your applications will run at a snail's pace. You're unlikely to be successful with a sluggish connection, and you'll frequently see the bot timing out. The second concern is that there is an essential property the proxy needs for you to succeed, and almost no free proxies have it: a residential IP address.

Many commercial sites are now aware of people using proxies, VPNs and residential IP services to bypass geoblocks or run automated software. They find it difficult to detect the programs themselves, but there's a simple method which blocks 90% of people who try: they ban connections from commercial IP addresses. Residential IP addresses are only allocated to home users by ISPs, so it's extremely difficult to obtain large numbers of them. Virtually all proxies and VPNs available for hire are assigned commercial IP addresses; these are not effective as sneaker proxies at all.
Sneaker proxies are different: they use residential IP addresses, which look just like home users and will be allowed access to virtually all sites. You still have to be careful with multiple connections, but the companies who sell these generally provide something called rotating backconnect configurations, which switch both configuration and IP address automatically. These simulate rotating proxies, which is much cheaper than buying dedicated residential proxies, which can get very expensive.
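The rotation idea can be sketched in a few lines of Python. The proxy addresses below are hypothetical placeholders (from the 203.0.113.0/24 documentation range); a real backconnect service would normally expose a single gateway and rotate the exit IP for you.

```python
from itertools import cycle

# Hypothetical pool of residential proxy endpoints (placeholders).
PROXY_POOL = [
    "http://203.0.113.10:8080",
    "http://203.0.113.11:8080",
    "http://203.0.113.12:8080",
]

_rotation = cycle(PROXY_POOL)

def next_proxy():
    """Return the next proxy in round-robin order, so that
    successive requests leave from different IP addresses."""
    return next(_rotation)
```

Each bot request would then be dispatched via `next_proxy()`, spreading the traffic so no single address attracts a ban.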

Testing Phases: Static Analysis

Every programmer thinks their code is perfect; well, perhaps that's not entirely true. What I mean is that no programmer thanks you for pointing out obvious flaws in their code if they can help it. However, that is the primary aim of the initial testing phases: to spot major and obvious flaws as early as possible. It's a simple and essential part of the process, and arguably one of the most important phases of the test schedule.

Just like reviews, static analysis searches for defects without executing the code. Unlike reviews, however, static analysis is carried out once the code has actually been written. Its goal is to find flaws in software source code and software models. Source code is any sequence of statements written in a human-readable programming language which can then be converted to equivalent computer-executable code; this is usually produced by the programmer. A software model is an image of the final solution developed using techniques such as the Unified Modeling Language (UML); this is commonly created by a software designer.

Throughout the testing process the core code should be stored centrally, with access strictly limited. If alterations are needed to the core code, they should be made as part of the testing schedule. It is vital that these changes are tracked, and you should also limit remote access to this store for security reasons. If remote access is essential, it should be over a secure connection such as a VPN.

Static analysis can find issues that are difficult to find during test execution by analysing the program code itself, e.g. as control flow graphs (how control passes between modules) and data flows (ensuring data is defined and used correctly). The value of static analysis is:

Early discovery of issues, before test execution. Just as with reviews, the earlier an issue is found, the cheaper and easier it is to fix.

Early warning about questionable aspects of the code or design, through the computation of metrics such as a high complexity measure. If code is too complex it can be more vulnerable to error, depending on the attention programmers give it. If they know the code has to be complex, they are more likely to check and double-check that it is correct; however, if it is unexpectedly complex, there is a greater chance that it will contain a defect.
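As a toy illustration of metric-based static analysis, the sketch below computes a crude cyclomatic-style complexity score for Python source by counting branching constructs in the syntax tree, without ever running the code. Real analysers are far more sophisticated; the node list and scoring here are assumptions for illustration only.

```python
import ast

# Constructs treated as branch points for this crude metric.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try,
                ast.BoolOp, ast.ExceptHandler)

def complexity(source):
    """Return 1 plus the number of branching constructs found in
    the source: a rough stand-in for cyclomatic complexity."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))
```

A reviewer could flag any function scoring above an agreed threshold for closer inspection, exactly the early-warning role described above.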

Identification of defects not easily discovered by dynamic testing, such as non-compliance with development standards, and of dependencies and inconsistencies in software models, such as links or interfaces that were either incorrect or unknown before static analysis was carried out.

Improved maintainability of code and design. By performing static analysis, defects are removed that would otherwise have increased the volume of maintenance required after 'go live'. It can also identify complex code which, if fixed, will make the code easier to understand and consequently easier to maintain.

Prevention of defects. By identifying a defect early in the life cycle, it is a great deal easier to determine why it existed in the first place (root cause analysis) than during test execution, providing information on process improvements that could prevent the same defect appearing again.

Further Reading: http://residentialip.net/

Proxy – Access Control Methods

When you first think about access control for a standard proxy, one of the most obvious options is the traditional username and password. Indeed, access control by user authentication is one of the most popular methods, if only because it's generally one of the simplest to implement. Not only does it use readily available information for authentication, it also fits neatly into most corporate networks, which generally run on Windows or Linux platforms. All common operating systems support user authentication as standard, normally via a variety of protocols.

Access control based on username and group is a commonly deployed feature of proxies. It requires users to authenticate themselves to the proxy server before their requests are allowed to pass. This way the proxy can associate a user identity with each request and apply different restrictions per user. The proxy will also log the username in its access log, allowing logs to be analysed for user-specific statistics, such as how much bandwidth was consumed by each user. This can be vital in the world of high-traffic multimedia applications; a few users treating your remote access server as a handy BBC VPN service can bring a network to its knees.
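The per-user bandwidth statistics mentioned above can be derived from the access log with a few lines of code. This sketch assumes a simplified log format of `user bytes` per line; real proxy logs (Squid's, for example) carry many more fields, so the field positions here are an assumption for illustration.

```python
from collections import defaultdict

def bandwidth_per_user(log_lines):
    """Sum bytes transferred per authenticated user from
    simplified access-log lines of the form 'user bytes'."""
    totals = defaultdict(int)
    for line in log_lines:
        user, nbytes = line.split()[:2]
        totals[user] += int(nbytes)
    return dict(totals)
```

Sorting the resulting totals immediately shows which accounts are consuming the most bandwidth.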

Authentication. There are several methods of authentication. With HTTP, web servers support Basic authentication, and sometimes also Digest authentication (see HTTP Authentication on page 54). With HTTPS, or rather with any SSL-enhanced protocol, certificate-based authentication is also possible. However, current proxy servers and clients do not yet support HTTPS communication to proxies and are therefore unable to perform certificate-based authentication.

This shortcoming will surely be resolved soon. Groups. Most proxy servers provide a feature for grouping a set of users under a single group name. This allows easy administration of large numbers of users through logical groups such as admin, engineering, marketing, sales, and so on. It is also useful in multinational organisations where individuals may need to authenticate in different countries using global user accounts and groups: a UK-based salesman travelling in continental Europe could use his UK account to access a French proxy and use local resources.

ACCESS CONTROL BY CLIENT HOST ADDRESS. An almost universally used access control feature is limiting requests based on the source host address. This restriction may be applied by the IP address of the incoming request or by the name of the requesting host. IP address restrictions can often be specified with wildcards covering entire network subnets, such as 112.113.123.* Similarly, wildcards can be used to specify entire domains: *.yourwebsite.com
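Wildcard matching of this kind can be sketched with Python's `fnmatch`. The two patterns below mirror the examples above and are placeholders for illustration, not a recommended policy.

```python
from fnmatch import fnmatch

# Placeholder ACL entries: a subnet wildcard and a domain wildcard.
ALLOWED_PATTERNS = ["112.113.123.*", "*.yourwebsite.com"]

def is_allowed(host):
    """Check a requesting host (IP address or hostname) against
    the wildcard ACL entries."""
    return any(fnmatch(host, pattern) for pattern in ALLOWED_PATTERNS)
```

A proxy would apply such a check before even considering user authentication, rejecting requests from outside the intended user base.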

Access control based on the requesting host address should always be performed to limit the source of requests to the intended user base.

Planning your Security Assessment

Starting a full security risk assessment in an organisation of any size can be extremely daunting if it's something you've never tried before. But before you get too involved in complicated charts, diagrams and long drawn-out forms and flowcharts, it's best to take a step back. There's a simple goal here: to assess and address the security risks in your organisation. It's presumably a subject you have some opinion and knowledge about, so stay focused and don't turn the exercise into something too complicated with little practical use.

Many people, when questioned as part of a risk assessment, will have a prepared answer: they will start to look at the nuts and bolts of the system. They'll give opinions on how this or that widget is weak, how someone could gain access to systems and documents, and so on. That's just a technical evaluation of the system, which might or might not be useful. Whether it's useful depends on the answer to an essential question, one the experienced security professional will have asked before answering the enquirer. If the system is not being used for its intended purpose, that's a completely different issue, although it obviously impacts security in certain instances.

For example, if company PCs are being used to stream video, or to reach normally inaccessible sites to watch ITV abroad whilst at work, this introduces additional risks. Not only could the integrity of the internal network be affected, but streaming large amounts of video will also affect connection speeds for everyone. This behaviour should certainly be flagged if encountered within the assessment, although it's not a primary function of the investigation.

The important question is: what do you mean by secure? Security is a comparative term; there is no absolute scale of security. The terms secure and security only make sense as attributes of something you consider valuable. Something that is somehow at risk needs to be secured. How much security does it need? That depends on its value and on the operational threat. How do you measure the operational threat? Now you're getting into the real questions, which will lead you to an understanding of what you actually mean by the term secure. Measuring and prioritising business risk matters because security is used to defend things of value.

In a business environment, things which have value are usually called assets. If assets are damaged or destroyed, you may suffer a business impact. A potential event through which you could suffer that harm or destruction is a threat. To prevent threats from crystallising into loss events with a business impact, you use a layer of protection to keep the threats away from your assets. Where the assets are badly protected, you have a vulnerability to the threat. To enhance security and reduce the vulnerability, you introduce security controls, which may be either technical or procedural.

The process of identifying commercial assets, recognising the threats, assessing the degree of business impact that could be suffered if the threats were to crystallise, and analysing the vulnerabilities is known as operational risk assessment. Implementing suitable controls to strike a balance between usability, security, cost and other business needs is called operational risk mitigation. Together, operational risk assessment and operational risk mitigation comprise what can be called operational risk management. Later chapters in this book examine operational risk management and will help you deal with actual incidents, such as people trying to watch the BBC abroad through your internal VPN server! The main thing you need to understand at this stage is that risk management is all about identifying and prioritising the risks through the risk assessment procedure, and applying degrees of control in line with those priorities.

Security and Performance – Monitoring User Activity

When analysing your server's overall performance and functionality, one of the key areas to consider is user activity. Looking for unusual user activity is a sensible way of identifying potential system problems or security issues. When a server log is full of unusual user activity, you can often use this information to track down the underlying issues very quickly. For example, by analysing your system logs you can often identify trends in authentication, security problems and application errors.

Monitoring user access to a system, for example, will allow you to determine usage trends such as utilisation peaks. These can cause many sorts of issues, from authentication problems to very specific application errors. All of this data will be stored in different logs depending on what systems you are using; certainly most operating systems record much of it by default.

Using system logs can be difficult, though, due to the huge amount of information in them. It is often hard to determine what is relevant to the health and security of your servers. Even benign behaviour can look suspicious to the untrained eye, and it is important to use tools to help filter the information into more readable forms.

For example, if you see a particular user having authentication problems every week or so, it is likely they are just having trouble remembering their password. However, if you see a user repeatedly failing authentication over a short period of time, it may indicate something else. For example, if the user is trying to access the external network through a German proxy server, there would be an authentication problem because the server would not be trusted.
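The distinction between a forgetful user and a suspicious burst of failures can be captured with a simple sliding-window check over failure timestamps. This is a sketch under assumed inputs (failure times in seconds); the window and threshold values are arbitrary illustrations, not recommendations.

```python
def suspicious_failures(timestamps, window_seconds=300, threshold=5):
    """Return True if the sorted failure timestamps (in seconds)
    contain `threshold` or more failures inside any window of
    `window_seconds` -- the short-burst pattern, as opposed to
    the occasional forgotten password."""
    timestamps = sorted(timestamps)
    start = 0
    for end in range(len(timestamps)):
        # Shrink the window until it spans at most window_seconds.
        while timestamps[end] - timestamps[start] > window_seconds:
            start += 1
        if end - start + 1 >= threshold:
            return True
    return False
```

Run over per-user failure events extracted from the logs, this flags the bursts worth investigating while ignoring the weekly forgotten password.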

Looking at issues like this can help identify user activity that leads to a security breach. Obviously it is important to be aware of your current security infrastructure in order to interpret the results in these logs correctly. Most operating systems, including Unix and Windows, allow you to configure reporting to record different levels of information, ranging from brief to verbose.

If you do set logs to record verbose information, it is advisable to use a program to help analyse it efficiently. There are many applications that do this, although some can be quite expensive. There are simpler and cheaper options, though; for example, Microsoft Log Parser is a free tool which allows you to run queries against event data in a variety of formats.

Log Parser is particularly useful for analysing security events, which are obviously a key priority for most IT departments in the current climate. These security and user authentication logs are the best way to determine whether any unusual activity is happening on your network. For example, anyone using a stealth VPN or IP cloaker will be very difficult to detect by looking at raw data from the wire. However, it is very likely that using an external server like this will throw up user authentication errors. Most networks restrict access to predetermined users or IP address ranges, and these errors can flag up such behaviour very quickly.


Code Signing – How it Works

How do you think users and computers can trust all the random software which appears on large public networks? I am of course referring to the internet, and the requirement most of us have to download and run software or apps on a routine basis. How can we trust that a download is legitimate software and not some shell of a program designed to infect our PC or steal our data? After all, even if we avoid most software, everyone needs to install driver updates and security patches.

The solution generally involves something called code signing, which allows companies to assure the quality and content of any file released over the internet. The software is signed with a certificate, and as long as you trust the certificate and its issuer, you should be happy to install the associated software. Code signing is used by most major distributors to ensure the quality of software released online.

Code Signing – the Basics
Code signing simply adds a small digital signature to a program: an executable file, an ActiveX control, a DLL (dynamic link library), or even a simple script or Java applet. The crucial point is that this signature seeks to protect the user of the software in two ways:

The digital signature identifies the publisher, ensuring you know exactly who wrote the program before you install it.

The digital signature allows you to determine whether the code you are about to install is the same as that which was released, and helps to identify what changes, if any, have been made subsequently.
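The integrity half of the story, checking that code is unchanged since release, rests on comparing cryptographic digests. The sketch below shows only that digest comparison, using SHA-256; real code signing additionally signs the digest with the publisher's private key and wraps it in a certificate, which this illustration omits.

```python
import hashlib

def file_digest(data: bytes) -> str:
    """SHA-256 digest of the code as distributed. In real code
    signing, this digest is what gets signed with the publisher's
    private key; comparing digests alone checks integrity, not
    publisher identity."""
    return hashlib.sha256(data).hexdigest()

def unchanged_since_release(released_digest: str, data: bytes) -> bool:
    """True only if the code matches the digest recorded at release."""
    return file_digest(data) == released_digest
```

Even a one-byte modification to the file produces a completely different digest, which is why a signature check reliably detects tampering.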

Obviously, if an application is aware of code signing, this makes it even simpler to use and more secure. Such programs can be configured to treat signed and unsigned software differently depending on the circumstances. A simple example is the security zones defined in Internet Explorer, which can be configured to control how each application behaves depending on what zone it is in. There can be different rules for 'signed' and 'unsigned' applications, with obviously more rights assigned to the 'signed' ones.

In secure environments you can assume that any 'unsigned' application is potentially dangerous and apply restrictions accordingly. Most web browsers can distinguish between the two and assign security rights depending on status. It should be noted that these rules apply through any sort of connection or access, even a connection from a live VPN used to watch the BBC!

This is not restricted to applications that operate through a browser; you can assign and control the activity of signed and unsigned applications in other areas too. Take device drivers: it is arguably even more important that these are validated before being installed. In a Windows environment you can define specific GPO settings to control the operation and installation of a device driver based on this criterion. These can also be filtered under certain conditions, for example by specifying a proxy relating to residential IP addresses.

As well as installation, code signing can control how Windows interacts with these drivers, although generally, for most networks, you should not allow installation of an unsigned driver at all. This is not always possible, though; sometimes an application or specialised piece of hardware will need device drivers where the company hasn't been able to sign the code satisfactorily. In these instances you should think carefully before installing, and consider the source too. A download from a reputable site, perhaps using a highly anonymous proxy to protect your identity, might be safer than a random download from an insecure site, but there is still a risk.

Preparing PKI in a Windows Active Directory Environment

If you're installing and implementing internet access for an internal Windows-based network, there are two important factors you should consider. Firstly, it's important to ensure that your perimeter is protected and access is only allowed through a single point. This might seem trivial, but it's actually crucial to ensuring that the network can be controlled. Any network in which thousands of individual clients access the internet directly, rather than through a proxy, is going to be almost impossible to protect.

The second aspect relates to overall client and server security: ensure that your Windows environment has Active Directory enabled. This will also allow you to implement the Microsoft Windows PKI. From Windows 2003 onwards this is already included, and PKI is preconfigured in the Windows 2003 schema whether you wish to implement it or not.

If you are considering using Windows PKI, remember that although Active Directory is a prerequisite for a straightforward installation, it does not require a particular domain functional level, or even a functioning forest, to operate. In fact, the only configuration required in later versions of Windows is a change to the Cert Publishers group, which is needed in any multi-domain environment. This group is pre-populated as a domain local group in each domain of an Active Directory forest by default.

This is how PKI is implemented: you can grant any enterprise-level certificate authority (CA) the right to publish certificates to any user object in the current forest, or to the Contact object in foreign forests. Remember to enable the relevant permissions by adding the CA's computer account to each domain's Cert Publishers group. This is essential, as the scope of this group has changed from a global group to a domain local group, which allows the group to include computer accounts from outside the domain. This means you can add computers and user groups for external access by including an external gateway. For example, if you wanted to proxy BBC streams and cache them, you could include the proxy server in this group in order to minimise authentication traffic.

You cannot currently deploy Windows Server enterprise CAs in non-Active Directory environments, because the certificate authority requires AD in order to store configuration information and publish certificates. You can install Windows Server PKI in a non-AD environment, but each CA in the PKI hierarchy must then be standalone. This is workable in smaller environments, but configuring communications can be a real challenge in large or distributed networks spanning many subnets. Ensuring that the right certificate authority is assigned across a multinational network is difficult without Active Directory. Remember, you may have clients and servers requesting authentication from different networks: in a UK company you might have a client desktop with an Irish IP address seeking authentication from a London-based standalone CA in a different domain.

 

Securing the Internal Network

Twenty years ago this wasn't really much of an issue: a simple network, a couple of file servers and, if you were lucky, an email system. Security was never much of a concern, which was just as well because often there wasn't much you could do anyway. If anyone remembers the forerunner of Microsoft Exchange, the Microsoft Mail post offices were installed in open shares, and if you started locking them down everything stopped working. You could make some minor security improvements, but above all you had to be careful not to leave anything sensitive in those open shares.

Of course, Unix, Ultrix and the forerunners of Windows NT all had reasonable levels of security, and you could apply decent access controls based on users, groups and domains without too much trouble. It was more the applications that were the issue; security in a digital environment was very much in its infancy. Nowadays, of course, everyone takes security much more seriously in this age of data protection, hackers, viruses and cybercriminal attacks. It's still a nightmare to lock down environments, though, and that's primarily due to the internet.

IT departments all over the world love the internet: solving issues and fixing problems is made a hundred times easier with a search engine to hand. That's one side of the coin, however; the other is that access to the internet makes configuration and security much more important and potentially more challenging. Imagine that every single desktop has the capacity to visit, download and distribute any number of malevolent files. A potential virus outbreak sits on everybody's desk, and when you look at some of the users, you could only be scared.

So what sort of methods do we have to minimise the potential chaos on our internal network? First of all, there's something not that technology-based: a document which details how people must use their computers, and especially the internet. Making sure that users are educated about the risks, both to the network and to their employment status, is probably the most important step you can take to reduce risk from outside sources. If they know they could get fired for downloading or streaming video from sites like the BBC via the company VPN, then they're much less likely to do it.

There's still a need to implement access control lists and secure resources, of course, but user compliance goes a long way. Principles like granting users the least privilege they need make sense in securing resources. You can lock down PCs, browsers and external access through Windows environments and GPOs (Group Policy Objects). Routing all internet access through central points is a sensible option, meaning you can both control and monitor internet traffic in both directions. This is also a useful way of applying a second layer of antivirus security, scanning content before it reaches the desktop.

Most secure environments also include other common-sense steps, such as not allowing users to plug their own hardware into the network. This sounds trivial, but a virus-ridden laptop installed on your internal network can effectively bypass your whole security infrastructure. You have no control over what that hardware is used for at home; its owner may be downloading torrents or browsing the dark web. Data security can also be improved by ensuring that no one copies or removes data using USB sticks and memory cards. There are security settings and applications which can manage these devices quite easily now, including Group Policy if you're running a Windows environment and have implemented Active Directory.


Issues on Blocking VPN Access from Networks

People love using VPNs for a variety of reasons, but if you're the administrator of any network they can be a real problem. Of course, the primary function of a VPN is security, and if users simply used the VPN to encrypt and secure their data then that would be fine. In reality, however, what you'll find is users connecting through a VPN in order to bypass blocks or access sites normally restricted by your network rules. Using a VPN service to watch UK TV is a common issue on our US/European network.

The problem is that these sites and activities are blocked for a reason. Having twenty people streaming the latest episode of 'Strictly' over the company's network uses the same bandwidth as about a hundred ordinary users simply working. It doesn't matter that the traffic is being carried over the VPN; it still uses our own bandwidth to deliver to the client. So it's hardly surprising that we need to restrict the use of these VPN clients and the issues they cause. Here's an example of what people can use these VPN services to do and the problems we can have in blocking them.

In one particular VPN service called Identity Cloaker, for example, there are lots of configuration options which can be used to hide the use of the service. Most of the recommended blocking measures rely on the standard footprints of a VPN service, but when a client is able to switch outgoing ports and create a non-standard configuration, it becomes much harder.

There is little in the data you can pick up on, so content filters are pretty much useless. The problem here is that most VPN traffic is encrypted so that even the destination address is hidden (although obviously not the IP address of the VPN server itself). It's simple to block web based proxies and VPN services by restricting access to their URLs, but these clients are much more difficult.
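The URL-based blocking that works against web proxies can be sketched as a simple domain blocklist check: the hostname of a requested URL is visible to the filter, so matching it (and its subdomains) against a blocklist is trivial. The domains below are placeholders for illustration.

```python
# Sketch of URL/domain-based blocking: easy for web proxies because the
# hostname is visible. Blocked domains here are placeholder examples.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"example-webproxy.com", "example-vpnsignup.net"}

def is_blocked(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Match the domain itself and any subdomain of it.
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(is_blocked("https://www.example-webproxy.com/start"))  # True
print(is_blocked("https://news.bbc.co.uk/"))                 # False
```

This is exactly what a VPN client defeats: once the traffic is inside an encrypted tunnel, there is no hostname for the filter to inspect.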

Most services also have the option to switch between hundreds of different IP addresses, often doing so automatically. This defeats another way you can identify a simple proxy or VPN: looking for consistent traffic patterns and single IP addresses. Filtering access to a VPN service which automatically switches server and IP address every few minutes is extremely difficult. Unless the user does something with a distinct pattern and very heavy usage, like anonymous torrenting, any footprint is almost impossible to detect.
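One hedged sketch of the detection problem: given a connection log for a session, count how often the remote endpoint's IP changes within a short window. A session hopping between several endpoints in a few minutes looks like an auto-rotating VPN, but the log format and thresholds here are assumptions for illustration, not a real detection product.

```python
# Sketch of flagging a session whose remote endpoint rotates IPs rapidly.
# Log format and the 10-minute window are assumptions for illustration.
from datetime import datetime, timedelta

def ip_switches(events, window=timedelta(minutes=10)):
    """Count remote-IP changes that occur within `window` of the
    previous event. events: list of (timestamp, remote_ip), time-sorted."""
    switches = 0
    for (t_prev, ip_prev), (t_cur, ip_cur) in zip(events, events[1:]):
        if ip_cur != ip_prev and t_cur - t_prev <= window:
            switches += 1
    return switches

log = [
    (datetime(2023, 1, 1, 9, 0), "203.0.113.10"),
    (datetime(2023, 1, 1, 9, 3), "198.51.100.7"),
    (datetime(2023, 1, 1, 9, 6), "192.0.2.44"),
]
# Three different endpoints in six minutes: suspiciously VPN-like.
print(ip_switches(log))  # 2
```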

Most administrators adopt an attitude of blocking the simplest VPN access and leaving it at that. The reality is that a technical user running a sophisticated VPN service like Identity Cloaker is going to be very difficult to stop. You should rely on enforcing user policies within the network and stressing the penalties if people are found using such services.

One other method to consider is ensuring that most users are not able to install or configure VPN clients on their local laptops or computers. These restrictions can normally be enforced very easily, particularly in Windows environments: simply configure local user policy and apply restrictive Group Policy settings to remove admin access from users. Unfortunately programs like Identity Cloaker also come with a 'lite' version which doesn't need installing and can be run directly from a single executable. It can even be run from a memory stick and still interact with the network stack on the local computer.
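Since a portable 'lite' client leaves no installation footprint, one fallback is to compare running process names against a watchlist of known VPN executables. The names below are assumptions for the example, and a real deployment would use an endpoint agent or AppLocker rules rather than this sketch; process names are also trivial to rename, so this is a weak control on its own.

```python
# Sketch of scanning running process names against a watchlist of
# known portable VPN clients. Names are assumed for illustration.

WATCHLIST = {"identitycloaker.exe", "openvpn.exe"}

def flag_processes(running):
    """Return running process names that match the watchlist,
    compared case-insensitively."""
    return sorted(p for p in running if p.lower() in WATCHLIST)

running = ["explorer.exe", "IdentityCloaker.exe", "outlook.exe"]
print(flag_processes(running))  # ['IdentityCloaker.exe']
```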

What Is a VPN?

In a typical remote access setup, the client contacts a remote server, which accepts the request and then authenticates it through something like a username and password. A tunnel is then established and used to transfer data between the client and server.

If you want to emulate a point to point link, the data must be wrapped with a header – this is normally called encapsulation. This header provides essential routing information which enables the data to traverse the public network and reach its intended endpoint; without this routing information the data would never reach its destination. To keep the link private on this open network, all the data would normally be encrypted. The encryption ensures that all data is kept confidential: packets intercepted on the shared or public network are indecipherable without the encryption keys. The link in which the private data is encapsulated and encrypted is known as a VPN connection.
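The encapsulation described above can be illustrated with a toy example: the payload is "encrypted" (XOR here, purely as a stand-in for a real cipher) and then wrapped with a plaintext routing header so the public network can still deliver it. The header layout and key are invented for the example and bear no relation to any real VPN protocol's wire format.

```python
# Toy illustration of VPN encapsulation: encrypted payload wrapped in a
# cleartext routing header. XOR stands in for a real cipher; the header
# layout (src, dst, length) is invented for this example.
import struct

KEY = 0x5A  # stand-in for a negotiated session key

def encapsulate(src_id: int, dst_id: int, payload: bytes) -> bytes:
    encrypted = bytes(b ^ KEY for b in payload)
    # The header stays in the clear: the network needs it for routing.
    header = struct.pack("!HHI", src_id, dst_id, len(encrypted))
    return header + encrypted

def decapsulate(packet: bytes):
    src_id, dst_id, length = struct.unpack("!HHI", packet[:8])
    payload = bytes(b ^ KEY for b in packet[8:8 + length])
    return src_id, dst_id, payload

packet = encapsulate(1, 2, b"private data")
print(decapsulate(packet))  # (1, 2, b'private data')
```

Note the split this demonstrates: an intermediary can read the header (it must, to route the packet) but not the payload, which is exactly why content filters struggle with VPN traffic.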

One of the most important uses of remote access VPN connections is that they allow workers to connect back to their office or home using the shared infrastructure of a public network such as the internet. From the user's point of view, the VPN establishes an invisible connection between the client and the organisation's servers. There is normally no need to specify any aspects of the shared network as long as it is capable of transporting the traffic; the VPN tunnel controls all other aspects. This does mean it's very difficult to block these VPN connections, as the BBC is discovering.

Site to site connections, also known as router to router connections, are established between two fixed points. They are normally set up between distinct offices, again using the public network of the internet. The link operates in a similar way to a dedicated wide area network link, but at a fraction of the cost of a dedicated line. Many companies increasingly use these to establish fixed connections without the expense of WAN links. It should be noted that these VPN connections operate over the data link layer of the OSI model.

One of the problems many network administrators find is that users on their networks can set up their own VPN connections. These can be very difficult to detect and allow direct tunnels into a corporate network, especially as they are often used for trivial purposes such as obtaining an IP address for Netflix. Needless to say, having users stream encrypted video to their desktops is not good for network performance or security.

Remember, a site to site connection establishes a link between two distinct private networks. The VPN server will ensure that a reliable route is always available between the two VPN endpoints. One of the routers takes the role of the VPN client by requesting the connection; the second server authenticates and then reciprocates the request so that the tunnel is authenticated at each end. In these site to site connections, the packets sent across the routers will typically not be created on the routers themselves but by clients connected to these respective devices.