For those of us who grew up with a selection of cables, leads and analogue modems, PPP was quite a common protocol. It was developed across the internet community to encapsulate and transmit IP data over all sorts of links, but initially serial point-to-point ones. The other popular scheme, with which it was to some extent interchangeable, was SLIP (Serial Line Internet Protocol). Although SLIP was the older of the two protocols, there is little doubt that PPP was more common, mainly because it offered the ability to carry other protocols as well. The main advantage of this was the ability to work with IPX, which enabled it to function in Novell networks, for example.
PPP is extremely adaptable and allowed routers and hosts to connect to each other. In its earliest guise, though, it was most commonly used to establish internet connections over dial-up telephone lines. Most modem software would offer the user the choice of connecting via either SLIP or PPP, but the latter was normally the default.
Using PPP, the home user would dial into a server run by their ISP over the telephone line. Once the modem had established the connection, the PPP session would authenticate the user to verify the account. This part of the process would also assign an IP address to the user's computer, which is essential for communicating across the internet. In fact, all web-based activities, from browsing a page to watching UK TV in the USA, need a valid IP address assigned to your computer or device.
When this exchange has taken place, the user's computer is effectively an extension of the ISP's IP network, in much the same way as if it were connected using an ethernet cable plugged into a port. The serial port and modem have exactly the same functionality as any other network card plugged into the network.
In order to encapsulate higher-level protocol data and transmit it, PPP uses a simple framing method. Using this method, PPP can support data transmission over a physical cable in both asynchronous and synchronous modes. Framing operates just above the physical layer and relies on the underlying serial communication protocols for transmission. The data link layer is managed on the same frame structure as HDLC, and a Link Control Protocol (LCP) is used to establish and then manage the links. LCP is also responsible for negotiating encapsulation methods and packet sizes, along with any compression methods that might be available.
The other important function is of course user authentication, primarily using simple usernames and passwords. LCP is able to accept or reject packets based on any of these criteria and can manage the configuration options. A Network Control Protocol (NCP) is then used to configure each network-layer protocol and manage the data being transferred between the two hosts. Remember, there is no client/server model: both ends of the connection are considered equal, and the protocol itself is responsible for managing the connection, not either of the two endpoints.
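To make the framing idea concrete, here is a rough Python sketch of the byte stuffing PPP uses on asynchronous links (RFC 1662 style). It is a simplified illustration only: the real protocol also adds address, control and FCS checksum fields, which are omitted here.

```python
# PPP-style asynchronous framing sketch: frames are delimited by a flag
# byte, and any flag or escape byte inside the payload is escaped by
# prefixing 0x7D and flipping bit 5 of the original byte.

FLAG = 0x7E    # marks the start and end of every frame
ESCAPE = 0x7D  # escapes any flag/escape byte inside the payload

def frame(payload: bytes) -> bytes:
    """Wrap a payload in flag bytes, escaping special characters."""
    stuffed = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESCAPE):
            stuffed += bytes([ESCAPE, b ^ 0x20])
        else:
            stuffed.append(b)
    stuffed.append(FLAG)
    return bytes(stuffed)

def unframe(data: bytes) -> bytes:
    """Reverse the stuffing to recover the original payload."""
    payload = bytearray()
    escaped = False
    for b in data[1:-1]:          # strip the two flag bytes
        if escaped:
            payload.append(b ^ 0x20)
            escaped = False
        elif b == ESCAPE:
            escaped = True
        else:
            payload.append(b)
    return bytes(payload)
```

Whatever higher-level protocol the frame carries (IP, IPX and so on), the link layer only cares that the payload survives the trip intact, which is what the escaping guarantees.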
BBC Blocking VPNs – http://www.iplayerabroad.com/2017/04/07/bbc-iplayer-blocking-vpn-2017/
In simple terms, a virtual circuit is a dedicated communication path between two end points, usually on a packet-switched or cell relay network. A common use is to provide a temporary or dedicated link through a router- or switch-connected network. Any devices along the circuit will be programmed with the specific circuit number, so that when packets arrive the switch has the correct information to forward them. This saves the potentially lengthy process of examining the packet header in detail.
Using a predefined path like this can improve performance substantially, and it also reduces the size of frames and packets, specifically because the headers can be much smaller. The underlying physical routes of these connections may change in a standard packet-switching network, but the two end stations will retain the connection and update paths as appropriate. Typically this happens when the network is experiencing congestion or perhaps some sort of physical problem, such as a downed line.
There are two main types of virtual circuits which can be described as follows:
PVC (Permanent Virtual Circuit) – a connection between end points defined in advance, often with a predetermined bandwidth and speed allowance. In commercial public switched carrier networks (like ATM or frame relay) the customers will be allocated the endpoints of the PVC in advance. In internal networks the administrators create the PVCs to direct applications or certain traffic to specific parts of the network. For example, a common use would be to reserve bandwidth and define a network path for video-enabled applications such as video conferencing. Video needs a specific quality of service to operate correctly, so it makes sense to define specific routes – although this could also be done to block access to external video applications like Netflix.
SVC (Switched Virtual Circuit) – an on-demand, normally temporary connection between two stations. An easy way to visualize an SVC is as something like a phone call: a temporary connection created to transfer voice. Connections on an SVC last only as long as necessary to complete the transaction, and are then taken down. Many carriers let customers establish these 'on the fly', or a carrier may set up a number of defined SVCs which can be used when required. These could be useful for establishing internal secure channels, such as a VPN or IP Cloaker application.
It’s best remembered that PVCs are most effective when a large amount of traffic is anticipated between two locations on a regular basis. An SVC is much more suitable for temporary or recurring connections, for example unscheduled video or voice conferences. Most commercial carriers prefer to set up PVCs because it is easier to plan bandwidth requirements in advance than with SVCs. It is very common for PVCs to have monthly costs, rates or bandwidth allowances assigned to them, making it easier to allocate costs and budgets against them.
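The forwarding idea behind all of this is very simple, and a toy sketch makes it clear. Each switch along the circuit holds a table mapping an incoming (port, circuit number) pair to an outgoing pair, so a packet is forwarded on its circuit number alone with no need to inspect the full header. The class name and table entries below are invented for illustration:

```python
# Toy virtual-circuit switch: forwarding is a single table lookup keyed
# on the incoming port and circuit id, not a full header examination.

class VCSwitch:
    def __init__(self):
        # (in_port, in_vc) -> (out_port, out_vc)
        self.table = {}

    def add_circuit(self, in_port, in_vc, out_port, out_vc):
        self.table[(in_port, in_vc)] = (out_port, out_vc)

    def forward(self, in_port, in_vc):
        """Return (out_port, out_vc) for a packet, or None if no circuit exists."""
        return self.table.get((in_port, in_vc))

# For a PVC the entries are installed in advance by the administrator or
# carrier; an SVC would add and remove them per call via signalling.
switch = VCSwitch()
switch.add_circuit(in_port=1, in_vc=42, out_port=3, out_vc=17)
```

The difference between a PVC and an SVC is then simply who installs those table entries, and when.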
Many IT administrators use proxies extensively in their networks; however, the concept of reverse proxying is slightly less common. So what is a reverse proxy? Well, it refers to a setup where a proxy server is run in such a way that it appears to clients just like a normal web server.
Specifically, the client will connect directly to the proxy, considering it to be the final destination, i.e. the web server itself; they will not be aware that the requests could be relayed further to another server. It is possible that this will even be an additional proxy server. These 'reverse proxy servers' are also often referred to as gateways, although this term can have different meanings too. To avoid confusion we'll avoid that description in this article.
In reality, the word 'reverse' refers to the reversed role of the proxy server. In a standard setup, the server acts as a proxy for the client: any request the proxy makes is made on behalf of a received client request. This is not the case in the 'reverse' scenario, because there it acts as a proxy for the web server and not the client. This distinction can look quite confusing, as in effect the proxy forwards and receives requests to both the client and server, but the distinction is important. You can read RFC 3040 for further information on this branch of internet replication and caching.
A standard proxy is pretty much dedicated to the client’s needs, all configured clients will forward all their requests for web pages to the proxy server. In a standard network architecture they will normally sit fairly close to the clients in order to reduce latency and network traffic. These proxies are also normally run by the organisations themselves although some ISPs will offer the service to larger clients.
In the situation of a reverse proxy, it represents one or a small number of origin servers. You cannot normally access arbitrary servers through a reverse proxy, because it has to be configured specifically to access certain web servers. Often these servers will need to be highly available, and the caching aspect is important; a large organisation like Netflix would probably have specific IP addresses (read this) pointing at reverse proxies. The list of servers that are accessible should always be available from the reverse proxy server itself. A reverse proxy will normally be used by all clients to access certain web resources; indeed, access may be completely blocked by any other route.
Obviously, in this scenario it is usual for the reverse proxy to be both controlled and administered by the owner of the origin web server. This is because these servers are used for two primary purposes: to replicate content across a wide geographic area, and to replicate content for load balancing. In some scenarios it's also used to add an extra layer of security and authentication in front of a secure web server.
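The key difference from a forward proxy is the routing decision, which a minimal sketch captures nicely. A reverse proxy only forwards to the small set of origin servers it has been configured for and refuses everything else; the host names and backend addresses below are invented examples:

```python
# Minimal reverse-proxy routing sketch: requests for configured hosts
# are mapped to an internal backend; anything else is refused, unlike a
# forward proxy which will happily fetch arbitrary destinations.

ORIGINS = {
    "www.example.com":    "http://10.0.0.10:8080",
    "static.example.com": "http://10.0.0.11:8080",
}

def route(host: str, path: str):
    """Map a client request to a backend URL, or None if not configured."""
    backend = ORIGINS.get(host.lower())
    if backend is None:
        return None          # not one of our origin servers: refuse
    return backend + path
```

A real deployment would also rewrite headers and cache responses, but the configured-origins table is the heart of the 'reverse' role.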
For many people, there is a very strong requirement to mask their true identity and location online. It might be for privacy reasons, perhaps to keep safe, or simply because you don't want anyone to log everything you do online. There are other reasons: using multiple accounts on websites, IP bans for whatever reason, or simple region locking – no, you can't watch Hulu on your holidays in Europe. The solution usually now revolves around hiding your IP address using a VPN or proxy as a minimum.
Residential IP Address
Yet the choice doesn't end there; plain proxies are pretty much useless now for privacy and security. They're easily detected when you log on and, to be honest, of very little use anymore. VPN services are much better, yet even here it's becoming more complicated to access media sites, for example. The problem is that it's not the technology that is now the issue but the originating IP address. These are actually classified into two distinct groups – residential and commercial – which can both be detected by most websites.
A residential IP address is one that appears to come from a domestic account assigned by an ISP. It's by far the most discreet and secure address to use if you want to keep completely private. Unfortunately, these IP addresses are difficult to obtain in any numbers and also tend to be very expensive. The bottom line is that the majority of people who hide their true IP address, for whatever reason, do so using commercial addresses and not residential ones. It is possible to buy residential IPs though, and indeed there are a few companies with residential proxies for sale.
Most security systems can easily detect whether you are using a commercial or residential VPN service provider; how they use that information is a little less certain. So at the bottom of the pile for security and privacy are the datacentre proxy servers, which add no encryption layer and are tagged with commercial IP addresses.
Do I really need a residential VPN IP address? Well, that depends on what you are trying to achieve; for running multiple accounts on things like Craigslist and iTunes, residential is best. If you want to try and access the US version of Netflix like this, then you'll definitely need a residential address. Netflix last year filtered out all commercial addresses, which means that very few of the VPNs work anymore, and you can't watch Netflix at work either.
If you just want to mask your real IP address then a commercial VPN is normally enough. The security is fine and no-one can detect your true location, although they can determine you're not a home user if they check. People who need to switch IPs for multiple accounts and who use dedicated tools will probably be best advised to investigate the residential IP options.
There are a few companies who can supply cheap residential proxies which are relatively easy to access. However the cost of residential IP addresses can get quite high if you require any number. The solution to this is to use something called residential backconnect proxies which have the ability to rotate through thousands of residential IP addresses automatically.
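From the client's side, that rotation can be as simple as cycling through a pool of addresses, one per request. This is only a sketch of the idea (the class name is mine, and the addresses are placeholder documentation IPs, not real proxies):

```python
# Sketch of client-side proxy rotation: each request goes out through
# the next address in the pool, wrapping back to the start at the end.
from itertools import cycle

class RotatingPool:
    def __init__(self, proxies):
        self._cycle = cycle(proxies)

    def next_proxy(self):
        """Hand out proxy addresses in round-robin order."""
        return next(self._cycle)

pool = RotatingPool([
    "203.0.113.5:8080",
    "203.0.113.9:8080",
    "203.0.113.17:8080",
])
```

A real backconnect service does the rotation on its own gateway, so the client talks to one fixed endpoint while the exit IP changes underneath.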
It is arguably the most important function of a web proxy, at least as far as performance is concerned: on-demand caching. That is, documents or web pages are cached upon request by a client or application. It's important to remember that a document can only be cached if it has actually been requested by a user. Without a request, it will not be cached, and indeed the proxy server will not even be aware of its existence.
This is a different method from the replication model, which is typically used to distribute data and updates. Replication is more often used on larger, busier networks where data can be copied onto specific servers; this method is also known as mirroring and is also useful for sharing over the internet. One of the most common examples of mirroring is when a large software package is being distributed: instead of a single server being responsible, duplicates are replicated onto multiple different servers.
One of the best ways to facilitate performance increases is to use a method called round-robin DNS. This involves mapping a single host name to multiple physical servers. These servers must be assigned separate IP and physical addresses, and their addresses are distributed evenly among the clients requesting the software. When using the DNS method, the clients will be unaware of the existence of multiple servers because they will appear as a single logical server.
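A short sketch shows how round-robin DNS spreads clients across mirrors: the name maps to several addresses, and each successive answer rotates the list so a different server comes first. The hostname and addresses below are invented examples:

```python
# Round-robin DNS simulation: each query for a name returns the same
# address list rotated one step, so successive clients are pointed at
# different physical servers behind one logical name.
from collections import deque

class RoundRobinDNS:
    def __init__(self, records):
        self.records = {name: deque(addrs) for name, addrs in records.items()}

    def resolve(self, name):
        """Return the address list, rotated one step per query."""
        addrs = self.records[name]
        answer = list(addrs)
        addrs.rotate(-1)       # next client sees a different first address
        return answer

dns = RoundRobinDNS({"mirror.example.com": ["10.0.0.1", "10.0.0.2", "10.0.0.3"]})
```

Since most clients simply use the first address in the answer, the rotation alone is enough to spread the load evenly.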
Most of the caching solutions used by proxies are centred around removing load from a specific server. However, when a proxy caches resources locally without mirroring or replication, it is still the single origin server which is responsible for the content. The physical load doesn't disappear, but caching does reduce the number of network requests that the server has to handle. It also reduces the number of name lookups that have to be made, which can otherwise introduce some latency.
Having caching enabled can improve the speed of server responses significantly. However, this depends largely on the sort of requests that are made. Imagine a proxy used specifically to obtain a Czech IP address and directly download a specific resource: caching that resource locally would improve the speed significantly, as long as the content didn't change much. It would be a different story for sites which stream audio or video and contain large amounts of multimedia content.
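The on-demand behaviour described above can be sketched in a few lines: a resource enters the cache only when a client actually asks for it, and the proxy knows nothing about documents nobody has requested. The class and `fetch` callable are invented stand-ins for a real origin request:

```python
# On-demand caching sketch: the first request for a URL populates the
# cache from the origin; repeat requests are served locally and the
# origin is never contacted again for that resource.

class CachingProxy:
    def __init__(self, fetch):
        self.fetch = fetch        # callable that hits the origin server
        self.cache = {}
        self.origin_hits = 0      # how many times we went to the origin

    def get(self, url):
        if url not in self.cache:            # only a real request caches it
            self.cache[url] = self.fetch(url)
            self.origin_hits += 1
        return self.cache[url]

proxy = CachingProxy(lambda url: "body of " + url)
```

A production cache would also honour expiry and validation headers, which is exactly where the "content didn't change much" caveat above comes in.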
The SSL tunneling protocol allows any proxy server which supports it to act as a tunnel for SSL-enhanced protocols. This feature is essential to support normal web traffic, as increasingly SSL is being used to secure web requests which would previously have been sent in clear text. The client makes an initial HTTP request to the proxy and asks for an SSL tunnel. At the protocol level, the actual handshake to establish the SSL tunneling connection is fairly straightforward.
The connection request is simple and in fact looks like virtually any other HTTP request; the only difference is that a new 'CONNECT' method is used. The format is also slightly different, as the parameter is not a full URL but rather the destination host address and port number, in the format 192.168.1.1:8080. The port number is always required with these connection requests, because the tunneling mechanism is generic and no default port can be assumed.
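As a rough sketch of the client's side of this exchange (the helper name is mine, not from any standard library), the raw request might be built like this:

```python
# Build the raw CONNECT request a client sends to a tunneling proxy.
# It looks like any other HTTP request except for the method, and the
# target is host:port rather than a full URL.

def build_connect_request(host: str, port: int, http_version: str = "1.0") -> bytes:
    """Return the bytes of a CONNECT request for the given destination."""
    lines = [
        f"CONNECT {host}:{port} HTTP/{http_version}",
        f"Host: {host}:{port}",
        "",                      # blank line ends the header block
        "",
    ]
    return "\r\n".join(lines).encode("ascii")
```

After sending this and reading a successful response, the client simply starts its SSL handshake over the same socket, with the proxy relaying bytes blindly.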
When the client has received a successful response, the connection will pass all data in both directions to the destination server. For the proxy server, much of its role in authentication and establishing the connection is over, and its role is then limited to simply forwarding data along the connection. The final significant role for the proxy server is to close the connection, which it will do when it receives a close request from either the client or the server.
Other situations where the connection will be closed mainly involve error status codes, a typical example being a response indicating that authentication has failed. Most proxies will require some sort of authentication, especially the high quality US proxies such as this. The methods vary, however, from a simple username and password supplied via a challenge and response, to pass-through authentication from a system like Active Directory or LDAP.
It’s interesting to note that the mechanism used to handle SSL tunnelling is not actually specific to that protocol. It is in fact a generic technique which can be used to tunnel any protocol, SSL included. There is no actual reliance on any SSL support in the proxy, which can be confusing when you see people looking for SSL-enabled proxies online. It is not required on a properly configured proxy server: the data is simply transported, and there is no need for the actual protocol to be understood after the initial connection request.
There are issues with some protocols passing through proxies; certain specialised protocols need more support than is offered by the standard tunneling mechanism. For example, for many years LDAP (Lightweight Directory Access Protocol) was not able to work across most common proxies. Some implementations support LDAP by using SOCKS, though there is some difficulty with LDAP queries being cached and subsequently causing performance issues. Most protocols, however, work perfectly with this 'hands off' tunneling mechanism, which you can see perfectly illustrated if you try and stream video through proxies like this which used to circumvent BBC iPlayer blocked abroad.
There is little excuse for not installing an IDS (Intrusion Detection System) on your network; even the usual culprit of budget doesn't apply. In fact, one of the leading IDS systems, Snort, is available completely free of charge and is sufficient for all but the most complex network infrastructures. It is virtually impossible to effectively monitor and control your network, particularly if it's connected to the internet, without some sort of IDS in place.
There are certain questions about the day-to-day operation of your network that you should be able to answer. Questions like the following will help you determine whether you really have control over your network and its hardware:
- Can you tag and determine how much traffic on your network is associated with malware or unauthorised software?
- Are you able to determine which of your clients do not have the latest client build?
- Can you determine which websites are most frequently requested? Are these requests from legitimate users or the result of malware activity?
- Can you determine which users are the top web surfers (and whether it is justified)?
- How much mail are your SMTP servers processing?
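Several of these questions boil down to counting things in your logs. As a rough sketch of answering the 'top web surfers' question, here is a count of requests per client IP over a minimal, invented log format ("client_ip requested_host bytes"); a real IDS or proxy log would just need a different field split:

```python
# Count requests per client IP in a simple access log and report the
# busiest clients, the raw data behind a 'top web surfers' report.
from collections import Counter

def top_talkers(log_lines, n=3):
    """Count requests per client IP and return the n busiest as (ip, count)."""
    counts = Counter(line.split()[0] for line in log_lines if line.strip())
    return counts.most_common(n)

log = [
    "10.1.1.5 news.example.com 20480",
    "10.1.1.9 video.example.com 4096000",
    "10.1.1.9 video.example.com 4096000",
    "10.1.1.5 mail.example.com 1024",
    "10.1.1.9 video.example.com 4096000",
]
```

Summing the bytes column per client instead of counting lines would answer the bandwidth version of the same question.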
It is surprising how many network professionals simply wouldn't have a clue how to obtain this information from their network; however, it's impossible to ensure that the network is efficient without it. For example, a few intensive web users can create much more traffic than the majority of ordinary business users. Imagine two or three users in a small department who used a working BBC VPN to stream TV to their computers 8 hours a day. The traffic that would generate would be huge and could easily swamp an important network segment.
All security professionals should ensure that they have the tools and reporting capacity to answer simple questions like these about network usage. Knowing the answers will help you control and adapt your network to meet its users' needs. Of course, a simple IDS won't provide the complete solution, but it will help keep control of your network. Malware can sit and operate for many weeks in a network which is not monitored properly. This will heavily impact performance and can enable it to spread to other devices and eventually other networks. In network environments where performance is important, being aware of these sorts of situations can make a huge difference.
Network professional and broadcaster, author of BBC News Streaming.
In these times, when security is becoming ever more important, the SSL tunneling protocol is extremely useful: it allows a web proxy server to act as a tunnel for SSL-enhanced protocols. The protocol is used when a connected client makes an HTTP request to the proxy server and asks for an SSL tunnel to be initiated. At the HTTP protocol level, the handshake required to initiate the SSL tunneling connection is simple. There is little difference from an ordinary HTTP request, except that a new 'CONNECT' method is used and the parameter passed is not a full URL but instead a destination hostname and port number separated by a colon.
The port number is always required with 'CONNECT' requests because the tunneling method is generic and no protocol is specified, hence default port numbers cannot be assumed reliably. The general syntax for the request is as below:
CONNECT <host>:<port> HTTP/1.0
HTTP Request Headers
The successful response is a 'connection established' message, followed by an empty line. After the successful response, the connection will pass all data transparently to the destination server and pass back any replies from the server. In practice, what is happening is that the proxy validates the initial request, establishes the connection and then takes a step back. After this initial stage, the proxy merely forwards data back and forth between the client and the server. If either side closes the connection, the proxy will cause both connections to be closed, and no more tunneling will take place until a new connection is established between the server and client.
The proxy does have the ability to respond with error messages for the SSL tunnel. If an error is generated in the initial stages, the connection will not be established; if it is already connected, then the proxy will close the connection after the error response has been sent. However, it is important to remember, especially where security matters, that this SSL tunneling protocol is not specific to SSL and provides no in-depth security of its own. The tunnelling mechanism used here is a generic one and can in fact be used for any protocol. This means there is no requirement for the proxy to support SSL either, as the server is merely establishing a connection and then forwarding data without any processing.
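On the proxy's side, the first step in that validation is parsing the CONNECT request line into a host and port. Here is a rough sketch (the function name is mine); note it enforces the rule above that the port must always be present:

```python
# Parse the request line of a CONNECT request, e.g.
# "CONNECT example.com:443 HTTP/1.0" -> ("example.com", 443).
# The port is mandatory because the tunnel is generic and no default
# can be inferred from a protocol.

def parse_connect(request_line: str):
    """Return (host, port) from a CONNECT request line, or raise ValueError."""
    method, target, _version = request_line.split()
    if method != "CONNECT":
        raise ValueError("not a CONNECT request")
    host, _, port = target.rpartition(":")
    if not host or not port.isdigit():
        raise ValueError("CONNECT target must be host:port")
    return host, int(port)
```

Once this succeeds (and any authentication passes), the proxy opens a socket to that host and port and drops into the blind forwarding loop described above.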
BBC Iplayer Ireland – Here’s How you Can Watch
The network layer of the OSI protocol stack is often simply known as Layer 3. It is important for network troubleshooting, as it is where routing takes place, one level above the data link layer (Layer 2), which is where switching and bridging happen. A VLAN (virtual LAN) is a subnetwork of an internetwork, but it is normally defined using a switched network topology.
So what do we mean by a switched network? Well, simply put, it is a series of devices such as computers attached directly to some sort of multiport switching device. A network switch acts as a connecting medium between the ports to which computers are connected. In the perfect switching environment each port has only one device connected to it; in reality, however, it's likely to be another network device like a bridge or hub, which has many more clients indirectly connected to the switch. The perfect scenario has no conflict between different devices trying to use the same network cable; performance is maximized because there is no waiting or latency while information is transmitted, such as you would get on shared Ethernet. Just like the simple VPNs we use across the internet to watch BBC USA whilst hiding your IP address, VLANs segment and protect traffic.
An important reason for segmenting networks and then connecting them together again using routers is that it minimizes the size of broadcast domains, with fewer devices competing for access. Switched topologies also reduce the level of contention, and many networks have evolved into large, flat switched networks. If you remove routers, though, there is a price to pay, both in ease of administration and in being able to securely manage specific segments or devices. If you need to retain some sort of topological layout in this scenario, VLANs are probably the only feasible option.
A VLAN restores the advantages of a segmented network to a flat switched network. Network administrators can use VLANs to create pseudo-segments in an open network across the switches. This is important for creating security segments and managing large networks, as the computers which are joined to the VLAN can exist anywhere on the network. So, for example, you can create a high-security VLAN to connect secured servers together, where they can be managed and secured as a group. These servers can exist on different switches, on different ports, and across buildings and departments.
The next stage is to take these individual VLANs, which connect many groups of computers, and extend the model. Indeed, a device can be a member of multiple VLANs, and messages can be broadcast to specific devices by sending them to specific VLANs only. The issue with this setup is that routers still need to transmit packets between these different VLANs; there is still a requirement for data to be transported, which can cause contention and performance issues.
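The membership model behind this can be sketched as a simple mapping from VLAN id to the set of (switch, port) pairs that belong to it, so a broadcast reaches only the members of that VLAN wherever they sit on the network. The class, switch names and port numbers below are invented for illustration:

```python
# VLAN membership sketch: a broadcast within a VLAN is delivered to
# every member (switch, port) pair except the sender's own, regardless
# of which physical switch the members are attached to.

class VlanTable:
    def __init__(self):
        self.members = {}    # vlan id -> set of (switch, port)

    def join(self, vlan, switch, port):
        self.members.setdefault(vlan, set()).add((switch, port))

    def broadcast_targets(self, vlan, source):
        """Every member port of the VLAN except the sender's own."""
        return self.members.get(vlan, set()) - {source}

vlans = VlanTable()
vlans.join(10, "sw-a", 1)    # a secure-server VLAN spanning two switches
vlans.join(10, "sw-b", 7)
vlans.join(20, "sw-a", 2)    # a separate VLAN on the same switch
```

The same port can of course appear in several VLANs' sets, which is the multiple-membership case mentioned above; traffic between different VLAN ids still has to cross a router.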
Here the techniques of Layer 3 switching become useful: a routing algorithm is used to discover the fastest path through the switched network, and once a destination is actually located, a shorter Layer 2 switched path can be used. This procedure is possible because the VLANs overlay the physical switching fabric of the network. Obviously there is more to these techniques, and indeed the design and construction of efficient switched networks is a large and interesting field.
The technology sector is at the moment somewhat confused about what a VPN actually is. However, the confusion is understandable, as the VPN has continually evolved over the last few years into a somewhat different networking technology. In the past, a VPN could be described as a private network able to carry voice and data, usually built onto existing carrier services.
This is not how a VPN is defined commonly today, it’s probably best to split into the following different definitions.
- Voice VPN – a single carrier handles all the voice call switching. The 'virtual' in VPN here implies that a virtual voice switching network has been created within the switching equipment. This is probably the most dated definition, covering traditional carrier-based voice VPNs.
- Carrier-Based Data VPN – traditional packet, cell switching and frame relay networks normally carry data in discrete bundles, which are then routed through a complex mesh of networks and switches to their destination. These networks would be shared between many owners and users. A VPN here is a web of individual virtual circuits which form a virtual private network over another carrier's packet-switched network.
- Internet VPN – this is probably the definition which is most relevant today, and it is similar to the previous carrier-based data network. Here an IP network is the underlying transport, and the common medium is the shared hardware of the internet.
The internet VPN like this is the most common today, probably because it is by far the easiest and cheapest one to set up. There might not be the same bandwidth and data quality guarantees as with a traditional virtual circuit, but the popularity of a simple VPN client and server accessible from anywhere in the world makes it a powerful tool for many reasons.
What’s more, the internet VPN can be created and used by almost anyone. Companies, for instance, will often install a generic VPN client on their laptops so any employee can dial in to the corporate servers using any internet connection, safely and securely. This means that employees can work remotely from almost any location; all they need is a simple internet connection and an account on the VPN server.
A decade ago these were used over simple dial-up modems, but now most countries have a fairly extensive internet access infrastructure allowing high-speed access from most public places and from home connections. The other advantage is that an internet VPN requires no real investment in hardware apart from the central server. Users can leverage the internet connection of their ISP or even a hotel wifi access point – a fairly insecure setup, but if you connect through a virtual private network then all your data is securely encrypted and protected from prying eyes.