In the early 1980s the International Organization for Standardization started work on a set of protocols designed to promote open network environments. These were essential to allow multi-vendor computer systems to talk to each other using internationally accepted communication protocols. These standards and protocols eventually developed into the OSI reference model.
The protocols defined in each layer of the model have different responsibilities but generally these fall into two specific categories.
- Communicating with the same level protocol layer on another computer.
- Providing services to the layer above it.
This peer-level communication gives each layer a way to exchange messages or other data with its counterpart on another machine. The model applies whatever the traffic, whether you're routing through a US IP address to Netflix or over a secure communication link to an application server. For instance, the transport layer on the sending computer can ask its peer on the receiving computer to pause transmission. It does this not over a direct connection but by placing a message in a packet, which is then handled by the layer below. Each lower layer provides this service to the layer above it, taking messages and passing them down to the lowest level of the protocol stack. At that point they can be transmitted across the physical link to the remote destination.
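The layering described above can be sketched in a few lines of code. This is a toy illustration, not any real protocol: each layer prepends its own header on the way down the stack, and the receiving host strips them off in reverse order, so the peer layers effectively talk to each other through the headers.

```python
def encapsulate(payload, layers):
    # layers are listed top (transport) to bottom (data link);
    # each layer prepends its header, so the lowest layer's
    # header ends up outermost on the wire
    for layer in layers:
        payload = f"[{layer}]" + payload
    return payload

def decapsulate(frame, layers):
    # the receiver strips headers outermost-first, i.e. bottom-up,
    # handing each layer the message its peer wrapped for it
    for layer in reversed(layers):
        header = f"[{layer}]"
        assert frame.startswith(header), f"expected {header}"
        frame = frame[len(header):]
    return frame

stack = ["transport", "network", "datalink"]
frame = encapsulate("pause-request", stack)
# the transport layer's "pause" message travels inside the
# network and data link framing, never over a direct link
message = decapsulate(frame, stack)
```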
It is important to remember that OSI is merely a reference model: it provides a general description of which services should be provided at each layer. The model itself does not specify any standard protocols. In fact you'll often find the OSI model used to describe all sorts of other protocols, including TCP/IP. The Internet Protocol, for example, is often described as a network layer protocol purely because it performs the functions defined in the network layer of the OSI reference model.
The ISO did create some protocols that followed the OSI model, however these were never widely adopted and you'll be lucky to find them in use anywhere. The main reason was the popularity of a rival protocol suite – TCP/IP.
For those of us who grew up with a selection of cables, leads and analogue modems, PPP was a very common protocol. It was developed by the internet community to encapsulate and transmit IP data across all sorts of links, initially serial point-to-point ones. The other popular scheme, with which it was to some extent interchangeable, was SLIP (Serial Line Internet Protocol). Although SLIP was the older of the two protocols, there is little doubt that PPP became more common, mainly because it could carry other protocols as well as IP. Its support for IPX, for example, enabled it to function on Novell networks.
PPP is extremely adaptable and allows routers and hosts to connect to each other. In its earliest guise, though, it was most commonly used to establish internet connections over dial-up telephone lines. Most modem software offered the user the choice of connecting via either SLIP or PPP, with the latter normally the default.
Using PPP, the home user would dial into a server run by their ISP over the telephone line. After the modem established the connection, the PPP session would authenticate the user's account. This part of the process would also assign an IP address to the user's computer, which is essential for communicating across the internet. In fact all web-based activities, from browsing a page to watching UK TV in the USA, need a valid IP address assigned to your computer or device.
When this exchange has taken place, the user's computer is effectively an extension of the ISP's IP network, just as if it were connected by an Ethernet cable plugged into a port. The serial port and modem provide exactly the same functionality as any other network card on the network.
To encapsulate higher-level protocol data and transmit it, PPP uses a simple framing method. Using this method, PPP can transmit data over a physical link in both asynchronous and synchronous modes, operating over the physical layer and the serial communication protocols beneath it. At the data link layer, PPP uses a frame structure based on HDLC, together with a Link Control Protocol (LCP) to establish and manage links once up. LCP is also responsible for negotiating encapsulation methods, packet sizes and any compression methods that might be available.
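The HDLC-derived frame layout can be shown with a short sketch. This is a simplified illustration: the FCS checksum and the byte stuffing that escapes flag bytes inside the payload are deliberately omitted, and the protocol number 0x0021 (IPv4) is one real example value.

```python
# PPP/HDLC framing constants
FLAG = 0x7E   # marks the start and end of every frame
ADDR = 0xFF   # "all stations" address; PPP links have no addressing
CTRL = 0x03   # unnumbered information

def ppp_frame(protocol, payload):
    """Build a simplified PPP frame.

    protocol is the 16-bit protocol number, e.g. 0x0021 for IPv4.
    The FCS checksum and byte stuffing are omitted for brevity,
    so this sketch is illustrative rather than wire-accurate.
    """
    body = bytes([ADDR, CTRL, protocol >> 8, protocol & 0xFF]) + payload
    return bytes([FLAG]) + body + bytes([FLAG])

frame = ppp_frame(0x0021, b"ip-datagram")
```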
The other important function is, of course, user authentication, primarily using simple usernames and passwords. LCP can accept or reject packets based on any of these criteria and can manage the configuration options. A Network Control Protocol (NCP) is used to further manage the protocol configuration and the data being transferred between the two hosts. Remember there is no client/server model: both ends of the connection are considered equal, and it is the protocol, not either endpoint, that is responsible for managing the connection.
In simple terms, a virtual circuit is a dedicated communication path between two end points, usually on a packet-switched or cell relay network. A common use is to provide a temporary or dedicated link through a network of routers or switches. Each device along the circuit is programmed with the specific circuit number, so when packets arrive the switch already has the information it needs to forward them. This avoids the potentially lengthy process of examining the packet header in detail.
Using a predefined path like this can improve performance substantially and also reduces the size of frames and packets, since the headers can be much smaller. The underlying physical routes of these connections may change in a standard packet-switching network, but the two end stations retain their connection and update paths as appropriate. Typically this happens when the network is congested or suffers a physical problem such as a downed line.
There are two main types of virtual circuits which can be described as follows:
PVC (Permanent Virtual Circuit) – a connection between end points defined in advance, often with a predetermined bandwidth and speed allowance. In commercial public switched carrier networks (such as ATM or frame relay), customers are allocated the endpoints of the PVC in advance. On internal networks, administrators create PVCs to direct applications or certain traffic to specific parts of the network. A common use is to reserve bandwidth and define a network path for video-enabled applications such as video conferencing: video needs a specific quality of service to operate correctly, so it makes sense to define specific routes – although this could also be done to block access to external video applications like Netflix.
SVC (Switched Virtual Circuit) – an on-demand, normally temporary connection between two stations. An easy way to visualize an SVC is as a phone call: a temporary connection created to carry voice. An SVC lasts only as long as necessary to complete the transaction and is then torn down. Many carriers let customers establish these 'on the fly', or a carrier may set up a number of defined SVCs which can be used when required. These could be useful, for instance, for establishing internal secure channels such as a VPN or IP cloaker application.
It's worth remembering that PVCs are most effective when a large amount of traffic is anticipated between two locations on a regular basis. An SVC is much more suitable for temporary or recurring connections, for example unscheduled video or voice conferences. Most commercial carriers prefer to set up PVCs because bandwidth requirements are easier to manage in advance than with SVCs. It is very common for PVCs to have monthly costs, rates or bandwidth allowances assigned to them, making it easier to allocate costs and budgets against them.
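The forwarding shortcut that makes virtual circuits fast can be sketched as a simple lookup table. The port and circuit numbers below are invented for illustration: the switch maps an (incoming port, VC number) pair straight to an outgoing port and a relabelled VC number, with no need to parse a full destination address.

```python
# A hypothetical switch's virtual-circuit table. Each entry maps
# (incoming port, incoming VC number) to (outgoing port, outgoing
# VC number); the VC number is rewritten hop by hop.
vc_table = {
    (1, 42): (3, 17),
    (2, 8):  (1, 99),
}

def forward(in_port, in_vci):
    """Forward by table lookup instead of examining the packet header."""
    if (in_port, in_vci) not in vc_table:
        raise KeyError("no circuit programmed for this port/VCI")
    return vc_table[(in_port, in_vci)]

out_port, out_vci = forward(1, 42)
```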
Many IT administrators use proxies extensively in their networks, however the concept of reverse proxying is slightly less common. So what is a reverse proxy? Well, it refers to a setup where a proxy server is run in such a way that it appears to clients just like a normal web server.
Specifically, the client connects directly to the proxy, considering it to be the final destination, i.e. the web server itself; it will not be aware that the request may be relayed on to another server, possibly even an additional proxy server. These 'reverse proxy servers' are also often referred to as gateways, although that term can have other meanings too, so to avoid confusion we'll avoid it in this article.
In reality the word 'reverse' refers to the reversed role of the proxy server. A standard proxy acts on behalf of the client: any request the proxy makes is made on behalf of a request received from a client. In the 'reverse' scenario the proxy instead acts on behalf of the web server, not the client. The distinction can look confusing, since in both cases the proxy forwards and receives requests between client and server, but it is important. You can read RFC 3040 for further information on this branch of internet replication and caching.
A standard proxy is pretty much dedicated to the clients' needs: all configured clients forward their requests for web pages to the proxy server. In a standard network architecture, proxies normally sit fairly close to the clients in order to reduce latency and network traffic. They are also normally run by the organisations themselves, although some ISPs will offer the service to larger clients.
A reverse proxy, by contrast, represents one or a small number of origin servers. You cannot normally reach arbitrary servers through a reverse proxy, because it has to be configured to access specific web servers. Often these servers need to be highly available, so the caching aspect is important; a large organisation like Netflix would probably have specific IP addresses pointing at reverse proxies. The reverse proxy itself holds the list of servers that can be accessed through it. A reverse proxy will normally be used by all clients to access certain web resources; indeed, access by any other route may be completely blocked.
Obviously in this scenario it is usual for the reverse proxy to be controlled and administered by the owner of the origin web server. That is because these servers serve two primary purposes: replicating content across a wide geographic area, and replicating content for load balancing. In some scenarios a reverse proxy is also used to add an extra layer of security and authentication in front of a secure web server.
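The "configured origins only" behaviour described above can be sketched as a routing table. The hostnames and backend addresses here are invented for illustration: the reverse proxy forwards requests only for hosts it knows about, and anything else is simply refused.

```python
# A hypothetical reverse proxy's routing table: requested host name
# mapped to the origin server that actually holds the content.
routes = {
    "www.example.com":    "10.0.0.11:8080",
    "static.example.com": "10.0.0.12:8080",
}

def route(host):
    """Return the configured origin for a host, or None if the
    host is not served through this proxy (i.e. access is blocked)."""
    return routes.get(host)

backend = route("www.example.com")
blocked = route("random-site.example.org")
```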
For many people, there is a very strong requirement to mask their true identity and location online. It might be for privacy reasons, perhaps to keep safe, or because you simply don't want anyone to log everything you do online. There are other reasons too: using multiple accounts on websites, IP bans for whatever reason, or simple region locking – no, you can't watch Hulu on your holidays in Europe. The solution now usually revolves around hiding your IP address using a VPN or, as a minimum, a proxy.
Yet the choice doesn't end there; proxies are now pretty much useless for privacy and security. They're easily detected when you log on and, to be honest, of very little use anymore. VPN services are much better, yet even here it's becoming more complicated to access media sites, for example. The problem is no longer the technology but the originating IP address. Addresses are classified into two distinct groups – residential and commercial – both of which can be detected by most websites.
A residential IP address is one that appears to come from a domestic account assigned by an ISP. It's by far the most discreet and secure kind of address to use if you want to keep completely private. Unfortunately these addresses are difficult to obtain in any numbers and also tend to be very expensive. The bottom line is that the majority of people hiding their true IP address, for whatever reason, do it using commercial addresses rather than residential ones.
Most security systems can easily detect whether you are using a commercial or residential VPN service provider; how they use that information is a little less certain. At the bottom of the pile for security and privacy are datacentre proxy servers, which add no encryption layer and are tagged with commercial IP addresses.
Do I really need a residential VPN IP address? Well, that depends on what you are trying to achieve. For running multiple accounts on things like Craigslist and iTunes, residential is best. If you want to access the US version of Netflix like this, then you'll definitely need a residential address: Netflix last year filtered out all commercial addresses, which means that very few VPNs work anymore – and you can't watch Netflix at work either.
If you just want to mask your real IP address, then a commercial VPN is normally enough. The security is fine and no one can detect your true location, although they can determine you're not a home user if they check. People who need to switch IPs for multiple accounts and use dedicated tools will probably be best advised to investigate the residential IP options.
Arguably the most important function of a web proxy, at least as far as performance is concerned, is on-demand caching: documents or web pages are cached when a client or application requests them. It's important to remember that a document can only be cached if it has actually been requested by a user. Without a request it will not be cached, and indeed the proxy server will not even be aware of its existence.
This is a different method from the replication model typically used to distribute data and updates. Replication is more often used on larger, busier networks, where data is copied onto specific servers; this method is also known as mirroring and is also useful for sharing over the internet. One of the most common examples of mirroring is the distribution of a large software package: instead of a single server being responsible, duplicates are replicated onto multiple servers.
One of the best ways to improve performance here is a method called round-robin DNS. This involves mapping a single host name to multiple physical servers. The servers are assigned separate IP and physical addresses, and those addresses are distributed evenly among the clients requesting the software. With the DNS method, the clients are unaware of the multiple servers because they appear as a single logical server.
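Round-robin DNS can be sketched as follows. The hostname and addresses are made up for the example: one name maps to several server addresses, and each answer rotates the record order so successive clients are steered to different servers while still seeing a single logical name.

```python
from collections import deque

# Toy round-robin DNS zone: one hostname, several mirror servers.
# The addresses below are documentation examples, not real hosts.
zone = {
    "downloads.example.com": deque(["192.0.2.10", "192.0.2.11", "192.0.2.12"]),
}

def resolve(hostname):
    """Return all address records, rotating the preferred (first)
    answer on every query so load spreads across the mirrors."""
    addrs = zone[hostname]
    answer = list(addrs)   # the client typically uses answer[0]
    addrs.rotate(-1)       # next query gets a different first record
    return answer

first = resolve("downloads.example.com")
second = resolve("downloads.example.com")
```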
Most caching by proxies is centred on removing load from a specific server. However, when a proxy caches resources locally without mirroring or replication, a single server is still responsible for the content. The physical load doesn't disappear, but caching does reduce the number of network requests the server has to handle. It also reduces the number of name lookups the server makes, which can otherwise introduce some latency.
Enabling caching can improve the speed of server responses significantly. However, this depends largely on the sort of requests being made. Imagine a proxy used specifically to obtain a Czech IP address and download a specific resource: caching that resource locally would improve speed significantly as long as the content didn't change much. The picture would be different for sites which stream audio or video and contain large amounts of multimedia content.
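The on-demand model described above fits in a few lines. This is a minimal sketch with a stand-in for the real HTTP fetch: nothing is cached until a client actually asks for it, and repeat requests are served locally without touching the origin server.

```python
cache = {}
fetch_count = 0   # counts how often we actually hit the origin

def fetch_from_origin(url):
    """Stand-in for a real HTTP request to the origin server."""
    global fetch_count
    fetch_count += 1
    return f"<content of {url}>"

def get(url):
    """On-demand caching: store a resource only after it has
    been requested, then serve later requests from the cache."""
    if url not in cache:
        cache[url] = fetch_from_origin(url)   # first request only
    return cache[url]

page1 = get("http://example.com/page")   # goes to the origin
page2 = get("http://example.com/page")   # served from the cache
```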
SSL tunneling allows any proxy server which supports it to act as a tunnel for SSL-enhanced protocols. This feature is essential for normal web traffic, as SSL is increasingly used to secure requests which would previously have been sent in clear text. The client makes an initial HTTP request to the proxy asking for an SSL tunnel. At the protocol level, the handshake to establish the SSL tunneling connection is fairly straightforward.
The connection request is simple and looks like virtually any other HTTP request; the only difference is that it uses the CONNECT method. The format is also slightly different, as the parameter is not a full URL but the destination host address and port number, in the form 192.168.1.1:8080. The port number is always required with these connection requests, as no reliable default can be assumed.
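A small sketch of the request a client would send to open the tunnel. The host and port are example values; the CONNECT line and the mandatory explicit port are the points being illustrated.

```python
def connect_request(host, port):
    """Build the HTTP CONNECT request a client sends to a proxy
    to open an SSL tunnel. Note the parameter is host:port,
    not a full URL, and the port is always explicit."""
    return (f"CONNECT {host}:{port} HTTP/1.1\r\n"
            f"Host: {host}:{port}\r\n"
            "\r\n")

req = connect_request("example.com", 443)
# The proxy replies with a 200 status line; from then on it
# blindly relays bytes in both directions.
```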
When the client has received a successful response, the connection will pass all data in both directions to the destination server. Most of the proxy server's role in authentication and establishing the connection is now over, and its job is limited to simply forwarding data for the connection. The final significant role for the proxy server is to close the connection, which it does when it receives a close request from either the client or the server.
Other situations where the connection will be closed mainly involve error status codes, for example an error generated when authentication has failed. Most proxies require some sort of authentication, especially high-quality US proxies such as this. The method might vary from a simple username and password supplied via a challenge and response to pass-through authentication from a system like Active Directory or LDAP.
It's interesting to note that the mechanism used to handle SSL tunneling is not actually specific to SSL. It is in fact a generic technique which can be used to tunnel any protocol. There is no reliance on any SSL support in the proxy, which can be confusing when you see people looking for 'SSL-enabled' proxies online. On a properly configured proxy server it is not required: the data is simply transported, and after the initial connection request there is no need for the protocol itself to be understood.
There are issues with some protocols passing through proxies; certain specialised protocols need more support than the standard tunneling mechanism offers. For example, for many years LDAP (Lightweight Directory Access Protocol) did not work across most common proxies. Some implementations support LDAP by using SOCKS, although caching of LDAP queries can subsequently cause performance issues. Most protocols, however, work perfectly with this 'hands-off' tunneling mechanism, which you can see illustrated if you try to stream video through proxies like this, which used to circumvent BBC iPlayer being blocked abroad.
Most networks of any size need some sort of system for storing and managing their log files. Most network devices produce logs, and many of them contain a lot of useful information. However, without a way of analysing and reporting on this data, it can simply become another system administration chore with little or no benefit.
One of the oldest methods of centralising these system messages and logs is a syslog server. Syslog messaging was originally used on UNIX systems for the logs produced by network devices, applications and operating systems. Most modern network devices can be configured to generate syslog messages, which are normally transmitted using UDP to a server running a syslog daemon that accepts them.
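The classic BSD syslog format (RFC 3164) is simple enough to sketch. The facility and severity numbers are combined into a single PRI value at the front of the message; the tag and text below are example values, and real messages are sent over UDP to port 514.

```python
def syslog_message(facility, severity, tag, text):
    """Build a minimal RFC 3164-style syslog message.

    The PRI value multiplexes facility and severity into one
    number: PRI = facility * 8 + severity. (A full message also
    carries a timestamp and hostname, omitted here for brevity.)
    """
    pri = facility * 8 + severity
    return f"<{pri}>{tag}: {text}"

# facility 4 (security/auth), severity 3 (error)
msg = syslog_message(4, 3, "router1", "default gateway unresponsive")
# To actually deliver it, a device would do something like:
#   sock.sendto(msg.encode(), (syslog_server, 514))   # UDP port 514
```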
Over the years more and more devices have been created which can support and generate syslog messages. Despite it being fairly old technology, many firms have moved away from specialised technology towards simply using a central syslog server to receive, store and archive messages generated by network devices. These servers can also create automatic notifications when specific critical events occur – for example if an important default gateway becomes unresponsive. This means IT support personnel can be made aware of potential issues quickly, often before they affect users directly, or can at least minimise downtime.
Although there are many other methods of sending and receiving system messages across a network, using syslog has many advantages. For a start it works directly with many reporting technologies, and almost all network devices support the syslog message format. This is very important, because as soon as you have multiple logging formats you face the prospect of installing multiple log servers. This creates a hierarchy which can be difficult to support, especially for network support staff who need access to all logs in order to troubleshoot issues.
For example, if you have a RAS (Remote Access Server) configured to use a different messaging system from the other devices in your network, you could miss vital pieces of information. Problems on these servers can go unnoticed, so important devices can suffer longer periods of downtime. Many remote users rely on access through a good VPN service when travelling in order to connect back from remote networks.
If you do have devices which don't support the syslog standard and you aren't able to replace them, there are other options. You can use software like Microsoft's Log Parser, which can convert many formats into messages that syslog can understand.
Author of Polskie Proxy
There is little excuse for not installing an IDS (Intrusion Detection System) on your network; even the usual culprit of budget doesn't apply. One of the leading IDSes, Snort, is available completely free of charge and is sufficient for all but the most complex network infrastructures. It is virtually impossible to effectively monitor and control your network, particularly if it's connected to the internet, without some sort of IDS in place.
There are certain questions about the day-to-day operation of your network that you should be able to answer. Questions like the following will help you determine whether you really have control over your network and its hardware:
- Can you determine how much traffic on your network is associated with malware or unauthorised software?
- Are you able to determine which of your clients do not have the latest client build?
- Can you determine which websites are most frequently requested, and whether these requests come from legitimate users or from malware activity?
- Can you determine which users are the top web surfers (and whether it is justified)?
- How much mail are your SMTP servers processing?
It is surprising how many network professionals simply wouldn't have a clue how to obtain this information from their network; however, it's impossible to ensure that the network is efficient without it. For example, a few heavy web users can create much more traffic than the majority of ordinary business users. Imagine two or three users in a small department who used a working BBC VPN to stream TV to their computers eight hours a day. The traffic generated would be huge and could easily swamp an important network segment.
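Answering a question like "who are the top web users?" is mostly a matter of aggregating log data. Here is a minimal sketch over an invented log format (one "user bytes" pair per line); real IDS or proxy logs carry the same information in more elaborate shapes.

```python
from collections import Counter

# Invented sample log: each line records a user and the number of
# bytes transferred in one request.
log_lines = [
    "alice 120",
    "bob 50000",
    "alice 300",
    "carol 80",
    "bob 45000",
]

usage = Counter()
for line in log_lines:
    user, nbytes = line.split()
    usage[user] += int(nbytes)   # total bytes per user

top = usage.most_common(1)       # the heaviest web user first
```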
All security professionals should ensure they have the tools and reporting capacity to answer simple questions like these about network usage. Knowing the answers will help you control and adapt your network to meet its users' needs. Of course a simple IDS won't provide the complete solution, but it will help keep control of your network. Malware can sit and operate for many weeks in a network which is not monitored properly, heavily impacting performance and potentially spreading to other devices and eventually other networks. In environments where performance is important, being aware of these sorts of situations can make a huge difference.
Network professional and broadcaster, author of BBC News Streaming.
For many people, travel is becoming much easier, and as a species our geographical horizons are perhaps wider than ever. Inexpensive air travel and soft borders like the European Union mean that instead of just looking to work in another city or town, another country is just as viable. The internet of course enables this: many corporations have installed infrastructure to allow remote or home working, which means many people can work from wherever they wish. Instead of sitting in cubicles in vast, expensive office space, people can work together just as easily over high-speed internet connections from home.
Unfortunately there are some issues with this digital utopia, most of them self-inflicted. Instead of being a vast, unfettered global communications medium, the internet in some senses has begun to shrink – not so much in size, but in the increasing number of restrictions, filters and blocks being applied to web servers across the planet. For instance, the company I work for has two main bases, one in the UK and the other in Poland, which means there is quite a bit of travel between the two countries. Not surprisingly, employees working away from home for some time use the internet to keep in touch with home life, yet this can be frustrating.
A common issue is that many websites are not really accessible globally; they are locked to specific regions. Take for example the main Polish TV channel, TVN: it has a fantastic website and a media player through which you can watch all its shows. However, a Polish citizen who tries to watch the local news from Warsaw from a hotel in the UK will find themselves blocked, because the content is only available to those physically located in Poland. It's no one-off either; this behaviour is shared by pretty much every large media company on the web, blocking access depending on your location.
There is a solution, and for our employees it's actually quite simple: all they need to do is fire up their VPN client and connect back to their home server in Poland. The instant they do this, their connection looks like it's based in Poland and all the Polish TV channels work perfectly. There's a post about something similar here – using a Polish proxy to watch TVN and some other channels, although that one uses a commercial service designed to hide your location. It's a practice that is becoming increasingly necessary: the more we travel, the more we find our online access determined by our physical location.
The use of proxies and, more recently, VPNs allows you to break out of these artificial intranets which companies are creating by blocking access from other countries. If you can switch between VPN servers across the world, you can effectively take back control and access whatever website you need. Your physical location becomes unimportant again: by taking control of your virtual location you gain a huge advantage over other internet users, choosing the location you wish to appear from. There are even other options now – take a look at this UK DNS proxy, which does something fairly similar and can be used to watch the BBC and Netflix from outside the UK.
Author of Does BBC iPlayer Work in Ireland