Packet Sniffing for Beginners

Sometimes there are errors and problems on a network that need in-depth analysis, and troubleshooting them can be almost impossible without a tool such as a packet sniffer to dig deeper. Often you won't be able to work out why a share is not responding, or discover that the reason your RAS server is so slow is that all your travelling sales people are using it to watch BBC TV while they're abroad!

If a certain error condition occurs only when the request comes from an actual client, but not when using telnet, packet sniffing is in order. Using telnet can also be awkward, because the proxy and origin servers may require authentication credentials to be sent; in those cases it is more convenient to use a real Web client that can construct those headers easily. Equally, if a problem shows up with one client but not with others, it is worth finding out exactly what that client is sending.

There are a number of packet sniffers. Depending on the operating system, you may find some of these useful:
• wireshark
• ethereal
• etherfind
• tcpdump
• nettl

Many books and guides pick a specific packet sniffer to work with, so if you're following a guide, use the one it recommends. One of the most popular is Wireshark, a fully functional, free packet sniffer often used by professionals in place of more costly commercial options.
Many of the most comprehensive tools are distributed as part of Unix and Linux distributions, and you'll need to refer to the man pages for instructions on how to use them.
Example: let's say you want to snoop the traffic between the hosts Fred's PC (client) and socrates (server). You can use something like Wireshark to capture the traffic between the two endpoints and analyse what's happening between them.
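As a rough illustration, here is a minimal sketch using the third-party scapy package (an assumption on my part; Wireshark or any of the tools listed above does the same job interactively). The host names are placeholders for Fred's PC and socrates, and capturing usually needs root or administrator privileges.

    # Minimal sketch: capture only the traffic exchanged between two hosts.
    # "freds-pc" and "socrates" are placeholder host names.
    from scapy.all import sniff

    packets = sniff(filter="host freds-pc and host socrates", count=50)

    for pkt in packets:
        print(pkt.summary())   # one line per packet: protocol, source -> destination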

Of course, this is only useful if you can identify which sources to monitor in the first place. If you suspect that Fred is using the company proxy for Netflix, you can prove the point easily with a packet sniffer. If you're not sure, you may have to look to the network hardware for clues: checking switches and hubs for SPAN (mirror) ports and plugging into them is a useful tactic. These ports typically mirror all the traffic carried over the active ports, so you can use a SPAN port to track all the data passing through that device.

The ability to specify a port is essential, and all decent packet sniffers allow this. You should also be able to use command-line switches to control how the traffic is dumped, that is, to specify exactly what format the captured traffic is returned in; this helps considerably at the analysis stage. Any packet sniffer that doesn't offer this will make the next stages much harder, because the amount of data produced is often very large.
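To make that concrete, here is a small sketch (again using scapy, purely as an assumption; tcpdump offers similar switches) that restricts the capture to a single port and prints each packet in a compact custom format. The port number is just an assumed proxy port.

    # Minimal sketch: capture TCP traffic on one port and control the output
    # format ourselves. Port 8080 is an assumed proxy port, not from the post.
    from scapy.all import sniff
    from scapy.layers.inet import IP, TCP

    def show(pkt):
        if IP in pkt and TCP in pkt:
            print(f"{pkt[IP].src}:{pkt[TCP].sport} -> "
                  f"{pkt[IP].dst}:{pkt[TCP].dport} len={len(pkt)}")

    sniff(filter="tcp port 8080", prn=show, store=False, count=100)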

Proxy – Access Control Methods

When you first think about access control on a standard proxy, one of the most obvious options is the traditional username and password. Indeed, access control by user authentication is one of the most popular methods, if only because it's generally one of the simplest to implement. Not only does it use readily available information for authentication, it also fits neatly into most corporate networks, which generally run on Windows or Linux platforms. All common operating systems support user authentication as standard, usually via a variety of protocols.

Access control based on the username and group is a commonly deployed feature of proxies. It requires users to authenticate themselves to the proxy server before their requests are allowed through. This way, the proxy can associate a user identity with each request and apply different restrictions per user. The proxy will also log the username in its access log, allowing the logs to be analysed for user-specific statistics, such as how much bandwidth each user consumed. This can be vital in a world of high-traffic multimedia applications, where a few users treating your remote access server as a handy BBC VPN service can bring a network to its knees.

Authentication

There are several methods of authentication. With HTTP, Web servers support Basic authentication, and sometimes also Digest authentication (see HTTP Authentication on page 54). With HTTPS, or rather with any SSL-enhanced protocol, certificate-based authentication is also possible. However, current proxy servers and clients do not yet support HTTPS communication to proxies and are therefore unable to perform certificate-based authentication. This shortcoming will surely be resolved soon.
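As a quick illustration of Basic authentication to a proxy, here is a minimal sketch assuming the Python requests package; the proxy address and credentials are placeholders, not values from this post.

    # Minimal sketch: send a request through a proxy that requires Basic auth.
    # requests builds the Proxy-Authorization header from the credentials
    # embedded in the proxy URL. All names and credentials are placeholders.
    import requests

    proxies = {
        "http":  "http://alice:secret@proxy.example.com:3128",
        "https": "http://alice:secret@proxy.example.com:3128",
    }

    resp = requests.get("http://www.mywebsite.com/", proxies=proxies, timeout=10)
    print(resp.status_code)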

Groups

Most proxy servers provide a feature for grouping a set of users under a single group name. This allows easy administration of large numbers of users by creating logical groups such as admin, engineering, marketing, sales, and so on. It is also useful in multinational organisations where individuals may need to authenticate in different countries using global user accounts and groups. So if a UK-based salesman were travelling in continental Europe, he could use his UK account to access a French proxy and use local resources.

Access Control by Client Host Address

An almost universally used access control feature is limiting requests based on the source host address. The restriction may be applied by the IP address of the incoming request or by the name of the requesting host. IP address restrictions can often be specified with wildcards covering entire network subnets, such as 112.113.123.*. Similarly, wildcards can be used to specify entire domains: *.yourwebsite.com
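A minimal sketch of that kind of host-based rule, written in Python purely for illustration; the subnet and domain patterns below follow the examples in the text, and the helper function is hypothetical rather than part of any particular proxy.

    # Minimal sketch: allow a request if the client IP falls inside an allowed
    # subnet or its hostname matches a wildcard domain pattern.
    import fnmatch
    import ipaddress

    ALLOWED_SUBNETS = [ipaddress.ip_network("112.113.123.0/24")]   # 112.113.123.*
    ALLOWED_DOMAINS = ["*.yourwebsite.com"]

    def host_allowed(client_ip: str, client_host: str) -> bool:
        ip_ok = any(ipaddress.ip_address(client_ip) in net for net in ALLOWED_SUBNETS)
        name_ok = any(fnmatch.fnmatch(client_host, pattern) for pattern in ALLOWED_DOMAINS)
        return ip_ok or name_ok

    print(host_allowed("112.113.123.45", "pc07.yourwebsite.com"))   # True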

Access control based on the requesting host address should always be performed to limit the source of requests to the intended user base.

Using Round Robin DNS

A common method of name resolution is round-robin DNS. It maps a single host name to multiple physical server machines, giving out different IP addresses to different clients. Load balancing is treated in more detail later in this blog (see name resolution methods). With round-robin DNS the user is unaware of the existence of multiple servers: the pool of servers appears to be a single logical server, because it is accessed through a single name.
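You can see this for yourself with a quick lookup; the sketch below uses only the Python standard library, and the host name is illustrative.

    # Minimal sketch: list every address the resolver returns for one name.
    # With round-robin DNS several A records come back, and different clients
    # are handed different addresses.
    import socket

    infos = socket.getaddrinfo("www.mywebsite.com", 80, proto=socket.IPPROTO_TCP)
    for address in sorted({info[4][0] for info in infos}):
        print(address)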

Redirections

Another mechanism available for Web servers is to return a redirection to a parallel server to perform load balancing.

 

For example,

  • upon accessing the URL http://www.mywebsite.com/
  • the main server www.mywebsite.com sends an HTTP redirection to the URL http://www2.mywebsite.com/
  • another user may be redirected to a different server: http://www4.mywebsite.com/
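A minimal sketch of such a redirecting front end, using only the Python standard library; the mirror names follow the example above and the port is arbitrary.

    # Minimal sketch: the "main" server answers every request with a 302
    # pointing the client at one of the mirror servers.
    import random
    from http.server import BaseHTTPRequestHandler, HTTPServer

    MIRRORS = ["http://www2.mywebsite.com", "http://www3.mywebsite.com",
               "http://www4.mywebsite.com"]

    class Redirector(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(302)
            self.send_header("Location", random.choice(MIRRORS) + self.path)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), Redirector).serve_forever()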

This way, the load can be spread by the main server www across several separate machines www1, www2, …, wwwn. The main server might be set up so that the only thing it does is perform redirections to other servers. There is a common misconception about this scheme: that every request would still have to go through the main server to get redirected to another server.

On the contrary, for any given client there is only a single initial redirection. After that, all requests go automatically to the new target server, since the links within the HTML are usually relative to the server where the HTML file actually resides. This can cause difficulties in certain situations, for example where cookies have been cached against a particular server name, perhaps when you access one of the many BBC servers to watch Match of the Day online.

With this method the user is aware that there are several servers, since the URL location field in the client software will display a different server name from the one originally accessed. This is usually not an issue, though: the entry point to the site is still centralized through the main server, and that is the only address users ever have to remember.

However, bear in mind that users may place a bookmark in their client software pointing to one of the several servers sharing the load, not the main server. This means that once a server name such as www4 is introduced, there may forever be references to that machine in users' bookmark files. Remember, too, that the result may be slightly different when a page is accessed through such a bookmark, so don't expect exactly the same behaviour.

Although this round-robin method of name resolution is extremely common, don't assume it is always deployed. There are many variations and alternatives, including different types of redirection and mirroring.


Planning your Security Assessment

Starting a full security risk assessment in an organisation of any size can be extremely daunting if it's something you've never done before. However, before you get too involved in complicated charts, diagrams and long drawn-out forms and flowcharts, it's best to take a step back. There's a simple goal here: to assess and address any security risks in your organisation. It's presumably a subject you already have some opinion of and knowledge about, so stay focused and don't turn the exercise into something too complicated to be of practical use.

Many people, when questioned as part of a risk assessment, will have an answer prepared: they will start to talk about the nuts and bolts of the system. They'll give opinions on just how this or that component is weak, how someone could get access to it and to people's documents, and so on. That's just a technical evaluation of the system, which might or might not be useful. Whether it's useful depends on the answer to an essential question, one the experienced security professional will have asked before responding. If the system is not being used for its intended purpose, that's a different issue altogether, although it obviously impacts security in certain instances.

For example, if company PCs are being used to stream video or to reach inappropriate sites, say to watch ITV Stream abroad while at work, this introduces additional risks. Not only could the integrity of the internal network be affected, the connection speed will also suffer while large amounts of video are streamed across the network. This behaviour should certainly be flagged if encountered during the assessment, even though it's not a primary function of the investigation.

The important question is: what do you mean by secure? Security is a comparative term; there is no absolute scale of insecurity or level of security. Both terms only make sense when interpreted as attributes of something you consider valuable. Something that is somehow at risk needs to be secured. How much security does it need? That depends on its value and on the operational threat. How do you measure the operational threat? Now you're getting into the real questions, which will lead you to an understanding of what you actually mean by the term secure: measuring and prioritizing business risk. Security is used to defend things of value.

In a business environment, things that have value are usually called assets. If assets are somehow damaged or destroyed, you may suffer a business impact. The potential event through which you could suffer that harm or destruction is a threat. To prevent threats from crystallising into loss events that have a business impact, you use a layer of protection to keep the threats away from your assets. When the assets are poorly protected, you have a vulnerability to the threat. To improve security and reduce the vulnerability, you introduce security controls, which may be either technical or procedural.

The process of identifying commercial assets, recognizing the threats, assessing the degree of business impact that would be suffered if the threats were to crystallize, and analysing the vulnerabilities is known as operational risk assessment. Implementing suitable controls to strike a balance between usability, security, cost and other business needs is called operational risk mitigation. Operational risk assessment and operational risk mitigation together comprise what can be called operational risk management. Later chapters in this book examine operational risk management and will help you deal with actual incidents, such as people trying to watch the BBC abroad through your internal VPN server! The main thing you need to understand at this stage is that risk management is all about identifying and prioritizing the risks through the risk assessment procedure and applying degrees of control in line with those priorities.

So what’s a Digital Certificate?

We've probably all seen those simple diagrams where an electronic signature authenticates the key pair used to create the signature. For electronic commerce, authenticating a key pair might not be adequate. For business transactions, each key pair needs to be closely bound to the person or organisation that owns it. A digital certificate is a credential that binds a key pair to the entity that owns it. Digital certificates are issued by certification authorities, and it is because we trust the authority that we trust the binding asserted by the certificate.

A digital signature is fine for verifying e-mail, but stronger verification methods are needed to associate an individual with the binary bits on the network that purport to "belong" to Tom Smith (much like the demonstration in our earlier post, where we used one to control access to an app for watching the BBC News abroad). For electronic commerce to work, the association has to be of a strength that is legally binding. When Tom Smith has a digital certificate to advertise to the world at large, he is in possession of something that may carry more trust than the "seal" made by his own digital signature.

You might trust his digital signature, but what if some other trusted authority also vouched for Tom Smith? Wouldn't you then trust Tom Smith a little more? A digital certificate is issued by an organization that has a reputation to defend. This organization, known as the certificate authority (CA), may be Tom's employer, an independent organization, or the government. The CA will take measures to establish some truths about Tom Smith before issuing a certificate for him.

The certificate will normally hold Tom’s name, his public key number, the serial number of the certificate itself, and validity dates (issue and expiry). It’ll also bear the name of the issuing CA. The whole certificate is digitally signed by the CA’s own private key.
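Those fields are easy to inspect programmatically; the sketch below assumes the third-party Python cryptography package and a hypothetical PEM file holding Tom's certificate.

    # Minimal sketch: print the fields described above from a PEM certificate.
    from cryptography import x509

    with open("tom_smith.pem", "rb") as f:        # hypothetical file name
        cert = x509.load_pem_x509_certificate(f.read())

    print("Subject:      ", cert.subject.rfc4514_string())   # e.g. CN=Tom Smith
    print("Issuer (CA):  ", cert.issuer.rfc4514_string())
    print("Serial number:", cert.serial_number)
    print("Valid from:   ", cert.not_valid_before)
    print("Valid until:  ", cert.not_valid_after)
    # The CA's signature over the whole certificate is in cert.signature and is
    # checked against the CA's public key when the chain is verified.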

At last we have a mechanism that allows individuals who have no previous relationship to establish each other's identity and participate in the legal transactions of electronic commerce. It's certainly more reliable and secure than something like geo-location, which simply infers identity from your location; for example, a website might determine nationality from your network address, e.g. a British IP address being required to access the BBC online.

Certificates, if delivered correctly, inspire trust among Internet traders. It's not, however, as easy as it might sound. Certificates expire, go missing, are issued to the wrong person, or have to be revoked because the detail held on the certificate is wrong, perhaps because the public key was compromised, and this leads to a substantial certificate management effort, or even a campaign.

The X.509 v3 certificate format is the standard used for public key certificates and is widely used by Internet security protocols (such as S-HTTP). Based on X.509 v3, digital certificates are increasingly being used as electronic credentials for identification, non-repudiation and even authorization when making payments and conducting other business transactions on the Internet or on corporate intranets.

Just as in today's credit card system, where millions of card numbers issued by any bank in the world are electronically confirmed, so the use of digital certificates will demand a clearing-house network for certificate confirmation on a comparable scale.

Single proxies or proxy arrays?

If you're working in a small business or network then this issue will probably never arise. However, with the growth of the internet and of web-enabled devices and clients, it's an issue that will almost certainly affect most network administrators: do you just keep adding extra proxies to expand capacity and bandwidth, or should you install an array?

The answer can depend on a variety of external factors. For example, if the corporation is concentrated in a single location, a single level of proxies is the better solution. This reduces latency, as there is only one additional hop added by the proxy, as opposed to two or more with tree-structured proxy hierarchies.

Although the general rule of thumb is one proxy server for every 5000 (possible, not simultaneous) users, it doesn't automatically follow that a company with 9000 users should have three departmental proxies chained to a main proxy.

Instead, the three proxies might be installed in parallel, using the Cache Array Routing Protocol (CARP) or another hash-based proxy selection mechanism. Larger corporations with in-house programming skills may have the resources to create custom solutions that work better for a specific environment, perhaps one that also incorporates remote VPN access to the network. For example, many larger environments have different levels of security in place, with various zones that need to be isolated; generic 'serve all' proxies can be a significant security issue in these environments.
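For a feel of how hash-based selection keeps the same URL on the same cache, here is a minimal sketch in the spirit of CARP (it uses simple highest-score hashing rather than the full CARP algorithm); the proxy names are placeholders.

    # Minimal sketch: score each (URL, proxy) pair and pick the highest score,
    # so a given URL is always routed to the same cache while the load spreads
    # across the array.
    import hashlib

    PROXIES = ["proxy1.example.com", "proxy2.example.com", "proxy3.example.com"]

    def pick_proxy(url: str, proxies=PROXIES) -> str:
        def score(proxy: str) -> int:
            digest = hashlib.md5((url + proxy).encode()).digest()
            return int.from_bytes(digest[:8], "big")
        return max(proxies, key=score)

    print(pick_proxy("http://www.mywebsite.com/index.html"))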

This approach also combines multiple physical proxy caches into a single logical one. In general, such clustering of proxies is recommended, as it increases the effective cache size and eliminates redundancy between individual proxy caches. Three proxies, each with a 4 gigabyte cache, give an effective 12 gigabytes of cache when set up in parallel, as opposed to only about 4 GB if used individually.

Generally, some amount of parallelization of proxies into arrays is always desirable. Nevertheless, the network layout might dictate that departmental proxies be used. That is, it may not be feasible to have all of the traffic originating from the entire company go through one array of proxies: it can cause the entire array to become an I/O bottleneck, even when the individual proxies of the array are in separate subnets. The load created by the users can be so high that the subnets leading to the proxies choke. To alleviate this, some departmental proxies need to be deployed closer to the end users, so that part of the traffic they generate never reaches the main proxy array.

Failover? Since proxies are a centralized point of traffic, it's vitally important that there is a system in place for failover. If a proxy goes down, users instantly lose their access to the internet. What's more, many important applications may rely on permanent internet access to keep running: they might need access to central database systems, or frequent updates and security patches. In many cases, internet access is far more crucial than the admin office being able to use Amazon, surf UK TV abroad or check the TV schedules online.

Failover can be achieved in various ways. There are (relatively expensive) hardware solutions that transparently switch to a hot standby system if the primary system goes down. You can usually choose between different configuration and restore scenarios, and there's also the option of investing in residential IP proxies.

Nevertheless, proxy autoconfiguration and CARP provide more cost-effective failover support. At the time of this writing, there are a couple of areas in client failover support that could be improved: users tend to notice an intermediate proxy server going down through fairly long delays, and possibly error messages. A proper proxy backup system should be virtually seamless and provide similar levels of speed and bandwidth to the primary system.

Security and Performance – Monitoring User Activity

When analysing your server's overall performance and functionality, one of the key areas to consider is user activity. Looking for unusual user activity is a sensible way of identifying potential system problems or security issues. When a server log is full of unusual user activity, you can often use this information to track down the underlying issues very quickly; for example, by analysing your system logs you can often identify trends in authentication failures, security problems and application errors.

Monitoring user access to a system, for example, will allow you to determine usage trends such as utilization peaks. These can cause all sorts of issues, from authentication problems to very specific application errors. This data will be stored in different logs depending on which systems you are using, and most operating systems record much of it by default.

Using system logs can be difficult, though, because of the huge amount of information in them; it is often hard to determine what is relevant to the health and security of your servers. Even benign behaviour can look suspicious to the untrained eye, so it is important to use tools that help filter the information into more readable forms.

For example, if you see a particular user having authentication problems every week or so, it is likely that they are just having trouble remembering their password. However, if you see a user repeatedly failing authentication over a short period of time, it may indicate something else. For instance, if the user is trying to access the external network through a German proxy server, there would be an authentication problem because that server would not be trusted.
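The sketch below shows the kind of check a log analysis tool performs, assuming a simplified CSV export of authentication events (timestamp, user, result); the file name, format and thresholds are all illustrative.

    # Minimal sketch: flag any account with several failed logons inside a
    # short window, which is the pattern described above.
    import csv
    from collections import defaultdict
    from datetime import datetime, timedelta

    WINDOW = timedelta(minutes=10)
    THRESHOLD = 5

    failures = defaultdict(list)
    with open("auth_log.csv", newline="") as f:      # hypothetical export
        for timestamp, user, result in csv.reader(f):
            if result == "FAILURE":
                failures[user].append(datetime.fromisoformat(timestamp))

    for user, times in failures.items():
        times.sort()
        for i, start in enumerate(times):
            recent = [t for t in times[i:] if t <= start + WINDOW]
            if len(recent) >= THRESHOLD:
                print(f"{user}: {len(recent)} failed logons within {WINDOW}")
                break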

Looking at issues like this can help you spot the user activity behind a security breach. Obviously it is important to be aware of the current security infrastructure in order to interpret the results in these logs correctly. Most operating systems, including Unix and Windows, allow you to configure logging to record different levels of information, ranging from brief to verbose.

If you do set logs to record verbose information, it is advisable to use some sort of program to help analyse it efficiently. There are many applications that can do this, although some of them can be quite expensive. There are simpler and cheaper options too; for example, Microsoft Log Parser is a free tool that allows you to run queries against event data in a variety of formats.

Log Parser is particularly useful for analysing security events, which are obviously a key priority for most IT departments in the current climate. These security and user authentication logs are the best way to determine whether any unusual activity is happening on your network. For example, anyone using a stealth VPN or IP cloaker will be very difficult to detect by looking at raw data from the wire; however, it is very likely that some user authentication errors will be thrown up by the use of an external server like this. Most networks restrict access to predetermined users or IP address ranges, and these errors can flag up such behaviour very quickly.


Code Signing – How it Works

How can users and computers trust all the random software that appears on large public networks? I am of course referring to the internet and the need most of us have to download and run software or apps on a routine basis. How can we trust that a download is legitimate software and not some shell of a program designed to infect our PC or steal our data? After all, even if we avoid most software, everyone needs to install driver updates and security patches.

The solution generally involves something called code signing, which allows companies to vouch for the source and content of any file released over the internet. The software is signed with a certificate, and as long as you trust the certificate and its issuer, you can be reasonably happy to install the associated software. Code signing is used by most major distributors to assure the integrity of software released online.

Code Signing – the Basics
Code signing simply adds a small digital signature to a program: an executable file, an ActiveX control, a DLL (dynamic link library), or even a simple script or Java applet. The crucial point is that this signature seeks to protect the user of the software in two ways:

The digital signature identifies the publisher, ensuring you know exactly who wrote the program before you install it.

The digital signature allows you to determine whether the code you are about to install is the same as the code that was released, and helps to identify what changes, if any, have been made since.
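To make those two checks concrete, here is a minimal sketch assuming the Python cryptography package, a publisher certificate with an RSA key, and a detached signature file shipped alongside the program; all the file names are hypothetical.

    # Minimal sketch: verify a detached signature over the downloaded file with
    # the publisher's public key, which proves both who signed it and that the
    # code has not changed since it was signed.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.x509 import load_pem_x509_certificate

    with open("publisher_cert.pem", "rb") as f:      # hypothetical certificate
        publisher_key = load_pem_x509_certificate(f.read()).public_key()

    with open("setup.exe", "rb") as f:               # the downloaded program
        code = f.read()

    with open("setup.exe.sig", "rb") as f:           # detached signature file
        signature = f.read()

    try:
        publisher_key.verify(signature, code, padding.PKCS1v15(), hashes.SHA256())
        print("Signature valid: publisher identified and code unmodified.")
    except InvalidSignature:
        print("Signature check failed: code altered or signer not as claimed.")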

If the application itself is aware of code signing, this makes it even simpler to use and more secure. Such programs can be configured to treat signed and unsigned software differently depending on the circumstances. One simple example is the security zones defined in Internet Explorer, which can be configured to control how applications behave depending on which zone they are in. There can be different rules for signed and unsigned applications, with more rights obviously assigned to the signed ones.

In secure environments you can assume that any unsigned application is potentially dangerous and apply restrictions accordingly. Most web browsers can distinguish between the two and assign security rights based on that status. Note that these rules are applied regardless of the type of connection or access, even a connection over a live VPN to watch the BBC!

This is not restricted to applications that run in a browser; you can control the behaviour of signed and unsigned code in other areas too. Take device drivers, for instance: it is arguably even more important that these are validated before being installed. In a Windows environment you can define specific GPO settings to control the installation and operation of device drivers based on this criterion. These policies can also be filtered under certain conditions, for example applying them only to connections arriving through a proxy with residential IP addresses.

As well as installation, this can control how Windows interacts with these drivers; in general, for most networks, you should not allow installation of unsigned drivers at all. This is not always possible, though: sometimes an application or a piece of specialised hardware will need device drivers that the vendor hasn't been able to sign satisfactorily. In these instances you should think carefully before installing, and consider the source too. A driver downloaded from a reputable site, perhaps using a high-anonymity proxy to protect your identity, might be safer than a random download from an insecure site, but there is still a risk.

Preparing PKI in a Windows Active Directory Environment

If you're implementing internet access for an internal Windows-based network, there are two important factors to consider. Firstly, it's important to ensure that your perimeter is protected and that access is only allowed through a single point. This might seem trivial, but it's crucial if the network is to be controlled: any network where thousands of individual clients access the internet directly, rather than through a proxy, is going to be almost impossible to protect.

The second aspect relates to overall client and server security: ensure that your Windows environment has Active Directory enabled. This will also allow you to implement the Microsoft Windows PKI. From Windows 2003 onwards this is already included, and PKI is preconfigured in the Windows 2003 schema whether you wish to implement it or not.

If you are considering using Windows PKI, remember that although Active Directory is a prerequisite for a straightforward installation, it does not require a particular domain functional level or even a fully functioning forest to operate. In fact, the only configuration required in later versions of Windows is a change to the Cert Publishers group, which is needed in any multi-domain forest. This group is pre-populated as a domain local group in each domain of an Active Directory forest by default.

This is how PKI is implemented: you can grant any enterprise-level certificate authority (CA) the right to publish certificates to any user object in the current forest, or to Contact objects in foreign forests. Remember to enable the relevant permissions by adding the CA's computer account to each domain's Cert Publishers group. This is essential because the scope of this group has changed from a global group to a domain local group, which allows the group to include computer accounts from outside the domain. This means you can add computers and user groups for external access by including an external gateway. For example, if you wanted to proxy BBC streams and cache them, you could include the proxy server in this group in order to minimize authentication traffic.
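A minimal sketch of that group change, assuming the third-party ldap3 package; the domain controller, credentials and distinguished names are placeholders, and in practice you would normally make this change through the usual Active Directory administration tools.

    # Minimal sketch: add a CA's computer account to a domain's Cert Publishers
    # group over LDAP. All names and credentials below are placeholders.
    from ldap3 import Server, Connection, MODIFY_ADD

    server = Server("dc01.corp.example.com")
    conn = Connection(server, user="admin@corp.example.com", password="********",
                      auto_bind=True)

    group_dn = "CN=Cert Publishers,CN=Users,DC=corp,DC=example,DC=com"
    ca_dn = "CN=CA01,OU=Servers,DC=corp,DC=example,DC=com"

    conn.modify(group_dn, {"member": [(MODIFY_ADD, [ca_dn])]})
    print(conn.result["description"])    # "success" if the member was added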

You cannot currently deploy Windows Server Enterprise CAs in non-Active Directory environments, because an Enterprise CA requires AD in order to store configuration information and publish certificates. You can install Windows Server PKI in a non-AD environment, but each CA in the PKI hierarchy must then be standalone. This is workable in smaller environments but can be a real challenge when configuring communications in large or distributed networks spanning many subnets. Ensuring that the right certificate authority is assigned across a multinational network is difficult without Active Directory: you may have clients and servers requesting authentication from different networks, so in a UK company you might have a client desktop with an Irish IP address seeking authentication from a London-based standalone CA in a different domain.

 

Securing the Internal Network

Twenty years ago this wasn't really much of an issue: a simple network, a couple of file servers and, if you were lucky, an email system. Security was never much of a concern, which was just as well because sometimes there wasn't much you could do anyway. If anyone remembers the forerunner of Microsoft Exchange, the Microsoft Mail post offices were installed in open shares, and if you started locking them down everything stopped working. You could make some minor security improvements, but above all you had to be careful not to leave anything sensitive in those open shares.

Of course, Unix, Ultrix and the forerunners of Windows NT all had reasonable levels of security, and you could apply decent access controls based on users, groups and domains without too much trouble. It was more the applications that were the issue; security in a digital environment was very much in its infancy. Nowadays, of course, everyone takes security much more seriously, in this age of data protection, hackers, viruses and cyber-criminal attacks. It's still a nightmare to lock down environments though, and that's primarily due to the internet.

IT departments all over the world love the internet: solving issues and fixing problems is made a hundred times easier with a search engine at hand. That's one side of the coin, though; the other is that internet access makes configuration and security much more important and potentially much more challenging. Imagine that every single desktop has the capacity to visit, download and distribute any number of malevolent files. A potential virus outbreak sits on everybody's desk, and when you look at some of the users you can only be scared.

So what sort of methods do we have to minimize the potential chaos to our internal network? First of all there's something that isn't really technology-based: a document that details how people must use their computers, and especially the internet. Making sure that users are educated about the risks to both the network and their employment status is probably the most important step you can take to reduce risk from outside sources. If they know that they could get fired for downloading or streaming video from sites like the BBC via the company VPN, they're much less likely to do it.

There's still a need to implement access control lists and secure resources, of course, but user compliance goes a long way. Principles like giving users the least permissions they need make sense when securing resources. You can lock down PCs, browsers and external access in Windows environments through GPOs (Group Policy Objects). Routing all internet access through central points is a sensible option, meaning you can both control and monitor internet traffic in each direction. It is also a useful way of applying a second layer of antivirus security, scanning traffic before it reaches your desktop solutions.

Most secure environments also put in place other common-sense measures, such as not allowing users to plug their own hardware into the network. This sounds trivial, but a virus-ridden laptop connected to your internal network can effectively bypass your whole security infrastructure. You have no control over what that hardware is used for; its owner may be downloading torrents and buying alcohol or drugs from the dark web when they get home. Data security can also be improved by ensuring that no one copies or removes data using USB sticks and memory cards. There are security settings and applications that can manage these devices quite easily now, including group policy if you're running a Windows environment and have implemented Active Directory.
