So what’s a Digital Certificate?

We’ve probably all seen those simple diagrams where a digital signature authenticates the key pair used to create it. For electronic commerce, authenticating a key pair alone might not be adequate. For business transactions, each key pair needs to be closely bound to the person or organisation that owns it. A digital certificate is a credential that binds a key pair to the entity that owns it. Digital certificates are issued by certification authorities, so we can trust the binding asserted by the certificate.

A digital signature is fine for verifying e-mail, but stronger methods are needed to associate an individual (like the demonstration in our earlier post, where we used one to allow access to an app for watching BBC News abroad) with the bits on the network that purport to “belong” to Tom Smith. For electronic commerce to work, the association has to be strong enough to be legally binding. When Tom Smith has a digital certificate to advertise to the world at large, he is in possession of something that commands more trust than the “seal” made by his own digital signature.

You might trust his digital signature, but what if some other trusted authority also vouched for Tom Smith?
Wouldn’t you then trust Tom Smith a little more? A digital certificate is issued by an organisation that has a reputation to defend. This organisation, known as the certificate authority (CA), may be Tom’s employer, an independent organisation, or the government. The CA will take measures to establish some facts about Tom Smith before issuing a certificate to him.

The certificate will normally hold Tom’s name, his public key, the serial number of the certificate itself, and validity dates (issue and expiry). It will also bear the name of the issuing CA. The whole certificate is digitally signed with the CA’s own private key.
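
As a rough illustration of what those fields look like in practice, here is a minimal Python sketch that reads the details listed above out of an X.509 certificate. It uses the third-party cryptography package, and the certificate file name is hypothetical:

```python
# A minimal sketch of reading certificate fields with the "cryptography" package.
# The file name "tom_smith.pem" is a hypothetical example.
from cryptography import x509
from cryptography.hazmat.primitives import hashes

with open("tom_smith.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject:      ", cert.subject.rfc4514_string())   # e.g. CN=Tom Smith
print("Issuer (CA):  ", cert.issuer.rfc4514_string())    # the signing authority
print("Serial number:", cert.serial_number)
print("Valid from:   ", cert.not_valid_before)
print("Valid until:  ", cert.not_valid_after)
print("Fingerprint:  ", cert.fingerprint(hashes.SHA256()).hex())
```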

Finally we have a mechanism which can be used to allow individuals who have no previous relationship to establish each other’s identity and participate in the legal transactions of electronic commerce. It’s certainly more reliable and secure than something like geo-location, which simply infers your identity from your location. For example, a web site might determine nationality from your network address, e.g. a British IP address needed to access the BBC online.

Certificates, if managed correctly, inspire trust among Internet traders. It’s not, however, as easy as it might sound.
Certificates expire, get lost, are issued to the wrong person, or have to be revoked because the detail held on the certificate is wrong (perhaps the public key was compromised), and this leads to a large certificate management effort, or even a dedicated campaign.

The X.509 v3 certificate format is a standard used for public key certificates and is broadly used by Internet security protocols (like S-HTTP). Based on X.509 v3, digital certificates are being used increasingly as electronic credentials for identification, non-repudiation, and even authorization, when making payments and conducting other business transactions on the Internet or corporate intranets.

Just as in our credit card system of today, where millions of credit card numbers issued by banks all over the world are electronically confirmed, the use of digital certificates will demand a clearing house network for certificate confirmation on a comparable scale.

Single proxies or proxy arrays?

If you’re working in a small business or network then this issue will probably never arise. However, with the growth of the internet and of web-enabled devices and clients, it’s an issue that will almost certainly affect most network administrators. Do you just keep adding an extra proxy to expand capacity and bandwidth, or should you install an array?

Nevertheless the answer can depend on a variety of external factors. For example, if the organisation is concentrated in a single location, a single level of proxies is the better solution. This reduces latency, as there is only a single additional hop added by the proxies, as opposed to two or more with tree-structured proxy hierarchies.

Although the general rule is to have one proxy server for every 5000 (potential, not simultaneous) users, it doesn’t automatically mean that a company with 9000 users should have 3 departmental proxies that are then chained to a main proxy.

Instead, the 3 proxies might be installed in parallel, using the Cache Array Routing Protocol (CARP) or another hash-based proxy selection mechanism. Larger corporations with in-house programming skills may have the resources to create custom solutions which work better for a specific environment, perhaps one that also incorporates remote VPN access to the network. For example, many larger environments have different levels of security in place and various zones which need to be isolated; generic ‘serve all’ proxies can be a significant security issue in these environments.
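
To illustrate the idea behind hash-based selection, here is a simplified Python sketch. It is not the exact CARP hash function, and the proxy names are hypothetical, but it shows how each URL is deterministically mapped to one member of the array:

```python
import hashlib

# Hypothetical proxy array used for illustration only
PROXIES = ["proxy1.example.com:8080",
           "proxy2.example.com:8080",
           "proxy3.example.com:8080"]

def pick_proxy(url: str) -> str:
    """Deterministically map a URL to one proxy in the array.

    The same URL always hashes to the same proxy, so each object is
    cached in only one place and the combined cache space is used
    without duplication.
    """
    digest = hashlib.md5(url.encode("utf-8")).digest()   # bucketing, not security
    index = int.from_bytes(digest[:4], "big") % len(PROXIES)
    return PROXIES[index]

print(pick_proxy("http://www.bbc.co.uk/news"))   # always the same array member
```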

This approach also combines multiple physical proxy caches into a single logical one. In general, such clustering of proxies is recommended as it increases the effective cache size and eliminates redundancy between individual proxy caches. Three proxies, each with a 4 gigabyte cache, would give an effective 12 gigabytes of cache when set up in parallel, as opposed to only about 4GB of useful cache if used individually.

Generally, some amount of parallelization of proxies into arrays is always desirable. Nevertheless, the network layout might dictate that departmental proxies be used. That is, it may not be feasible to have all of the traffic originating from the entire company go through one array of proxies. It can cause the entire array to become an I/O bottleneck, even when the individual proxies of the array are in separate subnets. The load created by the users can be so high that the subnets leading to the proxies may choke. To alleviate this, some departmental proxies need to be deployed closer to the end users, so that some of the traffic created by the users never reaches the main proxy array.

Failover? Since proxies are a centralized point of traffic it’s vitally important that there is a system in place for failover. If a proxy goes down, users will instantly lose their access to the internet. What’s more, many important applications may rely on permanent internet access to keep running. They might need access to central database systems, or perhaps need frequent updates or security patches. In any case, internet access is often much more crucial than simply the admin office being able to use Amazon, surf UK TV abroad or check the TV schedules online.

Failover might be achieved in many different ways. There are (relatively expensive) hardware solutions which transparently switch to a hot standby system if the primary system goes down. You can usually choose between different configuration and restore scenarios, and there’s the option to invest in residential IP proxies too.

Nevertheless, proxy auto-configuration and CARP provide more cost-effective failover support. At the time of writing, there are a couple of areas in client failover support which could be improved. Users tend to notice an intermediate proxy server going down through fairly long delays, and possibly error messages. A proper proxy backup system should be virtually seamless and provide similar levels of speed and bandwidth to the primary system.
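
As a rough sketch of the client-side idea, here is a small Python helper (hypothetical proxy host names, and much simpler than a real PAC script) that falls back to a standby proxy when the primary stops accepting connections:

```python
import socket

# Ordered list of hypothetical proxies: primary first, standby second
PROXY_CANDIDATES = [("proxy-primary.example.com", 8080),
                    ("proxy-standby.example.com", 8080)]

def working_proxy(timeout: float = 2.0):
    """Return the first proxy that accepts a TCP connection, or None.

    A client can call this before each batch of requests so that a dead
    primary is skipped with only a short delay rather than a hard failure.
    """
    for host, port in PROXY_CANDIDATES:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return host, port
        except OSError:
            continue        # this candidate is down, try the next one
    return None

print(working_proxy())
```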

Security and Performance – Monitoring User Activity

When analysing your server’s overall performance and functionality, one of the key areas to consider is user activity. Looking for unusual user activity is a sensible way of identifying potential system problems or security issues. When a server log shows unusual user activity you can often use this information to track down the underlying issues very quickly. For example, by analysing these events in your system logs you can often identify trends in authentication, security problems and application errors.

Monitoring user access to a system, for example, will allow you to determine usage trends such as utilization peaks. These can cause all sorts of issues, from authentication problems to very specific application errors. This data will be stored in different logs depending on what systems you are using; most operating systems record much of it by default.

Using system logs, though, can be difficult due to the huge amount of information in them. It is often hard to determine what is relevant to the health and security of your servers. Even benign behaviour can look suspicious to the untrained eye, so it is important to use tools to help filter the information into a more readable form.

For example, if you see a particular user having authentication problems every week or so, then it is likely that they are just having trouble remembering their password. However, if you see a user repeatedly failing authentication over a short period of time, it may point to something else. For example, if the user is trying to access the external network through a German proxy server then there would be an authentication problem, as the server would not be trusted.
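
As an illustration of the sort of filtering a tool can do for you, here is a small Python sketch that flags users with a burst of failed logins inside a short window. The log line format is hypothetical; adjust the regular expression to whatever your own system actually records:

```python
import re
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical log line format: "2024-03-01 09:15:02 FAILED_LOGIN user=tom"
LINE = re.compile(r"^(\S+ \S+) FAILED_LOGIN user=(\S+)")
WINDOW = timedelta(minutes=10)
THRESHOLD = 5          # more than 5 failures inside 10 minutes looks suspicious

def suspicious_users(log_lines):
    """Return the set of users with a burst of failed logins."""
    failures = defaultdict(list)
    for line in log_lines:
        m = LINE.match(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
        failures[m.group(2)].append(ts)

    flagged = set()
    for user, times in failures.items():
        times.sort()
        for i, start in enumerate(times):
            # count failures falling inside the window that starts here
            in_window = [t for t in times[i:] if t - start <= WINDOW]
            if len(in_window) > THRESHOLD:
                flagged.add(user)
                break
    return flagged
```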

Looking at issues like this can help identify user activity that leads to a security breach. Obviously it is important to be aware of the current security infrastructure in order to interpret the results in these logs correctly. Most operating systems, like Unix and Windows, allow you to configure the logs to record different levels of information, ranging from brief to verbose.

If you do set logs to record verbose information it is advisable to use some sort of program to help analyse it efficiently. There are many applications which allow you to do this, although some of them can be quite expensive. There are simpler and cheaper options though; for example the Microsoft Log Parser is a free tool which allows you to run queries against event data in a variety of formats.

Log Parser is particularly useful for analysing security events, which are obviously the key priority for most IT departments in the current climate. These security and user authentication logs are the best way to determine whether any unusual activity is happening on your network. For example, anyone using a stealth VPN or IP cloaker like this one will be very difficult to detect by looking at raw data from the wire. However, it is very likely that using an external server like this will throw up some user authentication errors. Most networks restrict access to predetermined users or IP address ranges, and these errors can flag up the behaviour very quickly.


Code Signing – How it Works

How do you think users and computers can trust all this random software which appears on large public networks? I am of course referring to the internet and the requirement most of us have to download and run software or apps on a routine basis. How can we trust that something is legitimate software and not just a shell of a program designed to infect our PC or steal our data? After all, even if we avoid most software, everyone needs to install driver updates and security patches.

The solution generally involves something called code signing, which allows companies to vouch for the origin and content of any file released over the internet. The software is signed with a certificate, and as long as you trust the certificate and its issuer then you should be happy to install the associated software. Code signing is used by most major distributors to ensure the integrity of software released online.

Code Signing – the Basics
Code signing simply adds a small digital signature to a program: an executable file, an ActiveX control, a DLL (dynamic link library) or even a simple script or Java applet. The crucial point is that this signature seeks to protect the user of the software in two ways:

The digital signature identifies the publisher, ensuring you know exactly who wrote the program before you install it.

The digital signature allows you to determine whether the code you are about to install is the same as the code that was released, and helps to identify whether any changes have been made since.
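
As a very stripped-down sketch of those two checks, here is a Python example using the third-party cryptography package. Real code-signing formats such as Authenticode wrap a lot more structure around this, but at the core the verifier checks the publisher’s signature over the file contents (an RSA publisher key is assumed here):

```python
# Minimal sketch: verify that 'data' was signed by the holder of the
# publisher's private key and has not been modified since release.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

def verify_release(data: bytes, signature: bytes, publisher_public_key) -> bool:
    """Return True if the signature over 'data' is valid (assumes an RSA key)."""
    try:
        publisher_public_key.verify(
            signature,
            data,
            padding.PKCS1v15(),
            hashes.SHA256(),
        )
        return True          # publisher identified, content unmodified
    except InvalidSignature:
        return False         # tampered file or wrong publisher
```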

Obviously if an application is aware of code signing this makes it even simpler to use and more secure. These programs can be configured to treat signed and unsigned software differently depending on the circumstances. One simple example of this is the security zones defined in Internet Explorer. They can be configured to control how each application is treated depending on what zone it is in. There can be different rules for ‘signed’ and ‘unsigned’ applications, with obviously more rights assigned to the ‘signed’ applications.

In secure environments you can assume that any ‘unsigned’ application is potentially dangerous and apply restrictions accordingly. Most web browsers can tell the difference between these applications and assign security rights depending on the status. It should be noted that these rules are applied through any sort of connection or access, even a connection from a live VPN used to watch the BBC!

This is not restricted to applications that operate through a browser; you can assign and control the activity of signed and unsigned applications in other areas too. Take device drivers, for instance: it is arguably even more important that these are validated before being installed. You can define specific GPO settings in a Windows environment to control the operation and installation of a device driver based on these criteria. These can also be filtered under a few conditions, for example specifying a proxy relating to residential IP addresses.

As well as installation, this can control how Windows interacts with these drivers too, although generally for most networks you should not allow installation of an unsigned driver at all. This is not always possible though; sometimes an application or specialised piece of hardware will need device drivers where the vendor hasn’t been able to sign the code satisfactorily. In these instances you should consider carefully before installing, and consider the source too. For example, downloading from a reputable site using a high-anonymity proxy to protect your identity might be safer than a random download from an insecure site, but there is still a risk.

Preparing PKI in a Windows Active Directory Environment

If you’re installing and implementing internet access for an internal Windows-based network then there are two important factors you should consider. Firstly, it’s important to ensure that your perimeter is protected and access is only allowed through a single point. This might seem trivial but it’s actually crucial to ensuring that the network can be controlled. Any network which has thousands of individual clients accessing the internet directly, and not through a proxy, is going to be almost impossible to protect.

The second aspect relates to overall client and server security: ensure that your Windows environment has Active Directory enabled. This will also allow you to implement the Microsoft Windows PKI. From Windows 2003 onwards this is already included, and PKI is preconfigured in the Windows 2003 schema whether you wish to implement it or not.

If you are considering using Windows PKI then remember that although Active Directory is a prerequisite for a straightforward installation, it does not require a particular domain functional level or even a fully functioning forest to operate in. In fact, the only configuration you require in the later versions of Windows is to adjust the Cert Publishers group, which is needed in any multi-domain forest. This group is pre-populated as a domain local group in each domain in an Active Directory forest by default.

This is how PKI is implemented: you can give any enterprise-level certificate authority (CA) the rights to publish certificates to any user object in the current forest, or to the Contact object in foreign forests. Remember to enable the relevant permissions by adding the CA’s computer account to each domain’s Cert Publishers group. This is essential because the scope of this group has changed from a global group to a domain local group, which allows the group to include computer accounts from outside the domain. This means that you can add computers and user groups for external access by including an external gateway. For example, if you wanted to proxy BBC streams and cache them you could include the proxy server in this group in order to minimize authentication traffic.
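
For illustration only, here is a hedged Python sketch using the third-party ldap3 package to add a CA’s computer account to a domain’s Cert Publishers group over LDAP. All of the distinguished names, hostnames and credentials are hypothetical, and in practice this is normally done through Active Directory Users and Computers or PowerShell rather than a script like this:

```python
# Hypothetical example: add a CA's computer account (from another domain)
# to a child domain's Cert Publishers group using LDAP.
from ldap3 import Server, Connection, MODIFY_ADD, NTLM

server = Server("dc01.child.example.com")                      # hypothetical DC
conn = Connection(server, user="EXAMPLE\\admin", password="...",  # placeholder creds
                  authentication=NTLM, auto_bind=True)

group_dn = "CN=Cert Publishers,CN=Users,DC=child,DC=example,DC=com"
ca_account_dn = "CN=ENTCA01,CN=Computers,DC=example,DC=com"     # CA in another domain

# Add the CA's computer account as a member of the domain local group
conn.modify(group_dn, {"member": [(MODIFY_ADD, [ca_account_dn])]})
print(conn.result)      # inspect the result dictionary for success or failure
```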

You cannot currently deploy Windows Server Enterprise CAs in non-Active Directory environments. This is because an Enterprise CA requires AD in order to store configuration information and publish certificates. You can install Windows Server PKI in a non-AD environment, however each CA in the PKI hierarchy must then be standalone. This is workable in smaller environments but can be a real challenge when configuring communications in large or distributed networks spanning many subnets. Trying to ensure that the right certificate authority is assigned across a multinational network is difficult without Active Directory. Remember you may have clients and servers requesting authentication from different networks; in a UK company you might have a client desktop with an Irish IP address seeking authentication from a London-based standalone CA in a different domain.

 

Securing the Internal Network

Twenty years ago this wasn’t really much of an issue: a simple network, a couple of file servers and, if you were lucky, an email system. Security was never much of a concern, which was just as well because sometimes there wasn’t much you could do anyway. If anyone remembers the forerunner of Microsoft Exchange, the Microsoft Mail post offices were installed in open shares, and if you started locking them down everything stopped working. You could make some minor security improvements but most of all you had to be careful that you didn’t leave anything sensitive in those open shares.

Of course, Unix, Ultrix and the early versions of Windows NT all had reasonable levels of security, and you could apply decent access controls based on users, groups and domains without too much trouble. It was more the applications that were the issue; security in a digital environment was very much in its infancy. Nowadays, of course, everyone takes security much more seriously in this age of data protection, hackers, viruses and cyber criminal attacks. It’s still a nightmare to lock down environments though, and that’s primarily due to the internet.

IT departments all over the world love the internet; solving issues and fixing problems is made a hundred times easier with a search engine at hand. However, that’s only one side of the coin. The other is the fact that access to the internet makes configuration and security much more important and potentially more challenging. Imagine that every single desktop has the capacity to visit, download and distribute any number of malicious files. A potential virus outbreak sits on everybody’s desk, and when you look at some of the users you have every reason to be worried.

So what sort of methods do we have to minimize the potential chaos on our internal network? Well, first of all there’s something that isn’t really technology-based: a document which details how people must use their computers, and especially the internet. Making sure that users are educated about the risks to both the network and their employment status is probably the most important step you can take to reduce risk from outside sources. If they know that they could get fired for downloading or streaming video from sites like the BBC via their company VPN, then they’re much less likely to do it.

There’s still a need to implement access control lists and secure resources of course, but user compliance goes a long way. Principles like giving users the least privilege they need make sense in securing resources. You can lock down PCs, browsers and external access through Windows environments and GPOs (Group Policy Objects). Routing all internet access through central points is a sensible option, meaning you can not only control but also monitor internet traffic in both directions. This is also a useful way of applying a second layer of antivirus protection, scanning traffic before it reaches your desktop solutions.

Most secure environments also put other common-sense measures in place, like not allowing users to plug their own hardware into the network. This sounds a trivial matter but it can effectively bypass your whole security infrastructure if a virus-ridden laptop is connected to your internal network. You have no control over what that hardware is used for; its owner may be downloading torrents and buying alcohol or drugs from the dark web when they get home. Data security can also be improved by ensuring that no one uses or takes away data on USB sticks and memory cards. There are security settings and applications which can manage these devices quite easily now, including group policy if you’re running a Windows environment and have implemented Active Directory.


Implementing your Internet Security Policy

One of the problems with IT departments is that they can often be a little detached from the rest of an organisation. Many are even physically separated, perhaps stuck in a separate building or on a separate floor, which only increases the isolation. In many ways it’s not a problem; after all, it’s a department which will probably need more space and room for storage of parts, replacements and so on. Commonly the IT department will have easy access to server rooms so that they can maintain and support systems when those remote connections drop.

However one of the issues is that people who work in IT often see the rest of the company through their IT usage and not through their real function. This can be a problem with how people use technology and how it is managed throughout the company.

The classic example is internet usage, which over the last decade or so has become one of the main issues for any IT department to manage. First of all there are the technical complexities of allowing company clients to access outside resources. Then there are the potential security risks of viruses, hacking attempts, inappropriate browsing, email security, spam and so on. Access to the internet is now fairly commonplace, but supporting it almost always puts a huge strain on both technical and human resources.

For example, many users will use the internet just as they do at home: downloading BBC videos like this, visiting shopping sites, hobbies, research and all sorts of things which can impact the local network. It doesn’t take many users streaming video to their PCs to cause a huge slowdown on a normal company network, which is rarely configured to cope with this sort of traffic. Yet how do you stop them? Many IT departments I have seen over the years simply block access; a few rules in the firewall will stop all access to a particular site. However, this is not the way to do it: a technical solution should not be implemented on its own.

A company should have an Internet Usage Policy to cover situations like this. Failing to state clearly what employees can or can’t do online leaves the company and the Human Resources department on very thin ice. The user who spends all day streaming from Netflix or visiting porn sites is clearly not doing their job, but it’s difficult to discipline them without clear guidelines in such a policy or in their terms of employment. Having a proper internet policy is much simpler, as it can be adapted quickly and referenced from other policies and documents like employee guidelines. The policy can also be linked directly to technical controls like a proper access control list.

If guidelines are in place, you mostly won’t have to spend time chasing and blocking video and media sites like Netflix or the BBC iPlayer individually. If employees know that they are not allowed to use these sites, and understand the reasons behind the restrictions, the problem generally resolves itself. There may be issues with more technical users who attempt to circumvent the rules or hide their activities, perhaps using an online IP changer, but these people are easier to deal with if they are directly contravening company policy.


Issues on Blocking VPN Access from Networks

People love using VPNs for a variety of reasons, but if you’re the administrator of a network they can be a real problem. Of course, the primary function of a VPN is security, and if users simply used the VPN to encrypt and secure their data then that would be fine. However, in reality what you’ll find is users connecting through a VPN in order to bypass blocks or access sites normally restricted by your network rules. Using a VPN service to watch UK TV is a common issue on our US/European network.

The problem is that these sites and activities are blocked for a reason. Having twenty people streaming the latest episode of ‘Strictly’ over the company’s network uses about the same bandwidth as a hundred ordinary users simply working. It doesn’t matter that the traffic is being carried over the VPN; it still uses our own bandwidth to deliver to the client. So it’s hardly surprising that we need to restrict the use of these VPN clients and deal with the issues they cause. Here’s an example of what people can use these VPN services to do, and the problems we have in blocking them.

As you can see, in this particular VPN service, called Identity Cloaker, there are lots of configuration options which can be used to hide the use of the service. Most of the recommended countermeasures rely on blocking the standard footprint of a VPN service, but when the user is able to switch outgoing ports and create a non-standard configuration it becomes much harder.

There is little in the data itself that you can pick up on, so those content filters are pretty much useless. The problem is that most VPN traffic is encrypted, so even the destination address of each request is hidden (although obviously not the IP address of the VPN server itself). It’s simple to block web-based proxies and VPN services by restricting access to their URLs, but these clients are much more difficult.

As you can see, most services offer the option to switch between hundreds of different IP addresses, even doing so automatically. Looking for consistent traffic patterns and single destination IP addresses is another way you can identify a simple proxy or VPN. Filtering access to a VPN service which automatically switches server and IP address every few minutes is extremely difficult. Unless the user does something with a distinct pattern and very heavy usage, like anonymous torrenting, any footprint is almost impossible to detect.
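
As a rough sketch of that detection idea, here is a Python function working over hypothetical flow records of the form (internal host, external IP, bytes sent). It flags hosts pushing most of a large traffic volume to a single destination; as noted above, a service that rotates its exit IPs every few minutes will defeat a simple check like this:

```python
from collections import Counter, defaultdict

def likely_tunnel_users(flows, min_bytes=500_000_000, min_share=0.9):
    """Flag internal hosts that push most of their traffic to one external IP.

    A host sending 90%+ of a large volume of traffic to a single address
    looks more like a long-lived tunnel than normal web browsing, which
    spreads across many destinations.
    """
    per_host = defaultdict(Counter)
    for host, dest, sent in flows:          # flows: iterable of (host, dest_ip, bytes)
        per_host[host][dest] += sent

    flagged = {}
    for host, dests in per_host.items():
        total = sum(dests.values())
        dest, top = dests.most_common(1)[0]
        if total >= min_bytes and top / total >= min_share:
            flagged[host] = dest            # probable tunnel endpoint
    return flagged
```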

Most administrators adopt an attitude of blocking the simplest VPN access and leaving it at that. The reality is that a technical user with a sophisticated VPN service like Identity Cloaker is going to be very difficult to stop. You should rely on enforcing user policies within the network and stressing the penalties if people are found using such services.

One other method to consider is ensuring that most users are not able to install or configure VPN clients on their local laptops or computers. This can normally be enforced very easily, particularly in Windows environments: simply configure local user policy and apply restrictive Group Policy settings to remove admin access from users. Unfortunately, programs like Identity Cloaker also come with a ‘lite’ version which doesn’t need installing and can be run directly from a single executable. It can even be run from a memory stick and still interact with the network stack on the local computer.

Network Layer Switches

Network switches play a critical role in the performance of local area networks. They may be used in private networks like intranets and extranets, segmenting the network into more manageable sections. The resulting networks are known as HFC; please see the glossary for definitions. Setting up a sizeable computer network can be an intimidating undertaking, and you need an in-depth understanding of the role of every networking device to construct an efficient network. The switch is responsible for setting up the path needed to transfer data from one user to another. In truth, it is the largest SDH-based transport network on earth. It establishes a connection to a device by selecting the required service or application.

Packet routing is an essential task, needed to prevent congestion. When a data packet needs to reach a particular destination, it must traverse these networks. The file transfer protocol provides a way to move data efficiently from one machine to another. Routing protocols transmit information about the network; most routing protocols do not include the layer two information that is necessary to set up a VCC connection. UDP, by contrast, is an unreliable, connectionless protocol for applications which do not want TCP’s sequencing or flow control and prefer to provide their own. In large, complex networks servers need access to this sort of throughput; imagine the strain on something like Netflix’s servers broadcasting video to millions.
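
To make the TCP/UDP distinction concrete, here is a minimal Python sketch. The addresses are placeholder documentation IPs (so it won’t reach a real server); it simply contrasts a fire-and-forget UDP datagram with a TCP connection where the stack handles reliability for you:

```python
import socket

# UDP: connectionless, no handshake, no delivery guarantee.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"status ping", ("192.0.2.10", 9999))   # fire and forget
udp.close()

# TCP: a connection is established first, and the stack handles
# sequencing, retransmission and flow control on our behalf.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.settimeout(3)
tcp.connect(("192.0.2.10", 80))                    # three-way handshake
tcp.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
tcp.close()
```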

Every computer on the internet or a local network is assigned a unique address, commonly called an Internet Protocol address or simply an IP address. The internet is not just a vast array of computers connected to each other. You can also browse it for articles, discussions and suggestions. Optical communication links and networks are crucial for the internet backbone, as well as for the interconnects used in data centres and high-performance computing systems.

While doing this, it must manage problems like network congestion, switching issues and so on. It can help you understand the working of a network in an easy and quick way. Often, when one application wants to communicate with another, there must be communication between the associated processes. The working of the web is based on a collection of protocols. To get that massive network to work and make our LANs act jointly, there has to be a routing protocol that enables it. The web uses TCP at the transport layer to provide reliability.

If you have a relatively new mobile handset, then it is most likely equipped with an integrated web browser. The handset selects a device and performs service discovery to look for available services or applications. Bluetooth devices operate over a range of about ten meters. Such a device functions as an intermediary between the wireless and wired parts of a network. Aside from the computers themselves, there are numerous intermediary devices which make data transfer possible. They can also allow a network to detect, reroute or simply block specific types of traffic, which is presumably how the BBC has blocked VPNs, as this story details.

Window flow control mechanisms weren’t modeled, in order to extend the reach of the study to congestion collapse regions. After you set up the export feature, NetFlow information is exported whenever a flow expires. The principal job of a router is to determine the best network path through a complex network. The third main purpose of LAN switches is layer two loop avoidance. Besides this, the gateway functionality has to be enabled. Each P-NET module also has to have a service channel that can identify unknown participants.

Computer Security: Phishing

Out of all the weapons available to a cyber criminal, phishing is probably one of the most widely used. It is generally described as a random, untargeted attack intended to trick someone into revealing confidential information by replying to an email, clicking a link or filling in a bogus web page. Most popular phishing attacks rely on an element of social engineering; that is, deceiving people in order to gain access rather than directly hacking into a target system.

Usually the main delivery mechanism is email, and using modern mailing systems attackers can target millions of email addresses at one time. There are many variations of phishing attack, ranging from installing keyloggers to setting up duplicate websites. The intent is always to steal personal information such as usernames, passwords and account numbers.

It is fairly common for these phishing emails to include attachments or links that can install various types of malware onto the victim’s computer in order to steal their information too.

Quick Summary of Phishing Attacks

As explained, there are lots of different types of phishing attack, and their popularity changes quite regularly.

Email Phishing – probably the most well known, centred around mass distribution of emails; these are very random and usually rely on volume to succeed.

Spear Phishing – a more targeted form of phishing which follows the same basic premise. These attacks are usually more sophisticated and tailored towards a certain type of user or organisation.

Man in the Middle (MiTM) attacks involve the attacker positioning themselves between a legitimate website or company and the end user, with the goal of recording any information sent. It’s normally one of the hardest attacks to operate, but also to detect, as the transactions are legitimate but simply intercepted.

There are many other methods available to capture information, with things like keyloggers and screen capture programs popular too; the idea is always simply to obtain passwords or other personal information.

Some other variants include pharming, which is even less targeted than phishing: malicious code is installed onto servers to redirect any user to fake websites. There are various ways of doing this, several involving DNS, such as modifying a user’s hosts file to redirect them without their knowledge (there is a small sketch of this hosts-file variant at the end of this list of attack types). A particularly sinister version of pharming is known as DNS (Domain Name System) poisoning, where users are directed to fraudulent websites without any need to corrupt the personal hosts file. Other variants use legitimate, or at least semi-legitimate, services to trick people into using them. One of the more popular methods was to put free proxy servers out on the internet for people to bypass region blocks; these were then used to steal people’s credentials as they browsed through them. This explains the method of region lock bypass using a proxy to watch the BBC, although the example used in that post was a commercial service.

Malware Phishing – the process of downloading malware onto a user’s device, either through an attachment in an email, a downloadable web file or by exploiting software vulnerabilities.
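
As promised above, here is a small Python sketch illustrating the hosts-file pharming variant. The watch-list of domains is hypothetical, and this check does nothing against DNS poisoning further upstream; it simply looks for local hosts-file entries that override trusted names:

```python
import platform
from pathlib import Path

# Hypothetical watch-list of domains we never expect to see in the hosts file
WATCHED = {"www.bbc.co.uk", "www.mybank.example.com"}

def hosts_file_overrides():
    """Return (name, ip) pairs where the hosts file overrides a watched domain."""
    if platform.system() == "Windows":
        path = Path(r"C:\Windows\System32\drivers\etc\hosts")
    else:
        path = Path("/etc/hosts")

    overrides = []
    for line in path.read_text(errors="ignore").splitlines():
        line = line.split("#", 1)[0].strip()      # drop comments and whitespace
        parts = line.split()
        if len(parts) < 2:
            continue
        ip, names = parts[0], parts[1:]
        for name in names:
            if name.lower() in WATCHED:
                overrides.append((name, ip))      # trusted name resolved locally
    return overrides

print(hosts_file_overrides())
```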

Further Reading – Security Information and UK VPN trial