Protecting Your Network from DoS Attacks

Most network administrators who run web-facing servers spend a lot of their time defending, protecting and patching against network attacks. These can be extremely time consuming to combat, and some of the worst to deal with are denial of service attacks. Although they are usually relatively primitive, the problem is that they are easy to orchestrate and very difficult to trace back to the originator. One of the biggest problems is that the attacker rarely needs a valid connection to its victim, which makes finding the source very difficult indeed.

A Denial of Service (DoS) attack is any type of attack which disrupts the operation of a computer so that genuine users can no longer gain access to it. DoS attacks are achievable against most network equipment, including switches, hosting servers, firewalls, remote access computers and just about every other network resource. A DoS attack can be specific to a service, for example an FTP attack, or target an entire machine. The forms of DoS are varied and wide ranging, but they can be split into two distinct classifications that relate to intrusion detection: resource depletion and malicious packet attacks.

Malicious packet DoS attacks work by sending abnormal traffic to a host in order to cause the service, or the host itself, to crash. Crafted packet DoS attacks take place when software is not correctly coded to handle uncommon or unusual traffic. Out-of-specification traffic can often cause software to behave unexpectedly and crash. Attackers can use crafted packet DoS attacks to bring down IDSs, even Snort. A specially crafted tiny ICMP packet with a size of 1 was discovered to cause Snort v1.8.3 to core dump. That version of Snort did not properly define the minimum ICMP header size, which made the DoS possible.
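The underlying flaw is simply a missing length check before the packet is parsed. Below is a minimal, hypothetical Python sketch of the kind of validation a decoder needs; it is not Snort's actual code, just an illustration of rejecting undersized ICMP packets before they reach the parser.

    import struct

    ICMP_MIN_HEADER = 8  # type, code, checksum and rest-of-header fields

    def parse_icmp(packet: bytes):
        # Reject anything shorter than the minimum ICMP header before
        # unpacking fields - this is the check the vulnerable decoder lacked.
        if len(packet) < ICMP_MIN_HEADER:
            raise ValueError("undersized ICMP packet: %d bytes" % len(packet))
        icmp_type, icmp_code, checksum = struct.unpack("!BBH", packet[:4])
        return icmp_type, icmp_code, checksum, packet[ICMP_MIN_HEADER:]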

These attacks are commonly launched from hijacked computers; it's relatively easy to build up a large network of compromised machines, and there are also such networks available for hire. These computers can obviously be traced, but the owners are usually unaware of the role their servers or PCs have played. Additionally, skilled attackers will route through a network of proxies hidden behind residential IP address providers or VPNs such as those described in this post.

Along with out-of-spec traffic, malicious packets can carry payloads which cause a system to crash. A packet's payload is taken as input to a service, and if that input is not properly validated, the application can be DoSed. The Microsoft FTP DoS attack demonstrates the wide variety of DoS attacks available to black hats in the wild. The first step in the attack is to establish a genuine FTP connection. The attacker would then issue a command containing a wildcard sequence (such as * or ?). Within the FTP server, the feature that handles wildcard patterns in FTP commands does not allocate sufficient memory when performing pattern matching, so it is possible for the attacker's command containing a wildcard pattern to crash the FTP service. This DoS, like the Snort ICMP DoS, is just one illustration of the many thousands of possible DoS attacks out there.

The other way to deny service is via resource depletion. A resource depletion DoS attack works by saturating a service with so much normal traffic that legitimate users cannot access it. An attacker flooding a service with regular traffic can exhaust finite resources such as bandwidth, memory and processor cycles. A classic memory exhaustion DoS is the SYN flood, which takes advantage of the TCP three-way handshake. The handshake starts with the client sending a TCP SYN packet; the host then sends a SYN ACK in response, and the handshake is completed when the client responds with an ACK. If the host never receives that final ACK, it sits and waits with the session open, and each open session consumes a certain amount of memory. If enough three-way handshakes are started, the host consumes all available memory waiting for ACKs. The traffic generated by a SYN flood is normal in appearance, although most servers today are configured to leave only a certain number of TCP connections half-open. A different classic resource exhaustion attack is the Smurf attack.
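Before moving on to the Smurf attack, here is a small, hypothetical sketch of how you might spot a SYN flood from the defender's side: count half-open (SYN_RECV) sessions per source address from whatever connection-state data you have (netstat or ss output, firewall state tables and so on). The input format and threshold are assumptions, not recommendations.

    from collections import Counter

    def suspect_syn_flood(connections, threshold=200):
        # connections: iterable of (source_ip, tcp_state) pairs parsed from
        # your platform's connection table. Sources holding an unusually
        # large number of half-open sessions are flagged as possible floods.
        half_open = Counter(src for src, state in connections if state == "SYN_RECV")
        return {src: count for src, count in half_open.items() if count >= threshold}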

A Smurf attack works by taking advantage of open network broadcast addresses. A broadcast address forwards all packets on to every host on the destination subnet, and every host on that subnet responds to the source address specified in the traffic sent to the broadcast address. An attacker transmits a stream of ICMP echo requests (pings) to a broadcast address, which has the effect of amplifying a single ICMP echo request up to 250 times. In addition, the attacker spoofs the source address so that the target receives all the ICMP echo reply traffic. An attacker with a 128 Kb/s DSL connection can easily produce a 32 Mb/s Smurf flood. DoS attacks commonly take advantage of spoofed IP addresses because the attack succeeds even if the reply is misdirected; the attacker requires no reply, and in cases like the Smurf attack actively wants to avoid one. This makes DoS attacks difficult to defend against, and even harder to trace.
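The amplification arithmetic in that example is worth spelling out, assuming roughly 250 responding hosts on the broadcast subnet:

    # Rough Smurf amplification estimate using the figures quoted above.
    uplink_kbps = 128        # attacker's DSL upstream bandwidth
    amplification = 250      # hosts on the broadcast subnet answering each ping
    flood_kbps = uplink_kbps * amplification
    print(flood_kbps / 1000, "Mb/s at the victim")   # -> 32.0 Mb/s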

Further Reference: http://bbciplayerabroad.co.uk/how-do-i-get-bbc-iplayer-in-france/

Bypassing Web Filters – Simple Method

There are numerous ways in which the internet is filtered, although none of them are completely reliable. In reality the only 'perfect' way to block access to specific sites is to block access to the internet completely. Fortunately, with the possible exception of North Korea, this method is rarely used, and most companies and countries use some other method to control access to websites.

The Western democracies mostly leave access to the world wide web unfettered, although even these countries will restrict certain criminal sites. Companies will normally block access to sites which could cause them legal or productivity issues. After all, why leave access open from the company network to somewhere like Facebook when it serves little business purpose? What's more, your employees are likely to waste many hours on such sites when they really should be working.

However, there are also restrictions and blocks placed by the websites themselves. These exist for a variety of reasons, but mostly copyright and profit maximisation; they are probably most common on large media sites which want to block access to their content outside the domestic market so that they can resell it elsewhere. Mostly these blocks are quite simple: the IP address of the inbound connection is looked up in a database and then either allowed through or blocked depending on which country it originated from. This method is actually very easy to bypass, as all you need to do is mask your real address and present one from the required country.

The easiest method by far is to route your connection through an intermediate server. Originally most people used a proxy server for this, simply because free ones were readily available all over the internet in different locations. Most sites can now detect and automatically block these servers, though, so using a simple proxy to bypass blocks is fairly redundant. The newer method is similar but uses a VPN connection instead of a proxy. The advantage is that the connection is encrypted and very difficult to detect, although the Chinese have made some progress in this area.

Providing the VPN server is located in the right country, it should allow unrestricted access to whichever site you need. So, for example, if you were in Paris you'd need to find a VPN server in the UK to access the BBC iPlayer in France. This is because when you connect to the BBC website it only sees the IP address of the VPN server and presumes you are in the United Kingdom. Although the VPN itself cannot be detected directly, its IP addresses are vulnerable to detection, and indeed some are blocked. The main method for detecting and blocking the addresses of VPN servers is monitoring concurrent connections: an overloaded VPN server will have hundreds of connections originating from a single IP address, so it can be presumed to be a relay server of some sort and will often be blocked.
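That kind of detection is easy to sketch from the media site's side. A hypothetical example: given an access log of (client IP, account) pairs, flag any IP address with an unusually large number of distinct accounts behind it as a likely shared relay. The field layout and threshold are assumptions.

    from collections import defaultdict

    def flag_shared_exits(session_log, threshold=100):
        # session_log: iterable of (client_ip, account_id) pairs from an access log.
        users_per_ip = defaultdict(set)
        for ip, account in session_log:
            users_per_ip[ip].add(account)
        # IPs with many distinct accounts behind them look like VPN/proxy exits.
        suspects = [(ip, len(accts)) for ip, accts in users_per_ip.items() if len(accts) >= threshold]
        return sorted(suspects, key=lambda pair: pair[1], reverse=True)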


Packet Sniffing for Beginners

Sometimes there are errors and problems on a network that need in-depth analysis. Troubleshooting some issues can be almost impossible without a tool that lets you dig deeper, such as a packet sniffer. Often you won't be able to find the cause of a non-responsive share, or discover that the reason your RAS server is so slow is that all your travelling sales people are using it to watch BBC TV abroad, without one!

If a certain error condition occurs only when the request is coming from an actual client, but not when using telnet, packet sniffing is in order. Sometimes using telnet may be complex, because the proxy and origin servers may require authentication credentials to be sent. In those cases it is more convenient to use a real web client that can easily construct those headers. Also, if a problem exhibits itself with a certain client but not with others, it is worthwhile to find out exactly what is being sent by that client.

There are a number of packet sniffers available. Depending on the operating system, you may find some of these useful:
° wireshark
° ethereal
° etherfind
° tcpdump
° nettl

Many books and guides will pick a specific packet sniffer to use, so if you're following a guide, stick with the one it recommends. One of the most popular is Wireshark, a fully functional and free packet sniffer often used by professionals instead of more costly commercial options. Many of the others are distributed as part of Unix and Linux distributions, and you'll need to refer to the man pages for instructions on using them.
Example: let's say you want to snoop the traffic between the hosts fred (a client PC) and socrates (a server). You can use something like Wireshark to capture the traffic between the two endpoints and analyse what's happening between them.
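If you just want to see the conversation without a full GUI tool, a raw socket will do for a quick look. The sketch below is Linux-only, needs root, and uses placeholder addresses for fred and socrates; for anything serious you would reach for Wireshark or tcpdump instead.

    import socket

    # Addresses are hypothetical stand-ins for fred and socrates.
    WATCH = {"192.168.1.10", "192.168.1.20"}

    # Raw AF_PACKET socket bound to the IPv4 ethertype (0x0800); Linux only, run as root.
    sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0800))

    while True:
        frame, _ = sniffer.recvfrom(65535)
        ip_header = frame[14:34]                 # skip the 14-byte Ethernet header
        src = socket.inet_ntoa(ip_header[12:16])
        dst = socket.inet_ntoa(ip_header[16:20])
        if src in WATCH and dst in WATCH:
            print(f"{src} -> {dst} protocol={ip_header[9]} length={len(frame)}")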

Of course, this is only useful if you can identify which sources to monitor in the first place. If you suspect that Fred is using the company proxy for Netflix, you can prove the point easily with a packet sniffer. If you're not sure, you may have to look to the network hardware for clues; checking switches for span ports and plugging into them is a useful tactic. These ports typically mirror all the traffic being carried over the active ports, meaning you can use the span port to track all the data passing through that device.

The ability to filter on a specific port is essential, and all decent packet sniffers allow this. You should also be able to use switch options to control how the traffic is dumped, that is, to specify exactly what format the captured traffic is returned in. This is useful because it helps in the analysis stage; any packet sniffer that doesn't offer it will make the next stages much harder, as the amount of data produced is often very large.

Proxy – Access Control Methods

When you first think about access control for a standard proxy, one of the most obvious options is the traditional username and password. Indeed, access control by user authentication is one of the most popular methods, if only because it's generally one of the simplest to implement. Not only does it use readily available information for authentication, it also fits neatly into most corporate networks, which generally run on Windows or Linux platforms. All common operating systems support user authentication as standard, normally via a variety of protocols.

Access control based on the username and group is a commonly deployed feature of proxies. It requires users to authenticate themselves to the proxy server before their requests are allowed through. This way, the proxy can associate a user identity with the request and apply different restrictions based on the user. The proxy will also log the username in its access log, allowing logs to be analysed for user-specific statistics, such as how much bandwidth was consumed by each user. This can be vital in the world of high-traffic multimedia applications, where a few users treating your remote access server as a handy BBC VPN service can bring a network to its knees.
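As a rough illustration of that kind of log analysis, here is a hypothetical sketch that totals response bytes per authenticated user. The field positions are assumptions; adjust them to your proxy's actual access log format.

    from collections import defaultdict

    def bandwidth_per_user(log_lines):
        # Assumes a space-separated log where the username is the third field
        # and the response size in bytes is the last field (adjust as needed).
        totals = defaultdict(int)
        for line in log_lines:
            fields = line.split()
            if len(fields) < 3:
                continue
            try:
                totals[fields[2]] += int(fields[-1])
            except ValueError:
                continue
        return dict(sorted(totals.items(), key=lambda kv: kv[1], reverse=True))

    # with open("access.log") as fh:
    #     print(bandwidth_per_user(fh))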

Authentication

There are several methods of authentication. With HTTP, web servers support Basic authentication, and sometimes also Digest authentication (see HTTP Authentication on page 54). With HTTPS, or rather with any SSL-enhanced protocol, certificate-based authentication is also possible. However, current proxy servers and clients do not yet support HTTPS communication to proxies and are therefore unable to perform certificate-based authentication.

This shortcoming will surely be resolved soon.

Groups

Most proxy servers provide a feature for grouping a set of users under a single group name. This allows easy administration of large numbers of users through logical groups such as admin, engineering, marketing, sales, and so on. It is also useful in multinational organisations where individuals may need to authenticate in different countries using global user accounts and groups; a UK-based salesman travelling in continental Europe could use his UK account to access a French proxy and use local resources.

Access Control by Client Host Address

An almost universally used access control feature is limiting requests based on the source host address. This restriction may be applied by the IP address of the incoming request, or by the name of the requesting host. IP address restrictions can often be specified with wildcards covering entire network subnets, such as 112.113.123.*. Similarly, wildcards can be used to specify entire domains: *.yourwebsite.com
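A toy sketch of the idea, assuming the two wildcard patterns above and using simple shell-style matching (real proxies use their own configuration syntax for this):

    from fnmatch import fnmatch

    ALLOWED_PATTERNS = ["112.113.123.*", "*.yourwebsite.com"]

    def host_allowed(client_ip, client_hostname):
        # Allow the request if either the IP or the reverse-resolved hostname
        # matches one of the configured wildcard patterns.
        return any(
            fnmatch(client_ip, pattern) or fnmatch(client_hostname, pattern)
            for pattern in ALLOWED_PATTERNS
        )

    # host_allowed("112.113.123.45", "pc45.internal")  -> True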

Access control based on the requesting host address should always be performed to limit the source of requests to the intended user base.

Using Round Robin DNS

A common approach to name resolution is round-robin DNS. This maps a single host name to multiple physical server machines, handing out different IP addresses to different clients. Load balancing is treated in more detail later in this blog (see name resolution methods). With round-robin DNS, the user is unaware of the existence of multiple servers.
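You can see this from any client by asking the resolver for every address behind a name; a round-robin name returns several, typically in a rotating order. A small sketch (the hostname is just a placeholder):

    import socket

    def resolve_all(hostname):
        # Each getaddrinfo entry is (family, type, proto, canonname, sockaddr);
        # for a round-robin name the sockaddr column holds several addresses.
        infos = socket.getaddrinfo(hostname, 80, socket.AF_INET, socket.SOCK_STREAM)
        return sorted({sockaddr[0] for *_, sockaddr in infos})

    # print(resolve_all("www.mywebsite.com"))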

The pool of servers appears to be a single logical server, as there is only a single name used to access it.

Redirections

Another mechanism available for Web servers is to return a redirection to a parallel server to perform load balancing.

 

For example,

  • upon accessing the URL http://www.mywebsite.com/
  • the main server www.mywebsite.com will send an HTTP redirection to the URL http://www2.mywebsite.com/
  • another user may be redirected to a different server: http://www4.mywebsite.com/

This way, the load can be spread by the main server www across several separate machines www1, www2, ..., wwwn. The main server might be set up so that the only thing it does is perform redirections to other servers. There is a common misconception about this scheme: that every request would still have to go through the main server to get redirected to another server.

On the contrary, for any given client there is only a single initial redirection. After that, all requests go automatically to the new target server, since the links within the HTML text are usually relative to the server where the HTML file actually resides. It can cause some difficulties in certain situations, for example where there are cached cookies, perhaps if you access one of the many BBC servers to watch Match of the Day online like this site.
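To make the mechanism concrete, here is a minimal, hypothetical sketch of a redirect-only main server: it does nothing except hand each incoming request a 302 pointing at the next mirror in a round-robin list. The hostnames and port are placeholders.

    import itertools
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical pool of mirrors; the main server only hands out redirects.
    BACKENDS = itertools.cycle([
        "http://www2.mywebsite.com",
        "http://www3.mywebsite.com",
        "http://www4.mywebsite.com",
    ])

    class RedirectingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(302)                       # temporary redirect
            self.send_header("Location", next(BACKENDS) + self.path)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), RedirectingHandler).serve_forever()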

With this method, the user is aware of the fact that there are several servers, since the URL location field in the client software will display a different server name than originally accessed. This is usually not an issue, though. The entry point to the site is still centralized, through the main server, and that’s the only address they ever have to remember.

However, bear in mind that users may bookmark one of the several servers sharing the load rather than the main server. This means that once a server name is introduced, say www4, there may forever be references to that machine in users' bookmark files. Remember that the destination may be slightly different if the page is accessed through such a bookmark, so don't expect exactly the same result.

Although this round-robin method of name resolution is extremely common, don't assume it's always deployed. There are many variations on the approach, including different types of redirection and mirroring.


Planning your Security Assessment

Starting a full security risk assessment in any size of organisation can be extremely daunting if it’s something you’ve never tried before. However before you get too involved in complicated charts, diagrams and long drawn out forms and flowcharts it’s best to take a step back. There’s a simple goal here and that’s to try and assess and address any security risks in your organisation. It’s presumably a subject you will have some opinion and knowledge about so try and focus and don’t turn the exercise into something too complicated with little practical use.

Many people, when questioned as part of a risk assessment, will prepare an answer by looking at the nuts and bolts of the system. They'll give opinions on just how this or that widget is weak, how someone could get access to it and to people's documents, and so on. That's just a technical evaluation of the system, which might or might not be useful. Whether it's useful depends on the answer to an essential question, one the experienced security professional will have asked before answering the enquirer. If the system is not being used for its intended purpose, that's a completely different issue, though it obviously would impact security in certain instances.

For example, if company PCs are being used to stream video or visit inappropriate sites, such as watching ITV abroad whilst at work, this introduces additional risks. Not only could the integrity of the internal network be affected, the connection speed will also suffer while large amounts of video are streamed across the network. This behaviour should certainly be flagged if encountered during the assessment, although it's not a primary function of the investigation.

The important question is: what do you mean by secure? Security is a comparative term; there is no absolute scale of security. Both terms, secure and security, only make sense as attributes of something you consider valuable. Something that is somehow at risk needs to be secured. How much security does it need? That depends on its value and on the operational threat. How do you measure the operational threat? Now you're getting into the real questions, which will lead you to an understanding of what you actually mean by the term secure.

Measuring and Prioritising Business Risk

Security is used to defend things of value.

In a business environment, things which have value are usually called assets. If assets are somehow damaged or destroyed, you suffer a business impact. A prospective event through which you could suffer that harm or destruction is a threat. To prevent threats from crystallising into loss events that have a business impact, you use a layer of protection to keep the threats away from your assets. Where the assets are badly protected, you have a vulnerability to the threat. To enhance security and reduce the vulnerability, you implement security controls, which may be either technical or procedural.

The process of identifying business assets, recognising the threats, assessing the degree of business impact that could be suffered if the threats were to crystallise, and analysing the vulnerabilities is known as operational risk assessment. Implementing suitable controls to strike a balance between usability, security, cost and other business needs is called operational risk mitigation. Operational risk assessment and operational risk mitigation together comprise what can be called operational risk management. Later chapters in this book examine operational risk management and will help you deal with actual incidents, such as people trying to watch the BBC abroad on your internal VPN server! The main thing you need to understand at this stage is that risk management is all about identifying and prioritising the risks through the risk assessment procedure, and applying degrees of control in line with those priorities.

So what’s a Digital Certificate?

We've probably all seen those simple diagrams where an electronic signature authenticates the key pair used to create the signature. For electronic commerce, authenticating a key pair might not be adequate. For business transactions, each key pair needs to be closely bound to the person or organisation that owns it. A digital certificate is a credential that binds a key pair to the entity that owns the key pair. Digital certificates are issued by certification authorities, which is why we trust the binding asserted by the certificate.

A digital signature is fine for verifying e-mail, but stronger verification methods are needed to associate an individual (as in the demonstration in our earlier post, where we used one to allow access to an app for watching the BBC News abroad) with the binary bits on the network that purport to "belong" to Tom Smith. For electronic commerce to work, the association has to be strong enough to be legally binding. When Tom Smith has a digital certificate to advertise to the world at large, he is in possession of something which may carry more trust than the "seal" made by his own digital signature.

You might trust his digital signature, but what if some other trusted authority also vouched for Tom Smith? Wouldn't you then trust Tom Smith a little more? A digital certificate is issued by an organisation that has a reputation to defend. This organisation, known as the certificate authority (CA), may be Tom's employer, an independent organisation, or the government. The CA will take measures to establish some truths about Tom Smith before issuing a certificate for him.

The certificate will normally hold Tom’s name, his public key number, the serial number of the certificate itself, and validity dates (issue and expiry). It’ll also bear the name of the issuing CA. The whole certificate is digitally signed by the CA’s own private key.
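You can inspect most of those fields for any TLS-protected site using Python's standard library. A small sketch (getpeercert exposes the subject, issuing CA, serial number and validity dates, though not every field in the certificate):

    import socket
    import ssl

    def certificate_summary(hostname, port=443):
        # Fetch the server's certificate over a normal TLS handshake and pull
        # out the fields discussed above: subject, issuing CA, serial, validity.
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port)) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        return {
            "subject": cert.get("subject"),
            "issuer": cert.get("issuer"),
            "serial": cert.get("serialNumber"),
            "valid_from": cert.get("notBefore"),
            "valid_until": cert.get("notAfter"),
        }

    # certificate_summary("www.bbc.co.uk")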

Finally, we have a mechanism which allows individuals with no previous relationship to establish each other's identity and participate in the legal transactions of electronic commerce. It's certainly more robust than something like geo-location, which simply infers identity from your location; a web site might, for example, determine nationality from your network address, e.g. a British IP address being needed to access the BBC online.

Certificates, if delivered correctly, inspire trust among Internet traders. It's not, however, as easy as it might sound. Certificates expire, go missing, are issued to the wrong person, or have to be revoked because the detail held on the certificate is wrong (perhaps the public key has been compromised), and this leads to a large certificate management effort, or even a campaign.

The X.509 v3 certificate format is the standard used for public key certificates and is broadly used by Internet security protocols (such as S-HTTP). Based on X.509 v3, digital certificates are increasingly used as electronic credentials for identification, non-repudiation and even authorisation when making payments and conducting other business transactions on the Internet or on corporate intranets.

Just as in our credit card system of today, where millions of credit card numbers issued by any bank in the world are electronically confirmed, so the use of digital certificates will demand a clearing house network for certificate confirmation on a comparable scale.

Single proxies or proxy arrays?

If you're working in a small business or network then this issue will probably never arise. However, with the growth of the internet and of web-enabled devices and clients, it's a question that will almost certainly affect most network administrators: do you just keep adding extra proxies to expand capacity and bandwidth, or should you install an array?

The answer can depend on a variety of external factors. For example, if the organisation is concentrated in a single location, a single level of proxies is the better solution. This reduces latency, as there's only one additional hop added by the proxies, as opposed to two or more with tree-structured proxy hierarchies.

Although the general rule is to have one proxy server for every 5000 (possible, not simultaneous) users, it doesn't automatically follow that a company with 9000 users should have three departmental proxies chained to a main proxy.

Instead, the three proxies might be installed in parallel, using the Cache Array Routing Protocol (CARP) or another hash-based proxy selection mechanism. Larger corporations with in-house programming skills may have the resources to create custom solutions which work better in a specific environment, perhaps one which incorporates remote VPN access to the network too. For example, many larger environments have different levels of security in place, with various zones which need to be isolated; generic 'serve all' proxies can be a significant security issue in these environments.
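The core of a hash-based selection scheme is small enough to sketch. The following toy, CARP-style example hashes each URL together with every proxy name and picks the highest score; the real protocol also applies load-factor weights, but the important property is the same, namely that a given URL always maps to the same proxy so the caches don't duplicate one another. The hostnames are placeholders.

    import hashlib

    PROXIES = ["proxy1.example.com", "proxy2.example.com", "proxy3.example.com"]

    def pick_proxy(url, proxies=PROXIES):
        # Hash each (proxy, url) combination and route to the highest score:
        # a simple rendezvous-hash variant of the idea behind CARP.
        def score(proxy):
            return hashlib.md5((proxy + url).encode()).hexdigest()
        return max(proxies, key=score)

    # pick_proxy("http://www.mywebsite.com/index.html")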

This approach also combines multiple physical proxy caches into a single logical one. In general, such clustering of proxies is recommended, as it increases the effective cache size and eliminates redundancy between individual proxy caches. Three proxies, each with a 4 gigabyte cache, would give an effective 12 gigabytes of cache when set up in parallel, as opposed to only about 4 GB if used individually.

Generally, some amount of parallelisation of proxies into arrays is always desirable. Nevertheless, the network layout might dictate that departmental proxies be used. That is, it may not be feasible to have all of the traffic originating from the entire company go through one array of proxies: it can cause the entire array to become an I/O bottleneck, even when the individual proxies of the array are in separate subnets. The load created by the users can be so high that the subnets leading to the proxies may choke. To alleviate this, some departmental proxies need to be deployed closer to the end users, so that a portion of the traffic they create never reaches the main proxy array.

Failover?

Since proxies are a centralised point of traffic, it's vitally important that there is a system in place for failover. If a proxy goes down, users instantly lose their access to the internet. What's more, many important applications may rely on permanent internet access to keep running; they might need access to central database systems or frequent updates and security patches. In other words, internet access is often much more crucial than simply the admin office being able to use Amazon, watch UK TV abroad or check the TV schedules online.

Failover might be achieved in many different ways. There are (relatively expensive) hardware solutions which transparently switch to a hot standby system if the primary system goes down. You can usually choose between different configuration and restore scenarios, and there's the option of investing in residential IP proxies too.

Nevertheless, proxy autoconfiguration and CARP provide more cost-effective failover support. At the time of this writing there are a couple of areas in client failover support which could be improved: users tend to notice an intermediate proxy server going down through fairly long delays, and possibly error messages. A proper proxy backup system should be virtually seamless and provide levels of speed and bandwidth similar to the primary system.

Security and Performance – Monitoring User Activity

When analysing your server's overall performance and functionality, one of the key areas to consider is user activity. Looking for unusual user activity is a sensible way of identifying potential system problems or security issues. When a server log is full of unusual user activity you can often use this information to track down the underlying issues very quickly; for example, by analysing your system logs you can often identify trends in authentication failures, security problems and application errors.

Monitoring user access to a system for example will allow you to determine usage trends such as utilization peaks.   Often these can cause many sorts of issues, from authentication problems to very specific application errors.  All of this data will be stored in different logs depending on what systems you are using, certainly most operating systems will record much of this by default.

Using system logs can be difficult, though, due to the huge amount of information in them. It is often hard to determine what is relevant to the health and security of your servers. Even benign behaviour can look suspicious to the untrained eye, so it is important to use tools to help filter the information into more readable forms.

For example if you see a particular user having authentication problems every week or so, then it is likely that they are just having problems remembering their passwords.   However if you see a user repeatedly failing authentication over a shorter period of time, it may illustrate some other issues.  For example if the user is trying to access the external network using a German proxy server then there would be an authentication problem as the server would not be trusted.
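A hypothetical sketch of that distinction: given parsed authentication events, flag only users whose failures cluster inside a short window rather than trickling in over weeks. The window and limit values are purely illustrative.

    from collections import defaultdict
    from datetime import timedelta

    def repeated_failures(events, window=timedelta(minutes=10), limit=5):
        # events: iterable of (username, timestamp, succeeded) tuples parsed
        # from an authentication log.
        failures = defaultdict(list)
        for user, when, succeeded in events:
            if not succeeded:
                failures[user].append(when)
        flagged = set()
        for user, times in failures.items():
            times.sort()
            # Look for any run of 'limit' failures packed inside 'window'.
            for i in range(len(times) - limit + 1):
                if times[i + limit - 1] - times[i] <= window:
                    flagged.add(user)
                    break
        return flagged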

Looking at issues like this can help determine user activity that causes a security breach.  Obviously it is important to be aware of the current security infrastructure in order to interpret the results in these logs correctly.   Most operating systems like Unix and Windows allow you to configure the reports to record different levels of information ranging from brief to verbose.

If you do set logs to record verbose information it is advisable to use some sort of program to help analyse the information efficiently.  There are many different applications which can allow you to do this, although some of them can be quite expensive.  There are simpler and cheaper options though, for example the Microsoft Log Parser is a free tool which allows you to run queries against event data in a variety of formats.

Log Parser is particularly useful for analysing security events, which are obviously a key priority for most IT departments in the current climate. These security and user authentication logs are the best way to determine whether any unusual activity is happening on your network. For example, anyone using a stealth VPN or IP cloaker like this one will be very difficult to detect by looking at raw data from the wire; however, it is very likely that some user authentication errors will be thrown up by the use of such an external server. Most networks restrict access to predetermined users or IP address ranges, and these errors can flag up the behaviour very quickly.


Code Signing – How it Works

How do you think that users and computers can trust all this random software which appears on large public networks?  I am of course referring to the internet and the requirement most of us have to download and run software or apps on a routine basis.  How can we trust that this is legitimate software and not some shell of a program just designed to infect our PC or steal our data?  After all even if we avoid most software, everyone needs to install driver updates and security patches.

The solution generally involves something called code signing, which allows companies to assure the quality and content of any file released over the internet. The software is signed with a certificate, and as long as you trust the certificate and its issuer then you should be happy to install the associated software. Code signing is used by most major distributors to vouch for the quality of software released online.

Code Signing – the Basics
Code signing simply adds a small digital signature to a program: an executable file, an ActiveX control, a DLL (dynamic link library) or even a simple script or Java applet. The crucial point is that this signature seeks to protect the user of the software in two ways:

The digital signature identifies the publisher, ensuring you know exactly who wrote the program before you install it.

The digital signature allows you to determine whether the code you are about to install is the same as the code that was released, and helps to identify what changes, if any, have been made since.
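The second guarantee rests on hashing. As a simplified, hypothetical illustration of the integrity half of the story, the sketch below hashes a downloaded file and compares it with a digest published by the vendor; real code signing goes further, signing that digest with the publisher's private key and verifying it through a certificate chain.

    import hashlib

    def file_digest_matches(path, published_sha256):
        # Hash the downloaded file in chunks and compare with the published digest.
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest() == published_sha256.lower()

    # file_digest_matches("driver_update.exe", expected_digest_from_vendor)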

Obviously, if an application is aware of code signing this makes it even simpler to use and more secure. Such programs can be configured to interact with signed and unsigned software differently depending on the circumstances. One simple example of this is the security zones defined in Internet Explorer, which can be configured to control how each application behaves depending on what zone it is in. There can be different rules for 'signed' and 'unsigned' applications, with obviously more rights assigned to the 'signed' ones.

In secure environments you can assume that any ‘unsigned’ application is potentially dangerous and apply restrictions accordingly. Most web browsers have the ability to determine the difference between these applications and assign security rights depending on the status. It should be noted that these will be applied through any sort of connection or access, even a connection from a live VPN to watch the BBC!

This is not restricted to applications that operate through a browser; you can assign and control the activity of signed and unsigned applications in other areas too. Take device drivers, for instance: it is arguably even more important that these are validated before being installed. You can define specific GPO settings in a Windows environment to control the operation and installation of a device driver based on this criterion. These can also be filtered under a few conditions, for example specifying the proxy relating to residential IP addresses.

As well as installation, this controls how Windows interacts with these drivers too, although generally for most networks you should not allow installation of an unsigned driver at all. This is not always possible, though; sometimes an application or specialised piece of hardware will need device drivers where the vendor hasn't been able to sign the code satisfactorily. In these instances you should consider carefully before installing, and consider the source too. For example, if you have downloaded from a reputable site using a high-anonymity proxy to protect your identity, that might be safer than a random download from an insecure site, but there is still a risk.