Search Results

Search found 18700 results on 748 pages for 'isolated network'.


  • The Importance of Fully Specifying a Problem

    - by Alan
    I had a customer call this week where we were provided a forced crashdump and asked to determine why the system was hung. Normally when you are looking at a hung system, you will find a lot of threads blocked on various locks, and most likely very little actually running on the system (unless it's threads spinning on busy-wait type locks). This vmcore showed none of that. In fact, we were seeing hundreds of threads actively on CPU in the second before the dump was forced. This prompted the question back to the customer: what exactly were you seeing that made you believe the system was hung? It took a few days to get a response, but the answer was that they were not able to ssh into the system, and when they tried to log in on the console they got the login prompt, but after typing "root" and hitting return, the console was no longer responsive. This description puts a whole new light on the "hang": you immediately start thinking "name services". Looking at the crashdump, yes, the sshds are all in door calls to nscd, and nscd is idle waiting on responses from the network. Looking at the connections, I see a lot of connections to the secure LDAP port in CLOSE_WAIT, but more interestingly I am seeing a few connections over the non-secure LDAP port to a different LDAP server just sitting open. My feeling at this point is that we have an LDAP server that is either not responding or responding slowly, and the resolution is to investigate that server. Moral: when you log a service ticket for a "system hang", it's great to get the forced crashdump first up, but it's even better to get a description of what you observed that made you believe the system was hung.
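
    A quick way to corroborate a name-service stall like this on a live Solaris box is to look at the LDAP connections and at what nscd is doing. A minimal sketch, using only stock commands (the ports are the standard LDAP/LDAPS ones):

      # list LDAP connections and their TCP states; piles of CLOSE_WAIT or
      # idle ESTABLISHED sessions point at a slow or dead directory server
      netstat -an | egrep '\.(389|636) '

      # see where the nscd threads are blocked
      pstack `pgrep -x nscd` | head -40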

    Read the article

  • Configure SMTP server on Windows

    - by Jake
    I am configuring a local network and I can't get the server to send email. I have already installed the SMTP server and configured it using this tutorial: http://www.itsolutionskb.com/2008/11/installing-and-configuring-windows-server-2008-smtp-server/ but when I try to send an email using code, the email gets picked up from mailroot/pickup and dropped in mailroot/queue, where it stays forever; it never goes anywhere. I even tried dropping a basic mail.txt file with this in it: to:[email protected] from:[email protected] subject:This is a test. this is a test. Still the same thing happens. Is the SMTP server not configured right, or is there something else I am missing? This is my first time setting up an SMTP server.
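
    Two things worth checking here, as a hedged sketch. First, pickup files must be RFC 822 formatted, with a blank line between the headers and the body (the mail.txt above runs them together). Second, since messages do make it from Pickup into Queue, the parsing is probably fine and outbound delivery is what is failing, so test DNS and port 25 reachability from the server (the domain and host names below are placeholders):

      rem can the server resolve MX records for the destination domain?
      nslookup -type=MX example.com

      rem can it reach the destination (or smarthost) on port 25?
      rem many ISPs block outbound port 25 from non-mail hosts
      telnet mail.example.com 25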

    Read the article

  • Need disc image help pronto!

    - by data
    I recently got a job as a junior network administrator. Last week the senior admins did their yearly reinstall of Server 2003, Exchange, drivers etc. on the main server. I've been asked to back up the disc so that next year they can just copy over the pre-made image. What tools can I use to both create an image of the entire server's HDD and load it back on (I'd like to test it in the sandbox)? To impress them, a free program is preferable, and ideally a tool that can do it all booted from a USB drive.
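
    One free option that matches both requirements is Clonezilla Live, which boots from a USB stick and can both image and restore a whole disk. At its simplest the same job can be done with plain dd from any Linux live environment; a minimal sketch, assuming the server's disk is /dev/sda and a backup share is mounted at /mnt/backup (both assumptions):

      # boot from a live USB, then image the whole disk
      dd if=/dev/sda of=/mnt/backup/server2003.img bs=4M conv=noerror,sync

      # restore later (e.g. into the sandbox) by swapping if/of
      dd if=/mnt/backup/server2003.img of=/dev/sda bs=4M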

    Read the article

  • What's static.ak.fbcdn.net that appears on the status bar of my browser every time Facebook is loading?

    - by Maverick
    I find the message "waiting for static.ak.fbcdn.net..." on the status bar of my browser every time I load Facebook, and many times even while loading other websites. I searched the net and found out that static.ak.fbcdn.net stands for the static Akamai Facebook content delivery network. I reckon that static.ak.fbcdn.net is the server URL from which Facebook delivers content to our browser. Am I right? Can anyone elaborate? Also, why does the above-mentioned message appear while loading other websites too?
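
    You have it right: static.ak.fbcdn.net is Facebook's static-content hostname served through Akamai's CDN, and other sites show it because embedded Facebook widgets (Like buttons and such) pull their scripts and images from the same host. You can see the CDN relationship yourself with a DNS lookup; a hedged sketch (the exact chain varies by resolver and region, but it should alias into Akamai's namespace):

      # follow the CNAME chain; expect an akamai.net-style alias
      dig +short static.ak.fbcdn.net

      # or, on Windows
      nslookup static.ak.fbcdn.net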

    Read the article

  • Rsync over NFS with QoS: How to view real transfer speed?

    - by Ian Mackinnon
    We have a bandwidth limit between a Linux server and a NAS, created using 'tc' with an IP filter. When writing to an NFS mount of the NAS, rsync claims a very high transfer speed for each file and then waits a long time before acknowledging that everything has finished. The total time taken is consistent with the QoS limit and the time taken by the same transfer over FTP. Why does the write to the NFS mount report higher transfer speeds than are actually happening over the network? How can I monitor the actual bandwidth of the transfer?
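
    What rsync is reporting is the speed of writes into the client's page cache: the NFS client acknowledges writes locally and flushes them to the server in the background, so the per-file rate looks inflated and the real cost shows up in the long wait at the end. To see the wire rate, watch the interface itself rather than rsync. A hedged sketch, assuming the NFS traffic leaves via eth0:

      # live per-connection bandwidth view (package: iftop)
      sudo iftop -i eth0

      # or, without extra tools, sample the kernel's byte counter twice
      cat /sys/class/net/eth0/statistics/tx_bytes; sleep 10; \
      cat /sys/class/net/eth0/statistics/tx_bytes
      # subtract the two readings and divide by 10 for bytes/second sent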

    Read the article

  • Port Forwarding on Actiontec GT704-WG Router Issues

    - by adamweeks
    I am trying to set up a server at a customer's location that has the Actiontec GT704-WG DSL router. The port forwarding is not working at all. Here are the details:

    Server:
    - OpenSuse Linux box with a static IP address of 192.168.1.200
    - Application running, accepting connections on port 8060
    - Firewall disabled
    - Local connections (within the network) working properly

    Router:
    - Updated to latest firmware available
    - DHCP range set to 192.168.1.69-192.168.1.199 so there are no conflicts with the server
    - Firewall set to "off"
    - Rule set in the "Applications" settings to forward 8060 TCP and UDP to the 192.168.1.200 machine (I've tried the "TCP,UDP" option as well as both individual options)

    I've also tried simply putting the server in the DMZ to see if I could connect to anything, but still nothing. Looking for any clues before I call and waste hours explaining the issue to tech support.
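
    Two checks worth running before spending those hours with tech support, as a hedged sketch: first, confirm the application is listening on the LAN address and not just loopback; second, test from genuinely outside the network, because many consumer routers don't do NAT loopback, so hitting the public IP from inside the LAN can fail even when the forward works:

      # on the OpenSuse box: is the app bound to 0.0.0.0 or 192.168.1.200,
      # or only to 127.0.0.1?
      netstat -tlnp | grep 8060

      # from a host OUTSIDE the network (e.g. a phone off wifi):
      telnet <public-ip> 8060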

    Read the article

  • Trendnet tew-424ub wireless not working after update 12.10

    - by dwa
    I updated packages from the software manager and now my wireless won't work. It's a Trendnet tew-424ub. iwconfig says:

      lo        no wireless extensions.
      eth0      no wireless extensions.

    sudo lshw -C network:

      description: Ethernet interface
      product: RTL8111/8168B PCI Express Gigabit Ethernet controller
      vendor: Realtek Semiconductor Co., Ltd.
      physical id: 0
      bus info: pci@0000:02:00.0
      logical name: eth0
      version: 02
      serial: 1c:6f:65:46:e9:d4
      size: 100Mbit/s
      capacity: 1Gbit/s
      width: 64 bits
      clock: 33MHz
      capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
      configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full ip=192.168.1.137 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
      resources: irq:41 ioport:de00(size=256) memory:fdaff000-fdafffff memory:fdae0000-fdaeffff memory:fda00000-fda0ffff

    lsusb:

      Bus 003 Device 002: ID 0bc2:3332 Seagate RSS LLC Expansion
      Bus 003 Device 003: ID 05e3:0605 Genesys Logic, Inc. USB 2.0 Hub [ednet]
      Bus 003 Device 006: ID 0457:0163 Silicon Integrated Systems Corp. 802.11 Wireless LAN Adapter
      Bus 005 Device 002: ID 046d:c00c Logitech, Inc. Optical Wheel Mouse
      Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      Bus 009 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      Bus 003 Device 005: ID 0781:5530 SanDisk Corp. Cruzer
      Bus 010 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

    Help? I'm not sure where to start. I've been browsing forums and such for a long time and nothing I try is working.
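
    The lsusb output shows the adapter is still detected (ID 0457:0163, the SiS chipset used in this device), so a reasonable guess is that the update broke the driver/module binding rather than the hardware. A few generic, hedged checks to narrow it down:

      # does any driver claim the USB device? look for a "Driver=" entry
      lsusb -t

      # kernel messages mentioning the adapter, its driver, or missing firmware
      dmesg | grep -iE 'sis|wlan|firmware|802\.11'

      # make sure the radio isn't soft- or hard-blocked
      rfkill list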

    Read the article

  • Active Directory remote versus local computer logon

    - by Jake
    Hi, hope someone can help a network/server noob understand how domains work in AD. I am in an organisation with 2 AD servers in 2 different countries, e.g. US and UK, which host the US and UK domains respectively. The accounts are set up such that all employees in both countries have both a US\user and a UK\user account. What is the difference between a UK user logging on with US\user from a local UK computer, versus RDPing (remote desktop) into a US server with US\user? Thanks for your help.

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail.

    Simpler Code
    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed
    Before diving into the code, note that the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

      // With the SDK
      public class MyData1 : TableServiceEntity
      {
          public string Message { get; set; }
          public string Level { get; set; }
          public string Severity { get; set; }
      }

      // With the Enzo Azure API
      public class MyData2 : BaseAzureTable
      {
          public string Message { get; set; }
          public string Level { get; set; }
          public string Severity { get; set; }
      }

    Simpler Code
    Now that the classes representing an Azure Table entity are defined, let's review what the code would look like with the Azure SDK when fetching all the entities from an Azure Table (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

      // With the Azure SDK
      public List<MyData1> FetchAllEntities()
      {
          CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
          CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
          TableServiceContext serviceContext = tableClient.GetDataServiceContext();
          CloudTableQuery<MyData1> partitionQuery =
              (from e in serviceContext.CreateQuery<MyData1>(_tableName)
               select new MyData1()
               {
                   PartitionKey = e.PartitionKey,
                   RowKey = e.RowKey,
                   Timestamp = e.Timestamp,
                   Message = e.Message,
                   Level = e.Level,
                   Severity = e.Severity
               }).AsTableServiceQuery<MyData1>();
          return partitionQuery.ToList();
      }

    This code gives you automatic retries because AsTableServiceQuery does that for you. Also, note that this method is strongly typed because it is using LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly-typed object manually. So for larger entities, with dozens of properties, your code will grow. And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp).

    The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

      // With the Enzo Azure API
      public List<MyData2> FetchAllEntities()
      {
          AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
          List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
          return res;
      }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies
    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters (['a', 'b'[, ['b', 'c'[, ['c', 'd'[, ...), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes 2 to 3 times faster than the sequential methods discussed previously):

      public List<MyData2> FetchAllEntitiesGUID()
      {
          AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
          List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
          return res;
      }

    Faster Results
    With Sequential Fetch Methods
    Developing a faster API wasn't a primary objective; but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different. So it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless when fetching data it seems that the Enzo Azure API delivers faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each). The average elapsed time shows that the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (39% improvement).

    With Fetch Strategies
    When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each entity), with an average execution time over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that the following test hit a limit on my network bandwidth quickly (3.56Mbps), so the results of the fetch strategy are significantly below what they could be with a higher bandwidth.

    Additional Methods
    The API wouldn't be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:
    - Support for batch updates, deletes and inserts
    - Conversion of entities to DataRow, and List<> to a DataTable
    - Extension methods for Delete, Merge, Update, Insert
    - Support for asynchronous calls and cancellation
    - Support for fetch statistics (total bytes, total REST calls, retries...)
    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero
    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • Getting error code -41 when copying files to external drive

    - by diego
    I'm having trouble copying some files from my Mac to an external hard drive: I keep getting the nondescript "error code -41". I noticed some of the files with an additional "@" permission bit had the "com.apple.quarantine" flag set. I used the xattr command from the article "What should I do about com.apple.quarantine?" to take care of the quarantine flag and sort that out (these files were copied over from another Mac on my network, so I guess OS X flagged them as quarantined). That took care of the problem for those files, but I still have some that I can't manually copy over to the external drive. The only other thing I've noticed is that some of these files have an extra permission bit, "drwxr-xr-x+", which I haven't been successful in googling. Aside from that I don't see anything else. Also, Disk Utility says everything's fine. Any help would be greatly appreciated.
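
    The trailing "+" in drwxr-xr-x+ means the item carries an ACL on top of the classic permission bits, and a restrictive ACL (often brought along from the source machine) can make Finder copies fail. A hedged sketch for inspecting and, if appropriate, stripping it (the paths are placeholders):

      # show the ACL entries hiding behind the "+"
      ls -le /path/to/stubborn-item

      # remove all ACL entries, non-recursively and recursively
      chmod -N /path/to/stubborn-item
      chmod -RN /path/to/stubborn-folder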

    Read the article

  • Problem accessing MICROSOFT##SSEE database (Error: 18456, Severity: 14, State: 16.)

    - by Philipp Schmid
    After an unexpected server shutdown due to a power failure, I can no longer connect to the internal Windows database MICROSOFT##SSEE which is hosting Central Admin for my SBS 2008 server. The log shows: Error: 18456, Severity: 14, State: 16. Login failed for user 'NT AUTHORITY\NETWORK SERVICE'. [CLIENT: <named pipe>] I've tried to connect using SQL Management Studio (connecting to \\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query) but no luck. The SQL Server Configuration Manager doesn't show an entry for 'Protocols for MICROSOFT##SSEE' (but shows it for 2 other databases hosted on the same SQL Server 2005 Express edition). I have tried to restore the master.mdf and mastlog.ldf files from a backup, but the issue persists.
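
    Two possibly useful data points, hedged. On SQL Server 2005, state 16 of error 18456 generally means the login authenticated but its default database could not be opened, which fits damaged system databases after a power cut. And since this instance only listens on its named pipe, you can take Management Studio out of the picture and test the pipe directly with sqlcmd:

      sqlcmd -S \\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query -E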

    Read the article

  • How to SSH to an outside server from a computer which is behind a proxy firewall?

    - by Karan
    I access the Internet through an HTTP proxy firewall at college, and I need to log in to a computer outside our network via SSH. I tried it as a Linux command and on Windows using PuTTY. I also configured PuTTY to use our proxy server's address, but still "Proxy error: 403 forbidden" pops up. They must have blocked SSH access to outside systems (college systems are accessible). I can SSH to a web server (not the proxy server) at the college, which I use to browse proxy-free by tunneling. Now this server lets me browse restricted sites, but still no SSH to the outside. Any workaround, please?
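
    If the proxy permits HTTP CONNECT at all, it is usually only to port 443, so the classic workaround is to have the outside machine's sshd listen on 443 and tunnel the SSH session through the proxy. A minimal sketch for ~/.ssh/config, where every host name and port below is a placeholder for your own setup:

      Host outside
          HostName server.example.org
          Port 443
          # tunnel through the college HTTP proxy via netcat's CONNECT support
          ProxyCommand nc -X connect -x proxy.college.example:3128 %h %p

    Then a plain "ssh outside" goes through the proxy. PuTTY can do the same via Connection > Proxy (type HTTP), and corkscrew is a common alternative ProxyCommand if your nc lacks -X.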

    Read the article

  • Active Directory - Using GPO To Update Multiple Versions Of .NET

    - by Joe Wilson
    OK, I have searched everywhere for this one. I have all the MSIs and packages I need to deploy .NET 3.5 SP1, plus 2.0 and 3.0 (which are prerequisites for 3.5). I can't figure out how to install all of them at once via GPO. Basically, the computers on the network do NOT have any version of .NET installed, and I need them to be at 3.5 SP1. I know I can deploy each version via GPO, force reboot the client, then push the next one, force reboot, and so on. Is there a way to streamline installing all 3 at once via GPO? Thanks
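
    One hedged shortcut: the full .NET 3.5 SP1 redistributable (dotnetfx35.exe) chains the 2.0 SP2 and 3.0 SP2 prerequisites itself, so instead of three separate GPO packages you can run the single bootstrapper silently from a computer startup script. A sketch, with the share path as a placeholder:

      rem startup.cmd - assign via GPO: Computer Configuration > Scripts > Startup
      rem skip machines that already have 3.5 installed
      reg query "HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.5" /v Install && exit /b 0
      \\fileserver\deploy\dotnetfx35.exe /q /norestart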

    Read the article

  • Are all SFP+ transceivers usable for FEX between Nexus 5000 and Nexus 2000?

    - by Alain O'Dea
    I am looking at building a network with Nexus 5000 parent switches and Nexus 2000 fabric extenders. The mystery at the moment is what kind of SFP+ transceivers are required for cross-connecting racks. Right now I am considering FET-10G, but I am not sure that 100m is long enough, given the separation between racks is potentially very large since it is a rented rack environment. Are all SFP+ transceivers usable for FEX between Nexus 5000 and Nexus 2000? Specifically, can SFP-10G-SR transceivers be used for longer-distance FEX?

    Read the article

  • OpenVZ multiple networks on CTs

    - by user6733
    I have a Hardware Node (HN) which has 2 physical interfaces (eth0, eth1). I'm playing with OpenVZ and want to let my containers (CTs) have access to both of those interfaces. I'm using the basic configuration - venet. CTs are fine accessing eth0 (the public interface), but I can't get CTs to access eth1 (the private network). I tried:

      # on HN
      vzctl set 101 --ipadd 192.168.1.101 --save
      vzctl enter 101
      ping 192.168.1.2   # no response here

    ifconfig on the CT returns lo (127.0.0.1), venet0 (127.0.0.1), venet0:0 (95.168.xxx.xxx), venet0:1 (192.168.1.101). I believe that the main problem is that all packets flow through eth0 on the HN (figured out using tcpdump). So the problem might be in routes on the HN. Or is my logic here all wrong? I just need access to both interfaces (networks) on the HN from CTs. Nothing complicated.
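
    Your logic looks right: venet is a purely routed (layer-3) setup, so every CT packet follows the HN's routing table, which sends it out eth0. The usual fix is either source-based routing on the HN or, more simply, giving the CT a veth interface bridged onto eth1. A hedged sketch for CT 101 (host-side interface names can differ between OpenVZ versions):

      # on the HN: put the private NIC into a bridge
      brctl addbr br1
      brctl addif br1 eth1

      # drop the venet address and give the CT a veth device instead
      vzctl set 101 --ipdel 192.168.1.101 --save
      vzctl set 101 --netif_add eth1 --save
      brctl addif br1 veth101.1

      # inside the CT: configure the private address on the new interface
      vzctl exec 101 ifconfig eth1 192.168.1.101 netmask 255.255.255.0 up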

    Read the article

  • What's new at Oracle in Gamification?

    - by erikanollwebb
    It's been a crazy few weeks in Apps UX.  We are actively working on some gamification designs in now 4 different application product areas, as well as supporting some teams in other areas of Oracle.  Since that gets to be a pretty diverse group with a lot of resources and ideas, we've started a group in the Oracle Social Network on Gamification at Oracle.  That's limited to internal users at Oracle, but if you are interested in joining,  ping me directly for more information at [email protected]. We're planning another design jam like we did here at Oracle in May and at the Enterprise Gamification Forum in San Diego in September.  This time, we're taking the show to the UK, and hosting it with a group of customers on the Oracle Usability Advisory Board.  It should be a great event!   We're also actively designing some gamified flows which we'll be testing with users at the UKOUG to see what our customers think about some of our gamification ideas. We're looking at more feedback opportunities.  Internally, we surveyed 444 folks within Oracle about gamification and we'll be posting some of our findings on that here soon.  I'll be posting a blog on gamification for our customers at useableapps.oracle.com  in the next few weeks and I'll cross-post to here when it comes out.  So even though it's been quiet on this blog, we are busy and I'm hoping to push out more content in the next few weeks!  Would love to know what's most interesting to the folks reading so if there's something you especially want to see, feel free to comment or email me about it.

    Read the article

  • Is it a bad idea to make roaming profile share available offline?

    - by Bryan
    This is regarding a Windows 2008 R2 domain. The Documents, Desktop and Application Data folders are all redirected to users' home directories (mapped as Z:). The users' home directory share is configured to be available offline for mobile users. User profiles are configured as roaming, and located on a separate share (not mapped as a network drive), just accessed via a UNC path. Would it be a good or bad idea to make the roaming profile share available offline for mobile users, using the caching option "All files and programs that users open from the share will be automatically available offline"?

    Read the article

  • Windows Login Failure

    - by Chris Bateson
    I'm getting an error in the Event Viewer which is also generating a lot of Logon Failure messages on our syslog server, and I'm pretty much stuck on how to resolve it. EventID: 536, Logon Type: 3, Reason: The NetLogon component is not active. This is for a Windows Server 2003 system; I have checked here. We're using Shavlik Protect 9 to scan and deploy patches. Shavlik stores the credentials for the systems and uses those stored credentials to deploy patches. This system is able to scan and deploy to other systems on the network using those credentials and no errors are generated; it's only when installing to the local system that Shavlik is physically on that this error is generated. What's interesting is that it isn't generated during a scan, and the patches install fine. We've contacted Shavlik, only to get the response that they are unable to help since it's a Microsoft error. Has anyone seen this?
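
    Given the reason text is "The NetLogon component is not active", a hedged first step is to confirm the state of the Netlogon service and the machine's secure channel on the server Shavlik runs on (the domain name below is a placeholder):

      rem is the Netlogon service running?
      sc query netlogon

      rem restart it if it is stopped
      net stop netlogon && net start netlogon

      rem verify the secure channel to the domain is healthy
      nltest /sc_query:YOURDOMAIN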

    Read the article

  • How do I set up a bridge on Ubuntu GNOME 14.04

    - by NJRandy
    I found a guide for setting up a bridge in Fedora and was trying this:

      $ nmcli connection delete p33p1
      $ nmcli connection add con-name br0 type bridge ifname br0 autoconnect yes
      $ nmcli connection add con-name p33p1 type bridge-slave ifname p33p1 master br0 autoconnect yes

    I found that:

      $ nmcli con delete uuid [uuid here]

    accomplished the first step, but nmcli connection does not have an 'add' action in this distribution. Please help me do the 2nd and 3rd steps. Context: I am trying to set up a virtual machine. I believe this is a necessary step for the VM to access my network and the internet. Please feel free to correct me if I am wrong! BTW, I am a Linux newbie, tech oldie. Thank you.
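
    Since nmcli on 14.04 (0.9.8.x) predates the 'connection add' action, a pragmatic alternative on Ubuntu is to define the bridge in /etc/network/interfaces using bridge-utils; KVM/VirtualBox guests can then attach to br0 for full LAN access. A hedged sketch, reusing your interface name p33p1:

      sudo apt-get install bridge-utils

      # then add to /etc/network/interfaces:
      auto br0
      iface br0 inet dhcp
          bridge_ports p33p1
          bridge_stp off
          bridge_fd 0

      # and bring it up:
      sudo ifup br0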

    Read the article

  • How to reuse backup on Time Machine on Snow Leopard after a logic board change, after choosing wrong

    - by kmiffy
    After my logic board was replaced, I connected my laptop back to my network, and Time Machine gave me a popup, as shown in this thread: http://superuser.com/questions/78068/recycle-time-machine-for-new-machine/78264#78264 I misread the question and clicked on "Create New Backup" when I should have clicked on "Reuse Backup" to connect to my old backup file. How can I trigger that popup again? Turning Time Machine on and off does not work, and the instructions on forums to fix it via the terminal don't work because Snow Leopard is missing the fsaclctl command (I'm also not familiar with terminal commands). Thanks.

    Read the article

  • System freezes for 5 seconds when seeking in or skipping to songs and videos

    - by pragmatick
    When I start playing a new video or MP3, or skip to a point in time while playing them, my system hangs for a couple of seconds. A restart solves this problem, but only for a while. It does not matter which player I use (VLC, Media Player, Winamp, Zoom Player), which media files, or whether they are located on a network drive or the local hard drive. Everything else works flawlessly, and once playback has started there are no problems - until I switch to another file. Additionally, when the Winamp playlist continues to the next song, the system does not hang; when I skip to the next song manually, it does. I've been using Windows XP for years and consider myself a fairly professional Windows user, but I have no idea what could cause this. Dual-core 2GHz, 2GB RAM, Windows XP SP3, Audigy card with kX Project drivers. Worked flawlessly for years. Would be glad if anyone could help.

    Read the article

  • What are the best monitoring tools customizable for a cluster / distributed system?

    - by Adil
    I am working on a system having multiple servers. I am interested in monitoring some server-specific data like CPU/memory usage, disk/filesystem usage, network traffic, system load etc., and some of my own process-specific data. What open source tools are available that can serve my purpose, ideally ones that let me customize the parameters to be monitored and monitor my own data by creating a plugin/agent? Any suggestions? I have heard of Nagios, Zabbix and Pandora but am not sure if they provide such an interface.

    Read the article

  • Connecting switch to switch to router

    - by elated
    Hello, not sure if this is the right place to ask this question, but I have a router which connects to the internet, and a switch connected to this router. I added a lot more computers, so I added another switch and connected it to the first switch using a cross-over cable. As soon as I connect it to the first switch, the lights on the first switch start blinking like crazy and my entire network simply stops working. The minute I remove the second switch's cable, it's all fine again. What could be the problem?

    Read the article

  • How are routing tables populated?

    - by Robbie Mckennie
    I've been reading "TCP/IP Illustrated" and I started reading about IP forwarding: how you can receive a datagram and work out where to send it next based on the destination IP and your routing table. But what confused me is how (in a home network setting) the table itself is populated. Is there a lower-layer protocol at work here? Does it come along with DHCP? Or is it simply based on the IP address and netmask of each interface? I do know (from other books) that in the early days of Ethernet one had to set up routing tables by hand, but I know I didn't do that.
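
    In a home network there is usually no routing protocol at all: the table is built from exactly the two sources you guessed. Every interface's address and netmask contribute a "connected" route automatically, and DHCP supplies the default gateway (option 3, "router") when the lease is obtained. A sketch of the typical result on a Linux host; the addresses are illustrative:

      $ ip route show
      default via 192.168.1.1 dev eth0                        # installed from the DHCP lease
      192.168.1.0/24 dev eth0 proto kernel src 192.168.1.23   # from the interface's IP + netmask

    The hand-maintained tables from the early days survive mainly on routers, which learn additional routes from protocols like RIP, OSPF or BGP.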

    Read the article

  • How to remove the dlinksearch browser search hijack

    - by Bish
    Hi gang, for the last few weeks all the machines on my home network have been having the same problem whilst browsing the internet. When the user enters an invalid URL in the browser address bar, instead of the default browser behaviour, the request is sent to http://www1.dlinksearch.com/. As far as I can tell this happens on all machines and in all browsers. It is so consistent that I am wondering whether it has anything to do with our router. We use a DLink DIR-655 router, so maybe the clue is in the name :) Anyhow, I cannot figure out how to disable/remove the offending behaviour. I've checked hosts files, spyware, AV etc. Anybody have any ideas? Paul P.S. Apologies if this is not the right place to ask this type of question. I'm a bit stuck.
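
    The pattern (every machine, every browser) points at DNS rather than per-machine malware, and the router name really is the clue: the DIR-655 ships with an "Advanced DNS Service" feature that redirects failed lookups to dlinksearch.com. A quick hedged check from any machine:

      # a lookup for a nonsense name should fail (NXDOMAIN);
      # if it returns an IP address, the resolver is hijacking failures
      nslookup this-name-should-not-exist-12345.com

    If so, log in to the DIR-655 web interface, find the "Advanced DNS Service" option on the Internet setup page, untick it, and the browsers should get their default behaviour back.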

    Read the article
