Search Results

Search found 19308 results on 773 pages for 'network efficiency'.


  • Squid configuration change to accept HTTP requests on LAN

    - by Ratan Kumar
    I installed Squid + DansGuardian to block adult content on my Linux box (Ubuntu 12.10). Everything worked fine and it blocked sites as expected. The problem is that I am also running an Apache server for my LAN (a kind of internal website), but when I access it from the LAN, Squid blocks the connection. This is the exact error:

        The following error was encountered while trying to retrieve the URL: http://192.168.0.16/
        Connection to 192.168.0.16 failed.
        The system returned: (113) No route to host
        The remote host or network may be down. Please try the request again.
        Your cache administrator is webmaster.

    Before configuring Squid it was working fine. What changes do I have to make in squid.conf? I tried "acl Safe_ports 80" and "allow_all Safe_ports". I want to know how I can configure it to accept HTTP requests from the LAN again.
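    A minimal squid.conf sketch of the kind of ACL that usually handles this, assuming the LAN is 192.168.0.0/24 (an assumption) and the internal Apache box is 192.168.0.16 (taken from the error above); adjust to the real addresses:

        # define the LAN and the internal web server
        acl lan src 192.168.0.0/24
        acl local_web dst 192.168.0.16/32

        # let LAN clients use the proxy at all
        http_access allow lan

        # send requests for the internal server straight out instead of
        # through any parent cache or filtering chain
        always_direct allow local_web

    The http_access line must appear before any blanket "http_access deny all"; its exact placement relative to the DansGuardian rules is an assumption and may need adjusting.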

    Read the article

  • Default password for HP c7000 blade servers

    - by stillStudent
    Has anybody worked on HP c7000 blade servers before? I have just ordered them and brought them onto the network. I tried logging in from a web browser, but I do not have the default password. The user guide says the default password is provided on a tag supplied with the KVM module, but I did not find any such tag. I am also working with HP customer care, but it's taking too much time, so I posted here to get some quick help from you. Thanks!

    Read the article

  • Data replication from a production web server back to the staging web server

    - by Dennis Smith
    We have two web servers: development/staging and production. Code and some documentation are moved from the staging area to production either through on-demand jobs or nightly via a global replication job. The production server, of course, sits isolated in a DMZ. Some content that gets uploaded to the live server needs to be replicated back to staging. Our security team is locking the network down (as they should) and restricting access to the live server. What are the best suggestions for replicating "live" data back to "stage", and for backing up the live server as well?
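    One common approach, sketched here under the assumption that the staging box is allowed to open an SSH connection into the DMZ and that the uploaded content lives under /var/www/uploads (the host name, account, and paths are placeholders):

        # pull the live uploads back to staging; run from the staging server
        rsync -avz --delete -e ssh deploy@live.example.com:/var/www/uploads/ /var/www/uploads/

    Pulling from staging rather than pushing from the live server keeps the DMZ host from needing credentials for the internal network, which tends to sit better with security teams.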

    Read the article

  • Cannot connect to a VPN server - authentication failed with error code 691

    - by stacker
    When trying to connect to a VPN server, I get error code 691 on the client, which says:

        Error Description: 691: The remote connection was denied because the user name and password combination you provided is not recognized, or the selected authentication protocol is not permitted on the remote access server.

    I validated that the username and password are correct, and tried to log in both with the domain name and without. I also installed a certificate to use with the IKEv2 security type, and validated that the VPN server supports the selected security method. But I still cannot log in. In the server log I get this entry:

        Network Policy Server denied access to a user. The user DomainName\UserName connected from IP address but failed an authentication attempt due to the following reason: The remote connection was denied because the user name and password combination you provided is not recognized, or the selected authentication protocol is not permitted on the remote access server.

    Any idea what I can do? Thanks in advance!

    Read the article

  • Adding a second IP address for IIS - static vs dynamic A records

    - by serialhobbyist
    I'm looking to add a second IP address to IIS so that I can run two sites with different SSL certificates. When I added one on my play box and ran ipconfig /registerdns, both addresses were registered in DNS against the server's name. So I deleted the A record for the new IP address and rebooted; that registered both names again. I then went into the network configuration for the adapter and, on the DNS tab, unchecked "Register this connection's addresses in DNS". I deleted the A record for the new IP address again and re-ran ipconfig /registerdns. This time it deleted the A record for the old IP address and didn't create one for the new address. Neither of these is what I want: I want the main IP address to be registered and refreshed automatically as a dynamic DNS record, and the second IP address to be registered and managed as a static record. Is there any way to achieve this?
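    Since the per-connection "Register this connection's addresses in DNS" setting is all-or-nothing for the adapter, one workaround is to leave dynamic registration on for the primary address and create the second record by hand on the DNS server. A minimal sketch, assuming a Windows DNS server named dns1, a zone called example.com, and a placeholder host name and address:

        rem create a static A record for the second IP on the DNS server
        dnscmd dns1 /recordadd example.com web2 A 192.168.1.51

    Newer Windows versions (2008 R2 and later) also let you add the secondary address with netsh and the skipassource flag, which is sometimes used to keep it out of dynamic registration, but whether that affects DNS registration depends on the OS level and patches, so treat it as something to verify rather than a given.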

    Read the article

  • WLAN LED randomly blinking when there is no traffic

    - by mrc
    I've got a Linksys WUSB54GC WLAN USB interface (Ralink chipset) and I'm running Debian GNU/Linux 6.0. The LED very often blinks randomly although there is no traffic on the network; I checked this with Wireshark. Sometimes, but rarely, the LED stops blinking. The issue is present in Ubuntu and Fedora too. It was not present in Debian Lenny with Linux 2.6.28, and an old live CD with Ubuntu 8.10 and kernel 2.6.27 was also fine, so I guess it's an issue with the Linux kernel and its wireless driver or firmware. Has anybody observed a similar thing? Does anybody know how to stop this annoying blinking? Thanks.

    Read the article

  • PXE boot Ubuntu server - corrupt packages

    - by Stu2000
    I have set up a Cobbler PXE boot server and managed to get CentOS 5.8 to install fully automatically. Unfortunately, with Ubuntu 12.04-server-i386 the install stops midway through with a message stating that packages are corrupt. I tried following a tip to unzip the Packages.gz file, which results in an empty Packages file with nothing in it. Other people suggested running a touch command, which produces essentially the same thing: an empty Packages file. That results in a different message: "Couldn't retrieve dists/precise/restricted/binary-i386/Packages. This may be due to a network....." Does anyone know how to work around this issue? Hitting continue before applying the tip/workaround resulted in Ubuntu installing fine, but I need the install to run with no manual input. Any advice appreciated, Stu
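    Before touching or replacing the index, it may be worth checking whether the copy the installer is actually fetching is intact. A minimal sketch; the path under the Cobbler mirror root is an assumption and will differ per setup:

        # if this prints package stanzas, the index itself is fine and the
        # problem is more likely in how the installer reaches it
        zcat /var/www/cobbler/ks_mirror/ubuntu-12.04-i386/dists/precise/restricted/binary-i386/Packages.gz | head

    An index that decompresses to nothing suggests the import/mirror step left a truncated file, in which case re-importing the ISO is more likely to help than touching empty files.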

    Read the article

  • Can't access DD-WRT set in DHCP forwarder

    - by Chris Marisic
    This might sound amazingly stupid, but I have a DD-WRT (micro) router that I placed in my network to act as a switch and a WAP while relying on my cable modem to be the DHCP server. In my attempts to get the configuration right I put the router into DHCP forwarder mode, which made everything work correctly. The problem is that now I can't seem to connect to the router in any fashion to add MAC addresses to the WAP access list. Is there something I'm missing, or do I need to do a hard reset on it and reconfigure everything?

    Read the article

  • I just ordered a 70/10 line, and need a new router I think?

    - by data_jepp
    Before, I had the 25/5 line and the 802.11n router did just fine. Now it doesn't do the job. An online speed test reads 82, so the line itself is fine, but my laptop is getting less than 30 in my room. My laptop has the following WiFi card: http://www.intel.com/content/www/us/en/wireless-products/centrino-advanced-n-6205.html What is this talk about 2.4 and 5 GHz? Can my laptop be connected over both bands at once, and would that let me use the full 70 Mb over WiFi? Hope it's OK to ask network questions here.

    Read the article

  • SpeedTracer NETWORK_RESOURCE_RESPONSE vs NETWORK_RESOURCE_FINISH

    - by Ben Flynn
    I'm using SpeedTracer with Google Chrome to measure the load times of requested resources. The SpeedTracer site says:

        NETWORK_RESOURCE_RESPONSE - "Indicates that the renderer has started receiving bits from the resource loader"
        NETWORK_RESOURCE_FINISH - "Indicates a resource load is successful and complete."

    In my mind that means we would always see a network resource response (bytes are arriving) before we see a finish (all bytes received). This doesn't seem to be the case at all. Here is a sample:

        Request Timing  @33519ms for 926ms
        Response Timing @34445ms for -847ms
        Total Timing    @33519ms for 78ms

    I'm guessing the response time isn't supposed to be negative. Can someone explain this, or is it a bug? I'm using Chrome 10.0.612.3 dev with a SpeedTracer I downloaded today.

    Read the article

  • Win2008 DC in a Windows 2000 domain: can I keep the old DC?

    - by gravyface
    I will be putting a new Windows 2008 SE server into a single-domain network with two domain controllers, both running Windows 2000 Server. The functional level of the domain is mixed mode/2000. Until a second 2008 DC can be purchased, I'd like to leave the current Win2k operations master DC in place as a backup DC, since the other member servers running 2003 have either accounting/SQL or Exchange on them. Eventually all the Win2k servers will be decommissioned, but until then I need another DC for redundancy. Following the standard process for adding a new DC, can I leave the old operations master DC (or the other backup DC) running after I transfer the FSMO roles to the new server? Will this cause any issues?

    Read the article

  • WSUS setup on ADS Domain Controller VM

    - by NickC
    How should I set up sharing/directory permissions to allow the following to work?

        VMHost - Hyper-V partition (not part of the domain)
        ADServer - Active Directory domain controller running as a guest VM on VMHost
        \\VMHost\Updates - disk share for WSUS to put the updates and other local files

    The problem is what permissions I need to give to \\VMHost\Updates to allow WSUS to work with this directory and avoid the "The WSUS content directory is not accessible" error I am currently seeing. As far as I can tell, WSUS runs as the "domain\Network Service" account. The question is: without adding VMHost to the domain, how do I give that user appropriate permissions to this directory? Is there a way that VMHost can be told to trust ADServer and then be able to use user accounts from there? This sort of relates to my other post: Win 2012 Domain controller VM, should the Hyper-V host be part of the domain? Thanks, Nick

    Read the article

  • Suddenly internet is not accessible

    - by user189708
    I am going crazy here. One day everything was working fine; I turned the PC off and went to sleep. The next day I turned the PC on and could not access the internet from any browser. The situation is: I cannot open any webpage from a browser (tried Firefox and Epiphany) and cannot receive emails in Thunderbird. BUT if I run Firefox from the console as sudo, I can use it as usual. I can access Skype and pretty much any other network stuff (like installing software with apt-get etc.), and if I use the Astrill VPN software I can access webpages even without running as sudo. I haven't installed any software or anything like that for several days, so I have not a clue what could cause this. By the way, the other Windows PC in our home has no issue. Here is what I have tried to fix this, none of which worked:

        - restarting my PC, router, and modem - multiple times
        - changing permissions on my Firefox profile
        - completely re-installing Firefox and starting with a blank profile, thus no add-ons
        - changing /etc/resolv.conf to the IP of my router (it was 127.0.1.1)
        - changing my hostname (from tomino-NB to tominoNB)

    Can someone please try to help me? Thank you.

    UPDATE 1: I have tried removing resolv.conf - it didn't help. Also, the "ping" and "dig" commands cannot resolve hosts.

    UPDATE 2: I have tried editing the nameservers in resolv.conf, but still no effect. I can ping the router as well as an outside IP, so it is definitely just a DNS issue. Is it possible that something is rewriting the path to resolv.conf and using a different file?

    UPDATE 3: I have just restarted the PC and everything works now... resolv.conf went back to nameserver 127.0.1.1. I have no clue what happened that made it work again...
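    For reference, a minimal sketch of what a hand-edited /etc/resolv.conf for this kind of test usually looks like (the router address is a placeholder; 127.0.1.1 normally points at a local dnsmasq/resolver stub):

        # temporary test: bypass the local stub resolver
        nameserver 192.168.1.1
        nameserver 8.8.8.8

    On Ubuntu releases where resolvconf or NetworkManager manages this file, manual edits tend to be overwritten at the next network event or reboot, which would be consistent with the file reverting to 127.0.1.1.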

    Read the article

  • Wired and wireless connections: force Windows to connect to laptop through Ethernet?

    - by danielkza
    I have a desktop connected to the internet and to my home network through Wi-Fi, and a laptop connected to that desktop through an Ethernet cable. But Windows seems to reach the laptop only through Wi-Fi, and I want to transfer files over the wired connection instead. Setting up Internet Connection Sharing and disconnecting the laptop from Wi-Fi altogether doesn't seem like the most elegant solution to me. I also thought about adding the IP address manually to the hosts file, but that would make the laptop completely unreachable whenever it isn't wired, which unfortunately happens quite often. Is there any way to tell Windows to use the wired connection for a particular host when possible, and fall back to any other route it finds otherwise?

    Read the article

  • How can I migrate websites between two identical servers?

    - by edude05
    I have two servers with a nearly identical (software) configuration. We are upgrading the web servers to run Windows Server 2008 R2; one already does, but the main one (the one that currently hosts the sites) is on WS2008. The old server is ns.mydomain.com and the new server is ns1.mydomain.com. Since DNS automatically fails over to ns1.mydomain.com, I'd like a way to move all the vhosts to the new server. Is there an automatic way to move or recreate all the vhosts on the new server? I have already figured out how to migrate the DNS records (see DNS Migration), and since both servers are on the same private network, migrating the website data isn't a big issue. Every site runs PHP and MySQL, and the MySQL server is external, so those won't have to be moved. Thanks
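    One possible route, sketched under the assumption that both machines run IIS 7.x and that the sites are defined in IIS itself: appcmd can dump site and application-pool definitions to XML on one server and replay them on the other.

        rem on the old server (ns): export definitions
        %windir%\system32\inetsrv\appcmd list apppool /config /xml > apppools.xml
        %windir%\system32\inetsrv\appcmd list site /config /xml > sites.xml

        rem on the new server (ns1): recreate them
        %windir%\system32\inetsrv\appcmd add apppool /in < apppools.xml
        %windir%\system32\inetsrv\appcmd add site /in < sites.xml

    The site content itself, SSL certificates, and any machine-specific settings still have to be copied separately; Web Deploy (msdeploy) is the heavier-weight option if a full server sync is needed.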

    Read the article

  • P2V using Acronis True Image Home 10 and Windows 7

    - by Anthony
    I have a full system image made with Acronis True Image Home 10 and want to run it as a virtual machine on Windows 7 Professional. I have created a virtual machine, but Windows Virtual PC doesn't allow access to a USB external hard disk when booting from the Acronis Recovery CD. I've copied the backup onto the host machine and I can access it over the network using the Acronis boot CD, but I'm wondering if there is an easier way. Does any other free virtual machine software support USB devices during boot (i.e. so I can restore a backup image from the USB hard disk directly)?

    Read the article

  • IIS site hacked with ww.robint.us malware

    - by sucuri
    A bunch of IIS sites got hacked with JavaScript malware pointing to ww.robint.us/u.js. Google's cache suggests more than 1,000,000 different pages were affected: http://www.google.com/#hl=en&source=hp&q=http%3A%2F%2Fww.robint.us%2Fu.js http://blog.sucuri.net/2010/06/mass-infection-of-iisasp-sites-robint-us.html My question is: did anyone here get hacked with this and still have any logs (or a network dump) available for analysis? If you do, have you spotted anything interesting in there? Sites as big as wsj.com got hacked, and some people are saying that a zero-day in IIS/ASP.NET may be in the wild...

    Read the article

  • What prevents Windows from entering standby?

    - by Arne
    Is there a tool or method to find out what prevents Windows Vista 32-bit from entering standby mode? My home PC is set to go to standby after a certain period of inactivity, but quite often it just stays awake and I have no idea why. I know of things that will keep it awake and have already checked devices and the network to no avail. I suspect there is some application or service that tells the OS to stay active, and I hope there is some tool to find the culprit.

    Read the article

  • Recommended design pattern to handle multiple compression algorithms for a class hierarchy

    - by sgorozco
    For all you OOD experts: what would be the recommended way to model the following scenario? I have a class hierarchy similar to this one:

        class Base { ... }
        class Derived1 : Base { ... }
        class Derived2 : Base { ... }
        ...

    Next, I would like to implement different compression/decompression engines for this hierarchy. (I already have code for several strategies that best handle different cases, like file compression, network stream compression, legacy system compression, etc.) I would like the compression strategy to be pluggable and chosen at runtime; however, I'm not sure how to handle the class hierarchy. Currently I have a tightly-coupled design that looks like this:

        interface ICompressor
        {
            byte[] Compress(Base instance);
        }

        class Strategy1Compressor : ICompressor
        {
            byte[] Compress(Base instance)
            {
                // Common compression guts for Base class
                ...

                if( instance is Derived1 )
                {
                    // Compression guts for Derived1 class
                }

                if( instance is Derived2 )
                {
                    // Compression guts for Derived2 class
                }

                // Additional compression logic to handle other class derivations
                ...
            }
        }

    As it is, whenever I add a new derived class inheriting from Base, I have to modify all compression strategies to take the new class into account. Is there a design pattern that lets me decouple this, so I can easily introduce more classes into the Base hierarchy and/or additional compression strategies?
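    One direction worth sketching (an illustration, not necessarily the recommended answer): double dispatch, where each derived class hands itself to the compressor. Adding a new strategy then means writing one new class, though adding a new derived type still means adding one overload to the interface - the classic Visitor trade-off.

        // Minimal sketch of double dispatch over the hierarchy above.
        abstract class Base
        {
            // each subclass forwards the statically-typed "this" to the strategy
            public abstract byte[] CompressWith(ICompressor compressor);
        }

        class Derived1 : Base
        {
            public override byte[] CompressWith(ICompressor compressor)
            {
                return compressor.Compress(this);   // resolves to Compress(Derived1)
            }
        }

        class Derived2 : Base
        {
            public override byte[] CompressWith(ICompressor compressor)
            {
                return compressor.Compress(this);   // resolves to Compress(Derived2)
            }
        }

        interface ICompressor
        {
            byte[] Compress(Derived1 instance);
            byte[] Compress(Derived2 instance);
        }

    A caller then picks the strategy at runtime and writes instance.CompressWith(strategy) without any type checks.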

    Read the article

  • SSH Tunnel for Remote Desktop via Intermediary Server

    - by Mihai Todor
    I've seen many examples of SSH tunnels on the net, but I'm still having no luck with this. Here's the setup:

        - a Windows 7 PC in a private network, sitting behind a firewall, with the PowerShellInsider SSH server set up and working fine;
        - a public-access Linux server, which has access to the PC;
        - a Windows 7 laptop, at home, from which I wish to run Remote Desktop to the PC.

    Here's what I've tried so far. An SSH tunnel from my laptop to the Linux server:

        ssh -f my_user@LINUX_SERVER -L 6666:LINUX_SERVER_IP:6666 -N

    Then, on the Linux server, a tunnel to the PC:

        ssh -f 'PRIVATE_DOMAIN\my_user'@PC_NAME -L 6666:PC_IP:3389 -N

    Unfortunately, I must be doing something wrong, because it doesn't seem to work. Any ideas why, or at least any suggestions on how I can debug this setup? At the moment I have access to all three machines (non-root on Linux), so I can test whatever I want...
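    For comparison, a minimal single-hop sketch that avoids the second tunnel entirely, assuming the Linux server can reach PC_IP:3389 directly (port 3390 on the laptop is an arbitrary choice):

        # run on the laptop: forward a local port through the Linux server to the PC's RDP port
        ssh -N -L 3390:PC_IP:3389 my_user@LINUX_SERVER

    The Remote Desktop client on the laptop then connects to localhost:3390. If the chained setup is kept instead, note that -L binds to the loopback address by default, so the first tunnel's target of LINUX_SERVER_IP:6666 may need to be 127.0.0.1:6666, or the second tunnel started with -g or an explicit bind address, for the two to meet.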

    Read the article

  • Can I run AD commands from a standard PowerShell script?

    - by Ben
    I am putting together a script to run post-sysprep. It should check whether the machine is on the network, and if it is, query AD to see if a computer account exists with its service tag (we're using these as the hostnames of the machines). If the account exists, the script should delete it and rejoin the machine to the domain. I have the majority of the script running, but need to run the following:

        Remove-ADComputer -Identity $distinguishedName

    How can I run this from the "standard" PowerShell environment? I don't want to use the AD module. (By the way, I'm on a mixed-mode 2000/2003 domain, as we are in the process of upgrading to 2008.) I'm new to PowerShell, so be gentle if I'm completely missing the point! Thanks, Ben
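    Without the AD module, the usual fallback is the System.DirectoryServices classes that ship with .NET, which plain PowerShell can drive directly. A minimal sketch; the LDAP path, account, and the $serviceTag variable are placeholders for whatever the script already knows:

        # bind to the domain with explicit credentials, since the machine is not yet domain-joined
        $ldapPath = "LDAP://dc01.example.com/DC=example,DC=com"
        $root = New-Object System.DirectoryServices.DirectoryEntry($ldapPath, "EXAMPLE\joinacct", "P@ssw0rd")

        # look for a computer account whose name matches the service tag
        $searcher = New-Object System.DirectoryServices.DirectorySearcher($root)
        $searcher.Filter = "(&(objectCategory=computer)(cn=$serviceTag))"
        $result = $searcher.FindOne()

        if ($result) {
            # remove the stale account (DeleteTree also removes any child objects)
            $result.GetDirectoryEntry().DeleteTree()
        }

    Whether a Windows 2000-era DC accepts this exactly as written hasn't been verified here, but these classes predate the AD module and are commonly used against older domains.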

    Read the article

  • How do you manage a complexity jump?

    - by glenatron
    It seems an infrequent but common experience that sometimes you're working on a project when something turns up unexpectedly, throws a massive spanner in the works, and ramps up the complexity a whole lot. For example, I was working on an application that talked to SOAP services on various other machines. I whipped up a prototype that worked fine, then went on to develop a regular front end and generally got everything up and running in a nice, fairly simple and easy-to-follow fashion. It worked great until we started testing across a wider network, and suddenly pages started timing out as the latency of the connections and the time required to perform calculations on remote machines resulted in timed-out requests to the SOAP services. It turned out that we needed to change the architecture to spin requests out onto their own threads and cache the returned data so it could be updated progressively in the background, rather than performing calculations on a request-by-request basis.

    The details of that scenario are not too important - indeed it's not a great example, as it was quite foreseeable and people who have written a lot of apps of this type for this type of environment might have anticipated it - except that it illustrates how one can start with a simple premise and model and suddenly face an escalation of complexity well into the development of the project. What strategies do you have for dealing with these kinds of functional changes whose need arises - often as a result of environmental factors rather than specification change - later in the development process or as a result of testing? How do you balance avoiding the premature optimisation / YAGNI / overengineering risks of designing a solution that mitigates possible but not necessarily probable issues, against developing a simpler, easier solution that is likely to be just as effective but doesn't incorporate preparedness for every possible eventuality?

    Read the article

  • How to connect two ubuntu computers with ethernet cable

    - by Lukasz Zaroda
    I'm trying to connect two computers - a desktop and a laptop - with an Ethernet cable. What I want to do is transfer a lot of data from one to the other. I followed everything from "How to network two Ubuntu computers using ethernet (without a router)?", but after that, ping always gives me "Destination host unreachable". I've been searching for a while but couldn't figure out why it doesn't work; maybe it's something about my devices, or maybe someone will have another idea. The Ethernet cable came with my router and has this text printed on it: Aurit Data Cable Cat.5 UTP 26AWG 4PAIR AWM PUC 75°C EIA/TIA 568B. It currently connects my desktop to the router, so I can post this question.

    My desktop:

        System: Ubuntu 12.04
        Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 03)
        "ethtool -i eth0" output:
            driver: r8169
            version: 2.3LK-NAPI
            firmware-version: rtl_nic/rtl8168d-1.fw
            bus-info: 0000:01:00.0
            supports-statistics: yes
            supports-test: no
            supports-eeprom-access: no
            supports-register-dump: yes

    My laptop:

        System: Ubuntu 14.04
        Ethernet controller: Qualcomm Atheros AR8162 Fast Ethernet (rev 08)
        "ethtool -i eth0" output:
            driver: alx
            version:
            firmware-version:
            bus-info: 0000:01:00.0
            supports-statistics: no
            supports-test: no
            supports-eeprom-access: no
            supports-register-dump: no
            supports-priv-flags: no

    My iptables rules accept everything. Any ideas why I cannot reach the other computer?
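    For reference, a minimal sketch of the manual setup being attempted, assuming eth0 is the cabled interface on both machines and that the 10.0.0.0/24 range is otherwise unused (the addresses are placeholders):

        # on the desktop
        sudo ip link set eth0 up
        sudo ip addr add 10.0.0.1/24 dev eth0

        # on the laptop
        sudo ip link set eth0 up
        sudo ip addr add 10.0.0.2/24 dev eth0

        # then, from the desktop
        ping 10.0.0.2

    If ping still reports "Destination host unreachable" with this in place, checking "ip link" for a carrier on both ends (NO-CARRIER usually points at the cable, or at a NIC that won't auto-negotiate a crossover) narrows it down to the physical layer.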

    Read the article

  • What is Stackify?

    - by Matt Watson
    You have developers, applications, and servers. Stackify makes sure that they are all working efficiently. Our mission is to give developers the integrated tools they need to better troubleshoot and monitor the applications they create and the servers they run on. Traditional IT operations tools are designed for network and system administrators. Developers commonly spend 30% of their time working with IT operations to remediate application service problems, yet they currently lack tools to efficiently support the applications they create. Stackify delivers the application support functionality that developers need:

        - view application deployment locations, versions, and history
        - browse files on servers to ensure proper deployments
        - access configuration and log files on servers
        - remotely restart Windows services, scheduled tasks, and web applications
        - basic server monitoring and alerts
        - collect all application exceptions to a centralized point
        - log and report on custom application events

    Stackify is building an integrated DevOps solution, delivered from the cloud, designed to meet the needs of developers while also helping to unify the working relationship with IT operations teams and existing security roles. Our goal is to help unify the interaction between developers and IT operations. Stackify gives both teams visibility they never had before, so they can solve complex application service issues more easily and quickly. Stackify's CEO and CTO both have experience managing very large, high-growth software development teams. That experience is driving the design of Stackify to deliver the integrated tools we always wished we had: the next generation of development operations tools.

    Read the article

  • Program to see, which programs use the internet-connection?

    - by Martti Laine
    Hello. A few days ago I turned on my computer as I always do after school and was pretty amazed by my 1.27 kb/s download speed. It has continued for a few days already. We have a wireless network used by 3 computers. Normally I've gotten 200 kb/s (I think we have a 2 Mb connection), but now it has suddenly slowed down. My friends have the same service provider but no problem. So, is there any kind of program that would show me all the programs using the connection, and how much each one uses? There must be some program open that is eating all the speed. Any help is appreciated, Martti Laine

    Read the article
