Search Results

Search found 18842 results on 754 pages for 'the machine'.


  • Anyone else experiencing high rates of linux server crashes today?

    - by Bron Gondwana
    Just today, Sat June 30th - starting soon after the start of the day GMT - we've had a handful of blades in different datacentres, as managed by different teams, all go dark: not responding to pings, screen blank. They're all running Debian Squeeze, with everything from stock kernels to custom 3.2.21 builds. Most are Dell M610 blades, but I've also just lost a Dell R510, and other departments have lost machines from other vendors too. There was also an older IBM x3550 which crashed and which I thought might be unrelated, but now I'm wondering. The one crash which I did get a screen dump from said:

        [3161000.864001] BUG: spinlock lockup on CPU#1, ntpd/3358
        [3161000.864001] lock: ffff88083fc0d740, .magic: dead4ead, .owner: imapd/24737, .owner_cpu: 0

    Unfortunately the blades all supposedly had kdump configured, but they died so hard that kdump didn't trigger - and they had console blanking turned on. I've disabled console blanking now, so fingers crossed I'll have more information after the next crash. I just want to know if it's a common thread or "just us". It's really odd that they're different units in different datacentres, bought at different times and run by different admins (I run the FastMail.FM ones)... and now even different vendor hardware. Most of the machines which crashed had been up for weeks/months and were running 3.1 or 3.2 series kernels. The most recent crash was a machine which had only been up about 6 hours running 3.2.21.
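
    For anyone in the same spot, a minimal sketch of keeping the console readable for the next oops - the kernel parameter and the setterm invocation are standard, but the exact bootloader file to edit depends on your setup:

        # permanent: add consoleblank=0 to the kernel command line
        # (e.g. in /boot/grub/menu.lst, then reboot)
        # temporary, for the running console:
        TERM=linux setterm -blank 0 > /dev/tty1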


  • Why would one of my servers stop being able to access other servers by FQDN?

    - by Newlyn Erratt
    I have a number of servers on our local network, and our Debian server has suddenly stopped being able to access the other servers via their FQDN. The initial symptom was an inability to log in with Active Directory accounts. On further inspection, this machine, porkbelly, was unable to access our other servers (e.g. bacon and albert) via their FQDN. That is, it can ping albert by running "ping albert" but not by running "ping albert.domain.local", even though when running "ping albert" the name is expanded to albert.domain.local. The server is still accessible from other servers as both porkbelly and porkbelly.domain.local. On examining the hosts information and running hostname, its hostname and FQDN are correct. The resolv.conf appears correct. It contains:

        domain domain.local
        search domain.local
        nameserver 192.168.0.xxx

    The nameserver is our Windows AD server, which is also our DNS server. I'm not even sure where to go from here or why DNS seems to be partially working, though I don't have much experience. Where should I go from here? What might be causing this issue where machines are visible via their hostname but not their FQDN?
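
    One way to narrow this down (a sketch - substitute the real nameserver address): ping uses the libc resolver, which applies the hosts file and the search domain, while dig talks to the DNS server directly, so comparing the two shows which layer is failing:

        nslookup albert.domain.local                    # via the configured resolver
        dig @192.168.0.xxx albert.domain.local +short   # ask the AD DNS server directly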


  • Netgear GS724Tv3 and link aggregation Mac OS X Server 10.6.8

    - by Manca Weeks
    I need to link-aggregate 2 sets of ports on the Netgear GS724T with my Apple server tower (latest generation). I have 2 built-in ports and 2 ports on a PCIe ethernet card. It is not obvious to me how to properly configure the Netgear end. I have access to the Netgear box through its web interface, I just don't know how to properly set the settings. I tried going to Netgear for help, but they said my software support has expired. I bought this unit on their recommendation - they say it is compatible with the 802.3ad protocol. I cannot locate any references to this protocol in the manual, and I noticed some people in forums say that this device is actually not compatible with 802.3ad and that Netgear is misleading potential customers by saying it is. Any help will be appreciated. Thanks, M

    My own answer - posted as an edit because of restrictions on my user: OK folks, turns out one must use a Windows machine on this one or nothing makes sense. I was unable to get much farther than viewing the default inactive LAGs, because in Firefox and Safari on Mac things don't make much sense - i.e. the Apply buttons (supposedly JavaScript) don't work. You can view the configurations, but none of the modifications you make stick. Then, in Switching - LAGs, choose the ports to include and make sure you switch the LAG type from Static to LACP, and all is well. Haven't tested the performance of the config yet, but both sides appear to be happy with the configuration. The Apple server says the link is active, and so does the Netgear. Will report any other discoveries. Thanks to all who read, and to user84104 for responding. M
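
    If it helps anyone checking the Mac end: once the aggregate is created in System Preferences, the bond interface can be inspected from the terminal (a sketch - the interface name bond0 is the usual default but may differ):

        ifconfig bond0    # look for "status: active" and the member list under "bond interfaces"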


  • Why is my filesystem being mounted read-only in linux?

    - by Tim
    I am trying to set up a small Linux system based on Gentoo on a VirtualBox machine, as a step towards deploying the same system onto a low-spec single-board computer. For some reason, my filesystem is being mounted read-only. In my /etc/fstab, I have:

        /dev/sda1  /         ext3   defaults  0 0
        none       /proc     proc   defaults  0 0
        none       /sys      sysfs  defaults  0 0
        none       /dev/shm  tmpfs  defaults  0 0

    However, once booted, /proc/mounts shows:

        rootfs / rootfs rw 0 0
        /dev/root / ext3 ro,relatime,errors=continue,barrier=0,data=writeback 0 0
        proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
        sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
        udev /dev tmpfs rw,nosuid,relatime,size=10240k,mode=755 0 0
        devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620 0 0
        none /dev/shm tmpfs rw,relatime 0 0
        usbfs /proc/bus/usb usbfs rw,nosuid,noexec,relatime,devgid=85,devmode=664 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime 0 0

    (The above may contain errors: there's no practical way to copy and paste.) The partition at /dev/sda1 is clearly being mounted OK, since I can read all the data, but it's not being mounted as described in fstab. How might I go about diagnosing / resolving this?

    Edit: I can remount with "mount -o remount,rw /" and it works as expected, except that /proc/mounts reports /dev/root mounted at / rather than /dev/sda1 as I'd expect. If I try to remount with "mount -a" I get:

        mount: none already mounted or /sys busy
        mount: according to mtab, sysfs is already mounted on /sys

    Edit 2: I resolved the problem with "mount -a" (the same error was occurring during startup, it turned out) by changing the sysfs and proc lines to:

        proc   /proc  proc   [...]
        sysfs  /sys   sysfs  [...]

    Now "mount -a" doesn't complain, but it doesn't result in a read-write root partition. "mount -o remount /" does cause the root partition to be remounted, however.
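
    A sketch of one thing worth checking in this situation: the kernel mounts the root filesystem itself, before fstab is ever read, and mounts it read-only unless told otherwise; an init script is then expected to remount it read-write. So:

        cat /proc/cmdline   # an 'ro' here, with no later remount, leaves root read-only
        # if so, try adding 'rw' to the kernel line in the bootloader config, or check
        # that the init script responsible for the remount (checkroot / root, depending
        # on the baselayout version) is actually running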


  • Redirect local, not internal, requests using SuSEfirewall2 or an iptables rule

    - by James
    I have a server that is running a web application deployed on Tomcat and is sitting in a test network. We're running SuSE 11 SP1 and have some redirection rules for incoming requests. For example, we don't bind port 80 in Tomcat's server.xml file; instead we listen on port 9640 and have a configuration line in SuSEfirewall2 to redirect port 80 to 9640. This is because Tomcat doesn't run as root and can't open port 80. My web application needs to be able to make requests to port 80, since that is the port it will be using when deployed. What rule can I add so that local requests get redirected by iptables? I tried looking at this question: "How do I redirect one port to another on a local computer using iptables?" but the suggestions there didn't seem to help me. I tried running tcpdump on eth0 and then connecting to my local IP address (not 127.0.0.1, but the actual address), and I didn't see any activity. I did see activity if I connected from an external machine. Then I ran tcpdump on lo, again tried to connect, and this time I saw activity. So this leads me to believe that any requests made to my own IP address locally aren't getting handled by iptables. Just for reference, here's what my NAT table looks like now:

        Chain PREROUTING (policy ACCEPT)
        target     prot opt source    destination
        REDIRECT   tcp  --  anywhere  anywhere     tcp dpt:http redir ports 9640
        REDIRECT   tcp  --  anywhere  anywhere     tcp dpt:xfer redir ports 9640
        REDIRECT   tcp  --  anywhere  anywhere     tcp dpt:https redir ports 8443

        Chain POSTROUTING (policy ACCEPT)
        target     prot opt source    destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source    destination
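
    In case it's useful to others with the same symptom: locally generated packets never traverse the PREROUTING chain - they go through the nat table's OUTPUT chain - which would explain exactly the tcpdump observations above. A sketch of the corresponding rule (192.0.2.10 is a placeholder; use the machine's real eth0 address):

        iptables -t nat -A OUTPUT -p tcp -d 192.0.2.10 --dport 80 -j REDIRECT --to-ports 9640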


  • Linksys wireless router will not hardware reset.

    - by Jack M.
    Hello, all. I'm unable to make my router perform a hardware reset, and I cannot understand why. All was working well, except that my iPhone could not connect to the wireless. I found that the router was only allowing AES encryption in WPA2 Personal mode, so I upgraded the firmware. I updated the firmware to Ver.1.06.1, and everything went screwy. The router no longer shows up in the WiFi list (as Linksys, or under its previous network name). Wiring into the router gives me an IP address from my ISP (24.121.121.XXX). I've attempted a hardware reset, but the power light never starts flashing and the router does not seem to reboot; my wired machine stays online with no interruption in WoW. Pulling the power cord to force a reset returns it to the same state. I even went so far as to pull up my previous IP address (from DynDNS) and try to connect to that, but it won't even ping. What I'm trying to find out is: did the new firmware fry the thing, or is there some way to fix this? Thanks in advance for any help.


  • SSD causing 100% CPU usage in Apache/PHP

    - by Tim Reynolds
    I wanted to increase the performance of my development laptop, so I added an Intel 320 Series SSD as my primary drive. Everything is amazingly fast, as expected, except Apache/PHP. I develop Magento using an Ubuntu 10.10 virtual machine. Information:

        Host OS: Win 7 Professional 64bit
        Guest OS: Ubuntu 10.10 32bit
        Processor: i7, chipset QM55
        SSD: Intel 320 Series 160gb, 30% full
        HDD: Hitachi 320gb, 50% full (in side bay using an adapter)
        Laptop: Lenovo T510
        Using: shared folders
        Apache version: 2.2.16
        PHP version: 5.3.3-1
        APC version: 3.1.3p1
        APC memory: 128M
        Using tmpfs for cache, log, session directories in Magento

    In the VM running on the SSD (VM files and source files are on the same drive), loading a product page in the Admin takes on average 26.2 seconds and uses 100% CPU for nearly the entire time. In the VM running on the old HDD, loading the same page takes on average 4.4 seconds, mostly using around 40-50% of the CPU while rendering the page. I have read this post: "Performance issues when using SSD for a developer notebook (WAMP/LAMP stack)?" It says to change some settings in the BIOS. I have turned any and all power management features off in the BIOS. I can't for the life of me understand why this would be happening.


  • WMI Sensors monitoring

    - by DmitrySemenov
    Our monitoring tool, Paessler, has stopped monitoring WMI Windows sensors. Paessler was updated to version 12.4.5.3165 (10/30/2012 1:44:11 PM), and its Windows sensors (against a Windows Server 2008 R2 Web Edition machine) stopped working (no changes have been made on the server that we monitor), with the message:

        Connection could not be established (80070005: Access is denied - Host: 192.168.2.10,
        User: Administrator, Password: **, Domain: ntlmdomain:) (code: PE015)

    However, if I go to the virtual machine used to run Paessler, the following cscript runs successfully:

        strComputer = "192.168.2.10"
        Set objSWbemLocator = CreateObject("WbemScripting.SWbemLocator")
        Set objSWbemServices = objSWbemLocator.ConnectServer _
            (strComputer, "root\cimv2", _
            "Administrator", "pass")
        Set colProcessList = objSWbemServices.ExecQuery( _
            "Select * From Win32_Processor")
        For Each objProcess in colProcessList
            Wscript.Echo "Process Name: " & objProcess.Name
        Next

    I'm getting the output:

        C:\>cscript test.vbs
        Microsoft (R) Windows Script Host Version 5.8
        Copyright (C) Microsoft Corporation. All rights reserved.

        Process Name: Intel(R) Xeon(R) CPU X5680 @ 3.33GHz
        Process Name: Intel(R) Xeon(R) CPU X5680 @ 3.33GHz

    So WMI works.
        a. I gave Administrator credentials for the device to monitor in Paessler's settings, the same ones I used in the script above.
        b. I restarted the Windows server (with the broken sensors), but this didn't help.
        c. I restarted the Paessler probe service - no effect.
    Any ideas?


  • Installing xampp on a system that already has mysql

    - by Charith
    I'm rather new to PHP and xampp. I have a computer with MySQL Server and MySQL Workbench installed, from when I was working with Java and NetBeans. Now I want to use my computer for developing PHP and other web stuff too. I installed xampp successfully, but when I try to access phpMyAdmin, it gives me an error saying the MySQL server rejected its connection. I tried stopping my current MySQL service and installing it again; however, xampp has its own MySQL server in its installation path too. I tried configuring config.inc.php to use my existing installation of MySQL, which is on a separate path, but I failed. Can anyone please instruct me how to configure xampp to use my existing MySQL server for everything and ignore the one installed with it? I don't want two MySQL services running on my system and clashing in the future. I'd also be glad if anyone can explain what is best to use when you're developing Java, PHP, C and all the stuff on the same machine. P.S.: I have been given a password for my existing MySQL server (user = root), as we usually do when installing MySQL alone.
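
    For what it's worth, the phpMyAdmin settings involved live in config.inc.php; a minimal sketch of pointing it at an existing server on the default port (the password value is an assumption - use whatever was set when MySQL was installed standalone):

        $cfg['Servers'][$i]['host'] = '127.0.0.1';   // the existing MySQL service
        $cfg['Servers'][$i]['port'] = '3306';
        $cfg['Servers'][$i]['user'] = 'root';
        $cfg['Servers'][$i]['password'] = 'your-root-password';  // assumption: replace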


  • Connecting jconsole using SOCKS to Amazon EC2

    - by freshfunk
    I'm trying to use jconsole to view stats on an EC2 instance through a SOCKS proxy created by SSH. I've tried the various scripts mentioned in the links below, but to no avail:

        http://simplygenius.com/2010/08/jconsole-via-socks-ssh-tunnel.html
        http://gabrielcain.com/blog/2010/11/02/using-ssh-proxying-to-connect-jconsole-to-remote-cassandra-instances/

    I'm running:

        ssh -f -ND 8123 myuser@mymachine

    and have verified that at least Firefox goes through it as a proxy. I then run:

        jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=8123 service:jmx:rmi:///jndi/rmi://ec2-XX-XX-XXX-XXX.compute-1.amazonaws.com:8080/jmxrmi

    When I run netstat -n on my EC2 instance, I see a connection created by my machine. However, the connection eventually disappears, and I get 'channel 2: open failed: connect failed: Operation timed out' from my ssh tunnel. I've opened the JMX port through the security group, and I've checked the port on the EC2 instance to make sure it's open (by telnet-ing to it). I'm not sure where to look next. Are there some properties in sshd_config or ssh_config I need to enable for tunneling? Or anything in Mac OS X? I feel like a serious noob, but sys administration is really not my strong point. I've spent several hours and can't get this to work.
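
    One frequent culprit here (an assumption, not a diagnosis): the JMX registry port is only half the story - RMI then opens a second, randomly chosen port for the actual connection, which the tunnel can't reach. A sketch of pinning things down on the server side:

        # -Dcom.sun.management.jmxremote.rmi.port is available from JDK 7u4 onward
        java -Dcom.sun.management.jmxremote.port=8080 \
             -Dcom.sun.management.jmxremote.rmi.port=8080 \
             -Djava.rmi.server.hostname=ec2-XX-XX-XXX-XXX.compute-1.amazonaws.com \
             -Dcom.sun.management.jmxremote.authenticate=false \
             -Dcom.sun.management.jmxremote.ssl=false \
             -jar yourapp.jar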


  • How to choose the optimal RAID settings on PE2950

    - by javano
    I have some Dell PowerEdge 2950s with 4x 15k, 150GB Cheetah SAS drives in them. They are going to be VM hosts, CentOS running ESXi with Windows Server 2k8 guests. Some guests will be hosting IIS servers, others MSSQL servers. I am trying to set the RAID virtual disk settings and can't decide which is more optimal given this situation.

    Read Policy: of Read-Ahead, No-Read-Ahead and Adaptive Read-Ahead, the default is Read-Ahead. I will be making large sequential writes initially, writing out blank images for virtual machine hard drives (let's say 30GBs from /dev/zero, for example), so Read-Ahead seems good at first. But within the virtual machines, reads could be random from anywhere within their file systems, as they are IIS and MSSQL servers, so perhaps No-Read-Ahead is a better idea? Adaptive Read-Ahead would then seem to be the compromise, but I don't know much about this option. How does it compare in performance to the others?

    Write Policy: write-back caching or write-through caching; the default is write-back caching. Write-back caching performs better than write-through caching, but at a safety expense. My thinking here is that in the event of power loss, for example, it seems more likely in my head (this is why I need some clarification!) that damage will occur to a guest VM with write-back caching enabled, so should I favour write-through? I have searched around and there is obviously no definitive answer, so I would like to find out what is best for my situation.


  • Destroyed user account on OS X with dscl; how to restore? [migrated]

    - by Sam Ritchie
    I was trying to create a new user on my OS X Lion machine, and somehow managed to destroy my own user's account. Here are the steps I took; hopefully someone here can recognize what I did, and maybe identify some way around this. First, I ran these commands:

        sudo dscl localhost -create /Local/Default/Users/elasticsearch
        sudo dscl localhost -create /Local/Default/Users/elasticsearch /bin/bash   # mistake!
        sudo dscl localhost -create /Local/Default/Users/elasticsearch UserShell /bin/bash
        sudo dscl localhost -create /Local/Default/Users/elasticsearch RealName "Elastic Search"
        sudo dscl localhost -create /Local/Default/Users/elasticsearch UniqueID 503   # MY UniqueID
        sudo dscl localhost -create /Local/Default/Users/elasticsearch PrimaryGroupID 1000
        sudo dscl localhost -create /Local/Default/Users/elasticsearch NFSHomeDirectory /Local/Users/elasticsearch

    The big mistake I made here was using "503", which was my own user's UniqueID. Immediately my shell username changed to "elasticsearch". I fiddled around, tried to change the current user with "sudo su -u sritchie", but this didn't work. On restart, only the "Elastic Search" user was available. I logged into the Lion Recovery partition and reset the root password. After logging in as root and checking on the terminal, I made the remarkable discovery that my home folder was totally empty. I deleted the elasticsearch user, but it made no difference. I don't see anything in Deleted Users either. The odd thing is that when I log in now as myself (sritchie) I can see desktop icons with previews. I can even open a few text files from the Downloads folder if I use the dock alias to Downloads. Could this data be hiding somewhere? Any help would be REALLY appreciated! Thanks, Sam
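
    In case it points anyone in a useful direction: since desktop icons and dock aliases still resolve, the data may well still sit under /Users/sritchie, with the damage limited to the directory-services record. A sketch of inspecting and rebuilding that record with dscl, mirroring the commands above (PrimaryGroupID 20 is the standard "staff" group; verify every value before running anything):

        dscl localhost -read /Local/Default/Users/sritchie NFSHomeDirectory   # what does it point at?
        sudo dscl localhost -create /Local/Default/Users/sritchie UserShell /bin/bash
        sudo dscl localhost -create /Local/Default/Users/sritchie UniqueID 503
        sudo dscl localhost -create /Local/Default/Users/sritchie PrimaryGroupID 20
        sudo dscl localhost -create /Local/Default/Users/sritchie NFSHomeDirectory /Users/sritchie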


  • 8007064c(2011) and 80280007(2009) persistent after all known repairs

    - by tiu44
    I'm on Windows 7 Home x64 and have run into a major issue with Live Messenger (which I use daily). I have full offline installers for both the 2011 and the last Wave 3 2009 (14.0.8117.0416) suites. Both give the following errors:

        Live Essentials 2011 offline installer (official):
        An unknown error occured. Error: 0x8007064c  Source: WLXSuite

        WL 2009 offline installer (official):
        You already have a more recent version of Windows Live. Error: OnCatalogResult:0x80280007
        Next steps: If you want to install this older version, first uninstall any later versions
        that are on your computer. Get help with this error

    The 2011 installer also says it is updating Messenger; I don't select anything else. The 2009 installer says there is a newer version that needs to be uninstalled, even after the following procedures. The MS help pages provided all basically lead to using uninstall from the Control Panel, from which I've uninstalled all Live components, including the watcom safety scanner and portable SQL. I've followed online instructions for manually deleting folders from Program Files(x86), Appdata, and some others under \User\All Users and under the one account on the machine. I've used CCleaner 3.01 and ASC 3.7.3 and Beta 4 with deep scan, along with deleting folders, and checked their uninstallers for Live components too, and none were there. The wlmuninstaller.exe tool reports nothing, but after a failed install it finds something, yet fails to clean it even with full admin privileges. The same errors still occur after all of that. Google searching turns up people on forums suggesting reinstalling the OS because MS doesn't even know how to fix this, but I'm hoping someone here can help.

    NOTE: I don't have System Restore or any other state-freeze utilities going, and I don't have any real-time AV going (I sometimes scan with Defender, anti-rootkits, and online scanners).
    NOTE 2: I posted this on windowslivehelp.com before checking whether the place was active, hoping I can get help here. Thanks


  • Test server on a local network with XAMPP

    - by hopscotch1978
    Hi, I'm not very proficient with networks and could use some help. I've got a Win 7 desktop with XAMPP which acts as my local dev machine. I've configured a virtual host on the desktop which I'm able to access fine. If I'm understanding things correctly, the virtual host uses port 80 (<VirtualHost 127.0.0.1:80>). I've just tried to configure a separate Win XP laptop on the local wireless network to connect to the main desktop for testing purposes. I've added the IP address and virtual host name to my Hosts file on the laptop. My virtual host is imaginatively named "virtualhost1". When I type this into my laptop browser, it connects correctly to the main desktop and I get the XAMPP welcome screen. But I can't seem to get to the actual site, just the XAMPP welcome screen. It kind of jumps the browser to http://virtualhost1/xampp/. I think it's a port issue of some sort but I have no idea how to resolve it. I would get the same XAMPP welcome screen on my desktop if I omitted ":80" from the virtual host declaration. On my main desktop, typing "virtualhost1" to the browser address bar gives me the site correctly, not the XAMPP welcome screen. Help would be appreciated. Thank you.
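
    A guess at the cause, for anyone with the same symptom: a <VirtualHost 127.0.0.1:80> block only matches requests that arrive on the loopback address, and requests from the laptop arrive on the desktop's LAN address instead, so they fall through to the default site (the XAMPP welcome screen). A sketch of a binding that matches both (paired with a NameVirtualHost *:80 line if your Apache version requires one; the DocumentRoot path is an assumption):

        <VirtualHost *:80>
            ServerName virtualhost1
            DocumentRoot "C:/xampp/htdocs/yoursite"
        </VirtualHost>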


  • Multimaster Keepalived Configuration (Virtual IP with Load Balancing)

    - by Rad Akefirad
    Here are the requirements:

        1. High availability
        2. Load balancing

    First configuration:

        1. Two Linux servers have been configured with one static IP each: 10.17.243.11, 10.17.243.12.
        2. Keepalived has been installed and configured with one VRRP instance to provide one virtual IP (10.17.243.10 as VIP, 10.17.243.11 as master and 10.17.243.12 as backup).
        3. Everything works fine. The VIP is assigned to the master server (10.17.243.11) as long as it is up and running. As soon as it goes down, the VIP is assigned to the backup server (10.17.243.12).
        4. The problem here is that all communication goes to the master server.

    Second configuration:

        1. I found an active-active configuration for Keepalived, which is possible by defining more than one VRRP instance, so that both servers have two IPs (real 10.17.243.11 and virtual 10.17.243.10 for server #1; real 10.17.243.12 and virtual 10.17.243.20 for server #2).
        2. Everything works fine. We have two VIPs which are accessible (HA), but all communication coming to each IP still goes to one single machine (either server #1 or #2, depending on the IP). I found some tricks on the DNS side to overcome this limitation, but they are not acceptable in our case.

    Question: Is there any way to have one virtual IP which is assigned to both servers? By that I mean both servers handling some part of the workload (like what we do in web server load balancing), using either keepalived or some other tool? Thanks in advance.
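
    For reference, a minimal sketch of the active-active arrangement described in the second configuration (server #1 shown; server #2 mirrors it with the states and priorities swapped - the router IDs and priorities here are illustrative):

        vrrp_instance VI_1 {
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 150
            virtual_ipaddress {
                10.17.243.10
            }
        }
        vrrp_instance VI_2 {
            state BACKUP
            interface eth0
            virtual_router_id 52
            priority 100
            virtual_ipaddress {
                10.17.243.20
            }
        }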


  • Windows 7 x64 RTM USB Port Has Power But Won't Recognize Mouse/Keyboard/Anything

    - by ben
    I have an odd error that doesn't seem to fit with any of the other odd Windows 7 x64 USB errors that have been kicked up on Google. Here we go:

        1. I uninstalled TortoiseSVN and clicked "restart computer". My machine had been up for around 28 days.
        2. On reboot, my mouse and keyboard failed to work anymore; I couldn't log in.
        3. I tried every USB port I have on my Dell 390 and the ports on my Dell 19" monitors - nothing worked. They had power, but Windows would not respond when I manipulated the keyboard/mouse.
        4. I rebooted my computer and pressed F2 to get into the BIOS; my keyboard works fine in the BIOS.
        5. The keyboard and mouse work fine on other computers over USB.
        6. I found adapters to convert the keyboard and mouse from USB to PS/2 ports, and they work fine. I'm actually typing this question on the same keyboard, same computer, just using the PS/2 ports for my mouse and keyboard.

    It appears to be a Windows 7 x64 issue. Other things I have tried:

        - Multiple other mice and keyboards, and an iPhone, all with no luck. Each one gets power, but Windows never tries to install drivers or sees that they are connected.
        - Uninstalled and reinstalled all USB drivers. The drivers uninstall and reinstall fine and report no errors in Control Panel.
        - In Power Management, disallowed Windows from turning off USB ports to save power.
        - Installed the latest nVidia drivers for my graphics card - no change.

    Anyplace else I can look/try? Thanks!


  • Diagnosing PCI issues

    - by dtsazza
    I'm upgrading a PC for a friend, and have run into a problem with upgrading the motherboard. I've been assembling custom PCs for the best part of a decade now, so I'm happy enough with the basics at the very least. The motherboard, CPU and graphics card were all updated at once - after this was done, the machine POSTs but the PCI wireless card, as well as the PCI-E graphics card, do not seem to be recognised at all by the system. No trace of them anywhere in the BIOS, or the POST output, or in Windows. I booted into Linux and ran an lspci, which also showed no sign of them. What is the best step to go about diagnosing this? Is it likely/feasible that the motherboard's PCI bus is just defective and it needs to be RMAed? Are there any other common gotchas that might cause these symptoms? For reference, the components in question are:

        CPU: Celeron E1400
        Motherboard: Gigabyte GA-G31M-ES2L
        Graphics card: TBC (a low-end card from a couple of years ago; worked flawlessly before the mobo change)
        PCI WNIC: Edimax 7128G

    Thanks in advance for any help.


  • Adding gcc 4.9 as a compiler option in Xcode

    - by user2129150
    I asked this question on Stack Overflow, but it's pretty much stagnant. Sorry if this is considered a repost/double-post. I just installed gcc 4.9 (with C11 support) and want to add it to Xcode 4.6.3's build options as a compiler option. I ran make and make install, and the packages are all there (under /usr/bin/gcc). Running gcc --version confirms that gcc 4.9 is installed rather than an older version. When I go into an existing Xcode project's build settings, the only compiler options available are:

        Apple LLVM compiler 4.2
        LLVM GCC 4.2
        Other...

    Clearly, GCC 4.9 would have to be added using the "Other..." option, although I'm not sure how. I've tried inputting the path to gcc (/usr/bin/gcc), although the default value for "Other..." isn't a path at all: com.apple.compilers.llvmgcc42. I've also tried following the instructions from the answer to this question, but the machine I'm on doesn't have the /Developer directory at all, since I believe Apple integrated the developer tools that required (and created) this directory into Xcode. How do I add gcc 4.9 as a compiler option in Xcode?


  • How can I debug a port/connectivity issue?

    - by rfw21
    I am running a simple WebSocket server on Amazon EC2 (Fedora Core). I've opened the relevant port using ec2-authorize and checked that it's open. iptables is definitely not running. However, I can't connect to the port from outside EC2. I've tried the following (my server is running on port 7000):

        telnet ec2-public-dns.xx.xx.xx.amazon.com 7000   # from within EC2: connects fine
        nmap localhost                                   # output includes: 7000/tcp open afs3-fileserver
        telnet ec2-public-dns.xx.xx.xx.amazon.com 7000   # from my local machine: "connection refused:
                                                         # Unable to connect to remote host"

    The strange thing is this: if I start Nginx on port 7000, then it works and I can connect from outside EC2! And the WebSocket server fails on port 80, where Nginx works fine. To me this suggests a problem with the WebSocket server, BUT I can connect to it successfully from within EC2. (And it works fine on a different VPS account.) How can I debug this further? If anybody can stop me tearing my hair out, I'd be very grateful indeed :)
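
    A sketch of one check worth making when one daemon works on a port and another doesn't: compare the addresses they bind. A process listening only on 127.0.0.1 accepts loopback connections and nothing else, while nginx binds 0.0.0.0 by default:

        netstat -tlnp | grep 7000
        # 127.0.0.1:7000 -> loopback only, unreachable from outside
        # 0.0.0.0:7000   -> all interfaces, reachable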


  • Accessing Virtual Host from outside LAN

    - by Ray
    I'm setting up a web development platform that makes things as easy as possible to write and test all code on my local machine, and sync it with my web server. I set up several virtual hosts so that I can access my projects by typing in "project" instead of "localhost/project" as the URL. I also want to set this up so that I can access my projects from any network. I signed up for a DynDNS URL that points to my computer's IP address. This worked great from anywhere before I set up the virtual hosts. Now when I try to access my projects by typing in my DynDNS URL, I get the 403 Forbidden error message, "You don't have permission to access / on this server." To set up my virtual hosts, I edited two files: hosts in the system32/drivers/etc folder, and httpd-vhosts.conf in the Apache folder of my WAMP installation. In the hosts file, I simply added the server name to associate with 127.0.0.1. I added the following to the httpd-vhosts.conf file:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www/ladybug"
            ServerName ladybug
            ErrorLog "logs/your_own-error.log"
            CustomLog "logs/your_own-access.log" common
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www"
            ServerName localhost
            ErrorLog "logs/localhost-error.log"
            CustomLog "logs/localhost-access.log" common
        </VirtualHost>

    Any idea why I can't access my projects by typing in my DynDNS URL? Also, is it possible to set up virtual hosts so that when I type http://projects from a random computer outside of my network, I access url.dyndns.info/projects (a.k.a. my WAMP projects on my home computer)? Help is much appreciated, thanks!
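
    A hedged guess at the 403: requests arriving via the DynDNS name carry a Host header that matches no ServerName, so Apache serves them from the first listed vhost (ladybug), and any directory restriction there produces the Forbidden page. One sketch of a fix is to tell Apache which vhost should own that name (the hostname is a placeholder for the real DynDNS one):

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www"
            ServerName localhost
            ServerAlias url.dyndns.info
        </VirtualHost>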


  • How to stop my VPS from picking up ARP reqs it is not supposed to?

    - by Charles Stewart
    Machine: Xen-3.0 image running stable Debian Linux 2.6.18, pretty vanilla. My VPS provider asks me to deal with some trouble my image is causing, namely handling IP addresses it is not supposed to:

        The problem is that your server seems to be configured to use IPs that have not been
        appointed to you. Your server responds to ARP requests for the IPs 81.171.111.219 and
        81.171.111.218. But you are not allowed to use those.

    Not explicitly, as far as I can tell! At least, nothing under /etc or /var/tmp mentions these IP addresses. But arp -v says something I can't make sense of:

        Address         HWtype  HWaddress          Flags Mask  Iface
        81.171.111.1    ether   00:0C:DB:E3:80:00  C            eth0
        Entries: 1  Skipped: 0  Found: 1

    What is it listening to? The possibilities seem to be:

        1. It's not my fault: my VPS providers have overlooked something. What might that be?
        2. 81.171.111.1 means I'm happily listening in on ARP requests that I shouldn't be: how do I change this?
        3. In any case, what does this mean? I'm looking in completely the wrong place for information on what my image is doing. Where should I be looking?
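
    A sketch of two things worth checking (assumptions on my part, since Xen setups differ): whether the kernel is answering ARP on behalf of other addresses via proxy ARP, and whether any stray addresses are actually configured on the interfaces:

        cat /proc/sys/net/ipv4/conf/eth0/proxy_arp   # 1 = kernel answers ARP for routed addresses
        ip addr show                                 # look for unexpected secondary addresses
        sysctl -w net.ipv4.conf.all.proxy_arp=0      # disable proxy ARP if it was on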


  • Can I recover a rm -rf-ed Mercurial repository?

    - by WishCow
    I made the mistake of wiping out my entire project directory with a quick "rm -rf project". Of course, the .hg directory went with it. I had about 15-20 changesets that I have not pushed to anyone, and I would really, really like to get those back. The system is an Ubuntu machine, the partition where the delete happened is ext3, and the project consists mostly of PHP files. I know about the guideline to not write to the disk in question. My first idea was to use the tool named scalpel to get the PHP files back, diff them with the current version from the repo, and somehow carve the changes out. While it succeeded, it did not recover the file names (or there is a switch I'm missing), so I'm left with a few thousand sequentially named .php files; combing through them is not an option. Can a kind soul please save me, and suggest a way to: a) get the repo back, or b) get the files back, with filenames. For those wondering how I did such a stupid thing: I was working on a file in Vim which I wanted to remove from the repository:

        :!hg rm %

    This complained that the file is in a subrepository, so I specified the following:

        :!hg rm % -R engine

    which complained that the file has modifications: use -f to force. And this is when, somehow, I made up the following command:

        :!rm -rf % -R engine

    Somehow, seeing "force" makes me do a rm -rf by reflex.
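
    For the record, one ext3-aware approach (hedged - recovery odds depend on how much has been written since the delete): extundelete works from the ext3 journal and can restore names as well as contents, unlike file carvers such as scalpel. The device name below is a placeholder:

        # with the filesystem unmounted or mounted read-only:
        sudo extundelete /dev/sdXN --restore-directory project
        # recovered files appear under ./RECOVERED_FILES/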


  • iperf max udp multicast performance peaking at 10Mbit/s?

    - by Tom Frey
    I'm trying to test UDP multicast throughput via iperf, but it seems like it's not sending more than 10 Mbit/s from my dev machine:

        C:\> iperf -c 224.0.166.111 -u -T 1 -t 100 -i 1 -b 1000000000
        ------------------------------------------------------------
        Client connecting to 224.0.166.111, UDP port 5001
        Sending 1470 byte datagrams
        Setting multicast TTL to 1
        UDP buffer size: 8.00 KByte (default)
        ------------------------------------------------------------
        [156] local 192.168.1.99 port 49693 connected with 224.0.166.111 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [156]  0.0- 1.0 sec  1.22 MBytes  10.2 Mbits/sec
        [156]  1.0- 2.0 sec  1.14 MBytes  9.57 Mbits/sec
        [156]  2.0- 3.0 sec  1.14 MBytes  9.55 Mbits/sec
        [156]  3.0- 4.0 sec  1.14 MBytes  9.56 Mbits/sec
        [156]  4.0- 5.0 sec  1.14 MBytes  9.56 Mbits/sec
        [156]  5.0- 6.0 sec  1.15 MBytes  9.62 Mbits/sec
        [156]  6.0- 7.0 sec  1.14 MBytes  9.53 Mbits/sec

    When I run it on another server, I'm getting ~80 Mbit/s, which is quite a bit better but still nowhere near the 1 Gbps limit that I should be getting:

        C:\> iperf -c 224.0.166.111 -u -T 1 -t 100 -i 1 -b 1000000000
        ------------------------------------------------------------
        Client connecting to 224.0.166.111, UDP port 5001
        Sending 1470 byte datagrams
        Setting multicast TTL to 1
        UDP buffer size: 8.00 KByte (default)
        ------------------------------------------------------------
        [180] local 10.0.101.102 port 51559 connected with 224.0.166.111 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [180]  0.0- 1.0 sec  8.60 MBytes  72.1 Mbits/sec
        [180]  1.0- 2.0 sec  8.73 MBytes  73.2 Mbits/sec
        [180]  2.0- 3.0 sec  8.76 MBytes  73.5 Mbits/sec
        [180]  3.0- 4.0 sec  9.58 MBytes  80.3 Mbits/sec
        [180]  4.0- 5.0 sec  9.95 MBytes  83.4 Mbits/sec
        [180]  5.0- 6.0 sec  10.5 MBytes  87.9 Mbits/sec
        [180]  6.0- 7.0 sec  10.9 MBytes  91.1 Mbits/sec
        [180]  7.0- 8.0 sec  11.2 MBytes  94.0 Mbits/sec

    Does anybody have any idea why this is not achieving anywhere close to the link limit (1 Gbps)? Thanks, Tom
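
    One hedged observation from the output above: "UDP buffer size: 8.00 KByte (default)" is tiny for this kind of test, and the socket buffer limits how fast the sender can push datagrams. iperf lets you raise it on both ends with -w, e.g.:

        iperf -s -u -B 224.0.166.111 -w 1M -i 1                     # receiver, joined to the group
        iperf -c 224.0.166.111 -u -T 1 -t 100 -i 1 -b 900M -w 1M    # sender, larger socket buffer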


  • WEIRD netstat behavior on Windows XP re: www.partypoker.com

    - by tbone
    I really don't know if this is the right place to ask this, but I would really appreciate it if someone more savvy with Windows XP (Professional) could help me out. For background, I am a 10+ year programmer, so I'm not a total idiot, but I am far from an expert on TCP/IP, and this has me totally confused. When I do a netstat (on Windows XP) I seem to always get a huge number of www.partypoker.com connections, and I can't figure out where they are coming from. A netstat -o shows me that some are coming from PID xxx, which is Firefox, but if I kill it, the connections still remain. Some are coming from PID 0, which makes no sense to me.

    SECOND PROBLEM: One would think you could edit the C:\WINDOWS\system32\drivers\etc\hosts file to block this, but my machine seems to be ignoring the hosts file! (I have tried with the DNS Client service both enabled and disabled, same result.) So I just rebooted, killed all my normal programs, and now I can't seem to reproduce the problem. If I were a paranoid person, I would think there was some sort of intelligent trojan running. I am running Windows XP Pro, Kaspersky Antivirus, CCleaner, and am fully up to date on Windows Update. What gives?

    So, I guess my questions are:
        1. Is anyone else seeing these weird connections to partypoker.com?
        2. Why isn't my hosts filter working?
        3. Is there some utility I can run to find out what's happening?

    I've tried autoruns.exe from Sysinternals but don't see anything interesting. Am I the only one with this problem? If there are any additional things you need me to run, let me know.
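
    A small suggestion for question 3: on XP, netstat can also name the executable that owns each connection, and disabling name resolution shows the raw remote IPs rather than whatever name they happen to reverse-resolve to:

        netstat -b     # show the executable behind each connection (needs admin)
        netstat -ano   # numeric addresses plus owning PID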


  • Transparent proxying in MacOS X 10.6 Snow Leopard (and maybe FreeBSD)

    - by apenwarr
    I'm trying to create a transparent proxy on my MacOS machine in order to port the sshuttle ssh-based transproxy VPN from Linux. I think I almost have it working, but sadly, almost is not 100%. The short version is this. In one window, start something that listens on port 12300:

        $ while :; do nc -l 12300; done

    Now enable proxying:

        # sysctl -w net.inet.ip.forwarding=1
        # sysctl -w net.inet.ip.fw.enable=1
        # ipfw add 1000 fwd 127.0.0.1,12300 log tcp from any to any

    And now test it out:

        $ telnet localhost 9999   # any port number will do
        # this works; type stuff and you'll see it in the nc window
        $ telnet google.com 80    # any host/port will do
        # this *doesn't* work!

    After the latter experiment, I see lines like this in netstat:

        $ netstat -tn | grep ^tcp4
        tcp4  0  0  66.249.91.104.80     192.168.1.130.61072  SYN_RCVD
        tcp4  0  0  192.168.1.130.61072  66.249.91.104.80     SYN_SENT

    The second socket belongs to my telnet program; the first is more suspicious. SYN_RCVD implies that my SYN packet was correctly captured by the firewall and taken in by the kernel, but apparently the SYNACK was never sent back to telnet, because it's still in SYN_SENT. On the other hand, if I kill the nc server, I get this:

        $ telnet google.com 80
        Trying 66.249.81.104...
        telnet: connect to address 66.249.81.104: Connection refused
        telnet: Unable to connect to remote host

    ...which is as expected: my proxy server isn't running, so ipfw redirects my connection to port 12300, which has nobody listening on it, i.e. connection refused. My uname says this:

        $ uname -a
        Darwin mean.local 10.2.0 Darwin Kernel Version 10.2.0: Tue Nov 3 10:37:10 PST 2009; root:xnu-1486.2.11~1/RELEASE_I386 i386

    Does anybody see any different results? (I'm especially interested in Snow Leopard vs. Leopard results, as there seem to be some internet rumours that transproxy is broken in Snow Leopard.) Any advice on how to fix this?
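
    One lead that has been suggested for Snow Leopard specifically (an assumption on my part, not verified here): 10.6 introduced scoped routing, which reportedly interferes with ipfw's fwd action; the sysctl controlling it is worth comparing between a working Leopard box and a failing Snow Leopard one:

        sysctl net.inet.ip.scopedroute   # reportedly needs to be 0 for ipfw fwd to complete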

