Search Results

Search found 38126 results on 1526 pages for 'running'.


  • Bridging LXC containers to host eth0 so they can have a public IP

    - by Vianney Stroebel
    UPDATE: I found the solution here: http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge#No_traffic_gets_trough_.28except_ARP_and_STP.29

        # cd /proc/sys/net/bridge
        # ls
        bridge-nf-call-arptables  bridge-nf-call-iptables
        bridge-nf-call-ip6tables  bridge-nf-filter-vlan-tagged
        # for f in bridge-nf-*; do echo 0 > $f; done

    But I'd like to have expert opinions on this: is it safe to disable all bridge-nf-*? What are they here for? END OF UPDATE

    I need to bridge LXC containers to the physical interface (eth0) of my host, and I have read numerous tutorials, documents and blog posts on the subject. I need the containers to have their own public IPs (which I've previously done with KVM/libvirt). After two days of searching and trying, I still can't make it work with LXC containers.

    The host runs a freshly installed Ubuntu Server Quantal (12.10) with only libvirt (which I'm not using here) and lxc installed. I created the containers with:

        lxc-create -t ubuntu -n mycontainer

    so they also run Ubuntu 12.10. The content of /var/lib/lxc/mycontainer/config is:

        lxc.utsname = mycontainer
        lxc.mount = /var/lib/lxc/test/fstab
        lxc.rootfs = /var/lib/lxc/test/rootfs
        lxc.network.type = veth
        lxc.network.flags = up
        lxc.network.link = br0
        lxc.network.name = eth0
        lxc.network.veth.pair = vethmycontainer
        lxc.network.ipv4 = 179.43.46.233
        lxc.network.hwaddr = 02:00:00:86:5b:11
        lxc.devttydir = lxc
        lxc.tty = 4
        lxc.pts = 1024
        lxc.arch = amd64
        lxc.cap.drop = sys_module mac_admin mac_override
        lxc.pivotdir = lxc_putold
        # uncomment the next line to run the container unconfined:
        #lxc.aa_profile = unconfined
        lxc.cgroup.devices.deny = a
        # Allow any mknod (but not using the node)
        lxc.cgroup.devices.allow = c *:* m
        lxc.cgroup.devices.allow = b *:* m
        # /dev/null and zero
        lxc.cgroup.devices.allow = c 1:3 rwm
        lxc.cgroup.devices.allow = c 1:5 rwm
        # consoles
        lxc.cgroup.devices.allow = c 5:1 rwm
        lxc.cgroup.devices.allow = c 5:0 rwm
        #lxc.cgroup.devices.allow = c 4:0 rwm
        #lxc.cgroup.devices.allow = c 4:1 rwm
        # /dev/{,u}random
        lxc.cgroup.devices.allow = c 1:9 rwm
        lxc.cgroup.devices.allow = c 1:8 rwm
        lxc.cgroup.devices.allow = c 136:* rwm
        lxc.cgroup.devices.allow = c 5:2 rwm
        # rtc
        lxc.cgroup.devices.allow = c 254:0 rwm
        #fuse
        lxc.cgroup.devices.allow = c 10:229 rwm
        #tun
        lxc.cgroup.devices.allow = c 10:200 rwm
        #full
        lxc.cgroup.devices.allow = c 1:7 rwm
        #hpet
        lxc.cgroup.devices.allow = c 10:228 rwm
        #kvm
        lxc.cgroup.devices.allow = c 10:232 rwm

    Then I changed my host's /etc/network/interfaces to:

        auto lo
        iface lo inet loopback

        auto br0
        iface br0 inet static
            bridge_ports eth0
            bridge_fd 0
            address 92.281.86.226
            netmask 255.255.255.0
            network 92.281.86.0
            broadcast 92.281.86.255
            gateway 92.281.86.254
            dns-nameservers 213.186.33.99
            dns-search ovh.net

    When I try command-line configuration ("brctl addif", "ifconfig eth0", etc.) my remote host becomes inaccessible and I have to hard-reboot it.

    I changed the content of /var/lib/lxc/mycontainer/rootfs/etc/network/interfaces to:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 179.43.46.233
            netmask 255.255.255.255
            broadcast 178.33.40.233
            gateway 92.281.86.254

    It takes several minutes for mycontainer to start (lxc-start -n mycontainer). If I replace "gateway 92.281.86.254" with:

        post-up route add 92.281.86.254 dev eth0
        post-up route add default gw 92.281.86.254
        post-down route del 92.281.86.254 dev eth0
        post-down route del default gw 92.281.86.254

    my container then starts instantly.
    But whatever configuration I set in /var/lib/lxc/mycontainer/rootfs/etc/network/interfaces, I cannot ping any IP from mycontainer (including the host's):

        ubuntu@mycontainer:~$ ping 92.281.86.226
        PING 92.281.86.226 (92.281.86.226) 56(84) bytes of data.
        ^C
        --- 92.281.86.226 ping statistics ---
        6 packets transmitted, 0 received, 100% packet loss, time 5031ms

    And my host cannot ping the container:

        root@host:~# ping 179.43.46.233
        PING 179.43.46.233 (179.43.46.233) 56(84) bytes of data.
        ^C
        --- 179.43.46.233 ping statistics ---
        5 packets transmitted, 0 received, 100% packet loss, time 4000ms

    My container's ifconfig:

        ubuntu@mycontainer:~$ ifconfig
        eth0    Link encap:Ethernet  HWaddr 02:00:00:86:5b:11
                inet addr:179.43.46.233  Bcast:255.255.255.255  Mask:0.0.0.0
                inet6 addr: fe80::ff:fe79:5a31/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:64 errors:0 dropped:6 overruns:0 frame:0
                TX packets:54 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:4070 (4.0 KB)  TX bytes:4168 (4.1 KB)

        lo      Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                inet6 addr: ::1/128 Scope:Host
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:32 errors:0 dropped:0 overruns:0 frame:0
                TX packets:32 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:2496 (2.4 KB)  TX bytes:2496 (2.4 KB)

    My host's ifconfig:

        root@host:~# ifconfig
        br0     Link encap:Ethernet  HWaddr 4c:72:b9:43:65:2b
                inet addr:92.281.86.226  Bcast:91.121.67.255  Mask:255.255.255.0
                inet6 addr: fe80::4e72:b9ff:fe43:652b/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:1453 errors:0 dropped:18 overruns:0 frame:0
                TX packets:1630 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:145125 (145.1 KB)  TX bytes:299943 (299.9 KB)

        eth0    Link encap:Ethernet  HWaddr 4c:72:b9:43:65:2b
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:3178 errors:0 dropped:0 overruns:0 frame:0
                TX packets:1637 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:298263 (298.2 KB)  TX bytes:309167 (309.1 KB)
                Interrupt:20 Memory:fe500000-fe520000

        lo      Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                inet6 addr: ::1/128 Scope:Host
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:6 errors:0 dropped:0 overruns:0 frame:0
                TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:300 (300.0 B)  TX bytes:300 (300.0 B)

        vethtest  Link encap:Ethernet  HWaddr fe:0d:7f:3e:70:88
                inet6 addr: fe80::fc0d:7fff:fe3e:7088/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:54 errors:0 dropped:0 overruns:0 frame:0
                TX packets:67 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:4168 (4.1 KB)  TX bytes:4250 (4.2 KB)

        virbr0  Link encap:Ethernet  HWaddr de:49:c5:66:cf:84
                inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
                UP BROADCAST MULTICAST  MTU:1500  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    I have disabled lxcbr0 (USE_LXC_BRIDGE="false" in /etc/default/lxc).

        root@host:~# brctl show
        bridge name  bridge id          STP enabled  interfaces
        br0          8000.4c72b943652b  no           eth0
                                                     vethtest

    I have configured the IP 179.43.46.233 to point to 02:00:00:86:5b:11 in my hosting provider's (OVH) config panel. (The IPs in this post are not the real ones.) Thanks for reading this long question! :-)
    Vianney
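
    On the update's open question: the bridge-nf-call-* switches control whether bridged frames are passed through arptables/iptables/ip6tables. If no firewall rules are meant to apply to bridged traffic, setting them to 0 mainly removes that filtering hook (plus a little overhead). A sketch for making the fix persistent, assuming the standard sysctl keys (they only exist while the bridge module is loaded):

        # /etc/sysctl.conf
        net.bridge.bridge-nf-call-arptables = 0
        net.bridge.bridge-nf-call-iptables = 0
        net.bridge.bridge-nf-call-ip6tables = 0

        # apply without rebooting
        sysctl -p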

    Read the article

  • Why does Task Scheduler NOT re-run successfully completed tasks

    - by Teo
    I am using Task Scheduler on Windows 2008 x64. I have three tasks running every night at different times, without overlapping. It works for a few days, usually 2-3 but sometimes up to 10 (it's really random), then it stops running the tasks. When I look at the history, I see that the tasks completed successfully, yet in the UI the "Next Run Time" column stays empty. The tasks are set to run in the background; the account used to run them is a domain account, valid and enabled. When I check with Process Explorer, there are no left-over processes associated with my tasks. I am completely baffled at what's going on.
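
    A low-impact way to inspect a task's full state between runs is schtasks; a sketch, with "NightlyJob1" standing in for one of the real task names:

        rem Dump everything the scheduler knows about one task, including
        rem "Next Run Time", "Last Run Time" and "Last Result" (0 = success).
        schtasks /query /tn "NightlyJob1" /v /fo LIST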

    Read the article

  • Using FastCGI for PHP on Mac OS X

    - by DanieL
    I have apache2 running on a Mac OS X (10.6) machine, and it is currently serving PHP pages fine using php5_module, but I would like to configure fastcgi_module to handle the PHP pages instead. I have tried the configuration found on www.fastcgi.com, but I get the following errors:

        [warn] FastCGI: (dynamic) server "/Path/to/script.php" has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds
        [warn] FastCGI: server "/usr/bin/php" has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds

    I'm thinking this is because PHP has not been compiled with FastCGI, but seeing as it came with Mac OS X, I'm not sure how to recompile it. Is this the problem? And if so, how do I recompile PHP with FastCGI?
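
    For what it's worth, mod_fastcgi needs the CGI build of PHP (php-cgi) rather than the CLI binary /usr/bin/php, and pointing it at the CLI produces exactly this "failed to remain running" pattern. A minimal sketch of the common wrapper approach; the module path, wrapper location and php-cgi path are all assumptions:

        # httpd.conf (sketch)
        LoadModule fastcgi_module libexec/apache2/mod_fastcgi.so
        ScriptAlias /fcgi-bin/ "/usr/local/fcgi-bin/"
        AddHandler php-fastcgi .php
        Action php-fastcgi /fcgi-bin/php-wrapper

        # /usr/local/fcgi-bin/php-wrapper (make it executable)
        #!/bin/sh
        PHP_FCGI_CHILDREN=2
        export PHP_FCGI_CHILDREN
        exec /usr/local/bin/php-cgi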

    Read the article

  • mod_access for lighttpd causes a 403 error for all POST requests

    - by Sam
    I have found on my Debian server that running the lighttpd module mod_access causes the server to respond with a 403 to all POST requests. It's very odd, as I have two servers: one runs as I'd expect, and the other keeps returning these 403s. They are running identical configs for lighttpd and PHP. My lighttpd.conf is: https://gist.github.com/4269500 There is also one other custom conf: https://gist.github.com/4269508

    I've opened up the servers for requests until I get this fixed; the server that works is http://mercury.isitup.org/ and the one that fails is http://venus.isitup.org/. After working out that disabling mod_access resolves the problem, I grepped all my lighttpd configs for uses of it (docs). Disabling each line I found didn't help, leading me to think this is perhaps some default behaviour (or a bug?)... Has anyone come across this before, or know what configuration value I've got wrong?

    Versions:
    - Debian: Debian GNU/Linux 6.0.6 (squeeze)
    - Lighttpd: lighttpd/1.4.28 (ssl)
    - PHP: PHP 5.3.19-1~dotdeb.0 with Suhosin-Patch (cli)
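
    To separate a stray rule from a default, it can help to dump the fully parsed configuration on both hosts and diff the results; a sketch:

        # every reference to mod_access and its deny rules, includes and all
        grep -rn -e 'mod_access' -e 'url.access-deny' /etc/lighttpd/
        # print the parsed config (run on both servers, then diff the two files)
        lighttpd -p -f /etc/lighttpd/lighttpd.conf > /tmp/effective.conf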

    Read the article

  • Recommendation for tuning 100s of SQL databases

    - by wayne
    Hi, I'm running several SQL servers, each hosting a few hundred multi-gig databases for customers. They are all set up homogeneously as far as the schemas are concerned; however, customer usage of the data differs quite a lot from database to database. What would be the best way to auto-index/profile/tune this large number of databases? With at least 600 catalogs, I can't have someone manually profile and index each one according to its usage patterns. I'm currently running SQL 2005 but will be moving to 2008, so solutions that work with either are fine!
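
    One low-touch starting point on SQL 2005/2008 is the server-wide missing-index DMVs, which accumulate suggestions across every catalog on the instance without running a trace. A sketch via sqlcmd; the ranking expression is a rough heuristic, not gospel:

        sqlcmd -S . -E -Q "
        SELECT TOP 20 d.statement, d.equality_columns, d.inequality_columns,
               d.included_columns, s.user_seeks, s.avg_user_impact
        FROM sys.dm_db_missing_index_details d
        JOIN sys.dm_db_missing_index_groups g ON d.index_handle = g.index_handle
        JOIN sys.dm_db_missing_index_group_stats s ON g.index_group_handle = s.group_handle
        ORDER BY s.user_seeks * s.avg_user_impact DESC;"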

    Read the article

  • How to perform diagnostics (stress test) on HP Smartarray Controller

    - by pepoluan
    At my office, we have a server whose RAID controller (HP Smart Array) we suspect is failing. A cold boot, however, does not indicate anything. Can anyone recommend a method to stress-test the controller?

    Symptoms that make me suspect a failing controller:
    - Disk access is getting slower and the queue is getting longer.
    - Running dmesg on the XenServer console, I see many messages similar to this one (the sector number is never the same):
          end_request: I/O error, dev tda, sector 253655584
    - When we move the VM to another physical host, we no longer see the above message.
    - Running idle (without any running VM), dmesg no longer emits the above message.

    A search on Google indicated that the above message is most commonly associated with a failing Smart Array controller. How can I be sure that the controller is failing?
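
    If HP's CLI tooling is installed (on XenServer this may mean adding the HP supplemental pack), the controller can report its own health, and a sustained read-only surface scan loads the whole I/O path. A sketch; the device name is an assumption (Smart Array logical drives often appear as /dev/cciss/c0d0):

        # controller, cache and battery status
        hpacucli ctrl all show status
        hpacucli ctrl all show config detail

        # non-destructive read stress of a logical drive
        badblocks -sv /dev/cciss/c0d0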

    Read the article

  • Lighttpd with FastCGI won't create /tmp/fcgi.sock on startup?

    - by Marlon
    I'm running lighttpd-1.4.19 on a Debian 5 box and am trying to run web2py with FastCGI. The problem is that lighttpd does not create the socket file /tmp/fcgi.sock. If I create the file myself with

        touch /tmp/fcgi.sock

    lighttpd will start, but after running for some time it throws this error:

        unexpected end-of-file (perhaps the fastcgi process died): pid: 0 socket: unix:/tmp/fcgi.sock

    My config looks like this:

        fastcgi.server = ( "/handler_web2py.fcgi" =>
            ( "handler_web2py" =>   # name for logs
                (
                    "check-local"  => "disable",
                    "socket"       => "/tmp/fcgi.sock",
                    "idle-timeout" => 20,
                    "max-procs"    => 1
                )
            ),
        )

    Is there any known problem with running lighttpd on Debian 5? Thanks for any help. I have pasted the whole lighttpd config here: http://pastie.org/1660646
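
    One thing worth noting: with no "bin-path" in fastcgi.server, lighttpd never spawns the backend itself, so the web2py FastCGI process has to create /tmp/fcgi.sock before lighttpd connects (an empty file made with touch is a plain file, not a socket, which fits the end-of-file error). A sketch, assuming web2py's fcgihandler.py with flup and a socket path configured inside it; the install path is a placeholder:

        # start the backend by hand so it creates the unix socket itself
        cd /opt/web2py && python fcgihandler.py &
        ls -l /tmp/fcgi.sock    # should now exist as a socket, not a regular file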

    Read the article

  • Win2008 DC in a Windows 2000 domain: can I keep the old DC?

    - by gravyface
    Will be putting a new Windows 2008 SE Server into a single domain network with two domain controllers, both running Windows 2000 Server. The functional level of the domain is mixed mode/2000. Until a second 2008 DC can be purchased, I'd like to leave the current Win2k operational master DC as a backup DC as the other member servers running 2003 have either accounting/SQL or Exchange on them. Eventually all the w2k servers will be decommissioned, but until then, I need another DC for redundancy. Following the standard process for adding a new DC, can I leave the old operational master DC (or the other backup DC) running after I transfer the FSMO roles to the new server? Will this cause any issues?
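
    After moving the roles, placement and overall DC health can be verified from the new server; a sketch (both tools are available on Server 2008 domain controllers):

        rem confirm where the five FSMO roles now live
        netdom query fsmo
        rem general domain controller health check
        dcdiag /v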

    Read the article

  • 32bit Application Memory Usage on 64bit Windows 7

    - by Brian
    I have an early 2012 MacBook Pro with an Intel i7 processor and 16 GB of RAM, running Windows 7 Professional 64-bit via Boot Camp. I work in Geographical Information Systems as a programmer, so most of the applications I run are 32-bit applications, but they tend to use a lot of resources (i.e. ArcGIS, SQL Server Express, Visual Studio, etc.). I have noticed that when I have multiple instances of either the same 32-bit application or different 32-bit applications, all working on hefty processing tasks, I still only top out at about 30% memory use. I understand 32-bit applications are limited to less than 4 GB of RAM, but I assumed that one instance could use its own 4 GB while another instance used another 4 GB, taking full advantage of all the memory I have installed. Can anyone explain how this works, and how I can get my applications to take advantage of all my memory by running multiple instances?
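
    Each 32-bit process does get its own address space, but on x64 Windows it is capped at 2 GB unless the executable was linked with /LARGEADDRESSAWARE, which raises the cap to 4 GB; several instances can still collectively exceed that. Whether a given EXE carries the flag can be checked with dumpbin from Visual Studio; a sketch (the path is a placeholder):

        rem prints "Application can handle large (>2GB) addresses" when the flag is set
        dumpbin /headers "C:\SomeApp\SomeApp.exe" | findstr /i "large"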

    Read the article

  • Multiple servers using same nameservers

    - by Robsimm
    I have two servers with the following OSes and control panels:
    - Windows Server 2003 running Plesk 9
    - CentOS 5.3 running WHM/cPanel

    The Windows Server 2003 machine is hosting the two nameservers: ns1.domain.com and ns2.domain.com. I have BIND running on the CentOS 5.3 server, but I wish for my customers to use the same nameservers, ns1.domain.com and ns2.domain.com (as per the Windows Server 2003 server). My first question is: is this possible? And if so, how would I go about configuring both servers to enable such a configuration? Thanks very much.
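
    It is possible in principle (NS names are just names), but every zone advertised as served by ns1/ns2.domain.com has to actually exist on whatever those names resolve to. One common pattern is to replicate zones between the two boxes with master/slave transfers; a sketch in BIND syntax, assuming the daemon on the other box allows zone transfers (the IP and zone name are placeholders):

        // named.conf on the CentOS box: mirror a zone mastered elsewhere
        zone "customer-example.com" {
            type slave;
            masters { 192.0.2.10; };
            file "slaves/customer-example.com.db";
        };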

    Read the article

  • Adding MX records to DNS

    - by Teddy
    Let's say I have computer A running Postfix and computer B running tinydns. For the domain project.domain.com I'm running an httpd server, and the tinydns server has entries like

        =project.domain.com:1.2.3.4:86400
        +project.domain.com:1.2.3.4:86400

    where 1.2.3.4 is the correct address for the server that runs the httpd server. I also have a Postfix mail server on 1.2.3.5 which I'd like to handle mail for this domain (project.domain.com). I'm afraid that if I add another alias like +project.domain.com:1.2.3.5:86400 to the tinydns configuration, it could break. What should that entry look like? Thank you for any hints.
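
    For the record format itself: tinydns-data has a dedicated @ line for MX, so no extra + alias is needed. Because the mail-host field in the sketch below contains dots it is used verbatim (nothing gets .mx. appended), and the IP field makes tinydns publish the matching A record. The /service path assumes the usual daemontools layout:

        # @fqdn:ip:mailhost:dist:ttl
        # publishes "project.domain.com MX 10 mail.project.domain.com"
        # and "mail.project.domain.com A 1.2.3.5"
        @project.domain.com:1.2.3.5:mail.project.domain.com:10:86400

        # rebuild data.cdb afterwards
        cd /service/tinydns/root && make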

    Read the article

  • Connecting to SQL Server on Parallels Desktop with PHP

    - by Zen Savona
    Well, I recently bought a Mac and am using it as my primary computer. Because I am required to work with MSSQL via PHP, I have installed Parallels Desktop and run Server 2008 R2 on it. I am using the same mixed-mode authentication which I previously had on Windows. When I attempt to connect to the server with PHP, using either a new test file or my old code, it just doesn't find the server. I have tried running PHP on the XP install within Parallels, and using the hostname as COMPUTERNAME\SQLEXPRESS, LOCALIP\SQLEXPRESS, localhost, the local IP, etc.; PHP never finds the server. Also note that I can connect to the database server using Management Studio without problems, so SQL Server is running. Note that both PHP and MSSQL are running within the virtualised environment. Any contribution is appreciated.
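
    Two usual suspects with Express named instances: TCP/IP is disabled by default (SQL Server Configuration Manager > Network Configuration), and resolving the \SQLEXPRESS instance name relies on the SQL Server Browser service (UDP 1434). Management Studio on the same box can connect over shared memory, which would mask both. Reachability can be sanity-checked from outside the VM; a sketch with a placeholder VM address:

        ping 10.211.55.3           # is the Windows VM reachable at all?
        nc -vz 10.211.55.3 1433    # is anything listening on SQL's TCP port?
        nc -vzu 10.211.55.3 1434   # SQL Browser (UDP; result is only indicative)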

    Read the article

  • Emacs doesn't load GUI

    - by D Connors
    Hi, whenever I run emacs or emacs23 in a terminal I just get the following output:

        ** (emacs:2620): CRITICAL **: menu_proxy_module_load: assertion `dbusproxy != NULL' failed

    The GUI doesn't load, and Emacs' window never opens. The Emacs process doesn't actually crash (the terminal stays busy, and I can see the emacs23 process running with ps -e). I've tried running it with the -D --debug-init arguments, but the same thing happens and the output is exactly the same. However, if I run emacs -nw, it successfully runs Emacs in terminal mode as if nothing were wrong. Strangely, this problem only started happening the second time I ran Emacs today; the first time it worked perfectly. Since then, I've tried rebooting and I've tried purging the Emacs installation, to no success. I haven't installed any new packages today, but I might have upgraded some. Could that be the reason? Is there a way to find out which packages were installed/upgraded today? Thanks. I'm running Ubuntu Lucid.
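
    To answer the last question: dpkg logs every install and upgrade with a timestamp, so the day's package activity can be listed directly; a sketch (adjust the date):

        grep -E '^2010-05-27.*(install|upgrade)' /var/log/dpkg.log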

    Read the article

  • Chef bootstrap giving 401 unauthorized

    - by loddy1234
    I'm trying to bootstrap a new Chef node by running:

        knife bootstrap <server ip> -x lewis -N gitlab --sudo

    But I get the following output:

        [Mon, 03 Sep 2012 14:45:17 +0000] INFO: *** Chef 10.12.0 ***
        [Mon, 03 Sep 2012 14:45:17 +0000] INFO: Client key /etc/chef/client.pem is not present - registering
        [Mon, 03 Sep 2012 14:45:17 +0000] INFO: HTTP Request Returned 401 Unauthorized: Failed to authenticate. Ensure that your client key is valid.
        [Mon, 03 Sep 2012 14:45:17 +0000] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
        [Mon, 03 Sep 2012 14:45:17 +0000] FATAL: Net::HTTPServerException: 401 "Unauthorized"

    My Chef server is running Ubuntu 12.04 x32 and the machine I'm trying to bootstrap is running CentOS 6.3 x64. Any idea what's going wrong?
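
    Registration of a new client goes through the validation key, so the usual causes of this 401 are a stale validation.pem on the workstation, a leftover client/node with the same name, or clock skew between the machines (Chef's signed requests are rejected when clocks drift by more than about 15 minutes). A sketch of the checks, with placeholder names:

        # does the workstation's own key still authenticate?
        knife client list
        # clear any stale registration for this node name
        knife client delete gitlab -y
        knife node delete gitlab -y
        # re-copy the validation key from the Chef server (default path; may differ)
        scp user@chefserver:/etc/chef/validation.pem ~/.chef/validation.pem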

    Read the article

  • Should I Use PHP as FastCGI?

    - by Synetech inc.
    Hi, I am running an Apache webserver on my Windows machine. It is not generally a public server (most of the little bit of traffic comes from the machine itself, and most of the public traffic comes from crawlers); basically, it is mostly just a test-bed/development system. I have read that running PHP as FastCGI is better (i.e. faster and more stable) than as an Apache module. However, I really don't like the idea of multiple PHP.exe processes (I don't like that Apache has two processes, and I'm not even too thrilled with Chromium's multi-process model). So I'm wondering if it would be worthwhile to change PHP to FastCGI for this scenario, and if it is, how I would configure it. Pretty much all of the information I have seen has been either for non-Windows or for IIS. As I said, I'm running Windows+Apache. Thanks a lot.
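
    On Windows the usual pairing is mod_fcgid rather than mod_fastcgi; a minimal sketch (paths and process limits are assumptions, and the php5_module lines should be removed so the handlers don't overlap):

        LoadModule fcgid_module modules/mod_fcgid.so
        <IfModule fcgid_module>
            AddHandler fcgid-script .php
            FcgidWrapper "C:/php/php-cgi.exe" .php
            FcgidMaxProcesses 3
            FcgidMaxRequestsPerProcess 500
        </IfModule>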

    Read the article

  • How complex of a daemon should be run through inetd?

    - by amphetamachine
    What is the general rule for deciding which daemons should be started through inetd? Currently, on my server, sshd, Apache and sendmail are set up to run all the time, while simple *NIX services are started by inetd. I'm the only one who uses ssh on my computer, and break-in attempts aren't a problem because I have it running on a non-standard port, and my HTTP server gets maybe 5 hits a day that aren't GoogleBot. My question is: what are the benefits vs. the performance hits of running a complex daemon like sshd or apache through the superserver, and what successes or failures, if any, have you had running your own daemons in this manner?
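
    For concreteness, sshd is one of the few complex daemons with explicit inetd support (its -i flag); a classic inetd.conf entry looks like the sketch below. The trade-off is per-connection startup cost versus memory held by an idle resident daemon, which is why the superserver only pays off at very low connection rates:

        # /etc/inetd.conf: spawn sshd per connection instead of running it resident
        ssh  stream  tcp  nowait  root  /usr/sbin/sshd  sshd -i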

    Read the article

  • Error accessing another group's files in Apache

    - by Shashank Jain
    I am using the Cloud9 IDE on my server, which creates files with default permission 640. As a result, when I try to open those files via HTTP, Apache shows a permission-denied error. The IDE is running as root, so the files it creates belong to root:root. Also, when I check which user Apache is running as, all its processes are shown as running as root. I cannot understand why it still cannot access the files. I know that if I add Apache's user to the file owner's group, it will work, but I don't know which user to add. PS: I don't want to change the permissions of each file I create; I want a less troublesome solution.
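
    Apache's parent process normally stays root while the workers drop to the User/Group set in the config, so the account to add can be read from there; a sketch, with Debian-style paths and placeholder names ("c9group" and "www-data" are assumptions):

        # what do the worker processes actually run as?
        ps -eo user,group,comm | grep -E 'apache2|httpd'
        grep -RiE '^[[:space:]]*(User|Group)' /etc/apache2/
        # then add that user to the group owning the IDE's files
        usermod -aG c9group www-data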

    Read the article

  • Why did my laptop turn off?

    - by darenw
    Normally I can slip my running laptop into a backpack, go somewhere, and if it's no more than about half an hour later, it'll still be running. At the destination I plug in the AC power unit and all is well. I run it off the AC unit before and after the trip, keep the screen at less than full backlight brightness, and don't have any peripherals that burn power. Sometimes the wireless switch accidentally slides on in the backpack; that causes extra power to be used and the laptop dies before I reach the destination. Sad, but so be it. But sometimes the wireless switch is off, I've reached the destination in less than 30 minutes (typically 10-20 min), and I know the battery was fully charged, yet the machine is off. Is there a way to determine, after the fact, why the machine shut itself off? I'm running Linux on a fairly powerful Gateway with 4 GB RAM, fancy Nvidia graphics, and a dual-core CPU, chosen more for number-crunching power than battery life, but it should easily last for half an hour, if not an hour.
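
    After the fact, the main evidence is in the logs: a clean shutdown is recorded, an abrupt power-off is not, and thermal events (plausible in a closed backpack) usually leave traces just before the gap. A sketch; the log path assumes a syslog-style distribution:

        # clean shutdowns and reboots are logged; a hard power-off leaves no entry
        last -x shutdown reboot | head
        # any thermal or battery complaints right before the machine died?
        grep -iE 'thermal|critical|overheat|battery' /var/log/syslog | tail -40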

    Read the article

  • Remote desktop session ends abruptly with a "protocol error"

    - by Jon
    Intermittently we get a problem where a Remote Desktop session will get disconnected with the error message "Because of a protocol error, this session will be disconnected. Please try connecting to the remote computer again." We are getting this with one server only, which is running Windows Server 2008; we connect from Windows 7 clients. The session itself stays running, you just get disconnected and can try to reconnect; sometimes you get in for a while, then it kicks you out again. We have tried connecting using CoRD on a Mac and this works fine, so it's not as if the session itself is corrupted. One problem is that there are some critical applications running under the session (I know, let's not discuss the idiocy of that), so we cannot reset the session in any way during the working day; any diagnostics must have minimum impact. Thanks, Jon
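
    Session state can at least be watched without touching the session itself; a sketch (the server name is a placeholder). On the client side, unticking "Persistent bitmap caching" on mstsc's Experience tab is an often-suggested mitigation for this particular error:

        rem non-invasive: list sessions and their states on the affected server
        qwinsta /server:myserver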

    Read the article

  • Anyone had any issues getting a disk to start on a Walrus storage system?

    - by Peter NUnn
    Hi folks, I'm trying to get a Eucalyptus system up and running and have managed to get the cloud controller and node controller working fine, with an instance running in the cloud, but without any persistent storage. When I try to create a volume I get:

        euca-create-volume -s 10 -z cluster1
        VOLUME vol-5F5D0659 10 creating 2010-05-31T09:10:11.408Z

    But when I try to see the volume I get:

        euca-describe-volumes
        VOLUME vol-5F5D0659 10 cluster1 failed 2010-05-31T09:10:11.408Z
        VOLUME vol-5FE9065E 10 cluster1 failed 2010-05-31T09:02:56.721Z

    I've dug all over the place but can't seem to turn up a reason why the creation would fail, or where to start looking to see what the issue might be. Anyone have any ideas where to even start looking for the answer to this? Ta, Peter.
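
    Volumes are provisioned by the Storage Controller rather than by Walrus itself, so its logs and backing store are the first place to look; a sketch (log names and paths vary between Eucalyptus versions):

        # on the machine running the storage controller
        grep -iE 'error|exception|fail' /var/log/eucalyptus/*.log | tail -50
        # enough free space where volumes are materialised?
        df -h /var/lib/eucalyptus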

    Read the article

  • MSDTC on server x is unavailable

    - by Fishcake
    I have Windows Server 2003 running in a virtual machine, running some software that is trying to update a database, within transactions, on my Windows 7 machine (the host for the VM). On the host I have edited the settings for the Local DTC by selecting the following:

    - Client and Administration: Allow Remote Clients, Allow Remote Administration
    - Transaction Manager Communication: Allow Inbound, Allow Outbound, No Authentication Required

    However, when I try to run the software I receive this error:

        MSDTC on server 'x' is unavailable.

    While searching for fixes, most suggestions are just to make sure the service is running, which I have. Cheers!
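
    Beyond the security settings, the service has to be running on both machines, and each side must be able to resolve the other's NetBIOS name; failing name resolution between a VM and its host is a classic cause of this exact "unavailable" error, and is often fixed with hosts-file entries on both sides. A sketch of the basic checks:

        rem service state (run on both the VM and the host)
        sc query msdtc
        net start msdtc
        rem if the DTC log is corrupt the service won't stay up; reset it
        msdtc -resetlog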

    Read the article

  • "sh: /usr/sbin/xenstored: not found" - But it's there?

    - by Matt H
    What would cause running the file /usr/sbin/xenstored to print

        sh: /usr/sbin/xenstored: not found

    even though the file /usr/sbin/xenstored is there and is not a symbolic link? Actually, I should be running this as root, which prints a similarly odd message:

        sudo: unable to execute /usr/sbin/xenstored: No such file or directory

    By the way, xenstored is not a script; it's an ELF executable. My guess is that it's because I haven't got all the dependent libraries installed. However, I would expect it to say something like this:

        ./xenstored: error while loading shared libraries: libxenctrl.so.4.0: cannot open shared object file: No such file or directory

    which is what you get when running xenstored on a system that doesn't have all the required libraries. Why do I get "not found" instead of the much more useful "cannot open shared object file"?
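
    "Not found" for a file that plainly exists usually means the kernel could not load the ELF interpreter named inside the binary (for example, a loader path for a different architecture or libc than the running system has), rather than the file itself. A sketch of the check; the loader path in the last line is whatever the readelf output names:

        # architecture of the binary, and which dynamic loader it requests
        file /usr/sbin/xenstored
        readelf -l /usr/sbin/xenstored | grep -i interpreter
        # does that loader actually exist on this system?
        ls -l /lib/ld-linux.so.2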

    Read the article

  • OS X can't resolve localhost suddenly

    - by Conor
    Last week I fired up a website that I'm developing locally, only to find that it wasn't working as it had been the night before (or at all). After an initial stage of panic and 'what did I do' moments, I deduced that my OS X now won't resolve localhost properly, so connections to my SQL database were failing. I can still ping localhost in the terminal, but to get my websites up and running again I had to change all the localhost entries to 127.0.0.1. This isn't a huge problem, as everything is up and running again, but I would like to get to the bottom of it. I have a sneaking suspicion that an Apple software update caused this issue, as I don't recall doing anything else that would have had any effect. Other than my hosts file (which looks normal), what else could be causing this? Running OS X 10.6.4.
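
    Two angles worth checking: the directory-services resolver can be queried and flushed directly, and MySQL's client libraries treat the literal hostname "localhost" specially (they connect over a unix socket instead of TCP), which would explain 127.0.0.1 working where localhost fails even with resolution intact. A sketch for 10.6:

        # what does the directory-services resolver return for localhost?
        dscacheutil -q host -a name localhost
        # resolver configuration overview
        scutil --dns | head -20
        # flush the cache (10.6 syntax)
        dscacheutil -flushcache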

    Read the article

  • Where does Linux look for shared libs?

    - by EsbenP
    I am trying to get the ar command onto an embedded ARM computer running Linux, where I want to install Debian and OpenJDK. It is a headless system running a custom Linux distribution provided by the hardware manufacturer. The Debian installer is missing the ar command, so I tried copying the binaries from the Debian package, but when running ar I get:

        error while loading shared libraries: libbfd-2.18.0-multiarch.20080103.so: cannot open shared object file: No such file or directory

    libbfd is also in the package. I tried linking it into /lib and /usr/lib, but I get the same message when running. What is the best way to get Debian and ar onto a custom Linux distro?
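
    For reference, the dynamic loader searches LD_LIBRARY_PATH, then the cache ldconfig builds from /etc/ld.so.conf, then the default /lib and /usr/lib; a freshly copied library in a configured directory is generally only picked up after ldconfig refreshes the cache. A sketch (the library directory is a placeholder); if it still fails, the .so may simply be built for a different ABI, which file will reveal:

        # quick per-shell fix
        export LD_LIBRARY_PATH=/opt/binutils/lib:$LD_LIBRARY_PATH

        # system-wide: register the directory and rebuild the cache
        echo /opt/binutils/lib >> /etc/ld.so.conf
        ldconfig
        ldconfig -p | grep libbfd   # confirm the loader can now see it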

    Read the article

  • Cannot ssh anymore into FreeBSD 7.2 home server

    - by Gabi
    Somehow sshd stopped running, and no amount of start, restart or onestart will make it go again. I normally ssh into this machine from a dual-boot laptop that shows up on the network as gabi-buntu when running Ubuntu Karmic and as gabi-pc when running Windows XP Pro. Neither my PuTTY connection nor the Linux terminal can establish an ssh link anymore. Upon rebooting the server, I am greeted with:

        /etc/rc: WARNING: run_rc_command: cannot run /usr/sbin/sshd

    In addition, messages appear saying things like:

        rpc.statd: failed to contact host gabi-buntu
        RPC: port mapper failure
        RPC: timed out

    Everything else works fine. The FreeBSD 7.2 box runs a print server, a Samba server, and an Apache server for the home wiki, via https. It also serves up NFS shares for Linux clients. Any suggestions? Thank you, Gabi Huiber
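
    rc's "cannot run" warning typically means the sshd binary is missing, not executable, or missing a shared library (a partial upgrade can do this); running it by hand in the foreground shows its own complaint. A sketch:

        # is the binary intact, the right type, and are its libraries present?
        ls -l /usr/sbin/sshd
        file /usr/sbin/sshd
        ldd /usr/sbin/sshd
        # run it in the foreground with debug output to see why it dies
        /usr/sbin/sshd -d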

    Read the article
