Search Results

Search found 41598 results on 1664 pages for 'segmentation fault'.


  • Mac Mini server (10.6) behind router with FQDN hostname

    - by thechriskelley
    I have a Mac Mini running Mac OS X 10.6.6 Server that will be part of a local network, and I have a static IP from my ISP. I'd like to set up DNS for the Mini properly, with an FQDN (example.com) as the hostname. The Mini sits behind a router (Apple AirPort Extreme) and is given a private, static IP address. I can't assign it the public static IP directly because the router does DHCP/NAT for the other machines on the local net. My end goal is for services to resolve properly for users both outside and inside the local network via example.com (and subdomains like mail.example.com and www.example.com), which will point to the public static IP assigned to the router. Will DNS work and resolve properly (for mail services and other subdomains) if the server has a private IP address but the necessary services are forwarded through NAT? I'm open to any (hopefully better) suggestions, as my current setup doesn't seem like the best way. Currently, more hardware or another public static IP is not possible. With the current setup, it seems as though even the one static IP is not strictly necessary. Thanks in advance for any insight.
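
    A quick sanity check for a setup like this (a minimal sketch; the ports shown are placeholders, not from the original post) is to confirm that the public zone answers with the router's public IP and that the forwarded ports actually reach the Mini:

        # From outside the LAN: the FQDN should resolve to the router's public IP
        dig +short example.com @8.8.8.8
        dig +short MX example.com @8.8.8.8

        # Then verify the NAT port forwards land on the Mini (ports are examples)
        nc -zv example.com 25    # SMTP for mail.example.com
        nc -zv example.com 80    # HTTP for www.example.com

    Hosts inside the LAN will hit the router's public IP unless it supports NAT reflection, so many setups also run a small internal DNS zone that answers example.com with the Mini's private address.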

    Read the article

  • how to run multiple shell scripts in parallel

    - by tom smith
    I've got a few test scripts, each of which runs a test PHP app. Each script runs forever: cat.sh, dog.sh, and foo.sh each run a PHP script in a loop, sleeping after each run. I'm trying to figure out how to run the scripts in parallel and, at the same time, see the output of the PHP apps in the stdout/terminal window. I thought simply doing something like foo.sh > &2 dog.sh > &2 cat.sh > &2 in a shell script would be sufficient, but it's not working: foo.sh runs foo.php once and runs correctly; dog.sh runs dog.php in a never-ending loop and runs as expected; cat.sh runs cat.php in a never-ending loop, but this one never runs! It appears the shell script never gets to run cat.sh. If I run cat.sh by itself in a separate window/terminal, it runs as expected. Thoughts/comments?
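
    For reference, a minimal sketch of running the three scripts concurrently while keeping their output on the terminal (assuming the script names from the post): start each one as a background job and then wait, since a foreground loop like dog.sh otherwise blocks every line after it.

        #!/bin/bash
        # Start each long-running script as a background job; their stdout/stderr
        # still go to the current terminal.
        ./foo.sh &
        ./dog.sh &
        ./cat.sh &

        # Optionally keep the wrapper alive until all jobs exit (they run forever,
        # so Ctrl-C or `kill %1 %2 %3` stops them).
        wait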

    Read the article

  • Munin "Disk usage" is too high?

    - by f-aminov
    I've recently installed Munin on my VMware client server and saw that the "Disk usage" graph shows about 80-90%. Everything else (CPU load, RAM, etc.) seems to be running fine. I have only two virtual hosts on my server with 1000 users/day in total, so I don't think that's too much. Here is the graph for the disk usage. Server info: Debian Lenny, CPU 510 MHz, RAM 512 MB. Is that bad? What could possibly cause it? Thank you for any suggestions.
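
    What this means depends on which Munin graph it is: a "Disk usage in percent" graph is df output (filesystem fullness), while a per-device utilisation graph reflects I/O busy time. A quick check from the shell (a sketch; iostat comes from the sysstat package) shows which one is actually high:

        # Filesystem fullness -- matches a df-based "disk usage" graph
        df -h

        # Device utilisation over 3 samples, 5 seconds apart; the %util column
        # shows how busy each disk is with I/O
        iostat -x 5 3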

    Read the article

  • Can you upgrade OEM Office with an OEM Upgrade?

    - by LuckyLindy
    We have a bunch of computers at work that have OEM Office 2000. We have all the material, CDs, etc., and amazingly the computers still work well (they were top of the line when purchased in 2002). However, we'd like to upgrade to Office 2003, our corporate standard. We've found OEM Office 2003 upgrade software online for ~$60 apiece, which would save us thousands over installing retail upgrades or volume licenses. But can we do this? I haven't been able to get a clear answer from Microsoft or anyone else if OEM Upgrades can be applied by non-System Builders to OEM Office.

    Read the article

  • compile software with older version of gcc and linux kernel

    - by ant2009
    System details:

        Distributor ID: SUSE LINUX
        Description:    openSUSE 11.4 (x86_64)
        Release:        11.4
        Codename:       Celadon
        gcc (SUSE Linux) 4.5.1
        Linux linux-14ay 2.6.37.6-0.20-desktop #1 SMP PREEMPT 2011-12-19 23:39:38 +0100 x86_64 x86_64 x86_64 GNU/Linux

    Hello, I am trying to install software on the above system. However, the software that I need requires an earlier version of gcc (version 4.1); my currently installed version is 4.5.1. Is it possible to install gcc 4.1 on my current system, and where would I get that version from? I also get this message about the Linux kernel:

        The current kernel version (2.6.37.6-0.20-desktop) is later than the version currently supported by this software (2.6.5)

    Is it possible to install this earlier kernel, and where would I get that from? Many thanks for any suggestions.
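
    Installing a second, older gcc alongside the default one and pointing the build at it is usually enough for the compiler half of this. A hedged sketch follows; the package and binary names are assumptions (older openSUSE repos ship parallel gccXY packages), so check what zypper actually offers:

        # See which additional gcc versions the repos provide
        zypper search gcc

        # Install an older compiler in parallel (package name is an assumption)
        sudo zypper install gcc41 gcc41-c++

        # Point the software's build at it instead of the default gcc 4.5
        # (binary name is an assumption; works for autoconf-style builds)
        ./configure CC=gcc-4.1 CXX=g++-4.1
        make

    Downgrading the running kernel to 2.6.5 is a different matter; if the software only warns about the kernel version it may still run, but an actual 2.6.5 kernel would have to come from the software vendor or be built from source.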

    Read the article

  • Should I remove all unmanaged switches from my network?

    - by IMAbev
    I have an office network with approximately 150 devices (computers, printers, IP phones). Almost all of the endpoint devices have a straight Cat5e run from my HP switches directly to the device. The IP phone for each end user sits between the HP switch and the computer (HP switch in to phone, out of phone in to computer). While I have gotten rid of most of them, I still have a couple of Linksys unmanaged switches that split off to two or more devices. I have heard various reasons for removing these switches but am not entirely sure whether they actually degrade the network. I agree that eliminating them certainly cleans up the network from a management standpoint. Are these unmanaged switches bad for my network? Is there a clear advantage to having 'home run' cable runs to each of my endpoint devices?

    Read the article

  • Slapd service won't start, unable to open pid file

    - by Foezjie
    I'm trying to set up a test LDAP server for some developers but I'm running into some trouble. service slapd start fails, so I run /usr/sbin/slapd -d 1 and this gives me the following error at the end:

        unable to open pid file "/var/run/ldap/slapd.pid": 13 (Permission denied)
        slapd destroy: freeing system resources.
        slapd stopped.

    The rights for /var/run/ldap are as follows:

        root@pec:/var/run/ldap# ls -ld
        drwxr-xr-x 2 openldap openldap 60 2012-07-04 20:45

    So I don't get why there is still a permission denied. Syslog gives the following when running slapd:

        Jul 4 21:00:27 pec slapd[13758]: @(#) $OpenLDAP: slapd 2.4.21 (Dec 19 2011 15:40:04) $#012#011buildd@allspice:/build/buildd/openldap-2.4.21/debian/build/servers/slapd
        Jul 4 21:00:27 pec kernel: [8147247.203100] type=1503 audit(1341428427.953:64): operation="truncate" pid=13758 parent=20433 profile="/usr/sbin/slapd" requested_mask="::w" denied_mask="::w" fsuid=0 ouid=119 name="/var/run/ldap/slapd.pid"

    Can anyone point me in the right direction?
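
    The kernel audit line with profile="/usr/sbin/slapd" and a denied write on the pid file points at AppArmor rather than plain filesystem permissions. A hedged way to confirm and work around it (assuming Ubuntu's apparmor-utils package is installed):

        # Is an AppArmor profile active for slapd?
        sudo aa-status | grep slapd

        # Put the slapd profile into complain mode so denials are logged but not enforced
        sudo aa-complain /usr/sbin/slapd
        sudo service slapd start

        # If that works, allow the pid file path instead (assumes the shipped profile
        # includes the local/usr.sbin.slapd override)
        echo '/var/run/ldap/slapd.pid w,' | sudo tee -a /etc/apparmor.d/local/usr.sbin.slapd
        sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.slapd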

    Read the article

  • How to test email spam scores with amavis?

    - by CaptSaltyJack
    I'd like a way to test a spam message to see the spam scores that SpamAssassin gives it. The SA db files (bayes_toks, etc.) reside in /var/lib/amavis/.spamassassin. I've been testing emails by doing this:

        sudo su amavis -c 'spamassassin -t msgfile'

    Though this yields some strange results, such as:

        Content analysis details:   (3.7 points, 5.0 required)

         pts rule name              description
        ---- ---------------------- --------------------------------------------------
         3.5 BAYES_99               BODY: Bayes spam probability is 99 to 100%
                                    [score: 1.0000]
        -0.0 NO_RELAYS              Informational: message was not relayed via SMTP
         0.0 LONG_TERM_PRICE        BODY: LONG_TERM_PRICE
         0.2 BAYES_999              BODY: Bayes spam probability is 99.9 to 100%
                                    [score: 1.0000]
        -0.0 NO_RECEIVED            Informational: message has no Received headers

    0.2 is an awfully low score for BAYES_999! But this is the first time I've used amavis; previously I've always just used SpamAssassin directly as a content filter in Postfix, but apparently running amavis/SpamAssassin is more efficient. So, with amavis in the picture, how can I run a test on a message to see its spam score breakdown? Another email I ran a test on got this result:

        2.0 BAYES_80               BODY: Bayes spam probability is 80 to 95%
                                   [score: 0.8487]

    It doesn't make sense that BAYES_80 can yield a higher score than BAYES_999. Help!
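
    One hedged way to see exactly which rules and Bayes data this setup is using is to run the check as the amavis user with debug output for the relevant channels (spamassassin's -D flag accepts a comma-separated list of debug areas):

        # Run the test as the amavis user so the databases under
        # /var/lib/amavis/.spamassassin are the ones consulted
        sudo -u amavis spamassassin -t -D bayes,config < msgfile 2>&1 | less

    Note that amavis initializes SpamAssassin with its own settings (score thresholds, local config), so results from a standalone run may differ slightly from what amavis applies at delivery time.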

    Read the article

  • Exchange 2007 to 2010 public folder replication error 1129

    - by Keith
    I am currently upgrading from Exchange Server 2007 to 2010. I have moved all mailboxes and the OAB, but I am having issues replicating the public folders. This is the error I'm getting in the event log on the 2007 box:

        Error 1129 occurred while processing a replication event.
        Folder: (6-11ED8367F0C) IPM_SUBTREE\Marketing\Marketing

    I have looked online and everything about these errors seems to relate to an old 2003 server. Well, we never had a 2003 server. I'm really not sure what to do at this point. Any help?

    Read the article

  • qemu command not running directly

    - by Dr. Death
    Can I use "qemu://localhost/system " command directly inplace of "virsh -c qemu://localhost/system " command if my both machines are physically connected in a network as virsh will simply creating the virtual shell between two systems? can I use ssh in place of virtual shell here? I tried this but system gives no directory or file name for qemu even when i had qemu installed properly in my system. but when i use virsh i did not get such errors. Do i need to open any unix socket for doing this?

    Read the article

  • How to manage credentials in a multi-server environment

    - by rush
    I have some software that uses its own encrypted file for password storage (such as FTP, web, and other passwords used to log in to external systems; there is no way to use certificates). On each server I have several instances of this software, and each instance has its own password file. The number of servers keeps growing, and it's getting harder and harder to keep all passwords on all instances up to date. Unfortunately, some servers are in a segregated network and have no access to any centralized storage, though it works the other way around. My first idea was to create a git repository, encrypt each password with gpg, store it there, and deliver it via the deployment system, but the security team was not satisfied with this idea, as it is insecure to store passwords in a repository even in encrypted form (in their words). Nothing better comes to mind. Is there any way to implement safe and secure password storage with minimal effort to keep all passwords up to date? P.S. If it matters, I have Red Hat everywhere.

    Read the article

  • How to install ImageMagick 6.6.2 on Ubuntu 10.04 (Lucid)

    - by Svyatoslavik
    How do I install ImageMagick 6.6.2 on Ubuntu 10.04 (Lucid)? The problem is that Lucid ships an old ImageMagick version (6.5.2). This is very important because I need to work with SVG graphics. On my local PC I have Ubuntu 11.04 and ImageMagick 6.6.2 and everything works fine; on the server I have 6.5.x and I have the problem. Reinstalling Ubuntu as 11.* is not a solution. I tried changing /etc/apt/sources.list from the Ubuntu 10.04 (Lucid) list to the Ubuntu 11.04 (Natty) list and updating ImageMagick. After this I have ImageMagick 6.6.2 (I checked phpinfo()), but ImageMagick no longer works. If I try any action I get this error:

        [error] 8996#0: *19983 FastCGI sent in stderr: "PHP Fatal error: Uncaught exception 'ImagickException' with message 'no decode delegate for this image format `/tmp/magick-XXnYKWKC' @ error/constitute.c/ReadImage/532'

    How do I fix it, or how do I go back to the old version of ImageMagick? I also have a problem if I try to install from source:

        /tmp/image/ImageMagick-6.7.2-7# ./configure
        configuring ImageMagick 6.7.2-7
        checking build system type... i686-pc-linux-gnu
        checking host system type... i686-pc-linux-gnu
        checking target system type... i686-pc-linux-gnu
        checking whether build environment is sane... yes
        checking for a BSD-compatible install... /usr/bin/install -c
        checking for a thread-safe mkdir -p... /bin/mkdir -p
        checking for gawk... no
        checking for mawk... mawk
        checking whether make sets $(MAKE)... yes
        checking for style of include used by make... GNU
        checking for gcc... gcc
        checking whether the C compiler works... no
        configure: error: in `/tmp/image/ImageMagick-6.7.2-7':
        configure: error: C compiler cannot create executables
        See `config.log' for more details
        /tmp/image/ImageMagick-6.7.2-7#
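
    The "C compiler cannot create executables" failure usually just means the build toolchain is missing or broken, not that ImageMagick itself is the problem. A hedged sketch of a source build on Lucid (the -dev package names are assumptions; adjust for Lucid's archive and the formats actually needed):

        # Toolchain plus the delegate libraries ImageMagick looks for at configure time
        sudo apt-get install build-essential pkg-config libpng-dev libjpeg-dev librsvg2-dev libxml2-dev

        cd /tmp/image/ImageMagick-6.7.2-7
        ./configure --prefix=/usr/local
        make
        sudo make install

    The 'no decode delegate' exception is a related symptom: the installed binaries lack the delegate library for that format, so checking the output of `convert -list delegate` against the formats you need is a reasonable first step after any rebuild.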

    Read the article

  • D-Link DIR-300 slows down / loses network

    - by basic6
    Hi there, there are two buildings (A and B). In building A there is an open WLAN (which I'm allowed to use, by the way). In building B is a computer that I want to connect to that network. So I flashed an old D-Link DIR-300 AP with DD-WRT, mounted it to the wall of building B near a window, attached a 13 dBi directional antenna (pointing at building A), and configured it as an AP client on that wireless network. Then there's another AP, connected to the D-Link AP and acting as a standard access point, which the computer is connected to. That's basically working so far, but:

    Every now and then the connection is lost. Not the connection between the computer and the D-Link (I can access the DD-WRT admin page normally) or the connection between the D-Link and the WLAN (in Status - Wireless it says it's still connected to the network), but when I want to access a web page (which only works if I'm connected to the wireless network from building A), Firefox keeps "Looking for..." (name resolution) without finding anything. When I reset the D-Link (power off, power on) in this situation, after a few moments everything works fine again (Internet access). I have no idea why this is happening, but usually it's at most every few weeks (most times when nobody was using the computer, so no traffic).

    Compared to the connection speed when I connect directly to the WLAN in building A (laptop), the speed in building B is rather slow, but I have the impression that this difference has gotten worse in the last few days. A few minutes ago I got 582 KB/s down and 911 KB/s up in building A (directly/laptop) and 84 KB/s down and 9 KB/s up in building B. The speed in building B used to be way higher (I remember 200 KB/s up) while the actual network speed in building A was lower than it is now (close to those 200). I'm aware that the wireless link between the buildings should slow things down, but I'm wondering why the difference has become that extreme. Thanks for any tips...

    Update: I currently want to upload a large file (1.5 GB) via FTP (FileZilla). Since that caused the D-Link to disconnect (as described above), I took my laptop to building A, connected directly to the original WLAN (bypassing my D-Link), and tried the same upload. Guess what - same issue: at some point the connection is dead (at which point I would have reset my D-Link if I were connected to it). Just like the D-Link, my laptop is still connected, but not even name resolution is working ("Looking for..." in Firefox). After reconnecting, it works again. Maybe my D-Link isn't the problem at all...

    Read the article

  • Configuration issue with HttpRealipModule (CloudFlare) in nginx configuration file

    - by Tyrx
    I've been attempting to use HttpRealipModule with the CloudFlare IP range in my main nginx configuration file but upon restarting nginx I'll just get a standard "configuration file /etc/nginx/nginx.conf test failed" and my site will go down. This is what I've been attempting to do with my nginx.conf:

        user www-data;
        worker_processes 1;

        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            # Basic Settings
            set_real_ip_from 204.93.240.0/24;
            set_real_ip_from 204.93.177.0/24;
            set_real_ip_from 199.27.128.0/21;
            set_real_ip_from 173.245.48.0/20;
            set_real_ip_from 103.22.200.0/22;
            set_real_ip_from 141.101.64.0/18;
            set_real_ip_from 108.162.192.0/18;
            set_real_ip_from 190.93.240.0/20;
            set_real_ip_from 188.114.96.0/20;
            set_real_ip_from 2400:cb00::/32;
            set_real_ip_from 2606:4700::/32;
            set_real_ip_from 2803:f800::/32;
            real_ip_header CF-Connecting-IP;

            client_max_body_size 50m;
            client_header_timeout 5;
            keepalive_timeout 5;
            port_in_redirect off;
            sendfile on;
            server_tokens off;
            server_name_in_redirect off;
            tcp_nopush on;
            tcp_nodelay on;
            types_hash_max_size 2048;

            # MIME
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            # Logging Settings
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log warn;

            # Gzip Settings
            gzip on;
            gzip_disable "msie6";
            gzip_min_length 1400;
            gzip_types text/plain text/css text/javascript text/xml application/x-javascript application/xml application/xml+rss;

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    What's wrong with that configuration file?
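
    Two quick hedged checks that usually narrow this down: look at the exact parse error rather than the generic "test failed" line, and confirm that the real_ip module was actually compiled into this nginx build (set_real_ip_from and real_ip_header are unknown directives without it):

        # Show the actual line nginx is complaining about
        sudo nginx -t

        # Check whether this build includes the realip module
        nginx -V 2>&1 | grep -o with-http_realip_module

    If the module is missing, the directives have to be removed or nginx upgraded/rebuilt with a package that includes --with-http_realip_module.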

    Read the article

  • Apache doesn't immediately notice a change in the document root

    - by Tom
    We use Capistrano for website deployments and our Apache document root is a symlink to a particular code release. The deployment procedure switches the symlink from the old release to the new release as the final step of the deployment. We are migrating our webservers from real servers running RHEL 5.6 to Amazon EC2 virtual machines running Ubuntu 11.10, and the new servers are suffering from a problem where Apache doesn't immediately notice the change to its document root when the symlink is switched. It can take a second or so (and I think I've even seen it take a couple of minutes). It's as if Apache has cached the physical path of the symlink for some time. Does anyone know of Apache settings I could look at to get it to "scan" for changes to its served files more quickly?

    Thoughts:

    - I read that the disks on virtual machines are much slower (since they are network-attached storage). Perhaps the filesystem cache somehow works differently too? If so, is there anything that can be done?
    - The website runs PHP code. Perhaps there are some PHP config differences between RHEL and Ubuntu? I checked realpath_cache_ttl but both servers have it commented out, e.g.:

            ; Duration of time, in seconds for which to cache realpath information for a given
            ; file or directory. For systems with rarely changing files, consider increasing this
            ; value.
            ; http://www.php.net/manual/en/ini.core.php#ini.realpath-cache-ttl
            ;realpath_cache_ttl = 120

    - We do use the APC opcode cache but don't think it's the issue, based on experimentation. The PHP code is in different file paths for each deployment and we ensure stat=1.

    Here is a similar question that is very interesting: 294107 - but it doesn't provide an answer for me. One solution would be to reload Apache every time we modify the document root symlink. I'll do this if we can't find another solution.
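
    One detail worth checking, offered as a hedged aside rather than a confirmed cause: how the symlink is replaced matters. If the old link is removed and recreated (or overwritten non-atomically), there is a window where the path is missing or half-updated, and any caching layer can keep serving the old resolution. A sketch of an atomic swap using GNU coreutils (the paths and release name are placeholders, not from the original post):

        # Create the new link under a temporary name, then rename it over the old one.
        # rename() is atomic, so the "current" path always points at one release or the other.
        ln -s /var/www/releases/20240101 /var/www/current.tmp
        mv -Tf /var/www/current.tmp /var/www/current

    Capistrano's own symlink step can usually be adjusted to behave this way.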

    Read the article

  • Unable to log in to Amazon EC2 compute server

    - by MasterGaurav
    I am unable to log in to the EC2 server. Here's the log of the connection attempt:

        $ ssh -v -i ec2-key-incoleg-x002.pem [email protected]
        OpenSSH_5.6p1, OpenSSL 0.9.8p 16 Nov 2010
        debug1: Reading configuration data /home/gvaish/.ssh/config
        debug1: Applying options for *
        debug1: Connecting to ec2-50-16-0-207.compute-1.amazonaws.com [50.16.0.207] port 22.
        debug1: Connection established.
        debug1: identity file ec2-key-incoleg-x002.pem type -1
        debug1: identity file ec2-key-incoleg-x002.pem-cert type -1
        debug1: identity file /home/gvaish/.ssh/id_rsa type -1
        debug1: identity file /home/gvaish/.ssh/id_rsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
        debug1: match: OpenSSH_5.3 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'ec2-50-16-0-207.compute-1.amazonaws.com' is known and matches the RSA host key.
        debug1: Found key in /home/gvaish/.ssh/known_hosts:8
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: ec2-key-incoleg-x002.pem
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: /home/gvaish/.ssh/id_rsa
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    What can be the possible reason? How do I fix the issue?
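
    Since the key is offered and still rejected, the usual suspects (shown here as a hedged checklist, with the key name from the post but the usernames as assumptions) are the login name and the key file itself:

        # The private key should not be group/world readable, or ssh may refuse to use it
        chmod 400 ec2-key-incoleg-x002.pem

        # The login user depends on the AMI: Amazon Linux uses ec2-user, Ubuntu uses
        # ubuntu, some others use root or admin. Try the one matching the AMI.
        ssh -i ec2-key-incoleg-x002.pem ec2-user@ec2-50-16-0-207.compute-1.amazonaws.com

    It is also worth confirming in the EC2 console that the instance was launched with the key pair that ec2-key-incoleg-x002.pem corresponds to.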

    Read the article

  • Rack layout for future growth

    - by bleything
    We're getting ready to move to a new colo facility and I'm designing the rack layout. While we have a full rack, we only have 12U worth of hardware right now:

    - 1x 1U switch
    - 7x 1U servers
    - 1x 2U server
    - 1x 2U disk shelf

    The colo facility requires us to front-mount the switch and use a 1U brush strip, so we'll be using a total of 13U of space. Regarding growth, I'm reasonably sure we'll be adding another 4U in servers, 1-2U of network gear, and 2-4U of storage in the mid-term. Specific questions I'm hoping to get help with:

    - Where should I mount the switch? The LEDs are on top...
    - Should I group the servers by function with space for adding new machines? As an alternative, should I group servers based on whether they are production or staging?
    - Where in the rack should I start? In the middle? At the top? At the bottom? Equally spaced?

    Here's a silly little ASCII diagram of what I'm thinking right now. Please feel free to tear my design apart, I've really no idea what I'm doing :) Any advice is very welcome.

    Edit: to be clear, the colo is providing redundant power with UPS and generator, so that's why there's no power gear in the plan, except for the 0U PDU that I didn't diagram.

        42 | -- switch ----------------------
        41 | -- brush strip -----------------
        40 | ~~ reserved for second switch ~~
        39 | ~~ reserved for firewall ~~~~~~~
        38 |
        37 | -- admin01 ---------------------
        36 |
        35 | -- vm01 ------------------------
        34 | -- vm02 ------------------------
        33 | ~~ reserved for vm03 ~~~~~~~~~~~
        32 | ~~ reserved for vm04 ~~~~~~~~~~~
        31 | ~~ reserved for vm05 ~~~~~~~~~~~
        30 |
        29 | -- web01 -----------------------
        28 | -- web02 -----------------------
        27 | ~~ reserved for web03 ~~~~~~~~~~
        26 | ~~ reserved for web04 ~~~~~~~~~~
        25 |
        24 |
        23 |
        22 |
        21 |
        20 |
        19 |
        18 |
        17 |
        16 | -- db01 ------------------------
        15 | +- disks ----------------------+
        14 | +------------------------------+
        13 | ~~ reserved for more ~~~~~~~~~~~
        12 | ~~ db01 disks ~~~~~~~~~~~~~~~~~~
        11 |
        10 | +- db02 -----------------------+
         9 | +------------------------------+
         8 | ~~ reserved for db02 ~~~~~~~~~~~
         7 | ~~ disks ~~~~~~~~~~~~~~~~~~~~~~~
         6 | ~~ reserved for more ~~~~~~~~~~~
         5 | ~~ db02 disks ~~~~~~~~~~~~~~~~~~
         4 |
         3 |
         2 |
         1 |

    Read the article

  • How can I prevent OpenVPN from clobbering local route?

    - by ataylor
    I have a local network on 192.168.1.0 with netmask 255.255.255.0. When I connect to a VPN through OpenVPN (as a client), it pushes a route for 192.168.1.0 that clobbers the existing one, making my local network inaccessible. I don't need to access anything on 192.168.1.0 on the remote side; I'd like to just ignore that route while accepting the other routes that are pushed. My client is Ubuntu 10.10. How can I skip the one offending route?
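
    For reference, a hedged sketch of the client-side option OpenVPN provides for this (the file path and example networks are placeholders). route-nopull ignores all pushed routes, so the remote networks that are still wanted have to be re-added explicitly:

        # /etc/openvpn/client.conf (path is an example)
        # Ignore every route pushed by the server...
        route-nopull
        # ...then add back only the remote networks actually wanted via the VPN
        route 10.8.0.0 255.255.255.0
        route 172.16.0.0 255.255.0.0

    Newer OpenVPN releases (2.4+) also have pull-filter to drop a single pushed option, but that is not available in the 2.1-era client that Ubuntu 10.10 ships.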

    Read the article

  • USB Thumb Drive not recognized in Hyper-V Manager

    - by Vazgen
    I have the free, standalone Hyper-V Server 2012 (Core) running on my physical machine. I set up remote management from my Windows 8 client. When I proceed to create a virtual machine, I would like to install the OS from a USB thumb drive, but it is not recognized in Hyper-V Manager on my client (when the USB drive is plugged into the physical server), nor is it recognized in Server Manager under File and Storage Services > Volumes. Is there a role needed to recognize external USB flash drives? I think this standalone version is just the core Hyper-V role and that's it... but this is such basic functionality. Can anybody comment?

    Read the article

  • postfix specify limited relay domain while allowing sasl-auth relay

    - by tylerl
    I'm trying to set up Postfix to allow relaying under a limited set of conditions:

    - the destination domain is on a pre-defined list, -or-
    - the client successfully logs in.

    Here's the relevant bits o' config:

        smtpd_sasl_auth_enable=yes
        relay_domains=example.com
        smtpd_recipient_restrictions=permit_auth_destination,reject_unauth_destination
        smtpd_client_restrictions=permit_sasl_authenticated,reject

    The problem is that it requires that BOTH restrictions be satisfied, rather than either-or. Which is to say, it only allows relaying if the client is authenticated AND the recipient domain is @example.com. Instead, I need it to allow relaying if either one of the requirements is satisfied. How do I do this without resorting to running SMTP on two separate ports with different rules?

    Note: the context is an outbound-use-only (bound to 127.0.0.1) MTA on a shared web server, where all site owners are allowed to relay mail to one of the "owned" domains (not server-local, though), and a limited set of "trusted" site owners are allowed to relay mail without restriction provided they have a valid SMTP login.
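
    As background on why the either-or behaviour belongs in a single list: within one Postfix restriction list the entries are evaluated in order and the first permit or reject wins, while separate lists (client vs recipient) are effectively ANDed, because a reject in any of them rejects the mail. A hedged sketch of expressing "authenticated OR allowed destination" in main.cf alone (same parameters as above, just rearranged):

        smtpd_recipient_restrictions =
            permit_sasl_authenticated,
            permit_auth_destination,
            reject_unauth_destination
        # smtpd_client_restrictions is left empty so it no longer rejects
        # unauthenticated clients before the recipient checks run

    This is standard Postfix restriction-list semantics rather than something specific to the original setup, so the exact parameter values are of course the poster's to adjust.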

    Read the article

  • Necessity of ModSecurity if Apache is behind Nginx

    - by Saif Bechan
    I have Apache installed behind Nginx, so every request that comes in is first handled by Nginx. If dynamic content is needed, the request is sent to Apache, which listens on port 8080 - a pretty basic reverse proxy setup. With this setup the first entry point is Nginx. Is it still necessary to install ModSecurity to protect Apache against unwanted requests, or should I just focus on protecting Nginx as the first entry point? All suggestions are welcome.

    Read the article

  • NFSv4 "Too many levels of symbolic links" error

    - by user1434058
    Both machines are running Ubuntu 12.04.

    On the remote NFSv4 client:

        $ ls /mnt/storage/aaaaaaa_aaa/bbbb/cccc_ccccc

    gives this error:

        ls: reading directory .: Too many levels of symbolic links

    How can I fix this? When the error occurs, ls starts listing the files anyway, but PHP breaks.

    On the NFSv4 server, in /etc/fstab:

        /mnt/storage /srv/storage none bind 0 0

    In /etc/exports:

        /srv         192.168.1.0/24(rw,async,insecure,no_subtree_check,crossmnt,fsid=0,no_root_squash)
        /srv/storage 192.168.1.0/24(rw,async,nohide,insecure,no_subtree_check,no_root_squash)

    The error in full:

        root@ds:/mnt/storage/foreign_dbs/imdb/imdb_htmls# ls -l | head
        ls: reading directory .: Too many levels of symbolic links
        total 10302840
        -rw-r--r-- 1 root root 10484 Jul 5 13:56 0019038.gz
        -rw-r--r-- 1 root root 16264 Mar 30 00:31 0259701.gz
        -rw-r--r-- 1 root root 13784 Mar 30 14:20 1000000.gz
        -rw-r--r-- 1 root root 12741 Mar 30 13:04 1000003.gz
        -rw-r--r-- 1 root root 12794 Mar 30 12:40 1000004.gz
        -rw-r--r-- 1 root root 13123 Mar 30 12:07 1000005.gz
        -rw-r--r-- 1 root root 13183 Mar 30 12:04 1000006.gz
        -rw-r--r-- 1 root root 13443 Jul 4 01:16 1000007.gz
        -rw-r--r-- 1 root root 12968 Mar 30 11:05 1000008.gz

    I came across it in PHP: scandir would return 1612577.gz and 1612579.gz but skip 1612578.gz, even though the file types and properties are identical on them. This only happens on the NFS client; it works 100% on the server.

    Read the article

  • how to make bridge networking with KVM work in Fedora 19

    - by netllama
    I'm attempting to get several virtual machines set up on a Fedora 19 host system, with the traditional bridge network devices (br0, br1, etc). I've done this many times before with older versions of Fedora (16, 14, etc), and it just works. However, for reasons that I cannot figure out, the bridge doesn't seem to be working in Fedora 19. While I can successfully connect to the outside world (local network + internet) from inside a VM, nothing can communicate with the VM from outside (local network). I'm referring to something as trivial as pinging. From inside the VM, I can ping anything successfully (0% packet loss). However, from outside the VM (on the host, or any other system on the same network), I see 100% packet loss when pinging the IP address of the VM.

    My first question is simply: does anyone else have this working successfully in F19? And if so, what steps did you need to follow? I'm not using NetworkManager at all, it's all the network service. There are no firewalls involved anywhere (iptables & firewall services are currently disabled). Here's the current host configuration:

        # brctl show
        bridge name     bridge id               STP enabled     interfaces
        br0             8000.38eaa792efe5       no              em2
                                                                vnet1
        br1             8000.38eaa792efe6       no              em3
        br2             8000.38eaa792efe7       no              em4
                                                                vnet0
        virbr0          8000.525400db3ebf       yes             virbr0-nic

        # more /etc/sysconfig/network-scripts/ifcfg-em2
        TYPE=Ethernet
        BRIDGE="br0"
        NAME=em2
        DEVICE="em2"
        UUID=aeaa839e-c89c-4d6e-9daa-79b6a1b919bd
        ONBOOT=yes
        HWADDR=38:EA:A7:92:EF:E5
        NM_CONTROLLED="no"

        # more /etc/sysconfig/network-scripts/ifcfg-br0
        TYPE=Bridge
        NM_CONTROLLED="no"
        BOOTPROTO=dhcp
        NAME=br0
        DEVICE="br0"
        ONBOOT=yes

        # ifconfig em2; ifconfig br0
        em2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
                inet6 fe80::3aea:a7ff:fe92:efe5  prefixlen 64  scopeid 0x20<link>
                ether 38:ea:a7:92:ef:e5  txqueuelen 1000  (Ethernet)
                RX packets 100093  bytes 52354831 (49.9 MiB)
                RX errors 0  dropped 0  overruns 0  frame 0
                TX packets 25321  bytes 15791341 (15.0 MiB)
                TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
                device memory 0xf7d00000-f7e00000
        br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
                inet 10.31.99.226  netmask 255.255.252.0  broadcast 10.31.99.255
                inet6 fe80::3aea:a7ff:fe92:efe5  prefixlen 64  scopeid 0x20<link>
                ether 38:ea:a7:92:ef:e5  txqueuelen 0  (Ethernet)
                RX packets 19619  bytes 1963328 (1.8 MiB)
                RX errors 0  dropped 0  overruns 0  frame 0
                TX packets 11  bytes 1074 (1.0 KiB)
                TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    Relevant section from /etc/libvirt/qemu/foo.xml (one of the VMs with this problem):

        <interface type='bridge'>
          <mac address='52:54:00:26:22:9d'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </interface>

        # ps -ef | grep qemu
        qemu  1491  1  82  13:25  ?  00:42:09  /usr/bin/qemu-system-x86_64 -machine accel=kvm -name cuda-linux64-build5 -S -machine pc-0.13,accel=kvm,usb=off -cpu SandyBridge,+pdpe1gb,+osxsave,+dca,+pcid,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 16384 -smp 6,sockets=6,cores=1,threads=1 -uuid 6e930234-bdfd-044d-2787-22d4bbbe30b1 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/cuda-linux64-build5.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/libvirt/images/cuda-linux64-build5.img,if=none,id=drive-virtio-disk0,format=raw,cache=writeback -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=25,id=hostnet0,vhost=on,vhostfd=26 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:26:22:9d,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:1 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

    I can provide additional information, if requested. Thanks!
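
    One hedged thing to check on newer Fedora hosts (not confirmed as the cause here): if br_netfilter behaviour is enabled, bridged frames are passed through iptables, where forwarding rules from libvirt or leftover policies can drop them. The sysctl below shows whether that is in play:

        # 1 means bridged traffic is run through iptables
        cat /proc/sys/net/bridge/bridge-nf-call-iptables

        # Either turn it off for pure layer-2 bridging...
        sysctl -w net.bridge.bridge-nf-call-iptables=0

        # ...or explicitly allow bridged traffic
        iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT

    It is also worth confirming from another host that the VM's MAC is being learned on the LAN (arp -n after a ping attempt), since 100% inbound loss with working outbound traffic often comes down to filtering rather than the bridge itself.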

    Read the article

  • Tomcat service start issue

    - by Srinivasan
    Hi, I have installed Tomcat as a service successfully. When I start it from Control Panel > Administrative Tools > Services, it throws an error like this:

        The Apache Tomcat 4.1 service on Local Computer started and then stopped. Some services stop automatically if they have no work to do, for example, the Performance Logs and Alerts service.

    What is the error?

    Read the article
