Search Results

Search found 20087 results on 804 pages for 'css3 multiple backgrounds'.


  • Name resolution not working with IPv6 on CentOS

    - by jolivier
    I just installed CentOS 6.3 on a server destined for a data center, but cannot get name resolution / curl to work. I know this is because it tries to use IPv6: ping google.com works, and curl -4 google.com works, but curl google.com does not. I removed the IPv6 address from the interface, and it does not change anything. This is very problematic, since most system tools like yum currently fail at name resolution. Browsers like Firefox work, presumably because they use a different resolver than the one used by curl.

    I managed to fix this on workstations by completely disabling IPv6 (following tutorials like this one) and/or hardcoding name resolution in /etc/hosts. But since I am configuring a server that will later be installed in a remote data center, I would rather not improvise; I want to understand what is going on and fix it properly. Besides, I will face the same issue with more servers to come, so I would really appreciate your help in understanding this problem and how to solve it. I would be happy to provide more information if needed.

    The current network is a small enterprise network, with a DNS server (let's call it A) configured once, a long time ago. dig google.com and dig -4 google.com are both refused by A. But the same is true from my workstation, on which curl is working (and yes, both machines use the same DNS server A). Indeed, both this faulty server and my workstation have multiple nameservers in /etc/resolv.conf, and the second one works fine for both of them, so if I remove A from my resolv.conf, everything works fine!

    Regards, Olivier
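    One workaround often suggested for this class of problem - a DNS server that mishandles the glibc resolver's parallel A/AAAA lookups - is the resolver's single-request option. A minimal sketch, with placeholder addresses (the simplest fix here may just be dropping server A from resolv.conf, as observed above):

        # /etc/resolv.conf - hypothetical example, addresses are placeholders
        options single-request   # send A and AAAA queries sequentially, not in parallel
        nameserver 192.0.2.53    # known-good resolver first
        nameserver 192.0.2.54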

    Read the article

  • Websites down, EC2 inaccessible via SSH, CPU utilisation 100% for the last few hours - what should I do?

    - by fuzzybee
    I have multiple websites hosted on a single EC2 instance. One website, "abc", was down for a few hours: it sometimes threw database connection errors and sometimes just took too long to respond. Another website, "def", was incredibly slow but still up and running. The rest of the websites had the same symptoms as "abc". I can afford 15 minutes or less of downtime for "def". Should I then (in the AWS console):

    - reboot my instance, or
    - create an AMI image from my instance, launch it, and associate my Elastic IP with the new instance, or
    - "launch more like this"?

    Background on what may have happened to my EC2 instance: the last time I made changes was 21 hours ago. A cron job that creates snapshots ran around 19 hours ago, and it had been running for a long time. Google Analytics shows that traffic to my websites, such as kidlander.sg, has been nothing exceptional.

    Are there any other actions I should take, or better options I could have used? (I have already contacted AWS support, but their turnaround is 12 hours, so I appreciate all the help I can get.)

    Update: I got everything back up and running, and CPU utilisation is back to normal, around 30%. There is one difference between "def" and "abc" (as well as my other websites): "def"'s database is hosted on RDS, while "abc"'s database is hosted on an EC2 instance (separate from my web server instance) configured by myself. Nevertheless, I checked the EC2 instance I use as the MySQL server, and it was absolutely fine during the incident: low CPU utilisation, and I could log in on the Linux command line.
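    If the AMI route is chosen, a hedged sketch with the modern AWS CLI (all IDs and names here are placeholders; associate-address as shown assumes a VPC Elastic IP, while EC2-Classic takes --public-ip instead):

        # Create an AMI from the running instance; omit --no-reboot if a
        # consistent filesystem matters more than avoiding a restart.
        aws ec2 create-image --instance-id i-0123456789abcdef0 \
            --name "emergency-clone" --no-reboot

        # After launching a new instance from that AMI, move the Elastic IP:
        aws ec2 associate-address --allocation-id eipalloc-0123456789abcdef0 \
            --instance-id i-0fedcba9876543210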

    Read the article

  • Returning a 404 page when a folder is accessed from one domain, but allowing access from other domains and IP addresses

    - by okw
    Situation: I want to return a 404 page ("404.php") when a folder ("hidden") is accessed from the example.com domain. I want the same folder to be accessible from a subdomain ("hidden.example.com") or from a different domain ("hidden.com"), which are both configured in a single VirtualHost entry. The server has multiple IP addresses that it listens on. Each IP address serves identical content for the example.com domain (sharing a VirtualHost entry). I want the folder to be accessible via the IP addresses. The server is configured to use SSL/TLS/HTTPS. HTTPS is optional on example.com, but HTTPS is enforced for the hidden folder in its .htaccess file, using the rewrite rule shown below.

    /www/hidden/.htaccess:

        RewriteCond %{HTTPS} !=on
        RewriteRule .* https://%{SERVER_NAME}%{REQUEST_URI} [R,L]

    I know that %{SERVER_ADDR} gives the server's IP address, but does it return the one that the client is requesting from? I'm also starting to think that something in the VirtualHosts file would be more appropriate. Any thoughts on this?

    What should be allowed:

        http://87.65.43.21/hidden/
        https://87.65.43.21/hidden/
        http://12.34.56.78/hidden/
        https://12.34.56.78/hidden/
        http://hidden.example.com/
        https://hidden.example.com/
        http://hidden.com/
        https://hidden.com/
        http://www.hidden.com/
        https://www.hidden.com/

    What should be 404-ed with 404.php:

        http://example.com/hidden/
        https://example.com/hidden/
        http://www.example.com/hidden/
        https://www.example.com/hidden/
        http://example.com/hidden/hiddenfile.php
        https://example.com/hidden/hiddenfile.php
        etc.

    Thanks.
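    One approach worth sketching (untested, assuming Apache 2.2 or later with mod_rewrite available in the folder's .htaccess): key the rule on the requested hostname rather than the server address, so requests via an IP address, hidden.example.com, or hidden.com fall through untouched:

        RewriteEngine On
        # Return 404 only when /hidden/ is reached via (www.)example.com
        RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$ [NC]
        RewriteRule .* - [R=404,L]
        # Pair with "ErrorDocument 404 /404.php" so the custom page is served.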

    Read the article

  • Varnish configuration, NameVirtualHosts, and IP forwarding

    - by Brent
    I currently have a bunch of NameVirtualHost-based websites, load balanced between 3 Apache 2 servers using ldirectord. I would like to insert Varnish as a reverse web proxy between ldirectord and Apache in the following way:

    - a request comes in to ldirectord;
    - it is then load balanced between the 3 Apache servers and Varnish, with a weight of 1 for the webservers and 99 for Varnish (so if Varnish is rebooted, the webservers take over seamlessly);
    - Varnish then load balances its requests between my Apache servers.

    However, the Varnish part is not working. I wonder whether this has to do with the fact that my Apache servers use x.x.x.x:80 for their NameVirtualHosts instead of *:80? (They have to, since each server hosts multiple IP addresses.) Or perhaps it has to do with IP forwarding needing to be set up on the Varnish server? (I did echo 1 > /proc/sys/net/ipv4/ip_forward on this server; is that sufficient?)

    How can I debug this problem? ldirectord doesn't log what it does with each request (and if it did, I would be overwhelmed with information, since I'm serving hundreds of requests per second). The Varnish log shows the ldirectord server connecting to it every 5 seconds, but nothing else. I have set up a test site using this configuration, but it fails - no Apache access logs, no applicable Varnish logs.
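    For the last leg, a minimal sketch of Varnish 2.x-era VCL with a round-robin director (backend addresses are placeholders, not from the original post). Also worth checking: whatever URL ldirectord probes for health checks must get an answer from Varnish, or ldirectord will keep the weight-99 target out of rotation - which would match a log showing only a connection every 5 seconds:

        backend web1 { .host = "192.0.2.11"; .port = "80"; }
        backend web2 { .host = "192.0.2.12"; .port = "80"; }
        backend web3 { .host = "192.0.2.13"; .port = "80"; }

        director apaches round-robin {
            { .backend = web1; }
            { .backend = web2; }
            { .backend = web3; }
        }

        sub vcl_recv {
            set req.backend = apaches;
        }

    For the proxy role itself, IP forwarding is not required - Varnish accepts and opens TCP connections in userspace - though LVS direct-routing setups do have their own requirements on realservers.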

    Read the article

  • When pointing to new DNS servers is there any chance of E-mails being lost if the old E-mail hosting service is still up?

    - by LaserBeak
    I am changing webhosts and will be using the new host's mail servers instead of the old ones. I have created all the correctly named mailboxes on the new service, but have not yet cut ties with the old webhost. My expectation: even if the new DNS values - which point to the new host's DNS servers and their SOA/zone file with the new MX values - have not yet propagated, and an e-mail is directed at the old host's mail servers as per the MX records the old hosting provider still holds, the e-mail would still arrive in the mailbox on the old provider's mail servers. So I am just trying to confirm I have this right: is it essentially impossible for me to lose an e-mail, since it will hit either the old host's mail servers or the new ones?

    Also, is it possible to configure the same e-mail account to check and collect mail from different mail servers by entering multiple POP3 addresses?

    And if I choose to keep the old webhost's mail hosting as a backup, by specifying its MX records with a lower priority in the zone hosted by the new webhost, is it possible to have incoming e-mails delivered to both servers by the mail daemon, so I have two copies? Or is my only option having the primary mail server somehow forward the e-mail to the old mail server?
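    For reference, "lower priority" means a higher MX preference number. A hypothetical zone-file fragment (hostnames are placeholders) with the new host as primary and the old host as backup would look like:

        example.com.    IN  MX  10 mail.newhost.example.
        example.com.    IN  MX  20 mail.oldhost.example.

    Note that a backup MX only receives mail when the primary is unreachable; standard SMTP delivery never sends a copy to both, so two copies would indeed require forwarding or aliasing on the primary.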

    Read the article

  • Macbook Pro won't boot from DVD with SSD

    - by Adam Carr
    Here's the timeline of events. I had a running MBP 17" Early 2011 (Thunderbolt) with an OWC Mercury Extreme Pro SSD 115GB drive. I installed Windows 7 via Boot Camp. I have done this multiple times before, and every time I need to format the Boot Camp partition before installing. This time I think I actually deleted the partition and then selected the free space to install into. This worked fine for the most part, but I wasn't able to boot the Boot Camp partition using VMware Fusion. I gave up and used the Boot Camp Assistant to revert back to one Mac partition.

    I was getting some odd behavior, so I rebooted the machine. It then came up with a message saying "no bootable partition". This made me think (and still does) that installing Windows into the free space, rather than into the Boot Camp partition, caused the Windows MBR boot loader to be installed incorrectly and mucked up the OS X installation.

    OK, fine, I can just reinstall. But I can't seem to boot from the original MBP installation DVD. I hold down C on boot, but I never get past the all-grey screen. I hear the DVD drive spin up, but it eventually stops. When I put the original HD back in, everything works fine; but with the SSD in, I can't boot from the DVD drive.

    I have already set up an RMA with OWC to send back the drive, but considering the order of events, I feel this isn't a hardware issue - I just can't figure out how to fix it. I can always send the drive back, but I figured I would check and see if anyone could offer some guidance/assistance before doing so.

    Read the article

  • How do I get the F1-F12 keys to switch screens in GNU screen in Cygwin when connecting via SSH?

    - by Mikey
    I'm connecting to a desktop running Cygwin via SSH from the terminal app in Mac OS X. I have already started screen on the Cygwin side and can connect to it over the SSH session. Furthermore, I have the following in the .screenrc file:

        bindkey -k k1 select 1  # F1 = screen 1
        bindkey -k k2 select 2  # F2 = screen 2
        bindkey -k k3 select 3  # F3 = screen 3
        bindkey -k k4 select 4  # F4 = screen 4
        bindkey -k k5 select 5  # F5 = screen 5
        bindkey -k k6 select 6  # F6 = screen 6
        bindkey -k k7 select 7  # F7 = screen 7
        bindkey -k k8 select 8  # F8 = screen 8
        bindkey -k k9 select 9  # F9 = screen 9
        bindkey -k F1 prev      # F11 = prev
        bindkey -k F2 next      # F12 = next

    However, when I start multiple windows in screen and attempt to switch between them via the function keys, all I get is a beep. I have tried various settings for $TERM (e.g. ansi, cygwin, xterm-color, vt100) and they don't really seem to affect anything. I have verified that the terminal app is in fact sending the escape sequence for the function key that I'm expecting, and that my bash shell (running inside screen) is receiving it. For example, for F1 it sends the following (hexdump is a Perl script I wrote that takes STDIN in binmode and outputs it as a hexadecimal/ASCII dump):

        % hexdump
        [press F1 and then hit ^D to terminate input]
        00000000: 1b4f50                                    .OP

    If things were working correctly, I don't think bash should receive the escape sequence, because screen should have caught it and turned it into a command. How do I get the function keys to work?
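    One workaround to sketch (not from the original post): bind the raw escape sequences directly, instead of the termcap k1..k9 names, which only match when screen's notion of the terminal agrees with what the terminal actually sends. The F1 sequence below is the ESC O P captured in the hexdump; the rest assume the usual xterm-style sequences:

        # .screenrc - "^[" is a literal ESC (0x1b)
        bindkey "^[OP" select 1   # F1, as captured above
        bindkey "^[OQ" select 2   # F2
        bindkey "^[OR" select 3   # F3
        bindkey "^[OS" select 4   # F4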

    Read the article

  • How to install DBD::mysql on OS X Server 10.6?

    - by Zoran Simic
    I'm trying to install DBD::mysql on OS X Server 10.6 (Mac mini server), but I'm apparently missing the MySQL headers. Since MySQL is already part of OS X Server 10.6, I would like to NOT install anything else (no Fink or DarwinPorts installs), just whatever's needed to get DBD::mysql installed and working. Do you know how I could do that? Do I have to install the headers somewhere, and if so, where? (Again: I don't want to install another version of MySQL on the box; I want to use the version it came with.) Is there a way to install DBD::mysql without compiling any C files?

    This is the error I get (the actual output is much longer, but these are the most meaningful bits; this is the first error reported):

        Checking if your kit is complete...
        Looks good
        Unrecognized argument in LIBS ignored: '-pipe'
        Note (probably harmless): No library found for -lmysqlclient
        Multiple copies of Driver.xst found in:
          /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/
          /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level/auto/DBI/
        at Makefile.PL line 907
        Using DBI 1.611 (for perl 5.010000 on darwin-thread-multi-2level) installed in /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/
        Writing Makefile for DBD::mysql
        cp lib/DBD/mysql.pm blib/lib/DBD/mysql.pm
        cp lib/DBD/mysql/GetInfo.pm blib/lib/DBD/mysql/GetInfo.pm
        cp lib/DBD/mysql/INSTALL.pod blib/lib/DBD/mysql/INSTALL.pod
        cp lib/Bundle/DBD/mysql.pm blib/lib/Bundle/DBD/mysql.pm
        gcc-4.2 -c -I/Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI -I/usr/include -fno-omit-frame-pointer -pipe -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDBD_MYSQL_INSERT_ID_IS_GOOD -g -arch x86_64 -arch i386 -arch ppc -g -pipe -fno-common -DPERL_DARWIN -fno-strict-aliasing -I/usr/local/include -Os -DVERSION=\"4.014\" -DXS_VERSION=\"4.014\" "-I/System/Library/Perl/5.10.0/darwin-thread-multi-2level/CORE" dbdimp.c
        In file included from dbdimp.c:20:
        dbdimp.h:22:49: error: mysql.h: No such file or directory
        dbdimp.h:23:45: error: mysqld_error.h: No such file or directory
        dbdimp.h:25:49: error: errmsg.h: No such file or directory
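    DBD::mysql's Makefile.PL accepts explicit --cflags and --libs overrides, so if a matching set of MySQL headers can be found (the paths below are placeholders - the bundled MySQL does not ship them, so they would have to come from a matching source tarball or developer package), a sketch would be:

        perl Makefile.PL \
            --cflags="-I/path/to/matching/mysql/include" \
            --libs="-L/usr/lib/mysql -lmysqlclient -lz -lm"
        make && make test && make install

    There is no way around compiling the C parts of DBD::mysql itself, though: the XS glue in dbdimp.c is exactly what needs mysql.h.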

    Read the article

  • IIS 7.5 error 500 in FastCGI module after upgrading WordPress to 3.0.2

    - by Maniac13
    I am running multiple WordPress blogs on the following setup: Server 2008 R2; IIS 7.5; PHP 5.3.3; MySQL 5.0.7. I upgraded my WordPress install from 2.9.2 to 3.0.2 (on 2 different sites) today, and the upgrade went fine. I can serve .php pages without errors, log into the admin system, etc. I can browse my blog by going directly to mywebsite.com/index.php, but when I go to mywebsite.com (without the index.php) I get the 500 error below. I reset IIS, and removed and re-added the default document, but I am running out of ideas. If anyone has a solution for this, that would be great.

    This is the 500 error I am getting:

        HTTP Error 500.0 - Internal Server Error
        The page cannot be displayed because an internal server error has occurred.

        Module         FastCgiModule
        Notification   ExecuteRequestHandler
        Handler        PHP FastCGI
        Error Code     0x00000000
        Requested URL  http://mywebsite.com:80/index.php
        Physical Path  D:\mywebsite.com\index.php
        Logon Method   Anonymous
        Logon User     Anonymous

    Thanks, Stephan
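    A hedged diagnostic sketch (standard IIS appcmd.exe usage; the site name is a placeholder): confirm index.php is actually registered as a default document at the site level, since a site-level or web.config override can mask the server-level setting:

        %windir%\system32\inetsrv\appcmd.exe list config "mywebsite.com" ^
            /section:system.webServer/defaultDocument

        rem Re-add index.php as a default document for the site if it is missing:
        %windir%\system32\inetsrv\appcmd.exe set config "mywebsite.com" ^
            /section:system.webServer/defaultDocument /+"files.[value='index.php']"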

    Read the article

  • Backup with Mercurial and Robocopy?

    - by Andrew Neely
    The problem: we would like to back up our critical files from several network shares to a removable hard drive. We want to automate the backup so we don't have to remember to run it, and it needs to finish overnight. Furthermore, we want to be able to preserve multiple versions of each file, so we can back out of our users' mistakes more easily.

    Background information: I work in a large Windows-based enterprise with a centralized IT section responsible for all backups. Their backups are geared towards disaster recovery, not user error, and require upper-level management approval for any non-disaster recoveries. Several times we have noticed that our backups failed, and we weren't notified. I do not have administrative rights to the server or to my desktop. We are trying to back up some 198,000 files spanning about 240 gigabytes. These files rarely change. Our backup drive is one terabyte.

    My proposed solution: I would like to write a batch file using Robocopy with the /MIR option, along with Mercurial SCM, to store all versions of the files. I would do an hg add followed by an hg commit before each execution of Robocopy to save the current state, and then make a mirrored copy of the file structures. The problem is that /MIR will delete every folder not present in the source, and Mercurial stores the repository in a .hg folder in the destination folder. Does anybody know how I could either convince Mercurial to store the .hg folder elsewhere, or convince Robocopy not to delete it from the destination? I'm trying to avoid writing a custom program to do the copying.
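    Robocopy can be told to leave the repository alone: directories excluded with /XD are excluded from mirroring entirely, so /MIR will neither copy nor delete them. A sketch of the batch file under that assumption (paths, retry counts, and the commit message are placeholders):

        @echo off
        rem Snapshot the current state into Mercurial, then mirror everything else.
        cd /d E:\backup
        hg add
        hg commit -m "nightly backup %date%"
        rem /XD .hg keeps the repository out of the mirror (not copied, not deleted)
        robocopy \\server\share E:\backup /MIR /XD .hg /R:2 /W:5 /LOG:backup.log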

    Read the article

  • Debian network bridge configuration - /etc/network/interfaces

    - by Mathias
    I'm running a Lenny Xen dom0 hosting multiple virtual machines in a routed IP setup. To get an additional private subnet, I created the bridge xenbr0 in the dom0 with the following commands:

        brctl addbr xenbr0
        ifconfig xenbr0 10.0.0.1 netmask 255.255.255.0
        ifconfig xenbr0 up

    This works as expected, and domU interfaces are added to the bridge by Xen on VM start. My only problem is: how the heck do I specify this configuration in /etc/network/interfaces so that it is permanent and the bridge is available after a reboot? I tried the following config, as found in a lot of tutorials:

        auto xenbr0
        iface xenbr0 inet static
            address 10.0.0.1
            netmask 255.255.255.0
            network 10.0.0.0
            broadcast 10.0.0.255
            bridge_stp no

    I get 2 different errors, depending on whether the bridge already exists or not. If it doesn't exist:

        root@dom0:~# brctl show
        bridge name     bridge id               STP enabled     interfaces
        root@dom0:~# /etc/init.d/networking restart
        Reconfiguring network interfaces...if-up.d/mountnfs[eth0]: waiting for interface xenbr0 before doing NFS mounts (warning).
        SIOCSIFADDR: No such device
        xenbr0: ERROR while getting interface flags: No such device
        SIOCSIFNETMASK: No such device
        SIOCSIFBRDADDR: No such device
        xenbr0: ERROR while getting interface flags: No such device
        xenbr0: ERROR while getting interface flags: No such device
        Failed to bring up xenbr0.
        done.

    And if it exists:

        root@dom0:~# brctl show
        bridge name     bridge id               STP enabled     interfaces
        xenbr0          8000.000000000000       no
        root@dom0:~# /etc/init.d/networking restart
        Reconfiguring network interfaces...if-up.d/mountnfs[eth0]: waiting for interface xenbr0 before doing NFS mounts (warning).
        RTNETLINK answers: File exists
        Failed to bring up xenbr0.
        done.

    Could anyone point me in the right direction, please? The bridge works fine when created manually; I just need the right config file entries. Most tutorials I found add some devices to the bridge in the config - is that maybe why it is not working? I don't have any interfaces I want to add to the bridge on creation, as they get added later on VM start...

    Thanks, Mathias
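    For ifupdown to create the bridge itself, the bridge-utils package must be installed and the stanza needs a bridge_ports line; with no interfaces to enslave at boot, bridge_ports none is the documented way to get an empty bridge. A sketch with the same addressing as above:

        auto xenbr0
        iface xenbr0 inet static
            address 10.0.0.1
            netmask 255.255.255.0
            bridge_ports none   # create the bridge empty; Xen adds vifs later
            bridge_stp off
            bridge_fd 0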

    Read the article

  • APC not caching many files

    - by tetranz
    Hello. I have a Drupal site running on a VPS at Linode with PHP 5.2.10 and APC 3.1.6. It never caches more than about 25 files and barely uses any of its available memory, even though Drupal has hundreds of PHP files. I have another server where APC seems to work well and does indeed cache hundreds of files. The only difference with that site is that it runs Ubuntu 10.04 and PHP 5.3.2; the config settings are the same. What could be wrong? I'll paste the output of apc.php below. This is after hitting multiple parts of Drupal. Thanks.

        APC Version           3.1.6
        PHP Version           5.2.10-2ubuntu6.5
        APC Host              xxx.example.com
        Server Software       Apache/2.2.12 (Ubuntu)
        Shared Memory         1 Segment(s) with 32.0 MBytes (mmap memory, pthread mutex locking)
        Start Time            2010/12/02 11:32:17
        Uptime                3 minutes
        File Upload Support   1

        File Cache Information
        Cached Files                  21 (1.4 MBytes)
        Hits                          169
        Misses                        21
        Request Rate (hits, misses)   1.00 cache requests/second
        Hit Rate                      0.89 cache requests/second
        Miss Rate                     0.11 cache requests/second
        Insert Rate                   0.17 cache requests/second
        Cache full count              0

        User Cache Information
        Cached Variables              0 (0.0 Bytes)
        Hits                          0
        Misses                        0
        Request Rate (hits, misses)   0.00 cache requests/second
        Hit Rate                      0.00 cache requests/second
        Miss Rate                     0.00 cache requests/second
        Insert Rate                   0.00 cache requests/second
        Cache full count              0

        Runtime Settings
        apc.cache_by_default        1
        apc.canonicalize            1
        apc.coredump_unmap          0
        apc.enable_cli              0
        apc.enabled                 1
        apc.file_md5                0
        apc.file_update_protection  2
        apc.filters
        apc.gc_ttl                  3600
        apc.include_once_override   0
        apc.lazy_classes            0
        apc.lazy_functions          0
        apc.max_file_size           1M
        apc.mmap_file_mask
        apc.num_files_hint          1000
        apc.preload_path
        apc.report_autofilter       0
        apc.rfc1867                 0
        apc.rfc1867_freq            0
        apc.rfc1867_name            APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix          upload_
        apc.rfc1867_ttl             3600
        apc.shm_segments            1
        apc.shm_size                32M
        apc.slam_defense            1
        apc.stat                    1
        apc.stat_ctime              0
        apc.ttl                     0
        apc.use_request_time        1
        apc.user_entries_hint       4096
        apc.user_ttl                0
        apc.write_lock              1
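    Not a definitive fix, but two hedged starting points when APC caches suspiciously few files. First, check how PHP is executed: APC's cache is per process tree, so mod_php under prefork shares one segment, while CGI/FastCGI setups give each PHP process its own small, short-lived cache - which looks exactly like this. Second, rule out the size limits with diagnostic values (not tuned settings):

        ; /etc/php5/conf.d/apc.ini - diagnostic values, not a recommendation
        apc.shm_size=64M         ; more headroom than the current 32M
        apc.max_file_size=2M     ; ensure no larger Drupal files are skipped
        apc.num_files_hint=2000  ; closer to Drupal's actual file count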

    Read the article

  • E-mail sent from Google bouncing

    - by davidmck
    I'm hoping someone here has an idea of where to look next. We support a domain for which e-mail sent from one particular user bounces with the following message:

        Delivery to the following recipient failed permanently:
        [email protected]

        Technical details of permanent failure:
        Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550 Unrouteable address (state 14).

    We only have reports of bouncing from this one particular user (someone we don't support - but they'd like to be able to contact our customer, and we're trying to figure out whether the problem is on our end). Many people can successfully send to this domain, and the user who is getting bounce messages can send to other domains that we support, so it's clearly something specific to the princetonscoop.com domain and not our setup in general.

    I've reviewed the MX records multiple times, and the server logs don't show a connection that generates this error (in fact, this error is not one our mail server would ever return). So it appears that Google is contacting a different mail server for some reason. I have tested sending from my Gmail account, and that works. I believe the sender is using a Google Apps account (the account they are using is on their own domain, not a gmail.com address).

    Any ideas on what might be happening here, or what to test/investigate next? Thanks.
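    A hedged first diagnostic (standard dig usage, not from the original post): compare the MX answers different resolvers hand out, since "Unrouteable address" from an unfamiliar host suggests Google is resolving the domain to a different mail server than the one you run. The authoritative server name below is a placeholder; take it from the NS query:

        # What the domain's own authoritative servers publish
        dig +short NS princetonscoop.com
        dig +short MX princetonscoop.com @ns1.authoritative.example

        # What a public resolver sees (any external resolver will do)
        dig +short MX princetonscoop.com @8.8.8.8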

    Read the article

  • Error in Apache: /var/run/apache2 not found

    - by Julen
    This is more of a self-answered question, but since it drove me crazy, I would like to share it with the community - and maybe someone can tell me why it happened or what caused it.

    I wanted to install a CGI app on my Ubuntu 10.04 machine, one of the samples that come with the gSOAP toolkit, with the intention of accessing it from an ASP.NET machine. Regular Ubuntu does not come with Apache, so I installed it from Synaptic. Pretty easy. I followed this "How to install Apache2 webserver with PHP, CGI and Perl support in Ubuntu Server" guide. Instead of apache.conf I tweaked httpd.conf, since a colleague here used that file to get his Apache running. He was able to access his CGI from ASP.NET, but mysteriously I could not access mine: I was always getting "The request failed with HTTP status 503: Service Temporarily Unavailable". Checking Apache's error.log, I found these messages:

        No such file or directory: unable to connect to cgi daemon after multiple tries: /home/julen/htdocs/cgi-bin/calcserver

    And looking more carefully, whenever I restarted Apache I got this other message:

        No such file or directory: Couldn't bind unix domain socket /var/run/apache2/cgisock
        cgid daemon failed to initialize

    I am pretty new to Ubuntu and could not believe that Apache and Synaptic had made a mistake in the installation process, but it is true that /var/run/apache2 was missing on my machine, whereas on my colleague's it was not. I tried to find an "elegant" solution, but only found a post from 2006 with a slight reference to the problem. Finally, I decided to create the folder myself (as root), and then everything worked fine. Hope this helps others who encounter a similar problem. Still, I wonder why the folder was not created in the first place.

    Best, Julen.
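    For reference, the manual fix described above boils down to:

        sudo mkdir -p /var/run/apache2
        sudo /etc/init.d/apache2 restart

    One hedged explanation for the recurrence risk: on systems where /var/run lives on tmpfs, the directory vanishes at every reboot, and it is normally the apache2 init script's job to recreate it - so if it keeps disappearing, the mkdir may need to go into a boot-time script rather than be run once.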

    Read the article

  • Win8/7/XP print spooler not getting along with Zebra ZT230 via WIFI

    - by Jonathan M
    I have a graphics-intensive 4"x6" label that I'm printing to the ZT230, in multiple (10) copies. When connected via USB, all goes well. However, when connected via WiFi, I only get 2 of the labels. A Wireshark capture shows that at some point in the process my computer (presumably the Windows spooler) sends a reset packet, which, I believe, would pretty much kill the print job. I'm getting the same results on Win8, Win7 and WinXP. The print job was originally generated in Zebra's ZebraDesigner2 software; for easier diagnosis, I captured it to a .prn file.

    The .prn file can be found here: https://drive.google.com/file/d/0BwxF_9SAkKzLLTF5bUJVT0lESUU/edit?usp=sharing

    The Wireshark capture file can be found here: https://drive.google.com/file/d/0BwxF_9SAkKzLTGpSS0ktZW1xV28/edit?usp=sharing

    And the printer configuration listing: https://docs.google.com/document/d/1zh1Tw4D4yNa2uljOIL1kO2z8se9HK859irpUEwyxlyY/edit?usp=sharing

    I've started a discussion with Zebra tech support, and they're working on it, but I thought I'd toss it out here for more ideas, since we're getting kind of stumped. Any ideas why this may be happening?

    Read the article

  • Parallel Environment (PE) on Sun Grid Engine (6.2u5) won't run jobs: "only offers 0 slots"

    - by Peter Van Heusden
    I have Sun Grid Engine (version 6.2u5) set up on an Ubuntu 10.10 server with 8 cores. In order to be able to reserve multiple slots, I have a parallel environment (PE) set up like this:

        pe_name            serial
        slots              999
        user_lists         NONE
        xuser_lists        NONE
        start_proc_args    /bin/true
        stop_proc_args     /bin/true
        allocation_rule    $pe_slots
        control_slaves     FALSE
        job_is_first_task  TRUE
        urgency_slots      min
        accounting_summary FALSE

    This is associated with all.q on the server in question (let's call the server A). However, when I submit a job that uses 4 threads with e.g. qsub -q all.q@A -pe serial 4 mycmd.sh, it never gets scheduled, and I get the following reasoning from qstat:

        cannot run in PE "serial" because it only offers 0 slots

    Why is SGE saying "serial" only offers 0 slots, when there are 8 slots available on the server I specified (server A)? The queue in question is configured thus (server names changed):

        qname                 all.q
        hostlist              @allhosts
        seq_no                0
        load_thresholds       np_load_avg=1.75
        suspend_thresholds    NONE
        nsuspend              1
        suspend_interval      00:05:00
        priority              0
        min_cpu_interval      00:05:00
        processors            UNDEFINED
        qtype                 BATCH INTERACTIVE
        ckpt_list             NONE
        pe_list               make orte serial
        rerun                 FALSE
        slots                 1,[D=32],[C=8],[B=30],[A=8]
        tmpdir                /tmp
        shell                 /bin/sh
        prolog                NONE
        epilog                NONE
        shell_start_mode      posix_compliant
        starter_method        NONE
        suspend_method        NONE
        resume_method         NONE
        terminate_method      NONE
        notify                00:00:60
        owner_list            NONE
        user_lists            NONE
        xuser_lists           NONE
        subordinate_list      NONE
        complex_values        NONE
        projects              NONE
        xprojects             NONE
        calendar              NONE
        initial_state         default
        s_rt                  INFINITY
        h_rt                  08:00:00
        s_cpu                 INFINITY
        h_cpu                 INFINITY
        s_fsize               INFINITY
        h_fsize               INFINITY
        s_data                INFINITY
        h_data                INFINITY
        s_stack               INFINITY
        h_stack               INFINITY
        s_core                INFINITY
        h_core                INFINITY
        s_rss                 INFINITY
        h_rss                 INFINITY
        s_vmem                INFINITY
        h_vmem                INFINITY,[A=30g],[B=5g]
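    A few hedged diagnostic commands (standard SGE qconf/qstat/qhost usage, not from the original post) to see what the scheduler actually thinks:

        qconf -sq all.q | grep -E 'slots|pe_list'  # the queue's slot and PE view
        qconf -sp serial                           # the PE definition itself
        qstat -j <jobid>                           # scheduling info for the stuck job
        qhost -q -h A                              # per-host queue instance and slots

    One thing worth checking in the slots line: with slots 1,[D=32],[C=8],[B=30],[A=8], the default of 1 applies to any host not matched, and a mismatched hostname for A (short vs. fully qualified) would leave its queue instance with a single slot - or, if unmatched entirely, effectively none for a 4-slot PE job.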

    Read the article

  • Windows - Website inaccessible only on Windows PCs in LAN

    - by DorentuZ
    For several days now, a website has been inaccessible from a single PC in the LAN. On the other PCs it works just fine, and as far as I know it's only this single website that's affected. The website generates a timeout in every web browser I've tried (IE8, Firefox and Chrome). However, traceroute, nmap and telnet all work just fine. I've even tried multiple user accounts and safe mode, but that didn't help either. As a side note: using a Linux live CD did work, and I could access the website without any problems.

    The hosts file is the Windows default, and the IP and DNS settings on the network adapter are normal as well. No strange processes are running, and no viruses were found. According to TCPView and netstat there are connections to the domain, but every request in the browser results in a timeout. Any idea what's happening?

    Update: all of the computers on the network running Windows (any version) are now showing this problem. The website still works under Linux and Mac OS X. So it has to be related to some kind of Windows update (although I haven't installed any on one computer in the past week, which is set to manual updates only)...
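    One hedged diagnostic for Windows-only stalls against a single site (this is a guess, not a diagnosis from the post): a middlebox on the path mishandling TCP window scaling, which Linux and OS X negotiate differently. On Vista and later:

        netsh interface tcp show global
        netsh interface tcp set global autotuninglevel=disabled

        rem If the site loads afterwards, the path mishandles window scaling;
        rem restore the default rather than leaving tuning off permanently:
        netsh interface tcp set global autotuninglevel=normal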

    Read the article

  • Server 2008 NAT Internet Not Working

    - by Jack
    I'm trying to set up Routing and Remote Access on Windows Server 2008 R2. I have a network connection whose internet access I want to share with another, private network. The server has two NICs, configured as follows:

        External NIC (dynamically assigned by ISP)
        IP:      10.175.4.150
        Subnet:  255.255.192.0
        Gateway: 10.175.0.1
        DNS:     10.175.0.1

        Internal NIC
        IP:      172.16.254.1
        Subnet:  255.255.255.0
        Gateway: none
        DNS:     none

    I have set the external NIC to be the public interface and enabled NAT on it in the RRAS MMC, and set the internal NIC to be a private interface. I have also set up the DNS forwarding (or whatever it's called) in the NAT section. From a client (IP: 172.16.254.2) I can ping the server and access files on it. But when I try to browse the web with the default gateway set to the internal NIC's IP, I end up getting a 404 page returned from the ISP's default gateway. I'm guessing it has something to do with the double NAT. Trying to ping the ISP's default gateway from a private-network client just times out, as does accessing it directly. I've disabled and reconfigured RRAS multiple times, and that doesn't seem to have made a difference, so can anyone tell me what I'm doing wrong? Thanks.
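    A hedged way to split the problem from the client side (plain Windows networking commands; the resolver address is just any external one): test raw routing and DNS separately, since a wrong page served by the ISP gateway points at either requests escaping the NAT un-translated or DNS answers coming from the wrong place:

        ping 8.8.8.8                          rem raw connectivity through the NAT
        nslookup www.example.com 8.8.8.8      rem DNS, bypassing the RRAS DNS proxy
        tracert 8.8.8.8                       rem where packets actually leave the LAN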

    Read the article

  • Performance Test and TCP tuning

    - by Mithir
    We are in the process of performance testing an application which receives TCP requests and converts them to SOAP requests (WCF, httpBinding) that other services work on. The server is Windows Server 2008 R2. The TCP requests are received by a TcpListener instance (.NET C#). There are 3 HTTP-bound WCF services running on the same server. We have built a performance test client whose goal is to simulate multiple concurrent requests (each request has to be different and recognizable by the application). We ran a test of 150 requests issued at the same time (by 150 different threads), and noticed straight away that some requests are slow to get the TCP connection, but once they get it, they act fast. A single request writes twice on the same connection: the request and an application-level ack. Although a single request + ack can take about 150 ms, the 150-request test takes about 7 seconds.

    The problem: when we try to run this test from 2 different computers, we lose requests. Some client requests fail with "No connection could be made because the target machine actively refused it". So I got here and became convinced it was the backlog. I changed the TcpListener parameters and made the AFD backlog registry changes written here, but it still didn't work, so I applied all of the suggested TCP tuning plus some recommended netsh commands - but still no change; we still get that error.

    Is there anything else I need to know? Are there any other solutions?
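    On the listener side, one knob worth ruling out explicitly - a sketch in the application's own C#, not the poster's code: TcpListener.Start has an overload taking the maximum pending-connection backlog, and an exhausted backlog produces exactly the "actively refused" error. Port and backlog values below are placeholders:

        using System.Net;
        using System.Net.Sockets;

        class Server
        {
            static void Main()
            {
                var listener = new TcpListener(IPAddress.Any, 9000); // placeholder port
                // Explicit backlog: how many not-yet-accepted connections the OS
                // queues before refusing new ones.
                listener.Start(512);
                while (true)
                {
                    TcpClient client = listener.AcceptTcpClient();
                    // hand the client off to a worker thread/task here, so the
                    // accept loop drains the backlog as fast as possible
                }
            }
        }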

    Read the article

  • Apache 2.4 redirect within VirtualHost

    - by user129545
    I have a couple of HTTP (port 80) vhosts that I want to redirect to HTTP if an HTTPS request is made to them. Apparently some things have changed with Apache 2.4 (NameVirtualHost is no longer used like it was in the past, etc.). This is Apache 2.4 on CentOS 5.5, all using a single IP for the vhosts below (I don't have multiple IPs on this box). My /usr/local/apache2/conf/extra/httpd-vhosts.conf:

        <VirtualHost www.dom1.com:80>
            ServerName www.dom1.com
            ServerAlias dom1.com
            DocumentRoot /usr/local/apache2/htdocs/dom1/wordpress
        </VirtualHost>

        <VirtualHost webmail.dom2.com:443>
            ServerName webmail.dom2.com
            DocumentRoot /usr/local/apache2/htdocs/webmail
            SSLEngine On
            SSLCertificateFile /usr/local/apache2/webmail.crt
            SSLCertificateKeyFile /usr/local/apache2/webmail.key
        </VirtualHost>

    My /usr/local/apache2/conf/extra/httpd-ssl.conf:

        Listen 443
        SSLPassPhraseDialog builtin
        SSLSessionCache shmcb:/var/cache/mod_ssl/scache(512000)
        SSLSessionCacheTimeout 300
        Mutex default
        SSLRandomSeed startup file:/dev/urandom 512
        SSLRandomSeed connect builtin
        SSLCryptoDevice builtin

    webmail.dom2.com works fine. The problem is that I can connect to https://www.dom1.com, and it serves up the content from webmail.dom2.com. I want any HTTPS requests for www.dom1.com on port 443 to simply redirect to http://www.dom1.com on port 80. Thanks.
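    A sketch of the usual fix (certificate paths are placeholders): requests to https://www.dom1.com currently fall through to the first matching 443 vhost because there is no 443 vhost for that name, so adding one that redirects makes the behavior explicit. Note the client will still see a certificate-name warning before following the redirect unless a certificate covering www.dom1.com is served:

        <VirtualHost *:443>
            ServerName www.dom1.com
            ServerAlias dom1.com
            SSLEngine On
            # Any cert/key pair works mechanically; without one valid for
            # dom1.com, browsers warn before honoring the redirect.
            SSLCertificateFile /usr/local/apache2/webmail.crt
            SSLCertificateKeyFile /usr/local/apache2/webmail.key
            Redirect permanent / http://www.dom1.com/
        </VirtualHost>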

    Read the article

  • Files on ext4 on Drobo with corrupt, zero-ed out blocks

    - by Patrick
    I have a 2TB ext4 file system (Ubuntu running Linux kernel 2.6.31-22-server x86_64). This file system is the second drive on a Drobo box plugged in via USB. We've had no problems on the first drive. (Drobo limits drive size to 2TB due to some OS limitations, so if you have more space than that, it appears as two separate drives.) I am sharing these files with Samba (smbd 3.4.0) with a mix of Windows and Linux workstations.

    Recently we've been experiencing data corruption in multiple files. In many cases I have an uncorrupted original of the file stored on one of the workstations. These are binary files of various formats (e.g. SQLite, but others as well). I used split to break a corrupt and an uncorrupted copy of a file into 4096-byte chunks (the block size of the ext4 file system), then ran md5sum on pairs of chunks. I discovered that the chunks matched in many cases, and in every case where they did not match, the corrupt chunk was a solid block of zeroes (md5 620f0b67a91f7f74151bc5be745b7110, for what it's worth).

    I'm trying to track down the culprit but am a bit at a loss. I don't believe Samba is at fault, since I'm using it without issue on the first drive exported by the Drobo. What can I do to narrow this down and find out what's going on?
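    For anyone reproducing the comparison, a sketch of the block-level diff described above (filenames are placeholders; assumes GNU coreutils):

        # Split both copies into 4096-byte chunks (the ext4 block size here)
        split -b 4096 -d good.db good_
        split -b 4096 -d bad.db  bad_

        # Compare chunk checksums pairwise; mismatches mark the corrupt blocks
        for g in good_*; do
            b="bad_${g#good_}"
            [ "$(md5sum < "$g")" = "$(md5sum < "$b")" ] || echo "differs: $g"
        done

        # A solid 4096-byte block of zeroes checks out as the hash quoted above:
        head -c 4096 /dev/zero | md5sum   # 620f0b67a91f7f74151bc5be745b7110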

    Read the article

  • Can't make Dovecot communicate with Postfix using SASL (warning: SASL: Connect to private/auth failed: No such file or directory)

    - by Fred Rocha
    Solved. I will leave this as a reference for other people, as I have seen this error reported often enough online: I had to change the path smtpd_sasl_path = private/auth in my /etc/postfix/main.cf to be relative instead of absolute. This is because on Debian, Postfix runs chrooted (and how does this affect the path structure?! Anyone?).

    The original problem: I was trying to get Dovecot to communicate with Postfix for SMTP support via SASL. The master plan is to host multiple e-mail accounts on my (Debian Lenny 64-bit) server, using virtual users. Whenever I tested my configuration by running telnet server-IP smtp, I got the following error in mail.log:

        warning: SASL: Connect to /var/spool/postfix/private/auth failed: No such file or directory

    Now, Dovecot is supposed to create the auth socket file, yet it didn't. I had given the right privileges to the private directory, and even tried creating an auth file manually. The output of postconf -a is:

        cyrus
        dovecot

    Am I correct in assuming from this that the package was compiled with SASL support? My dovecot.conf also holds:

        client {
            path = /var/spool/postfix/private/auth
            mode = 0660
            user = postfix
            group = postfix
        }

    I had tried every solution out there and was pretty much desperate after a full day of struggling with the issue.
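    To answer the path question above: Postfix interprets a relative smtpd_sasl_path against its queue directory (/var/spool/postfix), which is also the chroot root, while Dovecot runs outside the chroot and therefore needs the absolute path. For reference, the two sides of the working combination look like this (a sketch; adjust to the local layout):

        # /etc/postfix/main.cf
        smtpd_sasl_type = dovecot
        smtpd_sasl_path = private/auth    # relative to /var/spool/postfix
        smtpd_sasl_auth_enable = yes

        # dovecot.conf (auth section) - the same socket, seen from outside the chroot
        client {
            path = /var/spool/postfix/private/auth
            mode = 0660
            user = postfix
            group = postfix
        }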

    Read the article

  • Apache ProxyPass with SSL

    - by BBonifield
    I have a QA setup that consists of multiple internal development servers and one world-accessible provisioning machine that is set up to proxy-pass the web traffic. Everything works fine for non-SSL requests, but I'm having a hard time getting the SSL logic working as well. Here are a few example vhost blocks:

        <VirtualHost 192.168.168.101:443>
            ProxyPreserveHost On
            SSLProxyEngine On
            ProxyPass / https://192.168.168.111/
            ServerName dev1.site.com
        </VirtualHost>

        <VirtualHost 192.168.168.101:80>
            ProxyPreserveHost On
            ProxyPass / http://192.168.168.111/
            ServerName dev1.site.com
        </VirtualHost>

        <VirtualHost 192.168.168.101:443>
            ProxyPreserveHost On
            SSLProxyEngine On
            ProxyPass / https://192.168.168.111/
            ServerName dev2.site.com
        </VirtualHost>

        <VirtualHost 192.168.168.101:80>
            ProxyPreserveHost On
            ProxyPass / http://192.168.168.111/
            ServerName dev2.site.com
        </VirtualHost>

    I end up seeing the following error in the provisioner's error log:

        [Fri Jan 28 12:50:59 2011] [warn] [client 1.2.3.4] proxy: no HTTP 0.9 request (with no host line) on incoming request and preserve host set forcing hostname to be dev1.site.com for uri /

    As well as the following entry in the destination QA machine's access log:

        192.168.168.101 - - [22/Feb/2011:08:34:56 -0600] "\x16\x03\x01 / HTTP/1.1" 301 326 "-" "-"
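    A hedged reading of those two log lines: \x16\x03\x01 is the start of a TLS handshake record showing up where plain HTTP was expected. That is consistent with the front-end 443 vhosts declaring SSLProxyEngine (which only encrypts the back-end connection) but never SSLEngine On with a certificate, so the browser's raw TLS bytes get treated as HTTP. A sketch of the missing pieces (certificate paths are placeholders):

        <VirtualHost 192.168.168.101:443>
            ServerName dev1.site.com
            # Terminate the browser's TLS here...
            SSLEngine On
            SSLCertificateFile /etc/pki/tls/certs/dev1.site.com.crt
            SSLCertificateKeyFile /etc/pki/tls/private/dev1.site.com.key
            # ...then re-encrypt toward the internal box
            SSLProxyEngine On
            ProxyPreserveHost On
            ProxyPass / https://192.168.168.111/
        </VirtualHost>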

    Read the article

  • KVM guest VLAN aware problems

    - by baraka
    Hi. We are using CentOS 5.5 as a KVM host. It has two NICs: one for management and one for services. As we have services in multiple VLANs, the services NIC is configured as an 802.1Q trunk. Any VM must be able to access any VLAN, so the host's trunk interface is bridged to each VM's tap interface, and the VLANs are configured inside the VMs.

    Everything works fine as long as there is no heavy traffic. I cannot find any relevant log entry on the guest or the host, but after a certain amount of sustained big file transfer (about 6 GB), bridging stops working. Other guests on the same host continue working without problems. tcpdump on the bridge interface looks OK, but on the guest's tap interface I can see only outgoing traffic. Restarting the bridge or rejoining the tap interface doesn't provide any clue; rebooting the guest makes the bridge work again.

    The bridge configuration is minimal: just addbr and addif (no STP). Any ideas welcome!
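    A hedged diagnostic to narrow down which side stops forwarding (standard bridge-utils/tcpdump usage; the bridge and tap names below are placeholders, since the post doesn't name them): compare the bridge's learned MAC table before and after the stall, and watch both sides of the bridge at once:

        brctl showmacs br0           # is the guest's MAC still learned, and on which port?
        tcpdump -ni br0  -e vlan     # traffic reaching the bridge, with VLAN tags
        tcpdump -ni tap0 -e          # what actually crosses to/from the guest

    If the guest's MAC ages out of the table and never reappears while the guest is transmitting, the problem is on the tap/MAC-learning side rather than on the trunk.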

    Read the article

  • Best practice for Exchange 2010 HA topology considering 6 x Exchange licenses and TMG 2010

    - by MadBoy
    What would be the best topology, considering:

    - 6 x Exchange 2010 Standard licenses
    - 2 x separate locations that are supposed to provide redundancy in case of link problems
    - 4 x Forefront TMG 2010 with Forefront Security and Forefront Protection/Security
    - multiple locations worldwide using those Exchange servers; most locations (certainly the ones hosting Exchange) will be connected with a VPN tunnel.

    I was thinking something like this:

    Location MAIN (about 70-100 people):
    - 2 x TMG 2010 in NLB
    - 1 x Exchange 2010 CAS/HUB role
    - 2 x Exchange 2010 Mailbox role (active + passive)

    Location SUPPORT (about 20 people):
    - 2 x TMG 2010 in NLB
    - 1 x Exchange 2010 CAS/HUB role
    - 2 x Exchange 2010 Mailbox role (active + passive)

    Management wants to make sure that in case of problems in the main location (power failure, link loss, etc.) the second location can support all traffic from around the world, and vice versa. We have 6-7 locations, and more coming up (not big ones, more like 10+ people per location). I do know that the CAS/HUB is a single point of failure (and has no NLB), but I simply lack the licenses to add redundancy there. What do you think of this approach? What would be a better approach, in your view?

    Read the article
