Search Results

Search found 17856 results on 715 pages for 'setup py'.


  • VNC failure on Xen

    - by BCable
    The following config works and creates a good VM in Xen:

        # Kernel Setup
        kernel = "/boot/vmlinuz-2.6.18.8-xenU"

        # Memory
        memory = "256"

        # Disk
        disk = [ "file:/opt/xen/domains/110/sda1.img,sda1,w",
                 "file:/opt/xen/domains/110/swap.img,sda2,w" ]

        # Container name
        name = "110"
        hostname = "boo"

        # Networking
        vif = ["type=ieomu, bridge=xenbr0"]

        # VNC
        vnc = 1
        #vfb = [ 'type=vnc,vncdisplay=2,vnclisten=0.0.0.0,vncpasswd=110' ]

        # Behavior Settings
        root = "/dev/sda1"
        extra = "fastboot"

    But when I uncomment the vfb line, the command hangs for at least 30 seconds and then fails:

        [root@customer 110]# xm create boo.cfg
        Using config file "./boo.cfg".
        Error: Device 0 (vkbd) could not be connected. Hotplug scripts not working.

    Any ideas? Part two of this question: sometimes it actually works and a port is opened. When that happens, nmap shows the VNC port open and I can connect with a VNC client, but it just hangs at "Connection established." and no VNC display ever appears. I've tried multiple VNC clients (TightVNC, the TightVNC Java console, RealVNC) and they all fail the same way. Does VNC through Xen require X to be started in order to function? I was under the impression that it would show the console screen, so I'm confused as to why all these issues are occurring. Thanks!
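
    A few checks worth running before retrying the vfb line (hedged; the "vkbd could not be connected" error usually points at the xend/udev hotplug path rather than at VNC itself, and the paravirtual framebuffer normally shows the text console, so X shouldn't be required):

        /etc/init.d/xend status          # xend must be up: it drives the vkbd/vfb hotplug
        xm list                          # check whether a half-created domain lingers after the failure
        tail -f /var/log/xen/xend.log    # watch the hotplug timeout happen live on the next xm create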

    Read the article

  • ASP.NET sending email through exchange problem

    - by Solmead
    I have an Exchange 2010 server running on Windows 2008 R2, and a remote web server running Windows 2003 that hosts multiple sites (all ASP.NET MVC 2). I set up a transport connector in Exchange, and all the websites on the remote web server can send email without a problem, both to mailboxes on the Exchange server and to external domains.

    Now for my problem. I am having issues with that web server, so I moved one of the (low-traffic) websites to run directly on the Exchange server. It runs well, except that email doesn't work from that site. I tried changing the connector in Exchange to add the local machine's IP address and 127.0.0.1, and it still isn't sending any email. Any ideas on how to get this working? The remote websites can still send email without a problem, and the copy of the moved site still on the remote server can too; only the copy running on the Exchange server fails. I would guess it is a connector issue, since the site runs on the same server and a firewall shouldn't be in play.

    After changing the SMTP setting in web.config to localhost, I now do receive email at my account on the Exchange server, but external addresses still receive nothing. For context, this is a custom ASP.NET MVC 2 website, and no errors are generated in the code when sending the email in either case.
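
    One hedged way to narrow this down from the Exchange box itself is a hand-driven SMTP session (the addresses below are placeholders): telnet to the connector, then issue EHLO test.local, MAIL FROM:<[email protected]>, and RCPT TO:<[email protected]>. A "550 ... Unable to Relay" on the external recipient would point at the receive connector's relay permissions rather than at the website's code:

        telnet localhost 25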

    Read the article

  • Ubuntu: unattended-upgrades from a local package archive

    - by Novelocrat
    I have a local apt archive with a bunch of packages I built in it. The Packages and Release files are generated by apt-ftparchive. The Release file looks like:

        Date: Thu, 06 May 2010 23:04:33 UTC
        Label: PPL
        Origin: PPL
        Suite: ppl
        MD5Sum:
         ebec3527ebc8351468b2ef8796c19855 37325 Packages
         d41d8cd98f00b204e9800998ecf8427e 0 Release
        SHA1:
         a0593b663d77fde88ee35b56ae1f3c17801cfe99 37325 Packages
         da39a3ee5e6b4b0d3255bfef95601890afd80709 0 Release
        SHA256:
         dd73a02846aee111cac58a869c6bf650886632ba82c2172ffddd81aa4429981c 37325 Packages
         e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 Release

    I'm using unattended-upgrades to keep the machines in the lab up to date on security and bug fixes, but I'm finding that it doesn't pull from my local archive. Its configuration file looks like:

        // Automatically upgrade packages from these (origin, archive) pairs
        Unattended-Upgrade::Allowed-Origins {
            "Ubuntu hardy-security";
            "Ubuntu hardy-updates";
            "PPL ppl";
        };

        // List of packages to not update
        Unattended-Upgrade::Package-Blacklist {
        //  "vim";
        //  "libc6";
        //  "libc6-dev";
        //  "libc6-i686";
        };

        // Send email to this address for problems or packages upgrades
        // If empty or unset then no email is sent, make sure that you
        // have a working mail setup on your system. The package 'mailx'
        // must be installed or anything that provides /usr/bin/mail.
        //Unattended-Upgrade::Mail "root@localhost";

    Yet when I run sudo unattended-upgrade on one of these machines, newer package versions don't get installed. Can anyone point out what I'm getting wrong?
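
    A hedged first step, assuming a stock unattended-upgrades install: its dry-run flags print which origins matched each package, which usually shows exactly why a local archive is being skipped (the package name below is a placeholder):

        sudo unattended-upgrade --dry-run --debug
        apt-cache policy some-local-package    # confirms the candidate version and its origin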

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    OK, so I've just been on a SQL Server course where we discussed the usage scenarios for multiple filegroups and files over local RAID and local disks, but we didn't touch on SAN scenarios, so my question is as follows.

    I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file, and the log file is on the same volume. My interpretation is that separate data files should be spread across different disks to lessen disk contention, and that filegroups should be used for partitioning data. However, with a SAN you obviously don't have the same disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning.

    So what should I do to improve parallelism? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act on each file independently. That leads to the question: how many files should I have? One per core? Should I put tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you.
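
    For reference, adding a data file is a one-liner; this is only a sketch with placeholder names, paths and sizes, not a recommendation for a specific file count:

        sqlcmd -S . -Q "ALTER DATABASE MyDb ADD FILE (NAME = MyDb_data2, FILENAME = 'D:\Data\MyDb_data2.ndf', SIZE = 10GB) TO FILEGROUP [PRIMARY]"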

    Read the article

  • Can't login to phpMyAdmin on a WAMP server running Windows 2008

    - by Richard West
    I am setting up a new server with Apache 2.2.17, PHP 5.3.3, MySQL 5.1.53 and phpMyAdmin 3.3.8 on Windows 2008 (32-bit). I have configured Apache and PHP so they appear to be working fine. I created the standard PHP test page with the following code and everything looks right:

        <?php
        // index.php
        phpinfo();
        ?>

    I also see the mysql and mysqli sections in the above page's output, so it appears I have the proper extensions loaded for MySQL access.

    The problem centers on phpMyAdmin. I have it installed and can reach the login screen at http://localhost/pma, where I log in as "root" with the password I set up for root. After a delay of 30 seconds or so, the page goes blank and the URL is now http://localhost/pma/index.php?token= with no error ever displayed, but nothing usable either.

    I have confirmed MySQL is running by logging in from the command line, and I installed it using the standard port 3306. I have double-checked my configuration but am not having any luck getting this to work. I also disabled the Windows firewall, but that did not change anything. Any advice would be greatly appreciated.
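
    A hedged check that takes phpMyAdmin out of the equation: confirm the PHP-to-MySQL hop it depends on straight from the command prompt, and that MySQL is listening where PHP will look (the password below is a placeholder):

        php -r "var_dump(mysqli_connect('127.0.0.1', 'root', 'secret'));"
        netstat -an | findstr 3306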

    Read the article

  • Archiving mails with postfix: how to filter mails?

    - by Tronic
    I want to implement the following scenario. We use a Postfix mail server. To archive all old and new mail, I want to set up a second Postfix instance on our file server and create a single mailbox, "archive"; every mail then gets forwarded as a BCC to this mailbox automatically.

    Now I want to create different folders in a Maildir structure and have the server move each mail to the right subfolder of the mailbox based on its sender or recipient. For example, when we get a mail for an employee named "John Doe" at [email protected], the mail should be moved to "Inbox/John Doe Incoming"; when John Doe sends a mail, the folder would be "Inbox/John Doe Outgoing".

    How can I implement this filtering behaviour? I've heard of procmail and maildrop. Which of the two would you prefer, and which is easier to configure? Are there any out-of-the-box solutions here? Thanks in advance!
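
    As a taste of the maildrop side, here is a minimal .mailfilter sketch, assuming Maildir++ folder naming and using john.doe@example.com as a placeholder address:

        cat > /home/archive/.mailfilter <<'EOF'
        if (/^(To|Cc):.*john\.doe@example\.com/:h)
            to "$HOME/Maildir/.John Doe Incoming/"
        if (/^From:.*john\.doe@example\.com/:h)
            to "$HOME/Maildir/.John Doe Outgoing/"
        EOF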

    Read the article

  • No discs found when trying to install Windows 8 with UEFI

    - by Sahas Katta
    I have a Vizio notebook (CN15-A5). It came pre-installed with Windows 8 x64 and takes advantage of UEFI out of the box; the AMI (Aptio) firmware is in Secure Boot mode with the OS type set to "Windows 8". I removed the stock HDD that came with the machine and put in my own SSD. I then created a Windows 8 Pro x64 installation disc on a 4 GB USB flash drive formatted as FAT32, since that's apparently required for UEFI.

    When I boot from the USB installer, I get stuck when I reach the "Custom: Install Windows only" section. Normally you would see a list of available disks and their partitions, but my list is entirely blank. If I head back to the firmware, disable Secure Boot, set the OS type to "Other OS" and try again, I can see the list of available disks and install a copy of Windows 8. Unfortunately, that method results in an installation with a traditional 350 MB partition plus the OS partition, instead of the four partitions normal for a UEFI setup.

    Has anyone run into this problem? I've tried loading defaults in the firmware and attempting the install with every combination of settings, with no luck. Any help would be appreciated.
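
    A hedged check from inside the stuck installer: Shift+F10 opens a command prompt there, and diskpart sees drives through the same driver stack, so an empty listing under Secure Boot would point at a missing storage driver rather than at partitioning (type "list disk" at the DISKPART> prompt):

        diskpart
        list disk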

    Read the article

  • Fresh Proxmox VE 2.1 installation with defaults can't be reached or pinged

    - by Damainman
    I am using the latest Proxmox VE 2.1. My server has two NICs, with an uplink connected only to eth0. It is a co-located server using public IPv4 addresses, and it is not behind a firewall or any system that monitors traffic.

    Via IP-KVM I did a fresh install of Proxmox and put in the correct IP, mask, gateway and DNS information. The install went perfectly fine with no errors. Upon completion and rebooting the system:

    - I am unable to reach the web GUI via the browser; it just times out.
    - I am unable to ping the server.
    - I am unable to ping out to the Internet from within the server (tried 4.2.2.2 and yahoo.com).
    - I tried rebooting the server and restarting the network service.
    - ifconfig shows my IP information under vmbr0, which also has the same MAC address as the eth0 device. eth0 itself only displays an IPv6 Scope:Link address, which I did not set up myself.

    This is my first time installing Proxmox, but after searching for a few hours it doesn't seem like anyone else is having the same issue from a fresh install with just the defaults. So far the only thing I did was install it. Also, I know the network cable is good and the IP is good because I was running a Xen XCP server with the same network settings prior to wiping it to install Proxmox.

    Some additional information. pveversion -v (installed proxmox-ve_2.1-f9b0f63a-26.iso):

        pve-manager: 2.1-1 (pve-manager/2.1/f9b0f63a)
        running kernel: 2.6.32-11-pve
        proxmox-ve-2.6.32: 2.0-66

    netstat -nr (note: .136 is my network, and .137 is my gateway):

        Destination        Gateway            Genmask
        xxx.xxx.xxx.136    0.0.0.0            255.255.255.248
        0.0.0.0            xxx.xxx.xxx.137    0.0.0.0

    /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        auto vmbr0
        iface vmbr0 inet static
            address xxx.xxx.xxx.138
            netmask 255.255.255.248
            gateway xxx.xxx.xxx.137
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0
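
    Some hedged bridge-level checks from the console, since the config above looks sane on paper:

        brctl show vmbr0        # is eth0 actually enslaved to the bridge?
        ip link show eth0       # look for NO-CARRIER versus "state UP"
        tcpdump -i eth0 arp     # do ARP requests for .138 arrive, and are they answered?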

    Read the article

  • SSL certificates work fine from command line but fail in script

    - by jrallison
    I'm trying to set up email notifications for my continuous integration server. I have a script that uses nail to send the email when the build works:

        #!/bin/bash
        echo "Build Worked!" | nail -A myisp -s 'Build Success' [email protected]

    When I run this from the command line with sh build-worked, it works and I receive the email. However, when the continuous integration server executes the same script, I get the following error:

        nail: /opt/bitnami/common/lib/libssl.so.0.9.8: no version information available (required by nail)
        nail: /opt/bitnami/common/lib/libcrypto.so.0.9.8: no version information available (required by nail)
        Error with certificate at depth: 0
         issuer = /C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/[email protected]
         subject = /C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com
         err 20: unable to get local issuer certificate
        Continue (y/n)? could not initiate SSL/TLS connection: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        . . . message not sent.

    I must be missing some configuration. Any ideas?
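
    A hedged comparison of the two environments; the Bitnami libssl being picked up at runtime is the obvious difference to chase:

        ldd "$(which nail)" | grep -i -e ssl -e crypto   # which libssl/libcrypto nail binds at runtime
        echo "$LD_LIBRARY_PATH"                          # run this from the CI job and from your own shell
        # one common workaround, if nail is heirloom mailx, is pointing it at a CA bundle:
        #   echo 'set ssl-ca-file=/etc/ssl/certs/ca-certificates.crt' >> ~/.mailrc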

    Read the article

  • Only port that works with mod_proxy is 8009; trying to use it with Tomcat and httpd, don't know why

    - by techsjs2012
    I am trying to use mod_proxy with httpd and Tomcat. If I leave Tomcat's AJP connector on port 8009 in Tomcat's server.xml and in Apache httpd's httpd.conf, everything works great, but once I change it to anything else and restart both, it does not work. I tried 8109, 8209 and 8019; the only thing that works is 8009. Below is the setup that works:

        <Proxy balancer://testcluster stickysession=JSESSIONID>
            BalancerMember ajp://127.0.0.1:8009 min=10 max=100 route=node2 loadfactor=1
        </Proxy>

        ProxyPass /examples balancer://testcluster/examples

        <Location /balancer-manager>
            SetHandler balancer-manager
            AuthType Basic
            AuthName "Balancer Manager"
            AuthUserFile "/etc/httpd/conf/.htpasswd"
            Require valid-user
        </Location>

    If I change the port to anything else here and in Tomcat's server.xml, it does not work, yet I can telnet to the new port, so I know it's up. Below are the module settings I have:

        LoadModule auth_basic_module modules/mod_auth_basic.so
        LoadModule auth_digest_module modules/mod_auth_digest.so
        LoadModule authn_file_module modules/mod_authn_file.so
        LoadModule authn_alias_module modules/mod_authn_alias.so
        LoadModule authn_anon_module modules/mod_authn_anon.so
        LoadModule authn_dbm_module modules/mod_authn_dbm.so
        LoadModule authn_default_module modules/mod_authn_default.so
        LoadModule authz_host_module modules/mod_authz_host.so
        LoadModule authz_user_module modules/mod_authz_user.so
        LoadModule authz_owner_module modules/mod_authz_owner.so
        LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
        LoadModule authz_dbm_module modules/mod_authz_dbm.so
        LoadModule authz_default_module modules/mod_authz_default.so
        LoadModule ldap_module modules/mod_ldap.so
        LoadModule authnz_ldap_module modules/mod_authnz_ldap.so
        LoadModule include_module modules/mod_include.so
        LoadModule log_config_module modules/mod_log_config.so
        LoadModule logio_module modules/mod_logio.so
        LoadModule env_module modules/mod_env.so
        LoadModule ext_filter_module modules/mod_ext_filter.so
        LoadModule mime_magic_module modules/mod_mime_magic.so
        LoadModule expires_module modules/mod_expires.so
        LoadModule deflate_module modules/mod_deflate.so
        LoadModule headers_module modules/mod_headers.so
        LoadModule usertrack_module modules/mod_usertrack.so
        LoadModule setenvif_module modules/mod_setenvif.so
        LoadModule mime_module modules/mod_mime.so
        LoadModule dav_module modules/mod_dav.so
        LoadModule status_module modules/mod_status.so
        LoadModule autoindex_module modules/mod_autoindex.so
        LoadModule info_module modules/mod_info.so
        LoadModule dav_fs_module modules/mod_dav_fs.so
        LoadModule vhost_alias_module modules/mod_vhost_alias.so
        LoadModule negotiation_module modules/mod_negotiation.so
        LoadModule dir_module modules/mod_dir.so
        LoadModule actions_module modules/mod_actions.so
        LoadModule speling_module modules/mod_speling.so
        LoadModule userdir_module modules/mod_userdir.so
        LoadModule alias_module modules/mod_alias.so
        LoadModule substitute_module modules/mod_substitute.so
        LoadModule rewrite_module modules/mod_rewrite.so
        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
        LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
        #LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
        #LoadModule proxy_http_module modules/mod_proxy_http.so
        #LoadModule proxy_connect_module modules/mod_proxy_connect.so
        LoadModule cache_module modules/mod_cache.so
        LoadModule suexec_module modules/mod_suexec.so
        LoadModule disk_cache_module modules/mod_disk_cache.so
        LoadModule cgi_module modules/mod_cgi.so
        LoadModule version_module modules/mod_version.so
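
    One hedged guess about why only 8009 behaves: on RHEL-family hosts, SELinux lets httpd make outbound connections only to ports labeled http_port_t, and 8009 is in the default set while 8109/8209/8019 are not. Assuming SELinux is enforcing, this would both show it and fix it:

        getenforce                                    # "Enforcing" means the theory is testable
        semanage port -l | grep http_port_t           # is 8009 listed but not your new port?
        semanage port -a -t http_port_t -p tcp 8109   # label the new port, then retry the 8109 config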

    Read the article

  • Network Load Balancing, intermittent port problem

    - by Jimmy Chandra
    Trying to troubleshoot an intermittent problem that I think might be NLB-related. We are using Windows Network Load Balancing to balance load for our multi-server SharePoint front ends. Say Web Front End 1 is 192.168.1.100 and Web Front End 2 is 192.168.1.101; NLB is set up to balance both WFE servers for any incoming traffic to 192.168.1.200.

    Intermittently, when we try to access the SharePoint site at 192.168.1.200:8080 (the site is set up to run on port 8080) from a remote client, it displays "page not found". Pinging 192.168.1.200 gets responses, but trying to telnet to 192.168.1.200:8080 just won't connect. However, browsing the SharePoint site directly on the individual WFEs (192.168.1.100 and 192.168.1.101) shows no problem whatsoever. My guess (we didn't get a chance to try it yet, but I think it would hold) is that connecting remotely to an individual server would also respond just fine, and only attempts against the virtual IP (192.168.1.200) fail miserably. The funny thing is that after a while it returns to normal.

    Has anyone had similar experience with this type of problem while implementing NLB? We are doing this in a virtual environment.
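
    A pair of hedged checks worth running on each WFE during a failure window, to separate a port-binding problem from an NLB convergence problem:

        netstat -an | findstr :8080
        nlb query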

    Read the article

  • How to make working TFTP server on CentOS 6.2

    - by Dima
    I'm trying to set up a TFTP server on CentOS 6.2. The /etc/xinetd.d/tftp configuration file is the following:

        service tftp
        {
            disable     = no
            socket_type = dgram
            protocol    = udp
            wait        = yes
            user        = root
            server      = /usr/sbin/in.tftpd
            server_args = -s /tftpboot -vvv
            per_source  = 11
            cps         = 100 2
            flags       = IPv4
        }

    SELinux and the firewall are disabled, and the /etc/hosts.allow and /etc/hosts.deny files are empty. When I try to get a file from the TFTP server, the transfer always fails and I see the following errors in /var/log/messages:

        Jul 11 03:16:53 localhost xinetd[4155]: xinetd Version 2.3.14 started with libwrap loadavg labeled-networking options compiled in.
        Jul 11 03:16:53 localhost xinetd[4155]: Started working: 1 available service
        Jul 11 03:17:00 localhost xinetd[4155]: START: tftp pid=4157 from=192.168.10.3
        Jul 11 03:17:00 localhost in.tftpd[4158]: RRQ from 192.168.10.3 filename 1
        Jul 11 03:17:00 localhost in.tftpd[4158]: sending NAK (0, Permission denied) to 192.168.10.3
        Jul 11 03:17:01 localhost in.tftpd[4159]: RRQ from 192.168.10.3 filename 1
        Jul 11 03:17:01 localhost in.tftpd[4159]: sending NAK (0, Permission denied) to 192.168.10.3
        Jul 11 03:17:03 localhost in.tftpd[4160]: RRQ from 192.168.10.3 filename 1

    The tftpboot directory permissions are (output of ls -l):

        drw-rw-rw-. 3 root root 4096 Jul 11 03:32 tftpboot

    I also see that the tftpboot directory is shown by ls with a green background, unlike other files and directories. (Why? I thought the green background was only for the sticky bit.) What did I do wrong, and how can I get the TFTP server working?
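
    The ls output above is very likely the giveaway (hedged, but it fits the NAK): a directory needs the execute/search bit to be entered, and drw-rw-rw- has none, so in.tftpd gets "permission denied" on every RRQ. The green background is ls's colouring for an other-writable, non-sticky directory, which drw-rw-rw- also happens to be:

        chmod 755 /tftpboot      # rwxr-xr-x restores the search bit
        chmod 644 /tftpboot/1    # the requested file itself must be readable too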

    Read the article

  • How to make new file permission inherit from the parent directory?

    - by Wai Yip Tung
    I have a directory called data, and I run a script under the user id 'robot'. robot writes to the data directory and updates files inside it. The idea is that data is open for both me and robot to update, so I set up the permissions and owning group like this (both me and robot belong to robot-grp):

        drwxrwxr-x 2 me robot-grp 4096 Jun 11 20:50 data

    I changed the permissions and owning group recursively to match the parent directory. I regularly upload new files into the data directory using rsync. Unfortunately, newly uploaded files do not inherit the parent directory's permissions as I hoped; instead they look like this:

        -rw-r--r-- 1 me users 6 Jun 11 20:50 new-file.txt

    When robot tries to update new-file.txt, it fails due to lack of file permissions. I'm not sure whether setting umask helps; in any case the new files don't really follow it:

        $ umask -S
        u=rwx,g=rx,o=rx

    I'm often confounded by Unix file permissions. Do I even have the right plan? I'm using Debian lenny.
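
    A hedged combination that usually produces the wanted "inherit from the parent" behaviour: the setgid bit on the directory makes new files take the robot-grp group, and rsync's --chmod can force group-writable permissions at upload time (the paths are placeholders):

        chmod g+s data
        rsync -av --chmod=ug+rw,Dg+s ./src/ me@host:/path/to/data/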

    Read the article

  • Apache2: 400 Bad Request with Rewrite Rules, nothing in error log?

    - by neezer
    This is driving me nuts. Background: I'm using the built-in Apache2 and PHP that come with Mac OS X 10.6. I have a vhost set up as follows in /private/etc/apache2/users/neezer.conf:

        NameVirtualHost *:81

        <Directory "/Users/neezer/Sites/">
            Options Indexes MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

        <VirtualHost *:81>
            ServerName lobster.dev
            ServerAlias *.lobster.dev
            DocumentRoot /Users/neezer/Sites/lobster/www

            RewriteEngine On
            RewriteCond $1 !^(index\.php|resources|robots\.txt)
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php/$1 [L,QSA]

            LogLevel debug
            ErrorLog /private/var/log/apache2/lobster_error
        </VirtualHost>

    The code in the lobster project is PHP with the CodeIgniter framework. Trying to load http://lobster.dev:81/ gives me:

        400 Bad Request

    Normally I'd go check my logs to see what caused it, yet my logs are empty! I looked in both /private/var/log/apache2/error_log and /private/var/log/apache2/lobster_error, and neither records ANY message relating to the 400. I have LogLevel set to debug in /private/etc/apache2/httpd.conf. Removing the rewrite rules gets rid of the error, but these same rules work on my MAMP host. I've double-checked and rewrite_module is loaded in my default Apache installation. My httpd.conf can be found here: https://gist.github.com/1057091

    What gives? Let me know if you need any additional info. Note: I do NOT want to add the rewrite rules to .htaccess in the project directory (it's checked into a git repo and I don't want to touch it).
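
    One hedged next step, since LogLevel alone won't surface mod_rewrite's decisions: Apache 2.2 has a dedicated rewrite log. Add it inside the vhost (the log path is assumed), then check syntax and reload:

        # add to the <VirtualHost *:81> block:
        #   RewriteLog /private/var/log/apache2/lobster_rewrite
        #   RewriteLogLevel 9
        sudo apachectl configtest && sudo apachectl graceful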

    Read the article

  • Very slow browsing shared folder XP client/host

    - by Ickster
    I have a pretty straightforward setup where I'm storing media files on an XP Pro machine and sharing the folder to be accessed by other XP Pro machines around the house. (Typically only one client accesses the share at a time, although several may have it mounted.) It's been working just fine for years, but I've recently started having problems.

    A couple of days ago, the host PC had power disconnected while it was running. It was restarted and everything seemed fine initially, but since then, browsing the shared folder from client machines has been extremely slow and actually reading data is all but impossible. The problem exists in every access method I've tried: Windows Explorer, VLC dialogs, command line, etc.

    My first thought was that the disk was experiencing problems, but there are no problems viewing the files locally on the host machine. My second thought was a network problem on the host machine, so I removed and reinstalled drivers for the NIC, with no change. My third thought was a problem elsewhere on the network, so I swapped out hardware, to no avail.

    I'm regrouping and trying to come up with a methodical approach to figuring out what might be wrong. I would of course be thrilled if you can suggest specific problems (Microsoft KB articles, etc.) that I might check, but I'm not expecting a silver bullet. If you can help me outline an approach to identify the problem (including recommended tools, e.g., disk checkers, network analyzers, etc.), I'd greatly appreciate it.
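
    A few hedged starting points on the host, given that the trouble began with a power cut (the drive letter and IP are placeholders):

        rem check the volume holding the share for damage:
        chkdsk D: /f
        rem large-packet pings often expose NIC or driver damage:
        ping -l 8192 192.168.1.50
        rem offload settings sometimes misbehave after a crash:
        netsh interface ip show offload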

    Read the article

  • How to get multipath working for Ubuntu Server 12.04

    - by mlampi
    I'm working on a project that aims to run Ubuntu servers on enterprise-class hardware. In our case that means IBM HS23E blade servers, QLogic 4 Gb Fibre Channel expansion cards, and a quite old IBM DS4500 disk array with two controllers. At the moment we have Fibre Channel as the only boot option, and Ubuntu Server 12.04 installed just fine and is able to boot without multipath. I'm not a Linux professional myself, but our team has people who understand the technical side.

    The current situation is that we have only one Fibre Channel connection to a single disk array controller. A real-life deployment would of course be quite different: at minimum we should have two Fibre Channel ports connected to two different switches and two different controllers. However, we have no idea how to set up the multipath tooling. Is DM-MPIO the right software? At minimum we should be able to boot when multiple connections are available, and achieve fault tolerance when any of them goes down.

    Since the disk array is not the latest hardware, I managed to find RDAC driver sources only for the 2.6.x kernel, and we have 3.2.x. Another issue is building a multipath.conf. The said driver sources are from IBM support, and the QLogic drivers provided to the Ubuntu installer are from the Ubuntu site. It seems that RHEL and SLES would have near out-of-the-box support, but they are not an option for our project.

    Actual questions:

    - What is the recommended multipath software for Ubuntu Server 12.04? (See the hedged starting point after this list.)
    - Are there pre-made configurations or templates available? Does it require disk-array/controller-specific settings, or does a more generic config work?
    - Do you have experience with a similar setup and want to share the knowledge?

    I'll provide you with any additional information you might require. Thanks in advance.
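
    A hedged starting point on 12.04, where DM-MPIO is indeed the usual tool; the packages and commands below are the stock ones, but the DS4500-specific device section of multipath.conf is deliberately left out since it needs hardware-specific values:

        sudo apt-get install multipath-tools multipath-tools-boot
        sudo multipath -ll    # lists the paths DM-MPIO discovered
        sudo multipath -v3    # verbose view, useful while building multipath.conf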

    Read the article

  • Slow File Copy observed copying 40GB files across network to iSCSI device

    - by Rick
    Here's a curious one for the gurus. The setup:

    Source machine: Windows Server 2003 R2 with a local hard drive holding a 40 GB VHD file; 1 x 1 Gbps network card, Cat6 cable, switch.

    Target machine: Windows Server 2008 R2 with an iSCSI connection to an iSCSI target on a separate machine (1 TB, RAID 5); 1 x 1 Gbps network card, Cat6 cable, connected to the same switch as the source machine; a second 1 Gbps network card, Cat6 cable, connected via an isolated switch to the iSCSI target. The switches are Netgear JGS524 models (web managed).

    - If I copy from the Win2003R2 machine to the Win2008R2 machine's local drive, I get 40 GB in 45 minutes 36 seconds.
    - If I copy from the Win2008R2 machine's local drive to the iSCSI target, I get 40 GB in 37 minutes 56 seconds.
    - If I copy from the Win2003R2 machine to the iSCSI target via the Win2008R2 machine, I get 40 GB in 3 hours 50 minutes 24 seconds.

    All copies were done via the following command issued on the Win2008R2 box:

        XCOPY <source> <target> /J

    (/J copies using unbuffered I/O, recommended for very large files.)

    So, what's the bit I'm missing here? Why does the back-to-back pair of copies take 1 hour 23 minutes 32 seconds in total, while the "straight through" copy takes almost three times as long? The switches show no errors, and the network hovers around the 3% utilisation mark for the duration of the straight-through copy (whereas the back-to-back copies run around the 25% utilisation mark). What have I missed?
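
    For an apples-to-apples retest, a hedged alternative on the 2008 R2 box (whose robocopy, unlike 2003's, supports unbuffered I/O) pulling straight from the 2003 share; the names are placeholders:

        robocopy \\win2003\share D:\staging bigfile.vhd /J /NP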

    Read the article

  • HTTP responses curl and wget different results

    - by Fab
    To check the HTTP response headers for a set of URLs, I send the following request headers with curl:

        foreach ($urls as $url) {
            // Setup headers - I used the same headers from Firefox version 2.0.0.6
            $header[] = "Accept: text/xml,application/xml,application/xhtml+xml,";
            $header[] = "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
            $header[] = "Cache-Control: max-age=0";
            $header[] = "Connection: keep-alive";
            $header[] = "Keep-Alive: 300";
            $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7";
            $header[] = "Accept-Language: en-us,en;q=0.5";
            $header[] = "Pragma: "; // browsers keep this blank.

            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_USERAGENT, 'Googlebot/2.1 (+http://www.google.com/bot.html)');
            curl_setopt($ch, CURLOPT_HTTPHEADER, $header);
            curl_setopt($ch, CURLOPT_REFERER, 'http://www.google.com');
            curl_setopt($ch, CURLOPT_HEADER, true);
            curl_setopt($ch, CURLOPT_NOBODY, true);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
            curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
            curl_setopt($ch, CURLOPT_TIMEOUT, 10); // timeout 10 seconds
        }

    Sometimes I receive 200 OK, which is good; other times 301, 302 or 307, which I consider good as well. But at other times I receive odd statuses like 406, 500 and 504, which should identify an invalid URL, yet when I open those URLs in the browser they are fine. For example, the script returns:

        http://www.awe.co.uk/  =>  HTTP/1.1 406 Not Acceptable

    while wget returns:

        wget http://www.awe.co.uk/
        --2011-06-23 15:26:26--  http://www.awe.co.uk/
        Resolving www.awe.co.uk... 77.73.123.140
        Connecting to www.awe.co.uk|77.73.123.140|:80... connected.
        HTTP request sent, awaiting response... 200 OK

    Does anyone know which request header I am missing or sending in excess?
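
    A hedged observation you can test with the curl CLI: the PHP above splits the Accept header across two array entries, so the second fragment goes out as an invalid header line, and a picky server may answer 406 Not Acceptable to exactly that; sending it as one header matches the browser behaviour:

        curl -I -A 'Googlebot/2.1 (+http://www.google.com/bot.html)' \
             -H 'Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5' \
             http://www.awe.co.uk/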

    Read the article

  • What Wireless Router/ADSL Modem to get? N-band a must!!

    - by JJarava
    I'm looking for a dual-band N router or ADSL gateway and I'd like some recommendations.

    Situation: I have an 802.11b/g ADSL gateway provided by my telco, but the Wi-Fi signal won't cover the whole house (especially the living room, so my TV-connected Mac mini has poor to no internet access). So I'm looking either to replace the DSL modem with an N-enabled one, or to add a router to the mix. I've had a modem+router setup for many years, and I know the advantages (double NAT, double firewall = more security) and issues (more complex to troubleshoot, two possible points of failure), so I'd rather live with a single device (an ADSL gateway) if possible.

    Requirements:

    - Dual-band N (300 Mbps Wi-Fi)
    - 1 Gb Ethernet ports
    - ADSL2+ support (if it's an ADSL gateway, which would be desirable)
    - The best range and speed possible

    Nice to have:

    - USB port to share disks/printers on the network
    - Media streaming

    I've been a long-time user of Linksys, and googling around I found the WRT610N (http://www.linksysbycisco.com/US/en/products/WRT610N) from a pure-router perspective; it's one of those Linksys styles "N++" (http://www.linksysbycisco.com/US/en/promo/Promotion-Go-Wireless?stepname=Promotion-Step-Go-Wireless-High-Performance). But I haven't been able to find similar ADSL gateways. I've found the WAG320N, but there is little to no info on the Linksys site (i.e., I don't know if it's dual-band or whether it has Gb Ethernet). Any opinions, recommendations or suggestions are more than welcome.

    Read the article

  • How do I import large sql file to local LAMP (xampp) environment

    - by mraslton
    I have used Linux to import a large MySQL dump file (into a new database), but I'm new to how the process works in a local LAMP environment using XAMPP, as XAMPP does not support SSH. I've downloaded large_dump_file.sql from the Linux server to my local system. I'm using Windows XP and have used XAMPP to set up LAMP. I am able to access the local_database via phpMyAdmin, but the dump file is too large to import using that app.

    I'm trying to import the file via the command prompt, but so far with no success. At the prompt I do:

        cd ..
        cd ..
        cd xampp
        cd mysql
        cd bin

    I've found that mysqlimport is used to import .csv and .txt files, and mysql is used to import .sql files, but I can't find documentation on whether to use the -u and -p options, so I've tried many variations of the command with no luck. What would be the proper command?

    I've modified the hosts file, the virtual-hosts conf, and the Apache config on my local system. Do I need to change any other config files? Thanks.
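
    A hedged sketch of the usual invocation, run from xampp\mysql\bin, with a placeholder dump path and the database name from above; the bare -p makes it prompt for the password:

        mysql -u root -p local_database < C:\dumps\large_dump_file.sql
        rem if the import dies partway through, a larger packet cap often helps:
        mysql -u root -p --max_allowed_packet=256M local_database < C:\dumps\large_dump_file.sql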

    Read the article

  • SBS 2011 Essentials and too many new Mac users

    - by Harry Muscle
    We currently have about 15 users on a Windows SBS 2011 Essentials server. I've just been informed that we plan to bring aboard about 15 more users who will be using Macs. We'll be using a Mac server to manage the 15 new Macs, but I'm looking for advice on how best to set this all up.

    Ideally I would just add the 15 new Mac users to Active Directory and set up the Mac server to authenticate against AD. Unfortunately, SBS 2011 Essentials has a limit of 25 users, so adding these new users to AD won't work unless we upgrade the Windows server, which I'd rather avoid since it's a lot of work and a lot of money. That leaves the option of creating accounts for the 15 Mac users on the Mac server only. The problem that creates is how to share files between Mac users and Windows users, since they would then be authenticating against different systems.

    Any advice short of "upgrade to SBS Standard" is highly appreciated. Thanks, Harry. P.S. We don't run Exchange or anything else on our server; it's mainly used for file sharing and enforcing security via group policies.

    Read the article

  • Webmin ADSL module

    - by expatcm
    I was wondering if the Webmin ADSL module is going to help me solve a problem, but I cannot find any documentation telling me what the module actually does. Any ideas?

    Here's the problem I'm hoping it solves. I'm in the process of setting up a Debian server. I will use the DHCP server that's part of the Debian setup to manage the LAN IP addresses, and I want to turn off the external DHCP server that's part of the Linksys ADSL modem/router and use it as just a modem. The challenge is knowing what I need to do to get the public DNS on eth1. When I turn off the DHCP on the modem/router, not a lot happens, apart from my no longer being able to access its settings. So I'm looking at this Webmin module and wondering if it's meant to manage the ADSL connection and find the public DNS address. The local DHCP server is working well for the LAN; I'm just stuck on the external DNS.

    Read the article

  • Solaris 10: cannot ping to/from server

    - by anurag kohli
    All, I have a Solaris 10 server that is not reachable by IP (i.e., I can't ping to or from the server). I believe I have the default route set up correctly. See below:

        # ifconfig -a
        lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
                inet 127.0.0.1 netmask ff000000
        bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
                inet 192.168.62.100 netmask ffffff00 broadcast 192.168.62.255
                ether 0:14:4f:b1:9b:30

        # netstat -rn
        Routing Table: IPv4
          Destination           Gateway            Flags  Ref   Use   Interface
        -------------------- -------------------- ----- ----- ------ ---------
        192.168.62.0         192.168.62.100       U         1     40 bge0
        224.0.0.0            192.168.62.100       U         1      0 bge0
        default              192.168.62.1         UG        1      0
        127.0.0.1            127.0.0.1            UH        1      4 lo0

        # cat /etc/defaultrouter
        192.168.62.1

    I have verified that layer 1 and layer 2 are up on the switch port and that it's on the correct VLAN. I have also checked that the default gateway (192.168.62.1) is in fact reachable, since I can ping it from my PC:

        Pinging 192.168.62.1 with 32 bytes of data:
        Reply from 192.168.62.1: bytes=32 time=1ms TTL=254
        Reply from 192.168.62.1: bytes=32 time=1ms TTL=254
        Reply from 192.168.62.1: bytes=32 time=3ms TTL=254
        Reply from 192.168.62.1: bytes=32 time=6ms TTL=254

    I'm at a loss as to what is wrong. I would highly appreciate your assistance. Thank you very much.
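
    Some hedged next steps from the Solaris console, given that link and VLAN already check out:

        dladm show-dev                   # link state and speed as the driver sees them
        arp -a | grep 192.168.62.1       # was the gateway's MAC ever learned?
        snoop -d bge0 icmp               # watch whether echo requests arrive at all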

    Read the article

  • Changes to grub in ubuntu 10

    - by jdege
    I've been running CentOS 5 for some years and have decided to upgrade to Ubuntu; with 10.04 just out, this seemed like a good time. I'm a tad paranoid, so I started off with a new set of drives: one to install on, one to back up to, and one as a spare. I removed my existing CentOS 5 drives and did an install with no problems, using the server version and the default full-disk LVM layout. Next, I copied my backup scripts over, edited them to work with the new configuration, and did a test backup. That worked fine as well.

    Then comes the real test: could I restore the backup onto the spare drive? (I won't put anything of importance on a system that doesn't have a reliable backup, and if I've never done a restore, it's not reliable.) I booted from a System Rescue CD (version 1.5.3) with the spare drive as /dev/sda and the backup drive as /dev/sdb. I had no trouble partitioning, configuring LVM, formatting, making swap, or restoring the file systems. But when I got to restoring grub to the MBR, I ran into problems. My restore instructions from CentOS 5 said to run grub, then enter two commands:

        root (hd0,0)
        setup (hd0)

    The first command exits with an error: "Checking if /boot/grub/stage1 exists ... no". I did some googling around and found that the GRUB 2 included in recent Ubuntus is very different from the GRUB 0.97 included in CentOS 5. One site suggested I use:

        grub-install --root-dir=/mnt/restore /dev/sda

    That appeared to work, but when I booted from the drive, I ended up at a grub prompt. Any ideas as to what I need to do? It seems like a simple problem, but my attempts at searching out answers on the web are being swamped by references to the old version of GRUB. Help would be appreciated.
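
    A commonly suggested recovery sequence, hedged: a bare grub prompt usually means the core image loaded but found no grub.cfg, and running the GRUB 2 tools from inside a chroot of the restored system regenerates it (this assumes the restored root is still mounted at /mnt/restore):

        mount --bind /dev  /mnt/restore/dev
        mount --bind /proc /mnt/restore/proc
        mount --bind /sys  /mnt/restore/sys
        chroot /mnt/restore
        grub-install /dev/sda
        update-grub        # writes /boot/grub/grub.cfg so the prompt has a menu to load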

    Read the article

  • Apache is spawning more and more processes!!

    - by erotsppa
    We have a LAMP setup that has been working pretty well for half a year. All of a sudden, today, the Apache server started to die (the MySQL servers are not on this box). It seems to spawn more and more processes over time, until eventually it consumes all the memory and the server just dies. We are using prefork.

    In the meantime, what we've done is add more RAM and increase the MaxClients and ServerLimit parameters to 512. We're just prolonging the crash; the number still creeps up slowly, and in maybe a day it will reach that limit too.

    What is going on? We only have around 15-20 requests per second. We have 1 GB of memory and it's not even half used, and there's no swapping going on. Why is Apache creating more and more processes? It's almost like there's a leak somewhere! The database boxes are fine; they are not causing a delay to requests. We tested some queries and everything is quick.
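
    Some hedged diagnostics before raising the limits further: if children are hanging on a slow backend rather than leaking, the worker states will show it (assumes mod_status is enabled; the process name may be httpd instead of apache2):

        apachectl fullstatus
        ps -o pid,rss,etime,args -C apache2 --sort=rss | tail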

    Read the article
