Search Results

Search found 25660 results on 1027 pages for 'booting issue'.


  • Webservice randomly dropping connections - possibly due to firewall nonevent data?

    - by adam
    I have a hosted webapp which requests data from a REST webservice in our office. Each page calls one (or several) webservices, which go from our host, via our firewall (a Watchguard Firebox), to a server in our office. All of a sudden, the app has dramatically slowed. We have determined that the webservice is timing out at random when called externally (it's fine when called within the office network). I'm pretty certain it's our connection which is dropping the webservice call, so I've written a quick PHP/curl script which calls the webservice over many iterations and shows the various timings. Below is an example output, showing both a failed and a successful call (with a 5 second timeout):

        #  http_code  namelookup_time  connect_time  pretransfer_time  starttransfer_time  total_time
        1  0          0.000096         0.0342        0.0000            0.0000              0.0342
        2  200        0.000052         0.0332        0.1327            0.1751              0.1752

    As per iteration #1 above, failed requests seem to be failing between connect and pretransfer. I'm not sure if this shows that the connection successfully passed the firewall, or whether the firewall could still cause an issue. Our firewall is showing a series of nondata event log messages for the relevant access rule. Our IT team tells me these are routine, although I can find no mention of them in Google. I'm not sure if this fits in between connect and pretransfer. Having eliminated the webservice server (by testing internally) and the live webapp (by testing different code on different external servers), I am left suspecting the connection to the office. Could the Firebox nondata events be causing a problem between connect and pretransfer?
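
    For reference, a timing loop like the one described can be reproduced with curl's -w formatting from any shell; this is a minimal sketch (the URL is a placeholder, and the 5-second timeout matches the one mentioned above):

        #!/bin/bash
        # print one timing line per request so failures between phases stand out
        URL="http://example.com/webservice"   # placeholder endpoint
        echo "http_code namelookup connect pretransfer starttransfer total"
        for i in $(seq 1 50); do
            curl -s -o /dev/null --max-time 5 \
                 -w "%{http_code} %{time_namelookup} %{time_connect} %{time_pretransfer} %{time_starttransfer} %{time_total}\n" \
                 "$URL"
        done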


  • How can I connect integrated webcam with virtualbox

    - by Mike Stumpf
    I am trying to use a Windows XP VM in VirtualBox on my Windows 8.1 laptop. I have tried the usual approach of attaching the USB device, but I get an error saying "USB device is busy with a previous request". My webcam is not active in any applications, and this happens after a clean reboot of the host, the guest, and VirtualBox. Here are the details:

    Host:
    - HP Pavilion 17 Notebook PC (stock)
    - Windows 8.1
    - AMD A10-5750M APU
    - HP Truevision HD (integrated webcam)

    VM: I got the VM here: http://www.modern.ie/en-us/virtualization-tools

    VirtualBox:
    - VirtualBox 4.3.12 installed
    - VirtualBox Extension Pack installed
    - Guest Additions are installed for 4.3.12
    - "Enable USB Controller" is checked
    - It does not matter whether the USB 2.0 controller is enabled or not
    - It does not matter whether a USB device filter is set up for the webcam or not

    Here is the error message:

        Failed to attach the USB device DDFEQ01G45BFBV HP Truevision HD [0004] to the virtual machine IE8 - WinXP.
        USB device 'DDFEQ01G45BFBV HP Truevision HD' with UUID {7a2e2a45-974d-482b-9b4e-9f9abbcd0ebb} is busy with a previous request. Please try again later.
        Result Code: E_INVALIDARG (0x80070057)
        Component: HostUSBDevice
        Interface: IHostUSBDevice {173b4b44-d268-4334-a00d-b6521c9a740a}
        Callee: IConsole {8ab7c520-2442-4b66-8d74-4ff1e195d2b6}

    I read on some VirtualBox forums that disabling USB 2.0 support in the host BIOS solved their issue, but I wanted to know if there were any other ideas before I muck around in there. Thanks
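
    As a diagnostic aside, the same attach can be attempted from the command line, which sometimes gives a more specific error than the GUI; a minimal sketch, assuming the VM name and device UUID from the error above:

        rem list host USB devices and their current state (Busy/Captured/Available)
        VBoxManage list usbhost
        rem try attaching by UUID while the VM is running
        VBoxManage controlvm "IE8 - WinXP" usbattach 7a2e2a45-974d-482b-9b4e-9f9abbcd0ebb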


  • CIFS mounted drive setting "sticky bit" on all files, cannot change permissions or modify files

    - by mattmcmanus
    I have a folder mounted on an Ubuntu 8.10 server through cifs that I simply cannot change the permissions on once mounted. Here is a breakdown of what's going on: All files within the mounted folder automatically have their permissions set to -rwxrwSrwx, regardless of whether the file is created on the Windows server or on the Linux machine. I have the same directory mounted on two other Linux servers (both running 9.10 instead of 8.10) with no problems at all. They all use the same fstab options and the same credentials:

        //server/folder /media/backups cifs credentials=/etc/samba/.arcadia_cred,noexec,noserverino 0 0

    I've run chmod a million different ways, all of which report successfully changing the permissions. However, the permissions never actually change. The issue began after I updated from 8.04 to 8.10. Any idea why this may be happening on one machine? Since it started after an upgrade, I'm not sure what the best thing to do is. Any help you could give would be great! None of my automated backup scripts are working because of this!
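
    One thing worth noting: with cifs, the permission bits shown locally are synthesized by the mount rather than stored per-file (unless the server supports the CIFS Unix extensions), so chmod often has no durable effect. A hedged fstab sketch that pins the modes explicitly (the 0644/0755 values are illustrative, not from the original post):

        //server/folder /media/backups cifs credentials=/etc/samba/.arcadia_cred,noexec,noserverino,file_mode=0644,dir_mode=0755 0 0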


  • Gmail.com detects mail as spam, but the server is not on any blacklist

    - by Tomer W
    I have an issue with Google (Gmail, to be exact). About 1 month ago, we had a security breach, and mail was relayed through our servers. We got listed in almost ALL blacklists :( We fixed the problem and requested removal from the blacklists, which was granted easily. Currently (over 3 weeks), we are not sending any spam anymore. Furthermore, we are clear of all the blacklists (MxToolBox Black-List Search Result). But Gmail still refuses to accept anything from the server, stating '550 Spam'. Following is a telnet attempt to send to Gmail:

        220 mx.google.com ESMTP g47si45436208eep.123
        helo megatec.co.il
        250 mx.google.com at your service
        mail from: <[email protected]>
        250 2.1.0 OK g47si45436208eep.123
        rcpt to: <[email protected]>
        250 2.1.5 OK g47si45436208eep.123
        Data
        354 Go ahead g47si45436208eep.123
        Test123
        .
        550-5.7.1 [62.219.123.33 11] Our system has detected that this message is
        550-5.7.1 likely unsolicited mail. To reduce the amount of spam sent to Gmail,
        550-5.7.1 this message has been blocked. Please visit
        550-5.7.1 http://support.google.com/mail/bin/answer.py?hl=en&answer=188131 for
        550 5.7.1 more information. g47si45436208eep.123
        Connection to host lost.

    I tried filling in the form at Gmail - Report Delivery Problem. I also tried reaching Google by phone, but the message was to go to the link mentioned above. I checked reverse DNS and it is OK. We don't have TLS, but that shouldn't be a problem, should it? Note: we are not a bulk sender. Anyone have an idea what could be blocking our IP? Anyone know who can be contacted in order to resolve this listing?
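
    Beyond blacklists, Gmail weighs the sender's DNS records heavily, so these checks are worth running from any Linux host; a minimal sketch, assuming 62.219.123.33 (from the rejection above) is the sending IP:

        # the PTR record should resolve and match the HELO hostname
        dig +short -x 62.219.123.33
        # look for an SPF record (v=spf1 ...) authorizing that IP to send for the domain
        dig +short txt megatec.co.il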


  • Ubuntu Server mdadm drbd ocfs2 kvm hangs under heavy file reading

    - by Stefano Annese
    I have deployed four Ubuntu 10.04 servers. They are coupled two by two in a cluster scenario. On both sides we have software RAID1 disks, DRBD8, and OCFS2, and on top of it some KVM machines run with qcow2 disks. I followed this: Link. Corosync is just used for DRBD and OCFS; the KVM machines are run "manually". When it works, it is fine: good performance, good I/O. But at a given time, one of the two clusters started hanging. Then we tried with just one server turned on, and it hangs the same way. It seems to happen when a heavy READ occurs in one of the virtual machines, that is, during an rsync backup. When this occurs, the virtual machines are not reachable any more; the real server responds to ping with good delay, but no screen and no SSH are available. All we can do is force shutdown (hold the button) and restart, and when it turns on again, the RAID on which DRBD relies is resyncing. We see this every time it hangs. After a couple of weeks of pain on one side, this morning the other cluster also hung, but it has a different motherboard, RAM, and KVM instances. What is similar is the rsync read scenario and the Western Digital RAID Edition disks on both sides. Can anybody give me some input to solve this issue?
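
    When a box wedges like this but still answers ping, blocked-task backtraces are usually the most useful evidence; a minimal sketch to arm before the next hang (assuming magic SysRq is available on this 2.6.32 kernel):

        # log a warning with a stack trace whenever a task sits in D state for 2 minutes
        sysctl kernel.hung_task_timeout_secs=120
        # enable magic SysRq, and if a console still responds during the hang:
        echo 1 > /proc/sys/kernel/sysrq
        echo w > /proc/sysrq-trigger   # dump blocked tasks to dmesg/syslog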


  • Cause of slow download speed on a particular EC2 instance?

    - by James
    I have a networking issue I'm trying to solve. I have two EC2 instances, same zone, same type. On one of the two EC2 instances (the 'bad' instance), the download speed is really poor (200 KB/s), while on the other (the 'good' instance), the download speed is fine (comfortably 30 MB/s+). To clarify, I'm talking about downloading files to the EC2 instance while ssh'd into the server, e.g. running wget with a large file. I've tried different files, including S3 objects and a large Linux ISO from elsewhere. Running ethtool eth0 only returns 'Link detected: yes' for both. When running ifconfig, both return much the same, except that the good instance shows no error packets while the bad instance shows many:

        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:168372370 errors:5075643 dropped:0 overruns:0 frame:0
        TX packets:122116480 errors:0 dropped:0 overruns:0 carrier:0

    Both servers are configured the same, or at least they were supposed to be. How can I go about diagnosing the cause of the slow download speed? Is there anything particular to EC2 instances that could cause this? Having trouble knowing where to start. Thanks for any help!
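
    A quick way to confirm the errors correlate with the slowdown is to watch the counters while a transfer runs; a minimal sketch (ethtool -S may return nothing on some EC2 virtual NIC drivers, hence the fallback):

        # highlight counter changes every 2 seconds while a wget runs
        watch -d 'ifconfig eth0 | grep -E "RX packets|TX packets"'
        # per-driver stats, if the virtual NIC exposes them
        ethtool -S eth0 2>/dev/null | grep -i err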


  • pgpoolAdmin keeps going straight back to its login page

    - by user705142
    I'm trying to run pgpoolAdmin through nginx. It seems to be working properly, at least initially: I've gone through the initial set-up, which works fine, but now after logging in, every link takes me straight back to the login page. It also shows Japanese text instead of English, despite my picking English in the installation. It seems to me as if it were unable to save any user data, session information, etc. I have JavaScript/cookies enabled, so it's not that. The ownership of the folder is nginx, and so too is pgmgt.conf.php, so it shouldn't be a problem with permissions. One potential issue is that I can't see any confirmation in the phpinfo screen that PHP PostgreSQL support is enabled, despite the correct package being installed and in the config line. Any ideas as to what's happening here? The nginx rules are pretty standard:

        server {
            # pg-pool admin
            listen 997;
            server_name localhost;
            root /opt/pgpooladmin;
            index index.php;
            location ~ .php$ {
                fastcgi_pass_header Set-Cookie;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_index index.php;
                include fastcgi_params;
            }
        }
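
    Symptoms like "every link returns to login" usually mean PHP sessions are not being persisted; a minimal sketch of checks, assuming php-fpm runs as the nginx user (the session path varies by distro, so read it from php -i first):

        # confirm the session save path and whether the pgsql extension loaded
        php -i | grep -E 'session.save_path|pgsql'
        # the path reported above must be writable by the php-fpm user
        ls -ld /var/lib/php/session        # adjust to the reported path
        chown -R nginx:nginx /var/lib/php/session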


  • Haproxy not properly passing on X-Forwarded-For header

    - by JesseP
    I have backend web servers that receive requests by way of haproxy -> nginx -> fastcgi. The web app used to see multiple IPs coming through in the X-Forwarded-For header, chained together with commas (most original IP on the left). At some point in the recent past (just noticed, so not sure what caused it) something changed, and now I'm only seeing a single IP passed in the header to my web application. I've tried haproxy 1.4.21 and 1.4.22 (recent upgrade), with the same behavior. Haproxy has the forwardfor header set:

        option forwardfor

    The nginx fastcgi_params config defines this header to be passed to the app:

        fastcgi_param HTTP_X_FORWARDED_FOR $http_x_forwarded_for;

    Anyone have any ideas on what might be going wrong here?

    EDIT: I just started logging the $http_x_forwarded_for variable in the nginx logs, and nginx is only ever seeing a single IP, which shouldn't ever be the case, as we should always see our haproxy IP added in there, right? So the issue must either be in nginx's handling of the variable coming in, or haproxy not building it properly. I'll keep digging...

    EDIT #2: I enabled request and response header logging in HAProxy, and it is not spitting anything out for X-Forwarded-For, which seems very odd:

        Oct 10 10:49:01 newark-lb1 haproxy[19989]: 66.87.95.74:47497 [10/Oct/2012:10:49:01.467] http service/newark2 0/0/0/16/40 301 574 - - ---- 4/4/3/0/0 0/0 {} {} "GET /2zi HTTP/1.1" O

    Here are the options I set for this in my frontend:

        mode http
        option httplog
        capture request header X-Forwarded-For len 25
        capture response header X-Forwarded-For len 25
        option httpclose
        option forwardfor

    EDIT #3: It really seems like haproxy is munging the header and just passing a single one on to the backend. This has a real impact on our production service, so if anyone has any ideas it would be greatly appreciated. I'm stumped... :(
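
    One detail worth checking, offered as a hedged suggestion: in haproxy 1.4, a keep-alive connection is only analyzed for its first request unless a close/server-close option switches it to per-request processing, and "option httpclose" merely adds Connection: close headers without doing that, so later requests on the same client connection can pass through with their headers untouched. A sketch of the frontend with the alternative option:

        mode http
        option http-server-close   # process (and rewrite headers on) every request
        option forwardfor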


  • Windows 7 64-bit installation from alternative media (no DVD/USB Flash drive)

    - by Niels Willems
    Greetings. I currently have Windows 7 x86 installed on my computer, and I want to install Windows 7 x64 on a different partition. However, there is a little issue: I cannot run the x64 installer from within the Windows 7 x86 that I currently have. I was planning to install Windows 7 x64 on another partition, then boot up from that partition to install it on the partition I actually want my OS on. Once that was complete, I could just format the partition holding the Windows 7 x64 I no longer needed. But the installer will not run from the x86 version of Windows 7, even though I do not want to upgrade that Windows directly. The reason I'm doing this in such a roundabout way is that my optical drive is broken, and I'm really not into buying a new one since I would use it about once a year. I also don't have a USB flash drive big enough to hold the installation files. As far as I'm aware, I cannot use an external hard drive such as this one, which I do have. Are there any alternatives by which I can install Windows 7 x64, or am I forced into buying a USB flash drive or a new optical drive? Thank you in advance for your replies.

    Edit: This picture shows the current partitions on my laptop. I want to get Windows 7 x64 on the C partition, but I would have to install it first on the F partition, then boot up the F-partition Windows to format C and install x64 on that one. My external drive is J.

    Edit 2: I have no alternative computer with a DVD drive; the install files are located in an ISO from MA3D. To install my 32-bit version I mounted the ISO in Daemon Tools to replace my Windows Vista, but since I cannot run a 64-bit installer from within my 32-bit OS, this doesn't work.
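
    For what it's worth, an external hard drive can usually be made to boot the Windows 7 installer the same way a flash drive can; a hedged sketch from an elevated command prompt, assuming the external drive is disk 1 and gets letter J:, and the ISO is mounted at E: (this wipes the external drive):

        diskpart
        list disk
        select disk 1
        clean
        create partition primary
        active
        format fs=ntfs quick
        assign
        exit
        rem copy the mounted ISO's contents to J:, then write the Win7 boot sector
        xcopy E:\*.* J:\ /e /h
        J:\boot\bootsect.exe /nt60 J: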


  • Computer locking up, looking for bootable hardware diagnostic tool.

    - by Carl Menke
    Well, today I helped my friend build a computer. All went pretty well until we got to installing Win7; thing is, it was crashing constantly. I adjusted pretty much every setting in the BIOS and removed as much hardware as possible to try to prevent a crash. No dice. So far I've tried running an Ubuntu live CD without the hard drive installed. Nope, crashed on boot. Then I tried Microsoft's RAM utility disk, and it eventually locked up on that too (the RAM passed, though). So it seems to me like it's either the CPU (AMD Phenom II X3) or the motherboard that could be bad, but I don't know how to test them individually for problems. I thought it could be an overheating issue, but the BIOS reports that the CPU temp is fine, idling around 34C. Any advice or diagnostic disk that could help me out?

    TL;DR: Computer locks up frequently during use (cannot even boot/install an operating system); memory is fine; probably CPU or mobo; BIOS says CPU temps are fine. What should I try?


  • Volume licensed copy of MS Office 2007 shows "Non Commercial Use" in title bar

    - by Linker3000
    I have just removed the demo copy of Office 2007 preinstalled on a new laptop and replaced it with an install of the full professional edition downloaded from the MS Volume Licensing site and installed one of our volume licence keys, yet the apps (Word etc.) show "Non Commercial Use" in the title bar, which is what usually happens in the Home and Student edition. I have tried: Deleting the Office registration keys in the registry and using one of our other Office 2007 volume licence keys (we have 7) when prompted to re-register Uninstalling Office completely and reinstalling it from a newly-downloaded ISO burned to CD and also from a compressed file that installs from hard disk/USB stick (both from Microsoft - no dodgy stuff) Yet the non-commercial message persists. Although it's a cosmetic issue, the laptop is going to be used for customer presentations and so the sales person is rightly concerned about the image this portrays. I presume there may be something floating around the registry or in a file somewhere but I can't find it. Articles I have found elsewhere just refer to the message being related to the use of a Home and Student licence key, which is 100% not the case. Any thoughts? Thanks.


  • App-V Problems After SCCM 2007 SP2 Upgrade

    - by GAThrawn
    We're running an SCCM (MS System Center Config Manager, successor to SMS) 2007 environment and delivering a number of applications to clients virtualized using App-V 4.5.1. The App-V apps are delivered by SCCM in download-and-execute, not streaming, mode. The SCCM environment was recently service packed to SCCM 2007 SP2 (amongst other things this gives Win7 support). We also pushed out the updated SCCM clients to our workstations. This seems to have broken file associations for virtual apps for a large number of our users. The users can still open their App-V apps by finding the specific app on their Start Menu and clicking its icon, but double-clicking an associated file in Explorer, or opening an email attachment, gives the "This action is only valid for installed applications" error. There is a Technet blog entry from the App-V team talking about this issue, "Upgrade to ConfigMgr 2007 SP2 may break App-V File Type Associations", but running the script there comes back saying "The User Interface option has been updated" and hasn't fixed the problem for any of our users. Unfortunately the Technet blogs don't seem to have comments switched on, so you can't see how this has worked out for other people. Anyone else had this problem? Have you found any other way to fix it?


  • ASA Slow IPSec Performance with Inconsistent Window Size

    - by Brent
    I have an IPsec link between two sites over ASA 5520s running 8.4(3), and I am getting extremely poor performance when traffic passes over the IPsec VPN. CPU on the devices is ~13%, memory at 408 MB, and active VPN sessions 2; the load on both devices is particularly low. Latency between the two sites is ~40 ms.

    Wireshark screenshot of a file transfer between the two hosts over the firewall IPsec VPN, performing at 10 Mbps; note the changing window size: http://imgur.com/wGTB8Cr

    Wireshark screenshot of a file transfer between the two hosts not going over IPsec, performing at 55 Mbps with a constant window size: http://imgur.com/EU23W1e

    I'm seeing an inconsistent window size when transferring over the IPsec VPN, ranging from 46,796 to 65,535. When performing at 55+ Mbps, the window size is consistently 65,535. Does this show a problem in my configuration of the IPsec VPN on the ASA, or a layer 1/2 issue? Using ping xxxxxx -f -l, I finally get a non-fragmented packet at 1418 bytes, so 1418 + 28 for the IP/ICMP headers = 1446. I know that I have 1500 set on the ASA and Ethernet. I do have "Force Maximum segment size for TCP proxy connection to be" "1380" bytes set under Configuration > Advanced > TCP Options on the ASA. Using iperf, I am getting a "TCP Window Full" every few seconds and ~3 Mbps performance: http://imgur.com/elRlMpY

    Show run on the ASA: http://pastebin.com/uKM4Jh76
    Show cry accelerator stats: http://pastebin.com/xQahnqK3
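
    Since the path MTU works out to 1446 and ESP adds its own overhead on top of that, clamping the MSS lower than the usual 1380 default is one standard workaround; a hedged sketch of the relevant ASA config-mode commands (the 1350 value is illustrative, not a recommendation from the original post):

        sysopt connection tcpmss 1350
        ! alternatively, clear the DF bit so oversized packets can fragment inside the tunnel
        crypto ipsec df-bit clear-df outside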


  • Automated Scanning for Corrupted Archive Files

    - by Synetech inc.
    Hi. I have all kinds of files that I have downloaded from everywhere over the years scattered around my hard drives. I’m in the process of trying to organize them all and have run into a problem. Sometimes a file was not downloaded correctly (this was a big issue when Chromium first came out) and is thus corrupt. For media files, this is relatively simple to determine (it requires actually opening and examining the file). For executables or other binaries, it is relatively difficult (executables may or may not crash, other binaries could be completely unknown). Archive files (e.g. zip, rar, 7z, exe, ace, etc.), however, should be pretty simple: they have a built-in corruption-detection facility. My problem now is that I really don’t want to open and test each and every single archive file throughout my drive; that would be a nightmare. I’m looking for a utility that can automate the process. Is there a program that can scan archive files on a drive and list the ones that are corrupt? Thanks a lot.
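
    In the absence of a dedicated tool, the built-in integrity check can be scripted around 7-Zip's test mode, which reads several of the formats mentioned; a minimal sketch (assumes p7zip is installed and the scan root is a placeholder):

        #!/bin/bash
        # test every archive under /path/to/scan and log the ones that fail
        find /path/to/scan -type f \( -iname '*.zip' -o -iname '*.rar' -o -iname '*.7z' \) |
        while IFS= read -r f; do
            if ! 7z t "$f" >/dev/null 2>&1; then
                echo "CORRUPT: $f" >> corrupt-archives.log
            fi
        done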


  • Exchange 2007 OWA returns blank page with URL xxxxx&reason=0

    - by Dayton Brown
    Hi All: I've just run into an issue with my Exchange OWA. It returns a blank page with the URL string https://www.xxxxxxxx/&reason=0. Nothing in the logs gives me any good reasons. Here's what I've done so far:

    1) Reinstalled Exchange rollup 7.
    2) Recreated the virtual directories.
    3) Rebooted. (This was mostly a shot in the dark, but what the hell.)

    Exchange via RPC/HTTPS is still working great. Anyone run into this before?

    EDIT: Here is the last snippet from the OWA setup log; it doesn't look like anything blew up.

        [09:45:36] *******************************************
        [09:45:36] * UpdateOwa.ps1: 5/27/2009 9:45:36 AM
        [09:45:40] Updating OWA on server HOMER
        [09:45:40] Finding OWA install path on the filesystem
        [09:45:40] Updating OWA to version 8.1.375.2
        [09:45:40] Copying files from 'C:\Program Files\Microsoft\Exchange Server\ClientAccess\owa\Current' to 'C:\Program Files\Microsoft\Exchange Server\ClientAccess\owa\8.1.375.2'
        [09:45:41] Getting all Exchange 2007 OWA virtual directories
        [09:45:42] Found 1 OWA virtual directories.
        [09:45:42] Updating OWA virtual directories
        [09:45:42] Processing virtual directory with metabase path 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa'.
        [09:45:42] Metabase entry 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2' exists. Removing it.
        [09:45:42] Creating metabase entry IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2.
        [09:45:42] Configuring metabase entry 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2'.
        [09:45:43] Saving changes to 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2'
        [09:45:43] Saving changes to 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa'
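
    If "recreate virtual directories" was done through IIS rather than Exchange, redoing it from the Exchange Management Shell is worth a try; a hedged sketch, assuming the server name HOMER from the log above and the default site naming:

        Get-OwaVirtualDirectory | Format-List Identity,OwaVersion
        Remove-OwaVirtualDirectory -Identity "HOMER\owa (Default Web Site)"
        New-OwaVirtualDirectory -OwaVersion Exchange2007 -Name "owa (Default Web Site)"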


  • Excel 2010 data validation warning (compatibility mode)

    - by Madmanguruman
    We have some legacy worksheets that were created in Excel 2003 and are used by LabVIEW-based test automation software. The current LabVIEW software can only handle the legacy .xls format, so we're forced to keep these worksheets as-is for the time being. We've migrated to Office 2010, and when working with these worksheets I see this warning: "The following features in this workbook are not supported by earlier versions of Excel. These features may be lost or degraded when you save this workbook in the currently selected file format. Click Continue to save the workbook anyway. To keep all of your features, click Cancel and then save the file in one of the new file formats." "Significant loss of functionality" "One or more cells in this workbook contain data validation rules which refer to values on other worksheets. These data validation rules will not be saved." When I click 'Find', some cells that do indeed have validation rules are highlighted, but those rules are all on the same worksheet! We're using simple list-based validation, with some cells off to the side containing the valid values (for example, cell B4 has a List with Source "=$D$4:$E$4"). This makes no sense to me whatsoever. First, the workbook was created in Excel 2003, so obviously we couldn't have used a feature that doesn't exist. Second, the modifications we're making don't involve changing the validation rules at all. Third, the complaint Excel is making is incorrect: all of the rules are on the same worksheet as their targets. As if the story weren't bizarre enough, I went ahead and saved the worksheet with Excel 2010, then went to an old computer back in the lab and opened the document with Excel 2003. Guess what - the validations were untouched! My questions are: is this a legitimate bug in Excel 2010, or is this some exotic error in the legacy .xls worksheet that is confusing the heck out of Excel 2010? Has anyone else observed this issue working in compatibility mode?


  • Remote Desktop *from* Windows 2008 R2 Server

    - by freefaller
    Summary: how do I create an RDC connection from a Windows 2008 server to another server?

    Our client will only allow us to connect to their server via a static IP address (which is fair enough), but unfortunately, as we're a very small company, we don't have one in the office. As a workaround, we had the connection working through our old Windows 2003 server (dynamic-cloud from 1and1). However, we have just rebuilt the server to run under Windows 2008 R2 (don't ask, but it was necessary), and now I simply cannot get the connection working. I have added an "Outbound Rule" to Windows Firewall with Advanced Security (TCP, all local ports, remote port 3389 - I have also tried the other way around). I have added a packet filter IP security rule with the same details. The 1and1 firewall rules (through their online control panel) allow 3389 TCP and UDP. But it is simply not connecting (yes, the server is definitely on and able to accept connections), with the general error of:

        Remote Desktop can’t connect to the remote computer for one of these reasons:
        1) Remote access to the server is not enabled
        2) The remote computer is turned off
        3) The remote computer is not available on the network

    Is there anything obvious I've missed - or something I can use to find out where the request is being blocked? The new server is using the exact same IP address as before, so I don't believe that would be an issue. Unless it's trying to use an IPv6 address rather than the old IPv4 address it used before? I apologise that I am not a network person by trade, but I know more than anybody else in my office!!


  • Windows 7 Folder Redirection (GPO)

    - by Kev
    I have been fighting this issue for a day or two now, so I am looking for some insight. I am taking over admin duties in a domain of 800 users, and the previous admins did not employ much of any GPO settings for the clients of the domain. In each site, there is a location on the file server where "Home" folders were manually created, e.g. \\server\home\enduser. Whenever a user got a machine, the admin would right-click on the "My Documents" folder and manually enter the path to the home folder. We are planning to start putting Windows 7 machines on the network, and I want to automate as much as I can of everything that was not done in the past. Since everyone has existing "Home" folders, I have been fighting to get Folder Redirection to work with a new Windows 7 machine (in a test OU). I am getting all kinds of errors, and I can't get the Windows 7 "Documents" folder to redirect to the users' existing home folders. As I stated earlier, all of the home folders were (and still are) manually created on the file server and are set with the following security permissions:

    - Domain Admins: Full Control
    - euser (end user): Modify (everything but Full Control)

    Can someone point me in the right direction on the proper settings to put in the Folder Redirection GPO to get this to work with the existing home folders?


  • Fresh Proxmox VE 2.1 installation with defaults can't be reached or pinged

    - by Damainman
    I am using the latest Proxmox VE 2.1. My server has two NICs, with an uplink connected only to eth0. My server is a co-located server using public IPv4 IPs. It is not behind a firewall or any system which monitors traffic. Via IP-KVM I did a fresh install of Proxmox; I put in the correct IP, mask, gateway, and DNS information. The install went perfectly fine with no errors. Upon completion and rebooting the system:

    - I am unable to reach the web GUI via the browser; it just times out.
    - I am unable to ping the server.
    - I am unable to ping out to the Internet from within the server (tried 4.2.2.2 and yahoo.com).
    - I tried rebooting the server and restarting the network service.

    ifconfig shows my IP information under vmbr0, which also has the same MAC address as the eth0 device. eth0 only displays an IPv6 Scope:Link address, which I did not set up myself. This is my first time installing Proxmox, but after searching for a few hours, it doesn't seem like anyone else is having the same issue from a fresh install with just the defaults. So far the only thing I did was install it. Also, I know the network cable is good and the IP is good, because I was running a Xen XCP server with the same network settings prior to wiping it to install Proxmox.

    Some additional information. pveversion -v (installed from proxmox-ve_2.1-f9b0f63a-26.iso):

        pve-manager: 2.1-1 (pve-manager/2.1/f9b0f63a)
        running kernel: 2.6.32-11-pve
        proxmox-ve-2.6.32: 2.0-66

    netstat -nr (note: .136 is my network, and .137 is my gateway):

        Destination         Gateway             Genmask
        xxx.xxx.xxx.136     0.0.0.0             255.255.255.248
        0.0.0.0             xxx.xxx.xxx.137     0.0.0.0

    /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        auto vmbr0
        iface vmbr0 inet static
            address xxx.xxx.xxx.138
            netmask 255.255.255.248
            gateway xxx.xxx.xxx.137
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0
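
    A few bridge sanity checks are worth running from the IP-KVM console; a minimal sketch, assuming the gateway address from the config above:

        brctl show                         # vmbr0 should list eth0 as a port
        ip link show eth0                  # the link should be state UP
        ping -c3 -I vmbr0 xxx.xxx.xxx.137  # can we reach the gateway at all?
        tcpdump -ni eth0 arp               # do ARP replies come back from the gateway?

    One common catch with co-location is the switch port being bound to the previous install's MAC address; if ARP requests go out but no replies come back, it may be worth asking the provider to clear any MAC/ARP binding left over from the Xen XCP install.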


  • Architecture for highly available MySQL with automatic failover in physically diverse locations

    - by Warner
    I have been researching high availability (HA) solutions for MySQL between data centers. For servers located in the same physical environment, I have preferred dual master with heartbeat (floating VIP) using an active passive approach. The heartbeat is over both a serial connection as well as an ethernet connection. Ultimately, my goal is to maintain this same level of availability but between data centers. I want to dynamically failover between both data centers without manual intervention and still maintain data integrity. There would be BGP on top. Web clusters in both locations, which would have the potential to route to the databases between both sides. If the Internet connection went down on site 1, clients would route through site 2, to the Web cluster, and then to the database in site 1 if the link between both sites is still up. With this scenario, due to the lack of physical link (serial) there is a more likely chance of split brain. If the WAN went down between both sites, the VIP would end up on both sites, where a variety of unpleasant scenarios could introduce desync. Another potential issue I see is difficulty scaling this infrastructure to a third data center in the future. The network layer is not a focus. The architecture is flexible at this stage. Again, my focus is a solution for maintaining data integrity as well as automatic failover with the MySQL databases. I would likely design the rest around this. Can you recommend a proven solution for MySQL HA between two physically diverse sites? Thank you for taking the time to read this. I look forward to reading your recommendations.
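
    For the dual-master piece itself, the standard guard against key collisions when both sides can take writes is interleaved auto-increments; a minimal my.cnf sketch (values illustrative; site 2 would use auto_increment_offset = 2):

        [mysqld]
        server-id                = 1
        log-bin                  = mysql-bin
        log-slave-updates                  # needed if a third site is ever chained on
        auto_increment_increment = 2       # step by the number of masters
        auto_increment_offset    = 1       # unique per master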


  • Strange Windows boot problem - My computer won't boot from hard disk

    - by user29779
    Hello. Until yesterday, I had Windows XP installed on my computer. After installing the OS and setting the BIOS to boot from the hard disk first, I discovered a strange problem: the OS wasn't booted, and the only way I could get it to load was to insert a Windows installation disk, enter repair mode, do "fixboot", and restart. The problem only occurred when the computer was booted after a shutdown. If I only restarted it, everything worked fine. Yesterday, I upgraded my XP to Win7, and the problem persists. I tried the same "trick" I used with XP to get it to load, by entering repair and doing "bootrec /fixmbr" and "bootrec /fixboot", but that didn't work (and when I ran "bootrec /scanos" it didn't find any Windows installations). Eventually, I got it to load by changing the settings in the BIOS to boot from CD first and HD second, removing the installation disk from the drive, letting it fail to boot from CD, and then re-inserting the disk. Anyone have any idea what may be the cause, or how I can investigate this issue? Thanks! Marina
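
    Since fixboot keeps "fixing" it only temporarily, checking which partition is actually flagged Active is a reasonable next step; a hedged sketch from the recovery command prompt (disk/partition numbers are illustrative - "detail partition" reports whether the selected partition is Active, and "active" sets the flag):

        diskpart
        list disk
        select disk 0
        list partition
        select partition 1
        detail partition
        active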


  • Postfix is unable to send emails to external domains

    - by BoCode
    Whenever I try to send an email from my server, I get the following error:

        Nov 13 06:37:21 xyz postfix/smtpd[6730]: connect from unknown[a.b.c.d]
        Nov 13 06:37:21 xyz postfix/smtp[6729]: warning: host X.com[x.y.z.d]:25 greeted me with my own hostname xyz.biz
        Nov 13 06:37:21 xyz postfix/smtp[6729]: warning: host X.com[x.y.z.d]:25 replied to HELO/EHLO with my own hostname xyz.biz
        Nov 13 06:37:21 xyz postfix/smtp[6729]: 2017F1B00C54: to=<[email protected]>, relay=X.com[x.y.z.d]:25, delay=0.98, delays=0.17/0/0.81/0, dsn=5.4.6, status=bounced (mail for X.com loops back to myself)

    This is the output of postconf -n:

        address_verify_poll_delay = 1s
        alias_database = hash:/etc/aliases
        alias_maps =
        body_checks_size_limit = 40980000
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        connection_cache_ttl_limit = 300000s
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 1
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
        default_delivery_slot_cost = 2
        default_destination_concurrency_limit = 10
        default_destination_recipient_limit = 1
        default_minimum_delivery_slots = 3
        default_process_limit = 10000
        default_recipient_refill_delay = 1s
        default_recipient_refill_limit = 10
        disable_dns_lookups = yes
        enable_original_recipient = no
        hash_queue_depth = 2
        home_mailbox = Maildir/
        html_directory = no
        in_flow_delay = 0
        inet_interfaces = all
        inet_protocols = ipv4
        initial_destination_concurrency = 100
        local_header_rewrite_clients =
        mail_owner = postfix
        mailq_path = /usr/bin/mailq
        manpage_directory = /usr/share/man
        master_service_disable =
        milter_default_action = accept
        milter_protocol = 6
        mydestination = $myhostname, localhost.localdomain, localhost, $mydomain
        mydomain = xyz.biz
        myhostname = xyz.biz
        mynetworks = 168.100.189.0/28, 127.0.0.0/8
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases
        non_smtpd_milters = $smtpd_milters
        qmgr_message_active_limit = 500
        qmgr_message_recipient_limit = 500
        qmgr_message_recipient_minimum = 1
        queue_directory = /var/spool/postfix
        queue_run_delay = 300s
        readme_directory = /usr/share/doc/postfix.20.10.2/README_FILE
        receive_override_options = no_header_body_checks
        sample_directory = /usr/share/doc/postfix.2.10.2/examples
        sendmail_path = /usr/sbin/sendmail
        service_throttle_time = 1s
        setgid_group = postdrop
        smtp_always_send_ehlo = no
        smtp_connect_timeout = 1s
        smtp_connection_cache_time_limit = 30000s
        smtp_connection_reuse_time_limit = 30000s
        smtp_delivery_slot_cost = 2
        smtp_destination_concurrency_limit = 10000
        smtp_destination_rate_delay = 0s
        smtp_destination_recipient_limit = 1
        smtp_minimum_delivery_slots = 1
        smtp_recipient_refill_delay = 1s
        smtp_recipient_refill_limit = 1000
        smtpd_client_connection_count_limit = 200
        smtpd_client_connection_rate_limit = 0
        smtpd_client_message_rate_limit = 100000
        smtpd_client_new_tls_session_rate_limit = 0
        smtpd_client_recipient_rate_limit = 0
        smtpd_delay_open_until_valid_rcpt = no
        smtpd_delay_reject = no
        smtpd_discard_ehlo_keywords = silent-discard, dsn
        smtpd_milters = inet:127.0.0.1:8891
        smtpd_peername_lookup = no
        unknown_local_recipient_reject_code = 550

    What could be the issue?
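
    The "loops back to myself" bounce means the connection Postfix made for the destination domain landed on a server announcing itself as xyz.biz, i.e. apparently this host; and with disable_dns_lookups = yes, MX resolution is bypassed in favor of plain A-record/hosts-file lookups. So the first things to verify are what the destination name resolves to and who actually answers on port 25; a minimal sketch (example.com is a placeholder for the failing destination):

        dig +short mx example.com       # what Postfix should be delivering to
        telnet mail.example.com 25      # if the banner shows xyz.biz, outbound port 25
                                        # is being intercepted or DNS resolution is wrong
        postconf myhostname mydestination inet_interfaces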


  • Need to Remove Exchange 2003 Server That Crashed During Transition to 2010

    - by ThaKidd
    As the title states, we were running an Exchange 2003 server that we knew was going down soon, so we purchased a second server and installed Exchange 2010 into the AD. We managed to move all of the mailboxes off of 2003 and also managed to get the Offline Address Book set up on 2010. At this point the 2003 server bit the dust and will no longer boot, so we were unable to properly uninstall Exchange and remove the last 2003 server; it still exists in AD. As far as the clients are concerned, everything is working properly. However, when I run the Microsoft Exchange Profile Analyzer, I still see the old server and its administrative group. I am going to guess that since the old server is showing up in AD, I will not be able to raise the Exchange or AD functional levels (the 2003 server was also the only AD DC). I have since forced the 2003 DC out of AD, so that is no longer an issue.

    Old setup: Windows 2003 Server Enterprise & Exchange 2003 Standard
    New setup: Windows 2010 Server Enterprise & Exchange 2010 Standard

    Two questions:
    1. How do you go about manually forcing the 2003 server and its administrative group out of AD?
    2. When that is finished, where do you raise the Exchange mode (I can't find this for the life of me)?


  • Database problem with MS JET

    - by Zimmy-DUB-Zongy-Zong-DUBBY
    I am migrating a bunch of sites which each use an Access database (or whatever an MDB file is). If I try to load the site, I get the following error:

        Microsoft OLE DB Provider for SQL Server error '80004005'
        [DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied

    If I rename the MDB file, I get a complaint that the file does not exist, which makes sense. If the file is named correctly, the site tries to load for about 30 seconds or so, and then just fails with the above message. During this waiting period, I can see a lock file being created (and then at some point removed). The MDB file and its parent dir have full permissions granted to all users. Given that the lock file is successfully created and removed, I don't think this is a "real" permission issue. The OS is Windows Server 2003 SP2. I am not sure about much more detail on its config wrt Access databases, and I also don't know what version it is expected to be. The VB code in question:

        set oConn=server.createobject("adodb.connection")
        DSNtemp="Provider=Microsoft.Jet.OLEDB.4.0;Data Source=D:\fullPathGoesHere\db\sitedb.mdb"
        oConn.Open DSNtemp


  • Pure Terminal Server 2003 system hangs

    - by Donovan
    System profile:
    - IBM server hardware
    - Windows Server 2003 Enterprise running Terminal Services
    - No Citrix
    - Max user load: 30

    Symptoms: In the course of normal operation, our terminal server will hang up. As far as I can tell, there isn't any trigger that's plainly visible yet, though the way in which it hangs may give a clue. The hang presents itself by users not being able to initiate any new processes; all processes currently loaded into memory work just fine. Example: Outlook 2007. You may continue to read email, operate the client, and such. Some people don't even realize the hang has occurred for a bit of time. My attempts to troubleshoot have been futile. Reacting to the hang does no good, because I can't start any new processes to investigate. After I reboot the server, my next instinct is to begin logging to catch the hang occurring, but I'm not sure what to log. Right now I'm keeping Process Explorer running in case the issue occurs again. Sometimes it happens twice a day, other times once a week. Anyone have any ideas on how I could set myself up for better success in tracking this problem down? Thanks, Donovan
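
    "Existing processes fine, new processes fail" is the classic signature of a resource pool running dry (desktop heap, paged/nonpaged pool, or handles), so counter logging is a reasonable thing to arm in advance; a hedged sketch using the built-in logman on Server 2003 (counter names and interval are illustrative):

        logman create counter TS_Hang -c "\Memory\Pool Nonpaged Bytes" "\Memory\Pool Paged Bytes" "\Process(_Total)\Handle Count" -si 00:00:30 -f csv -o C:\PerfLogs\ts_hang
        logman start TS_Hang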

