Search Results

  • Poor man's redundant domain name server array

    - by John
    Can I configure my domain example.com's name servers as: ns1.dyndns.com, ns2.dyndns.com, ns1.opendns.com, ns2.opendns.com? That is, can I combine free DNS services to create a redundant name server array? Note that these name servers belong to different companies, and neither company is aware that the other's name servers also serve our domain. If one company's servers, say ns1.dyndns.com and ns2.dyndns.com, are down, will people experience an interruption when visiting example.com? Or will the next name server be tried automatically if one is unreachable?
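
    For reference, the published delegation can be checked from outside. A quick sketch (hedged; example.com stands in for the real domain) to list the NS records and confirm each listed server answers authoritatively:

        # list the NS records currently published for the domain
        dig +short NS example.com

        # query one of the listed servers directly, without recursion
        dig @ns1.dyndns.com example.com SOA +norecurse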

    Read the article

  • Kickstarting an Ubuntu Server 10.04 installation (DHCP fails)

    - by William
    I'm trying to automate the network installation of Ubuntu 10.04 LTS with an anaconda kickstart, and everything seems to be running except for the initial DHCP autoconfiguration. The installer attempts to configure the install via DHCP but fails on its first attempt. This brings me to a prompt where I can retry DHCP, and it seems to always work on the second attempt. My issue is that this is not really automated if I have to hit retry for DHCP. Is there something I can add to the kickstart file so that it will automatically retry, or better yet, not fail the first time? Thanks. Kickstart:

        # System language
        lang en_US
        # Language modules to install
        langsupport en_US
        # System keyboard
        keyboard us
        # System mouse
        mouse
        # System timezone
        timezone America/New_York
        # Root password
        rootpw --iscrypted $1$unrsWyF2$B0W.k2h1roBSSFmUDsW0r/
        # Initial user
        user --disabled
        # Reboot after installation
        reboot
        # Use text mode install
        text
        # Install OS instead of upgrade
        install
        # Use Web installation
        url --url=http://10.16.0.1/cobbler/ks_mirror/ubuntu-10.04-x86_64/
        # System bootloader configuration
        bootloader --location=mbr
        # Clear the Master Boot Record
        zerombr yes
        # Partition clearing information
        clearpart --all --initlabel
        # Disk partitioning information
        part swap --size 512
        part / --fstype ext3 --size 1 --grow
        # System authorization information
        auth --useshadow --enablemd5
        %include /tmp/pre_install_ubuntu_network_config
        # Always install the server kernel.
        preseed --owner d-i base-installer/kernel/override-image string linux-server
        # Install the Ubuntu Server seed.
        preseed --owner tasksel tasksel/force-tasks string server
        # Firewall configuration
        firewall --disabled
        # Do not configure the X Window System
        skipx
        %pre
        wget "http://10.16.0.1/cblr/svc/op/trig/mode/pre/system/Test-D" -O /dev/null
        # Network information
        # Start pre_install_network_config generated code
        # Start of code to match cobbler system interfaces to physical interfaces by their mac addresses
        # Start eth0
        # Configuring eth0 (00:1A:64:36:B1:C8)
        if ip -o link show | grep -i 00:1A:64:36:B1:C8
        then
            IFNAME=$(ip -o link show | grep -i 00:1A:64:36:B1:C8 | cut -d" " -f2 | tr -d :)
            echo "network --device=$IFNAME --bootproto=dhcp" >> /tmp/pre_install_ubuntu_network_config
        fi
        # End pre_install_network_config generated code
        %packages
        openssh-server
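
    One avenue that may be worth trying (a hedged suggestion, not verified on this setup): debian-installer exposes a netcfg/dhcp_timeout preseed key, and raising it via the same preseed syntax the file already uses could give the DHCP client enough time to succeed on the first attempt:

        # hedged: allow up to 60 seconds before DHCP is declared failed
        preseed --owner d-i netcfg/dhcp_timeout string 60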

    Read the article

  • Change a Munin server and keep the data

    - by Khelben
    We are migrating some servers, and we need to change our Munin server. Most of the Munin nodes are staying the same, and we would like to keep the historical data if possible. I can set up a new Munin server, but I'd like to know whether it's possible to transfer the old data to the new server, and how to do it.
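
    For context, Munin stores its history as RRD files, on Debian/Ubuntu typically under /var/lib/munin (the exact path is an assumption; it varies by distribution). A minimal sketch of carrying them over, assuming the node names stay identical in munin.conf:

        # on the old master: pause munin's cron job first so the RRDs are quiescent
        rsync -av /var/lib/munin/ newserver:/var/lib/munin/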

    Read the article

  • Nginx: Loopback connection via PHP's getimagesize() crashes server (Magento's CMS)

    - by Alex
    We were able to trace a problem that is crashing our NGINX server running Magento down to the following point. Background info: the Magento backend has a CMS function with a WYSIWYG editor. This editor loads some pictures via a controller in Magento (cms/directive). When we set the NGINX error_log level to info, we get the following lines (line breaks inserted for better readability):

        2012/10/22 18:05:40 [info] 14105#0: *1 client closed prematurely connection,
        so upstream connection is closed too while sending request to upstream,
        client: XXXXXXXXX, server: test.local,
        request: "GET index.php/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL,,/ HTTP/1.1",
        upstream: "fastcgi://127.0.0.1:9024", host: "test.local"

    When checking the code in the debugger, the following call never returns (in Varien_Image_Adapter_Abstract::getMimeType()):

        // $this->_fileName is http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif
        // $_SERVER['REQUEST_URI'] = http://test.local/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL
        list($this->_imageSrcWidth, $this->_imageSrcHeight, $this->_fileType, ) = getimagesize($this->_fileName);

    The requested filename is a URL pointing back to the same server that is executing the script: a link to a static .gif that does not exist. Sample URL: http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif. Once the above line executes, no subsequent request to the NGINX server gets a response any more. After waiting around 10 minutes, the NGINX server starts answering requests again. I tried to reproduce the error with a simple test script that only calls getimagesize() with the given URL, but this does not crash. It simply leads to an exception saying that the URL could not be loaded (which is fine, as the URL is wrong).
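
    One way to separate the blocked HTTP fetch from nginx itself (a hedged sketch; the URL is the non-existing sample from above) is to run the same call from the PHP CLI with a short socket timeout, outside of FPM:

        # hedged: default_socket_timeout caps how long getimagesize() waits on the URL
        php -d default_socket_timeout=5 -r \
            'var_dump(@getimagesize("http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif"));'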

    Read the article

  • Server App - Users

    - by Benno
    I'm using Mac OS X Lion Server, and I'm trying to set up a user account with access to different services. I had to create the user in System Preferences' Users & Groups, because it wouldn't work in the Server app; there, it kept saying "This operation couldn't be completed". I managed to create the user in System Preferences, but now I'm back in the Server app and cannot update the services the user has access to. The checkboxes are all there, e.g. Address Book, File Sharing etc., but I can't deselect any of them... Any ideas on why I can't change a user's access to services?

    Read the article

  • Automatically update SVN repository on another server

    - by Mikey C
    We have 2 Ubuntu web servers, one of which is our staging server (Staging) and the other our live server (Live). Staging has our Subversion repository, as well as the latest version of our sites on it. Because the SVN server is running on Staging, I've added post-commit hook scripts so that the staging server automatically has the latest code. Easy. However, I'd like one of the repositories on Live to stay updated as well. This is a repository of images, PDFs and suchlike. When a team member commits to it, I'd like it to automatically update on the live server so it can be used in mailings, content-managed pages etc. I'd add something to the post-commit hook to SSH across and update, but for security we can only SSH from one server to another as user 'commandLine', whereas the post-commit hook runs as 'www-data'. I'd rather not run a cron job on Live to update every 5 minutes, but I can't see another way of doing it without altering all our user permissions. Any ideas?
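
    For what it's worth, the hook itself can make the hop as the permitted account if www-data holds a private key that commandLine on Live accepts. A hedged sketch (host name, key path and working-copy path are made up for illustration):

        #!/bin/sh
        # post-commit hook on Staging: REPOS and REV are passed in by svn
        REPOS="$1"
        REV="$2"
        # assumes www-data owns this key and commandLine@live authorizes it
        ssh -i /var/www/.ssh/id_rsa commandLine@live.example.com \
            "svn update /var/www/assets" >/dev/null 2>&1 &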

    Read the article

  • Moving a Domain Controller VM from one server to another

    - by Mike
    I have a Hyper-V virtual machine that is a domain controller; specifically, it is our main DC and holds all 5 FSMO roles. If I wanted to move this virtual machine to a different VM server than the one it is on currently, is it as simple as taking the .VHD, moving it to the other server, and creating a VM for it in Hyper-V on the new server? Or are there other things to consider that could get screwed up by doing this? Thanks

    Read the article

  • Hyper-V SERVER R2 Cluster

    - by ToreTrygg
    I keep hearing and reading that Hyper-V Server 2008 R2 can be clustered, but I can't find out how to set this up. I can find how to cluster Windows Server 2008 R2 with Hyper-V, and I understand how to do that. But how do you set up Hyper-V SERVER 2008 R2 in a cluster? Thanks.

    Read the article

  • FTP Server upload and filesystem questions

    - by Alex
    I'm a photographer who mainly does event photography. A while ago I bought myself a Nikon WT-4 wireless transmitter, a small device which connects via USB to my Nikon D700 DSLR and then establishes a WiFi connection to an existing WLAN. It can then upload any pictures I take via FTP to an FTP server somewhere in the network. On my laptop I have a piece of software which checks a given folder on the disk regularly; this software is smart enough to look at the modified-file timestamp, and if this timestamp is less than 10 seconds old, it will skip the file in that iteration of the import scan. The problem I've discovered seems to be inherent to the FTP protocol, as I have the same problem with Windows 7's built-in IIS server as I do with FileZilla FTP server. When the transmitter starts to upload a file, the FTP server creates a small 300-500 KB file with the correct filename on the disk, but then does nothing with the file until it has completely received it via FTP. So it seems to create this small dummy file, buffer the remainder of the upload until it's finished, and then dump the rest into the dummy file, making it the correct size. Problem is, these uploads take about 15-30 seconds depending on reception, but since the folder-watch tool will already try to import any file older than 10 seconds, it will always try to import the small dummy files, which obviously fails as they're not complete yet. Is there any way to 'disable' this behaviour? Ideally I would like my file only to show up once it's been completely uploaded. Or perhaps someone knows another FTP server application (it has to run on Win7) which does not show this behaviour?
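
    If neither FTP server can be changed, a small staging step that only releases files whose size has stopped growing is another angle. A hedged sketch of the logic in shell (the real watcher runs on Windows and the paths are hypothetical, so treat this as an outline of the idea only):

        #!/bin/sh
        # hand files to the watched folder only once their size is stable
        for f in /srv/ftp/incoming/*; do
            [ -f "$f" ] || continue
            s1=$(stat -c %s "$f")
            sleep 5
            s2=$(stat -c %s "$f")
            [ "$s1" -eq "$s2" ] && mv "$f" /srv/ftp/ready/
        done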

    Read the article

  • SQLS Timeouts - High Reads in Profiler

    - by lb01
    I've audited a SQLS2008 server with Profiler for one day; the overhead didn't seem to trouble this new client my company has. They are using a legacy VB6 application as a front-end, and they're experiencing timeouts once SQLS RAM usage is high. The server is currently running x64 SQLS2008 on a VM with nearly 9 GB of RAM; SQL Server's 'max server memory' option is currently set to 6 GB. I've put the results of the trace in a table and queried them using this query:

        SELECT TextData, ApplicationName, Reads
        FROM [TraceWednesday]
        WHERE TextData IS NOT NULL AND EventClass = 12
        GROUP BY TextData, ApplicationName, Reads
        ORDER BY Reads DESC

    As I expected, some values are very high. Top reads, in pages:

        2504188
        1965910
        1445636
        1252433
        1239108
        1210153
        1088580
        1072725

    Am I correct in thinking that the top one (2504188 pages) is 20033504 KB, which is then roughly ~20'000 MB, 20 GB? These queries are often executed and can take quite some time to run. Eventually RAM is used up because of the cache fattening, and timeouts occur once SQL cannot 'splash' pages in the buffer pool as much. Costs go up. Am I correct in my understanding? I've read that I should tune the associated T-SQL and create appropriate indices. Obviously cutting down the I/O would make SQL Server use less RAM. Or maybe it might just slow down the process of chewing up the whole RAM; if a lot fewer pages are read, maybe it'll all run much better even when usage is high? (Less time swapping, etc.) Currently, our only option is to restart SQL once a week when RAM usage is high, and suddenly the timeouts disappear. SQL breathes again. I'm sure lots of DBAs have been in this situation. I'm asking, before I start digging out all of the bad T-SQL and putting indices here and there: is there something else I can do? Any advice except what I know (not much yet..) would be much appreciated. Leo.
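
    On the arithmetic: SQL Server data pages are 8 KB, so the top figure converts as below, which matches the ~20 GB estimate:

        2504188 pages x 8 KB/page = 20033504 KB = ~19.1 GB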

    Read the article

  • Can't use HTTPS with ServerXMLHTTP object

    - by Imraan
    I am supporting a Classic ASP application that connects to a payment gateway via HTTPS. Up until recently there have been no issues. A few days ago this broke without the code, the IIS config, or anything local changing, and it's broken on at least 3 separate servers. The last run of Windows Updates was in late November, but bringing the servers' updates up to date has not resolved the problem. A code snippet is below.

        Dim oHttp
        Dim strResult
        Set oHttp = CreateObject("MSXML2.ServerXMLHTTP")
        oHttp.setOption 2, 13056
        oHttp.open "POST", SOAP_ENDPOINT, false
        oHttp.setRequestHeader "Content-Type", "application/soap+xml; charset=utf-8"
        oHttp.setRequestHeader "SOAPAction", SOAP_NS + "/" & SOAP_FUNCTION
        oHttp.send SOAP_REQUEST

    Below is a dump of the error object:

        Number: -2147012852
        Description: A certificate is required to complete client authentication
        Message: A certificate is required to complete client authentication

    I initially posted the question on Stack Overflow (http://stackoverflow.com/questions/9212985/cant-use-https-with-serverxmlhttp-object) thinking it was a code issue, but further investigation seems to point to a server issue.
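
    The error text suggests the gateway has started requesting a client certificate during the TLS handshake. A commonly suggested workaround for this exact message (hedged; it only helps if no real client certificate is actually required) is to explicitly select an empty client certificate with option 3, SXH_OPTION_SELECT_CLIENT_SSL_CERT, set after open and before send, in the same VBScript as above:

        oHttp.open "POST", SOAP_ENDPOINT, false
        ' hedged: offer the default (empty) client certificate explicitly
        oHttp.setOption 3, ""
        oHttp.send SOAP_REQUEST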

    Read the article

  • Windows Server 2003: NAT Port Forwarding Not Working

    - by jM2.me
    The setup is the following:

        Internet (108.99.XXX.XX) <- Windows Server 2003 (10.0.0.1) <- Switch <- Office Computers
        (10.0.0.100-200; some static routes, some manual, some automatic)

    Windows Server has NAT installed on it and the two network interfaces are configured properly. The problem is that whenever I try to forward port 80 (or any other) to an office computer (let's say 10.0.0.100), it fails.

        NIC #1 settings: all settings are obtained from the ISP
        NIC #2 settings (set manually): IP 10.0.0.1, mask 255.255.255.0

    The NAT server is configured to automatically assign IP addresses to the private network (IP 10.0.0.0, mask 255.255.255.0). Forwarding was done in Routing and Remote Access (Local) > IP Routing > NAT/Basic Firewall > Local Area Connection (right-click > Properties) > Services and Ports > Web Server (HTTP), with Private Address: 10.0.0.100. So what is causing the failure to forward any port to a computer inside the private network?
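
    For cross-checking, the RRAS NAT mapping can also be created and inspected from the command line on Server 2003 (hedged; the interface name must match the one RRAS shows):

        netsh routing ip nat add portmapping "Local Area Connection" tcp 0.0.0.0 80 10.0.0.100 80
        netsh routing ip nat show interface "Local Area Connection"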

    Read the article

  • Little server for a little network: Asus TS Mini + Windows Home Server or other?

    - by microspino
    Hello, I'm building a little network with 4 computers, a plotter, 3 printers and an HP fax/scan/copy/print machine. The network is connected to the web by a D-Link router/firewall. I would like to add a little server to have file sharing (reachable via VPN from outside by a single road-warrior user) and automatic backups. I would like to avoid big electricity bills or, to say it in a green way, I would like to avoid the useless power consumption of a big server with hot-plug, RAID and dual PSUs, because I don't need all this stuff. The server must work 24/7 and the budget is between 400€ and 1000€. I'm evaluating the Asus EEE Box (with XP), a TranquilPC (with XP), a FitPC (again with XP), and an Asus TS Mini (with Windows Home Server). I like the latter, but since I'm in Europe I don't know where I can buy it. Do you have any suggestion or experience to share with me about: which hardware do I have to buy? which OS?

    Read the article

  • Looking for a user friendly file upload service to run on my own server

    - by Joe Mako
    My goal is to allow people to upload and download large files through their web browser, with a simple user interface and user password/account management. Yousendit offers a service like what I am looking for, but it does not make sense to use their service if I already have a web server. Is there an open source version of Yousendit that I can run on my own web server? What is this type of software or service called? (Calling it an FTP server does not seem to fit.)

    Read the article

  • Random “Lost connection to MySQL server at 'reading initial communication packet', system error: 0”

    - by user1606545
    Sometimes I get this error from the MySQL server:

        Lost connection to MySQL server at 'reading initial communication packet', system error: 0

    I cannot find the cause, since most of the time it works, but every week, for some hours, I get this error. I googled, but there seem to be only users who have this error permanently; in this case it only occurs sometimes. I checked hosts.allow and hosts.deny, but the host is allowed and not denied. Sometimes I also get the error:

        File './database/table.MYD' not found (Errcode: 24)

    It occurs very rarely, but when it does, it occurs for some hours once a week, sometimes on multiple days, and then the problem suddenly disappears again. I have checked the open-files limit. It's 2048 and should be absolutely enough. I nevertheless tried to increase the number of open files, but with no effect. I thought that perhaps the process does not close some tables, but this is impossible, because after a while everything is OK again, and the process opens at most 100 tables at once. I also checked the MySQL runtime environment, and there were 930 opened files. I cannot explain that; after a while it's 129. I am running a MySQL server on a SUSE Linux machine. I connect to the MySQL server from another host with the command line tool "mysql" and with the MySQL C connector. The MySQL server is version 5.0.67.
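
    As a note, Errcode 24 is the OS error "too many open files", so comparing what MySQL believes its limit is with what the kernel actually enforces for the running process can narrow this down. A hedged sketch (the /proc layout assumes a reasonably recent kernel):

        # the limit mysqld reports for itself
        mysql -e "SHOW VARIABLES LIKE 'open_files_limit'"
        # the limit the kernel applies to the running process
        grep -i 'open files' /proc/$(pidof mysqld)/limits
        # MySQL's own decoder for the error number
        perror 24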

    Read the article

  • Network driver for Hyper-V restore from Windows Home Server

    - by Philipp Schmid
    I have backed up a Windows Server 2008 instance running virtualized on Hyper-V to a Windows Home Server 2008 SP1 machine (I know I should have backed up the VHD instead). Now I need to restore the contents of the VM from WHS. I have created a restore CD ISO and used it to create a new VM. It all works as advertised up to the point where the restore process wants to load the network drivers (it only finds 4 disk drivers on the restore CD, but no network drivers). So I created a virtual floppy and copied the contents of 'Home Server Drivers for Restore' onto it. But no luck! I have tried moving the 4 subdirectories into the root of the floppy, but that didn't work either. Finally, I started another instance of WS 2008 to identify the network driver that the virtualized instance is using (%WINDOWS%\system32\drivers\netvsc60.sys) and copied that file onto the virtual floppy, without success. Does anyone have any suggestions on how to get networking working on a Hyper-V instance running off the Windows Home Server restore CD? UPDATE: As suggested by delenda, I have added a legacy network adapter to my VM, and indeed I now get a network driver listed! However, the WHS is still not found, even after entering the home server name manually. PHS

    Read the article

  • How to configure Microsoft Internet Information Server for use with the Tomcat container

    - by Debabratta
    I have a web application written mainly in JSP and servlets, and I use Tomcat 7.0.26 as my application server. I want this application to be served by IIS, though I can run it using Tomcat. I read on the web that I have to map index.jsp to the IIS script directory, so that when a JSP request comes to the IIS server, it forwards it to the Tomcat server. Please tell me the steps for the required configuration. Thank you.
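
    For orientation, the standard bridge is the Tomcat ISAPI redirector (isapi_redirect.dll) registered as an ISAPI filter in IIS, which forwards matching requests to Tomcat's AJP connector. A minimal sketch of the two configuration files involved (hedged; host, port and context path assume Tomcat's default AJP connector on the same machine):

        # workers.properties
        worker.list=ajp13w
        worker.ajp13w.type=ajp13
        worker.ajp13w.host=localhost
        worker.ajp13w.port=8009

        # uriworkermap.properties: send the whole app to Tomcat
        /myapp/*=ajp13w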

    Read the article

  • Exchange and external mail server

    - by Bahrain Admin
    Hi, we have our domain hosted externally with Network Solutions, and our mail server is running from there. We have a branch office in Bahrain, and 3 users there would like to use their email addresses on a local Exchange server running at the Bahrain office. The Exchange server is currently only used for internal mail, contacts and calendaring. I've used a third-party program to download the POP3 mail to their Exchange accounts, so they can receive mail from outside. The issue is in sending mail using their external addresses. I've set up their Exchange accounts to include their external address, but we get an error message stating that the IP is not authorized. I tried putting the local ISP as the smart host, but then we get an error message stating that the address was rejected. I tried using our own external mail server as the smart host, but then the message "Relaying is denied" comes up. Any suggestions? Thanks Arun

    Read the article

  • Getting FreeNX client to work on Mac OS again.

    - by Fantomas
    This problem is not uncommon, but I have not seen a solution that would work for me. Keyboard mapping is completely screwed up: e.g. typing 'damn it' gives me '1cxngw'. All machines have QWERTY keyboards and are set up to use US.

        [Client] Mac OS version: 10.5.8, Build: 9L30
        [Client] Kernel version (uname -a):
            Darwin <comp name> 9.8.0 Darwin Kernel Version 9.8.0: Wed Jul 15 16:55:01 PDT 2009; root:xnu-1228.15.4~1/RELEASE_I386 i386
        [Client] FreeNX Client version: 3.4.0-8
        [Client] MacPorts version: MacPorts 1.8.2
        [Client] The X Window System: XQuartz 2.5.0 (xorg-server 1.7.6)
        [Server] OS: Ubuntu 9.04
        [Server] Kernel (uname -a):
            Linux <comp.name> 2.6.28-18-generic #60-Ubuntu SMP <date> x86_64 GNU/Linux
        [Server] Other info: please ask for it, but do tell me how to query/look for it.

    Thanks!
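
    One low-effort check (hedged; it assumes the NX session can run X clients) is to reset the keymap from inside the session and see whether typing recovers:

        # force a plain US layout for the current X session
        setxkbmap -layout us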

    Read the article

  • Shell Script to Start MySQL Server if Not Running

    - by user103373
    I have written a shell script to start the MySQL server and send a mail to the admin user if it has been restarted via the script. The issue I am facing: if I run this shell script in a terminal it works perfectly, but if the same script runs via cron, it only sends the mail to the user and the problem remains. Is this problem related to permissions, and how can I resolve it? Shell script:

        #!/bin/bash
        EMAIL="[email protected]"
        SERVICE='mysql'
        if ps ax | grep -v grep | grep $SERVICE > /dev/null
        then
        echo "$SERVICE service running, everything is fine"
        else
        echo "$SERVICE is not running"
        /etc/init.d/mysql start
        cat <<EOF | msmtp -a gmail $EMAIL
        Subject: "Alert (Test Server) : Mysql Service is not running (Manually Restarted)"

        Mysql Server Restarted at: `date`
        EOF
        fi
        exit

    I am using msmtp for sending mail to the user on an Ubuntu 12.04 server.
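
    A frequent culprit with "works in a terminal, fails from cron" (hedged; not confirmed for this case): cron jobs run with a minimal environment, so commands that resolve through PATH interactively may not be found inside the job. Declaring PATH at the top of the script rules that out:

        # hedged: make the script independent of cron's minimal environment
        PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
        export PATH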

    Read the article

  • Active Directory server down, recovering without reinstalling

    - by whatever
    My Windows 2003 server suddenly ceased to function as a DC (this server is the only DC of the domain). All AD-related services are down; the only way I can log in to the AD is physically at the machine. Every time I access an AD-related service (e.g. "AD Users and Computers") I get the below error:

        Naming information cannot be located because: The specified directory service attribute or value does not exist. Contact your system administrator to verify that your domain is properly configured and is currently online.

    I found the below system event, which matches the time when the issue started and recurs every time I reboot the server:

        NTDS General | Global Catalog | Active Directory was unable to establish a connection with the global catalog.
        Additional Data
        Error value: 1355 The specified domain either does not exist or could not be contacted.
        Internal ID: 3200d33

    I started the troubleshooting with DNS. Netdiag throws the below error, although I think this is simply a consequence of not being able to access the Global Catalog:

        The procedure entry point DnsGetPrimaryDomainName_UTF8 could not be located in the dynamic link library DNSAPI.dll.

    Anyway, DNS seems OK, because I can ping the DC's FQDN from the DC itself. I found the below solution, which is supposed to help by doing some cleanup of the metadata: http://support.microsoft.com/kb/216498. If I follow procedure 1, here is what I get at step 9:

        no current site
        Domain - DC=<mydomain>,DC=<com>
        no current server
        no current naming context

    I can continue the procedure until step 14. I haven't tested step 15, as my understanding is that I would then have to reinstall the whole AD again. Is there any way I can recover my AD from here without having to reinstall the whole thing? Update: Yes, the server was powered off/on, because a reboot would take forever (not because I thought power-cycling the unit would fix it more than a reboot would).
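
    For anyone triaging the same symptoms, the usual first-pass diagnostics on a 2003 DC are below (hedged; dcdiag and netdom ship with the Windows Server 2003 Support Tools, so they may need installing first):

        dcdiag /v
        netdom query fsmo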

    Read the article

  • Essential Business Server

    - by Dan
    We are trying to run the Preparation Wizard for Essential Business Server 2008 and receive the following error: "A list of servers could not be collected from Active Directory Domain Services (AD DS). Ensure that your network is functioning correctly and that this computer can access AD DS." Our network is a Windows Server 2003 network with all of the services running fine. I can't run this from either a machine or a server. Any idea where we should start?

    Read the article

  • Make download dialog show up for pictures in IIS 6.0

    - by LinuxGnut
    I have a client that is offering picture downloads from their site. They want the user to have to download the picture, rather than having it appear in the browser when the link is clicked. Inserting the text "Right-click to save as" or something similar isn't an option with this client, as everything needs to be done their way. Now, I know I could accomplish this task in PHP or ASP, but I'd rather not have to write an addition to Magento to do it. So is there a way in IIS 6.0 (Server 2003) to send headers for image formats such that a download dialog pops up?
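
    The header that makes browsers show the save dialog is Content-Disposition. In IIS 6 it can be attached per folder without any code (hedged; the menu labels are from memory): in IIS Manager, right-click the folder holding the downloads, choose Properties, open the HTTP Headers tab, and add a custom header:

        Content-Disposition: attachment

    The caveat is that this applies to every file served from that folder, so the downloadable images need to live in their own directory.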

    Read the article
