Search Results

Search found 98447 results on 3938 pages for 'sql server denali'.


  • php rsync with exec() not working

    - by mojeime
Why does this: rsync -avz -e ssh /home/userneme/folder [email protected]:/var/www/folder work from a cronjob, while this: exec("rsync -avz -e ssh /home/userneme/folder [email protected]:/var/www/folder"); doesn't? I know exec is working because I have a few places in my app that do conversion from PDF to JPG with ImageMagick (exec). SOLVED: exec is working OK; it was a permission issue on the remote server. The "local" server is a shared reseller account and the remote server is my first VPS, an Ubuntu 10.10 LAMP box. If only I had a system administrator, since I'm just a software developer forced to do this and I stink at it :) Thank you all!
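
    For reference, a minimal PHP sketch (the paths and the remote target are placeholders, not the poster's real values) that captures rsync's output and exit code, which is what makes a remote permission failure like this one visible instead of silent:

        <?php
        // Hypothetical diagnostic: run rsync via exec() and log why it failed.
        $src  = '/home/username/folder';              // assumed local path
        $dest = 'user@203.0.113.10:/var/www/folder';  // placeholder remote target
        $cmd  = 'rsync -avz -e ssh ' . escapeshellarg($src) . ' ' . escapeshellarg($dest) . ' 2>&1';

        exec($cmd, $output, $exitCode);
        if ($exitCode !== 0) {
            // rsync exits 0 on success; anything else (e.g. 23 = partial transfer) is an error
            error_log("rsync failed ($exitCode): " . implode("\n", $output));
        }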

    Read the article

  • How do MaxSpareServers work in Apache?

    - by John Hunt
I've scoured the web but I can't find out what MaxSpareServers does in Apache MPM prefork. The docs say: "The MaxSpareServers directive sets the desired maximum number of idle child server processes. An idle process is one which is not handling a request. If there are more than MaxSpareServers idle, then the parent process will kill off the excess processes." Great, but what causes a spare server to be created in the first place? More importantly, when does a spare server go away? I understand that MinSpareServers are created gradually after the server is started. How does MaxSpareServers relate to MaxClients? Basically I'm at a bit of a loss on how best to configure Apache; there's a lot of documentation out there but it isn't that clear. Thanks, John.
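
    For orientation, an illustrative prefork tuning block is shown below (the values are examples, not recommendations): MinSpareServers/MaxSpareServers only govern the pool of idle children Apache keeps warm between requests, while MaxClients caps the total number of children, idle or busy.

        # Illustrative Apache 2.2 prefork values - tune to your own memory budget
        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          150
            MaxRequestsPerChild 1000
        </IfModule>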

    Read the article

  • wbadmin incremental system state backup

    - by user74513
I am doing system state backups on a Windows Server 2008 R2 Enterprise (Service Pack 1) machine and expected the backups after the first one to be incremental. However, with each backup a new directory with VHD files is created, and the VHD files are almost the same size as those from the first backup, so the backups do not appear to be incremental. I used the following command to do the backup: wbadmin start systemstatebackup -backupTarget:f: I played around with the settings under "Configure Performance Settings" in the Windows Server Backup plugin in Server Manager, but according to the description at the top of the dialog these settings are not applied to system state backups. Are there any settings available for wbadmin system state backup to make the backups incremental?
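
    As a side note, each system state backup shows up as its own version on the target; a quick way to see what wbadmin has actually kept on that volume is:

        wbadmin get versions -backupTarget:f: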

    Read the article

  • mod_access for lighttpd causes a 403 error for all POST requests

    - by Sam
I have found on my Debian server that running the lighttpd module mod_access causes the server to respond with a 403 to all POST requests. It's very odd, as I have two servers: one runs as I'd expect and the other keeps returning these 403s. They are running identical configs for lighttpd and PHP. My lighttpd.conf is: https://gist.github.com/4269500 There is also one other custom conf: https://gist.github.com/4269508 I've opened up the servers for requests until I get this fixed; the server that works is http://mercury.isitup.org/ and the one that fails is http://venus.isitup.org/. After working out that disabling mod_access resolves the problem, I grepped all my lighttpd configs for uses of it (docs). Disabling each line I found didn't help, leading me to think this is perhaps some default behaviour (or bug?)... Has anyone come across this before, or know what configuration value I've got wrong? Versions:
        Debian: Debian GNU/Linux 6.0.6 (squeeze)
        Lighttpd: lighttpd/1.4.28 (ssl)
        PHP: PHP 5.3.19-1~dotdeb.0 with Suhosin-Patch (cli)
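
    For anyone comparing the two configs, mod_access rules normally look like the sketch below (the pattern is only an example); grepping both servers for url.access-deny and diffing the results is a reasonable next step before blaming the module itself.

        # typical mod_access usage in a lighttpd config (example rule only)
        server.modules += ( "mod_access" )
        url.access-deny = ( "~", ".inc" )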

    Read the article

  • Virtualmin Configuration

    - by Allen
I am trying to get Virtualmin set up and have reached a point where my noobish sysadmin skills aren't getting the job done. This is the message I get now when I try to refresh the configuration of Virtualmin:
        BIND DNS server is installed, and the system is configured to use it. However, the default master DNS server XXXXXX is not a fully qualified domain name.
        Sendmail is only accepting SMTP connections on the following ports: 127.0.0.1 port smtp. Email from other systems on the Internet will not be accepted. This can be changed in the Sendmail Mail Server module.
    Please advise what I need to do to get Sendmail configured properly. Thanks!
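
    The "only accepting SMTP connections on 127.0.0.1" warning usually traces back to the DAEMON_OPTIONS line in /etc/mail/sendmail.mc. A hedged sketch of the change (file locations vary by distribution, and sendmail.cf must be regenerated from the .mc file before restarting sendmail):

        dnl Loopback-only listener, which triggers the Virtualmin warning:
        dnl DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')dnl
        dnl Listen on all interfaces instead:
        DAEMON_OPTIONS(`Port=smtp, Name=MTA')dnl
        dnl ...then rebuild sendmail.cf from this file and restart sendmail.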

    Read the article

  • In TCP/IP terms, how does a download speed limiter in an office work?

    - by TessellatingHeckler
Assume an office of people who want to limit HTTP downloads to a max of 40% of their internet connection's bandwidth so that downloads don't block other traffic. We say "it's not supported in your firewall", and they say the inevitable line "we used to be able to do it with our Netgear/DLink/DrayTek". Thinking about it, a download works like this: the client sends an HTTP GET request; the server sends file data as TCP packets; the client acknowledges receipt of the TCP packets; repeat until the download is finished. The speed is determined by how fast the server sends data to you and how fast you acknowledge it. So, to limit download speed, you have two choices: 1) Instruct the server to send data to you more slowly - and I don't think there's any protocol feature to request that in TCP or HTTP. 2) Acknowledge packets more slowly by limiting your upload speed, which also ruins your upload speed. How do devices do this limiting? Is there a standard way?
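
    On the device side this is usually generic traffic shaping or policing rather than anything TCP- or HTTP-aware: the box queues or drops packets above a configured rate, and TCP's own congestion control then slows the sender down. As a rough illustration of the same idea (Linux tc is named here only as an example; the interface and rate are assumptions):

        # On a router's LAN-facing interface this caps what clients can pull (illustrative values)
        tc qdisc add dev eth0 root tbf rate 4mbit burst 10kb latency 70ms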

    Read the article

  • error on remote connection to mysql

    - by Ahmet vardar
Hi, I've been trying to connect to my dedicated server's MySQL DB from my computer's localhost. My code is here:
        $dbhost = 'domain.com';
        $dbuser = 'username';
        $dbpass = 'pass';
        $conn = mysql_connect($dbhost, $dbuser, $dbpass) or die('Could not connect: ' . mysql_error());
        $dbname = 'dbname';
        mysql_select_db($dbname);
    I get this error: Could not connect: Lost connection to MySQL server at 'reading initial communication packet', system error: 61 The dedicated server is Linux CentOS 64-bit, PHP 3.2.4, MySQL 5.1.54. Is there any workaround for that? Thanks
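
    That "system error: 61" is a connection-refused reported by the client's OS, which most often means mysqld on the dedicated server is only listening on localhost, or port 3306 is firewalled. A quick check on the server (the my.cnf location varies by distribution):

        # What is mysqld actually listening on?
        netstat -ntlp | grep mysqld

        # In my.cnf, either of these lines blocks remote connections:
        #   skip-networking
        #   bind-address = 127.0.0.1
        # Comment them out (or bind to the public interface), restart mysqld,
        # then open TCP 3306 in the firewall for the client IP only.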

    Read the article

  • Lighttpd not starting - no error

    - by Furism
I recently installed Lighttpd on Ubuntu Server 10.04 x86_64 and created several websites. What I do is include /etc/lighttpd/vhost.d/*.conf and put a configuration file for each website in that directory. The problem I have is when I "service lighttpd start" I get the message that the service started, there is no error message:
        root@178-33-104-210:~# service lighttpd start
        Syntax OK
         * Starting web server lighttpd          [ OK ]
    But then if I take a look at the services listening, Lighttpd is nowhere to be seen:
        root@178-33-104-210:~# netstat -tap
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address      Foreign Address   State    PID/Program name
        tcp        0      0 localhost:mysql    *:*               LISTEN   829/mysqld
        tcp        0      0 *:ftp              *:*               LISTEN   737/vsftpd
        tcp        0      0 *:ssh              *:*               LISTEN   739/sshd
        tcp6       0      0 [::]:ssh           [::]:*            LISTEN   739/sshd
    So I'm looking at ways I could troubleshoot this. I checked in /var/log/lighttpd/error.log and there's nothing in it. Edit: Sorry, I indicated I use CentOS but it's actually Ubuntu Server (I usually use CentOS but had to go with Ubuntu for that one).
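
    One way to get an actual error out of it is to check the config and then run lighttpd in the foreground against the same file, so startup problems land on the terminal instead of disappearing:

        lighttpd -t -f /etc/lighttpd/lighttpd.conf   # parse/config check only
        lighttpd -D -f /etc/lighttpd/lighttpd.conf   # stay in the foreground; errors go to stderr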

    Read the article

  • Exchange 2003 HTTP Account Error

    - by Ryaner
We are trying to get one of our users connected to our Exchange 2003 server using the HTTP method, as they already have an existing Exchange account on another server. The setup goes through and they appear to connect fine; however, none of the subfolders are listed. Instead we get one folder named "Error-Pls file a Bug". The usual Google search turns up nothing useful. Does anyone know how to fix this? Or has anyone actually gotten Outlook (2003 or 2007) to connect to an Exchange 2003 server this way?

    Read the article

  • OPTIONS request vs GET in Ajax

    - by user41172
I have a PHP/JavaScript app that queries and returns info using an Ajax request. On every server I've used so far, this works as expected, passing an Ajax GET request to the server and returning JSON data. On a new install, the query fails and returns nothing -- I inspected the request and it turns out that rather than going out as a GET, the query is being sent as an OPTIONS request. Is there any reason for this? I have no idea why this might happen. Thanks!
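
    An OPTIONS request showing up in place of a GET is typically a CORS preflight, which browsers send when the page and the Ajax endpoint look like different origins (different hostname, port, or scheme) on the new install. A hedged PHP sketch of answering the preflight before handling the real request (the origin and header values are examples and would need tightening for production):

        <?php
        // Hypothetical endpoint: reply to the CORS preflight, then serve the real GET.
        header('Access-Control-Allow-Origin: https://app.example.com');  // assumed page origin
        header('Access-Control-Allow-Methods: GET, OPTIONS');
        header('Access-Control-Allow-Headers: X-Requested-With, Content-Type');

        if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {
            exit;  // preflight answered; the browser will follow up with the actual GET
        }

        echo json_encode(array('status' => 'ok'));  // normal JSON response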

    Read the article

  • How to set up hosting on Heroku and email forwarders on a WHM (cPanel)?

    - by matija
I'm using DNSimple for managing my records, hosting my site at Heroku, and I want to use a Linux WHM (cPanel) server for managing email forwarding (DNSimple has that feature, but it's currently not working properly). Hosting works, but I'm having a hard time getting email to work. Here are my (pseudo-)records:
        Type  | Name               | TTL  | Points to
        ------+--------------------+------+-------------------------
        ALIAS | mydomain.com       | 3600 | mydomain.herokuapp.com
        CNAME | www.mydomain.com   | 3600 | mydomain.herokuapp.com
        CNAME | mail.mydomain.com  |  600 | <WHM server IP address>
        MX    | mydomain.com       |  600 | <WHM server IP address>
        NS    | mydomain.com       | 3600 | ns1.dnsimple.com
        ...   | ...                | ...  | ...
        NS    | mydomain.com       | 3600 | ns4.dnsimple.com
    There are two more records, SOA and TXT, generated by DNSimple, but I don't think those are relevant. When I add an A record:
        A     | mydomain.com       | 3600 | <WHM server IP address>
    and change the mail CNAME and MX records to mydomain.com, email starts working, but then the hosting doesn't work anymore. Is this possible to achieve?

    Read the article

  • Start tomcat webapp with root privileges

    - by Hagay Myr
I built a webapp that uses libpcap (via jpcap). In order to get the network interface list or to bind to a network interface, the application (in this case a webapp running on a Tomcat server) must be running with root privileges. During development I simply ran Eclipse with root privileges (sudo eclipse) and my webapp worked just fine with Eclipse's local Tomcat server. However, when I try to deploy my webapp to the "real" Tomcat server, it isn't working. I also tried to start the tomcat6 service with sudo and changed the TOMCAT6_USER definition (defined in /etc/init.d/tomcat6) from "tomcat6" to "root", but it made no difference. What should I do to make it work?
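
    Rather than running all of Tomcat as root, one commonly suggested alternative is to grant just the capture-related capabilities to the JVM binary Tomcat uses. This is only a sketch under assumptions: the JVM path is a guess, and a capability-tagged java binary may refuse to start until its library directory is registered with the dynamic linker (e.g. via /etc/ld.so.conf.d).

        # Give the JVM raw-socket/capture rights without full root (path is an assumption)
        sudo setcap cap_net_raw,cap_net_admin=eip /usr/lib/jvm/java-6-openjdk/jre/bin/java
        getcap /usr/lib/jvm/java-6-openjdk/jre/bin/java   # verify the capabilities stuck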

    Read the article

  • MySQL - allow connection from remote machine as root user

    - by Senthil Kumar
Hi all, When I installed MySQL server on Windows, there was an option "Allow root connection from remote machine". I checked that option and had no problems using it. I then installed MySQL server on Ubuntu 9.04 using apt-get install. I can connect to the MySQL server from the same machine, but when I try to connect from a virtual machine, it doesn't work. My guess is that I should allow root connections from remote machines. How do I do that?
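
    On the Ubuntu side that installer checkbox corresponds to two separate things, sketched below (the host pattern and password are placeholders):

        # 1. Let mysqld listen beyond localhost: in /etc/mysql/my.cnf change
        #      bind-address = 127.0.0.1   ->   bind-address = 0.0.0.0
        #    and restart MySQL.

        # 2. Create a root login that is allowed to connect from the remote host:
        mysql -u root -p -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.1.%' IDENTIFIED BY 'secret' WITH GRANT OPTION; FLUSH PRIVILEGES;"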

    Read the article

  • Redhat 6 gui installation VS kickstart gives me different packages?

    - by jonaz
If I do the graphical install and select Basic Server plus aide and screen, I get a system with 535 installed packages. If I look at the /root/anaconda-ks.cfg file in that freshly installed system, I see:
        %packages
        @base
        @console-internet
        @core
        @debugging
        @directory-client
        @hardware-monitoring
        @java-platform
        @large-systems
        @network-file-system-client
        @performance
        @perl-runtime
        @security-tools
        @server-platform
        @server-policy
        @system-admin-tools
        pax
        python-dmidecode
        oddjob
        sgpio
        certmonger
        pam_krb5
        krb5-workstation
        nscd
        pam_ldap
        nss-pam-ldapd
        perl-DBD-SQLite
        aide
        screen
    If I then install a new system using a kickstart containing only those packages, I get 620 installed packages. So basically my question is: why does the system install almost 100 more packages when using kickstart compared to the GUI installation, when the exact same package groups are selected?
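
    To pin down exactly which packages differ, it is simplest to dump and diff the installed package lists from the two installs (a diagnostic sketch; the file names are arbitrary):

        # Run on each system, then compare the two files
        rpm -qa --qf '%{NAME}\n' | sort > packages-$(hostname).txt
        diff packages-gui.txt packages-kickstart.txt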

    Read the article

  • How to cache streaming video and silverlight with squid windows reverse proxy

    - by V. Romanov
We have an intranet web server running a Silverlight application (ACTUS media monitor, if anyone cares to know). The server is used to record video and stream it to clients through a CDN solution. We want to put a reverse proxy between the server and the CDN provider in order to remove the office network bottleneck that's currently strangling us. I've set up Squid for Windows on a separate machine outside the network using the squid BasicAccelerator configuration setting. It seems to work as far as the reverse proxy is concerned: requests are forwarded and the application is working, but it doesn't seem to cache anything (no space is used on the drive where Squid is installed). I found no explicit setting to turn caching on in Squid, so I assume it's on by default. Perhaps I need some other trick to make the video and/or Silverlight content cacheable? Any help will be appreciated. Any info you need to help me will be provided at once. Thanks in advance!
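
    Squid caches very little of this kind of content out of the box: large media objects usually need an explicit cache_dir size, a raised maximum_object_size, and a refresh_pattern that matches the media URLs. An illustrative squid.conf fragment (the path, sizes, and URL pattern are assumptions, not tested against ACTUS):

        cache_dir ufs c:/squid/var/cache 20000 16 256
        maximum_object_size 512 MB
        # cache matching media for up to a day, even without friendly Cache-Control headers
        refresh_pattern -i \.(mp4|wmv|ism)$ 1440 90% 1440 ignore-no-cache override-expires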

    Read the article

  • Google App Engine says "Must authenticate first." while trying to deploy any app

    - by Oleksandr Bolotov
Google App Engine says "Must authenticate first." while trying to deploy any app:
        me@myhost /opt/google_appengine $ python appcfg.py update ~/sda2/workspace/lyapapam/
        Application: lyapapam; version: 1.
        Server: appengine.google.com.
        Scanning files on local disk.
        Scanned 500 files.
        Scanned 1000 files.
        Initiating update.
        Email: <my_email_was_here>@gmail.com
        Password for <my_email_was_here>@gmail.com:
        Error 401: --- begin server output ---
        Must authenticate first.
        --- end server output ---
    We are getting this message with any application and under any developer account available to us. Here's what we have installed:
        App Engine SDK - 1.3.2
        PIL - 1.1.7
        Python - 2.5.5
        pip - 0.6.3
        ssl - 1.15
        wsgiref - 0.1.2
    So, what can it be? Is it a well-known problem?
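
    One thing worth ruling out (an assumption, based on how appcfg.py of that era cached credentials on disk) is a stale stored login; deleting the cookie file, or telling appcfg to ignore it, forces a fresh authentication:

        rm ~/.appcfg_cookies
        python appcfg.py --no_cookies update ~/sda2/workspace/lyapapam/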

    Read the article

  • Using old RAID configured disk after new disk has been used in the controller

    - by Narendra
I have a Dell PowerEdge T100 server with a Dell SAS 6 controller and two hard disks in RAID 1. Last week the server died, including one of the RAID 1 hard disks. We sent the server for repair and the problem with the PSU was fixed. But the repair guys also checked the RAID controller by configuring a new RAID with their test hard disk. Now, if I install the one working RAID 1 disk and one new disk, will the RAID controller let me continue my old RAID 1 and resync the new disk? What I fear is that the RAID controller will want the test hard disk from the repair guys, forcing me to reconfigure RAID 1 and wipe the working disk. If so, do I have to back up the working disk, reconfigure RAID 1 and reinstall? Or is there a better way? Note: I'm using the Dell SAS configuration utility to manage RAID (press CTRL+C after BIOS).

    Read the article

  • "Verifying DMI Pool" hang caused by raid array..

    - by Ling
Hi Experts, I have a problem. I obtained a new server with 4 hard drives (two 500 GB, two 2 TB) and an Adaptec RAID card. I arranged them in two RAID 1 arrays (the 500 GB pair as the primary and the 2 TB drives for bulk data). When both arrays are configured, the server hangs while booting at the message "Verifying DMI Pool"; however, if I remove the second array from the configuration the server boots fine. I have checked that they are both on different channels, I have disabled all other peripherals from the boot menu and ensured the hard drive is #1. I have booted into Linux rescue mode and checked that it reads both arrays fine. What else could be causing these problems? Thanks

    Read the article

  • How can I restrict SSH access when the source IP is dynamic

    - by Supratik
Hi, I want to protect SSH access to our live web server from all IPs except our office's static IP. There are some employees who connect to this live server from their dynamic IPs, so it is not always possible for me to change the iptables rule on the live server whenever an employee's dynamic IP changes. I tried to put them on the office VPN and allowed SSH access only from the office IP, but the office connection is slow compared to our employees' private internet connections; moreover, it adds extra overhead to our office network. Is there any way I can solve this problem?
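
    If pinning the port to one source IP isn't practical, the usual compromise is to make sshd key-only and restrict which accounts it accepts at all, so a guessed password from an arbitrary IP is worthless. A sketch of the relevant /etc/ssh/sshd_config lines (the user list is only an example):

        PasswordAuthentication no
        PermitRootLogin no
        AllowUsers alice bob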

    Read the article

  • X11 for apache user

    - by fuenfundachtzig
We are using Inkscape to convert SVG images uploaded to our server via a web form. For this, Inkscape offers a batch mode via the -z option, but this batch mode has a flaw. When Inkscape is run by the apache user, it breaks, saying:
        $ inkscape -z -W drawing.svg
        X11 connection rejected because of wrong authentication.
        The application 'inkscape' lost its connection to the display localhost:11.0;
        most likely the X server was shut down or you killed/destroyed the application.
    If you do the same as a normal user you also get errors:
        Xlib: connection to "localhost:11.0" refused by server
        Xlib: PuTTY X11 proxy: MIT-MAGIC-COOKIE-1 data did not match
        (inkscape:24050): Gdk-CRITICAL **: gdk_display_list_devices: assertion `GDK_IS_DISPLAY (display)' failed
        301.27942
    But at least Inkscape gives the correct answer (here the number stating the width of the image). Does somebody know how to make this also work for the apache user? Does it make sense to authorize apache to use X (if so, how)? In any case it doesn't feel like the right solution...
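
    If Inkscape keeps demanding a display even with -z, a common workaround is to hand it a throwaway virtual one with xvfb-run (from the xvfb package) instead of authorising the apache user against the real X server:

        xvfb-run -a inkscape -z -W drawing.svg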

    Read the article

  • Is it still "wrong" to require TLS on incoming SMTP messages

    - by jackweirdy
    According to the STARTTLS Spec Section 5: A publicly-referenced SMTP server MUST NOT require use of the STARTTLS extension in order to deliver mail locally. This rule prevents the STARTTLS extension from damaging the interoperability of the Internet's SMTP infrastructure. A publicly-referenced SMTP server is an SMTP server which runs on port 25 of an Internet host listed in the MX record (or A record if an MX record is not present) for the domain name on the right hand side of an Internet mail address. However, this spec was written in 1999, and considering it's 2014, I'd expect most SMTP clients, servers, and relays to have some kind of implementation of STARTTLS. How much email can I expect to lose if I require TLS for incoming messages?
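
    For what it's worth, "requiring TLS" is a one-line policy in Postfix (named here purely as an example MTA, not something from the question); whether to turn it on is exactly the interoperability trade-off being asked about:

        # /etc/postfix/main.cf - refuse mail from any peer that will not STARTTLS
        smtpd_tls_security_level = encrypt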

    Read the article

  • OSSEC is not running

    - by batman
I have two EC2 instances. On one I have installed the OSSEC server and on the other I have installed an OSSEC agent. Here is my server config INBOUND (security group/firewall):
        port:514   source:0.0.0.0/0
        port:1514  source:0.0.0.0/0
    But it does not seem to be working. In my agent log file I keep getting:
        2012/08/28 06:52:52 ossec-agentd: INFO: Using IPv4 for: x.x.x.x.x.x .
        2012/08/28 06:53:13 ossec-agentd(4101): WARN: Waiting for server reply (not started). Tried: 'x.x.x.x.x'.
    Edit: Running sudo netstat --inet -nlp | grep ossec, I'm getting:
        udp        0      0 0.0.0.0:1514    0.0.0.0:*    26027/ossec-remoted
    Where am I making the mistake?
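
    A "Waiting for server reply (not started)" loop usually means either the agent key was never imported on one side or the agent is pointed at the wrong address. The agent's ossec.conf should carry the server IP roughly as below (the IP is a placeholder), and the key exchange is done with manage_agents on both machines before restarting the services:

        <!-- /var/ossec/etc/ossec.conf on the agent -->
        <ossec_config>
          <client>
            <server-ip>10.0.0.5</server-ip>
          </client>
        </ossec_config>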

    Read the article

  • Sendmail relay out of Amazon EC2?

    - by Stephen Belanger
I have a site running on CentOS 5.4 through Amazon EC2. Unfortunately, Amazon has had nothing but trouble with their entire IP range getting blacklisted by spam tracking services regularly. I need a mail server, so I set up an SMTP server elsewhere that I want to send through, but I can't just send directly to it from PHP because the direct SMTP request is way too slow. What I want to do is relay through sendmail, but I've never used sendmail before, so I have no idea how to configure it. All I want is for all email sent from localhost to be relayed to one specific external server, but I don't know how to do that. I tried to find a tutorial online, but couldn't find anything that was particularly clear about how to go about it. Can anyone help me out?
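
    Relaying everything from localhost through one external MTA is what sendmail's SMART_HOST macro is for; a sketch of the sendmail.mc change (the hostname is a placeholder, CentOS file layout assumed):

        define(`SMART_HOST', `smtp.example.com')dnl

        # regenerate sendmail.cf and restart
        m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf && service sendmail restart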

    Read the article

  • Linux - How to manage the password of root?

    - by Jonathan Rioux
We have just deployed a couple of Linux servers. Each sysadmin will have his own account on the server (e.g. jsmith) and will connect using SSH with a key that will be put into the "authorized_keys" file in his home directory. Once connected to the server, if they want to issue an elevated command, they will do something like: sudo ifconfig They will then enter the root password. What I would like to know are the best practices for managing that root password. Should I change it periodically? And how do I share the new password with the sysadmins? Of course I will disable root login over SSH.
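
    A widely used pattern that avoids circulating a root password at all is to have sudo authenticate each admin with his own password; a sketch of the sudoers entry (the group name is an example, and the file should only be edited with visudo):

        # members of the admins group may run any command, proving their own password,
        # so the root password itself never needs to be shared or rotated among people
        %admins ALL=(ALL) ALL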

    Read the article

  • Ping: Destination Host Unreachable, from the destination host itself

    - by phunehehe
I have a server that responds in a weird way to ping:
        $ ping hostname.com
        PING hostname.com (<IP address>) 56(84) bytes of data.
        From hostname.com (<IP address>) icmp_seq=1 Destination Host Unreachable
        From hostname.com (<IP address>) icmp_seq=2 Destination Host Unreachable
        From hostname.com (<IP address>) icmp_seq=3 Destination Host Unreachable
        From hostname.com (<IP address>) icmp_seq=4 Destination Host Unreachable
    I'm confused, as the messages come from the server that I want to ping, and at the same time it's saying Destination Host (itself) Unreachable. Pinging by IP address yields the same result. The server is online and operating normally. What could be the cause?

    Read the article
