Search Results

Search found 98447 results on 3938 pages for 'sql server denali'.

Page 1667/3938 | < Previous Page | 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 1674  | Next Page >

  • Lots of files being used by blank web page. What are they?

    - by byronyasgur
    I am trying to optimise a website and I was using the network waterfall facility in Google Chrome. When I looked at the results there were lots of files which I didn't recognise. I first thought they might be something to do with Google Chrome itself, so I put a blank HTML file on my desktop and checked, but there was nothing in the waterfall except the file itself. So I put a blank file on my server and I got the output below. What are all these files? Are they all necessary, is this normal, and do I need to be concerned in any way? My hosting provider has always been excellent in every regard that I'm aware of. The hosting is shared, uses cPanel and is based on a LAMP server. I also note that a couple of those files have problems, but I have no idea how to fault-find that or whether it's a concern. EDIT: I have cleared the cache, so I don't think it's a browser cache issue.
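    One way to narrow this down is to fetch the blank page with curl and compare what the server actually returns against what the browser loads: anything referenced in the raw response was injected server-side (optimisation proxies, cPanel/host statistics snippets and the like), while requests that only show up in Chrome come from the client itself. A rough diagnostic sketch, assuming the blank page lives at a placeholder URL:

        # dump the response headers and save the body exactly as the server sends it
        curl -s -D - http://www.example.com/blank.html -o blank-body.html

        # anything listed here was added by the server, not by the browser
        grep -iE '<script|<link|<img|<iframe' blank-body.html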

    Read the article

  • Sharing files between multiple sites using only desktop software

    - by perlyking
    Our organisation has three sites: a head office, where the master copies of company files are stored, plus two branch offices using only workstations and a NAS or two. Currently we're talking about <10GB. At the main office, we have no admin access to the file server, as this is entirely controlled by the larger institution where we are located. For the same reason, we have no VPN remote access to this network. Instead, we simply have access to a network share over a Novell LAN. Question: how can we share files between offices in a way that minimises latency, i.e. that gives us a mirror of the main network share at each site? (There is little likelihood of concurrent editing, and we can live with the odd file conflict now and again.) Up to now branch office staff have had to use GotoMyPC-type solutions to remotely access files held at the main office. Or email. I was hoping to use Google Drive on a dedicated workstation at each office to sync the contents of the network share (head office) or NAS (branch offices) via the cloud, but at my last attempt (29 Jun '12), the Google Drive installer would not allow me to designate the remote network share as the "target" folder. (I chose Google Drive over Dropbox et al. as we already use GMail for corporate mail.) The next idea was to use a designated workstation at head office to mirror the network share to a local drive, then use Google Drive to push that to the cloud. This seems a step too far. Nor do I have any good ideas about how to achieve this network/local mirroring, as we can't, for example, install the rsync daemon on the server. I do not want to use Google Drive locally on each workstation as this will inconvenience users, and more importantly, move files off the backed-up, well-maintained (UPS, RAID etc.) network share at head office. Our budget is only in the £100s. Should we perhaps just ditch the head office server and use something like JungleDisk? At least this presents the user with what appears to be a mapped drive.
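    For the network/local mirroring step, note that rsync only needs a daemon on the remote end when it pulls over the rsync protocol; if the dedicated head-office workstation can mount the Novell share as an ordinary path and runs something Unix-like (or Cygwin), a scheduled one-way rsync into the folder Google Drive watches may be enough. A rough sketch with placeholder paths:

        # one-way mirror of the mounted network share into the Google Drive sync folder
        # --delete makes it an exact mirror, so do a dry run (-n) before trusting it
        rsync -av --delete /mnt/office-share/ /home/sync/GoogleDrive/office-share/

        # e.g. run it every 15 minutes from cron:
        # */15 * * * * rsync -a --delete /mnt/office-share/ /home/sync/GoogleDrive/office-share/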

    Read the article

  • DNS Help: Move domain, not mailserver

    - by Preserved
    I'm in the middle of launching a new website for an already-in-use domain. The domain has a complicated email system, so we'd like to move that over to the new server a bit later on. Currently the domain's DNS is managed by the current web host. I plan on moving the DNS management back to Network Solutions, then pointing the A record to the new website's IP. However, currently the DNS has the MX record pointing to the same place as the A record, so once Network Solutions is managing the DNS and I point the A record at the new IP, the MX record can no longer follow the A record.

    Right now:
        A record:  mydomain.com points to IP address 198.198.198.198
        MX record: mydomain.com points to IP address 198.198.198.198

    What I want:
        A record:  mydomain.com points to the IP address of the new server
        MX record: somehow points to the current, existing mailserver

    Does this even make sense?
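    For what it's worth, an MX record points at a hostname rather than an IP, so the usual pattern is to give the mailserver its own name (mail.mydomain.com below is purely a hypothetical example) whose A record stays on the old box, while the bare domain's A record moves to the new server. A sketch of how the end state could be verified:

        dig +short mydomain.com A        # should return the new web server's IP
        dig +short mydomain.com MX       # should return something like "10 mail.mydomain.com."
        dig +short mail.mydomain.com A   # should still return 198.198.198.198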

    Read the article

  • Own website fails to load first time

    - by AmazingDreams
    I have a website running on a VPS. Every time I first try to load the website, the connection times out; if I try again, it loads immediately. I'm not sure whether this is a DNS issue or a server issue. As far as I know everything is set up correctly. Also, it has been doing this from the moment I got this server and set up my domain name, and that's about two to three months ago. You may take a look here: http://www.wegotcha.nl/ As you can see, at this moment it's just an image; there are no scripts running in the background or anything. The only error Apache gives me is that favicon.ico cannot be found. It's an Apache web server running on Ubuntu 12.04.1 (newest version), and I update all packages almost every day (apt-get update && apt-get upgrade). I am merely an amateur in the area of web servers, so any help will be appreciated. :)
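    A timeout on the first attempt followed by an instant success often points at name resolution, or at one of several published addresses being unreachable (a stale AAAA record is a common culprit), rather than at Apache itself. A rough diagnostic sketch, run from an outside machine; the 203.0.113.10 address below is a placeholder for whatever the A record actually returns:

        dig +short www.wegotcha.nl A      # list every IPv4 address the name resolves to
        dig +short www.wegotcha.nl AAAA   # any IPv6 address here should also be reachable

        # hit one address directly, bypassing DNS, and time the connection
        curl -o /dev/null -s -w 'connect: %{time_connect}s  total: %{time_total}s\n' \
             --resolve www.wegotcha.nl:80:203.0.113.10 http://www.wegotcha.nl/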

    Read the article

  • How to identify which website on my instance is receiving lots of traffic?

    - by Bob Flemming
    I am new to server administration and have just set up a new quad-core instance which hosts around 15 websites. Over the past couple of days my server load has been averaging around 15.00. I believe it is because one (or maybe more) of the websites is getting spammed by spambots. Typing 'top' at the command line shows many processes from user 'www-data', which indicates lots of web traffic. Is there an easy way to identify which one of my sites is taking a hammering? Reading the Apache logs is a very difficult task as most of the websites receive daily traffic of 10,000+ unique users. Any help would be appreciated!
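    If each vhost writes its own access log (the usual layout when every site has its own configuration), a quick tally of request counts per log will normally expose the noisy one without reading anything line by line. A rough sketch; the log directory and naming pattern are assumptions, so adjust them to match the actual vhost configs:

        # requests per site, busiest first
        wc -l /var/log/apache2/*access.log | sort -rn | head

        # for the busiest site, the client IPs making the most requests
        # (spambots tend to dominate this list)
        awk '{print $1}' /var/log/apache2/busysite-access.log | sort | uniq -c | sort -rn | head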

    Read the article

  • Postfix sending mail back to itself? (Ubuntu 9.10)

    - by webo
    I set up Dovecot and Postfix using the "Dovecot-Postfix" package with SASL and all that. The Dovecot part seems to be working fine, but I'm having issues with Postfix. Whenever I send a message to another address through the Postfix server, two things happen: (1) the message never gets to the other address (even when I request a delivery notification, it says that it's been delivered, but it's not in the spam box of the other inbox or anywhere else), and (2) the message comes back to my inbox through Dovecot as though I had sent it to myself internally. E.g. I send an email through my Postfix server to my Gmail account; 10 minutes later nothing shows up in my Gmail account, but the message comes back to me as though I was sending it to my internal address (with no errors). Any ideas?
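    The usual first step here is to let Postfix tell you what it actually did with the message, since the mail log records whether it was relayed out or delivered straight back into a local mailbox. A short diagnostic sketch:

        # watch the delivery attempt in real time while sending a test message
        tail -f /var/log/mail.log

        # if the log shows local delivery instead of a relay to the outside world,
        # check which domains Postfix thinks are its own and whether a relayhost is set
        postconf mydestination mydomain myhostname relayhost

        # anything stuck in the queue?
        postqueue -p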

    Read the article

  • How to sync passwords one-way between windows domains without trust relationship?

    - by Franco C.
    We're migrating from Windows 2003 to 2008 SBS. We will run concurrently for a short period of time. I cannot establish a trust relationship between Server 2003 & Server 2008 SBS, and I would like to know if there is a way to sync the passwords between 2003 and 2008. For example, I would like to dump the pre-encrypted passwords to a file in 2003 and then use this to update the passwords for the corresponding usernames in 2008 SBS. Is this possible? I have no need to ever see the clear-text version of the passwords. I see one commercial product, but it hardly seems worth it given the temporary nature of my project... Thanks, Franco

    Read the article

  • Client-side certificates

    - by walshms
    My company purchased a wildcard certificate from a vendor. This certificate was successfully configured with Apache 2.2 to secure a subdomain. Everything on the SSL side works. Now I'm required to generate X.509 client-side certificates to issue for this subdomain. I'm following this page: (http://www.vanemery.com/Linux/Apache/apache-SSL.html), starting with "Creating Client Certificates for Authentication". I've generated the p12 files and successfully imported them into Firefox. When I browse to the site now, I get an error in Firefox that says "The connection to the server was reset while the page was loading." I think my problem is coming from not signing the client-side certificates correctly. When I sign a client-side certificate, I'm using the PEM file (RapidSSL_CA_bundle.pem) from RapidSSL (who we bought the certificate from) for the -CA argument. For the -CAkey argument, I'm using the private key of the server. Is this correct?
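    One thing worth noting: a certificate can only be signed by a CA whose private key you actually hold, so pairing RapidSSL's bundle (-CA) with the server's own key (-CAkey) produces client certificates that nothing can validate. A common alternative, sketched below with placeholder names, is to create a small private CA just for client certificates and point Apache's SSLCACertificateFile at it (together with SSLVerifyClient require); whether that fits your requirements is a separate question.

        # private CA used only for issuing client certificates
        openssl genrsa -out client-ca.key 2048
        openssl req -new -x509 -days 3650 -key client-ca.key -out client-ca.crt \
                -subj "/O=ExampleCorp/CN=ExampleCorp Client CA"

        # key + CSR for one client, then sign it with the private CA
        openssl req -new -newkey rsa:2048 -nodes -keyout client1.key -out client1.csr \
                -subj "/O=ExampleCorp/CN=client1"
        openssl x509 -req -days 365 -in client1.csr -CA client-ca.crt -CAkey client-ca.key \
                -set_serial 01 -out client1.crt

        # bundle for import into Firefox
        openssl pkcs12 -export -in client1.crt -inkey client1.key -out client1.p12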

    Read the article

  • Gitosis installation of public key not working...

    - by user29600
    I've been following this tutorial to install and set up git on Ubuntu Server 10.04, using Windows 7 as a client. However, after finally figuring out how it works (I executed gitosis-init a bunch of times on the wrong key), I copied the id_rsa.pub file over to the server's /tmp folder and ran it again. Unfortunately it still doesn't work, and when I execute git clone [email protected]:gitosis-admin.git it asks for gitosis's password rather than the RSA passphrase. I'm assuming this is the same problem this guy is having here... however, after following his instructions (purge git-core and gitosis, manually remove the /srv/gitosis folder, and follow the instructions again, with the proper id_rsa.pub file this time), I'm still having the same issue. Anyone know what I'm doing wrong? Is there any way to probe for more information that might help in solving this?
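    Being asked for the gitosis user's password usually means sshd never matched the offered key against that account's authorized_keys, so key-based login fell back to password authentication. A few things worth re-checking, sketched with paths that assume the stock Ubuntu gitosis package (home directory /srv/gitosis) and a placeholder hostname:

        # re-run the init against the public key the Windows client actually uses
        sudo -H -u gitosis gitosis-init < /tmp/id_rsa.pub

        # the key should now appear here, prefixed with command="gitosis-serve ..."
        sudo cat /srv/gitosis/.ssh/authorized_keys

        # from the client: a gitosis error message (not a password prompt) means the key works
        ssh gitosis@your-server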

    Read the article

  • Nginx - basic http authentication on PHP-script

    - by half_bit
    I added a PHP script that serves as the "cgi-bin". Configuration:

        location ~ ^/cgi-bin/.*\.(cgi|pl|py|rb) {
            gzip off;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index cgi-bin.php;
            fastcgi_param SCRIPT_FILENAME /etc/nginx/cgi-bin.php;
            fastcgi_param SCRIPT_NAME /cgi-bin/cgi-bin.php;
            fastcgi_param X_SCRIPT_FILENAME /usr/lib/$fastcgi_script_name;
            fastcgi_param X_SCRIPT_NAME $fastcgi_script_name;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
            fastcgi_param GATEWAY_INTERFACE CGI/1.1;
            fastcgi_param SERVER_SOFTWARE nginx;
            fastcgi_param REQUEST_URI $request_uri;
            fastcgi_param DOCUMENT_URI $document_uri;
            fastcgi_param DOCUMENT_ROOT $document_root;
            fastcgi_param SERVER_PROTOCOL $server_protocol;
            fastcgi_param REMOTE_ADDR $remote_addr;
            fastcgi_param REMOTE_PORT $remote_port;
            fastcgi_param SERVER_ADDR $server_addr;
            fastcgi_param SERVER_PORT $server_port;
            fastcgi_param SERVER_NAME $server_name;
            fastcgi_param REMOTE_USER $remote_user;
        }

    PHP script:

        <?php
        $descriptorspec = array(
            0 => array("pipe", "r"),  // stdin is a pipe that the child will read from
            1 => array("pipe", "w"),  // stdout is a pipe that the child will write to
            2 => array("pipe", "w")   // stderr is a pipe that the child will write to
        );
        $newenv = $_SERVER;
        $newenv["SCRIPT_FILENAME"] = $_SERVER["X_SCRIPT_FILENAME"];
        $newenv["SCRIPT_NAME"] = $_SERVER["X_SCRIPT_NAME"];
        if (is_executable($_SERVER["X_SCRIPT_FILENAME"])) {
            $process = proc_open($_SERVER["X_SCRIPT_FILENAME"], $descriptorspec, $pipes, NULL, $newenv);
            if (is_resource($process)) {
                fclose($pipes[0]);
                $head = fgets($pipes[1]);
                while (strcmp($head, "\n")) {
                    header($head);
                    $head = fgets($pipes[1]);
                }
                fpassthru($pipes[1]);
                fclose($pipes[1]);
                fclose($pipes[2]);
                $return_value = proc_close($process);
            } else {
                header("Status: 500 Internal Server Error");
                echo("Internal Server Error");
            }
        } else {
            header("Status: 404 Page Not Found");
            echo("Page Not Found");
        }
        ?>

    The problem, though, is that I cannot add basic authentication: as soon as I enable it for the location ~ /cgi-bin block, I get a 404 error when I try to open it. How can I solve this? I thought about restricting access to only my second server and then adding basic authentication over a proxy, but there must be a simpler solution. Sorry for the bad title; I couldn't think of a better one.

    Read the article

  • High shmmax value in Redhat 6.3

    - by xpapad
    We are using Redhat 6.3 with 30G RAM to host our Postgres server. The (default) shmmax value is 68,719,476,736. In some forums I have read that having an shmmax value larger than the RAM causes extensive paging, but the Redhat forums warn against changing a kernel parameter that is already configured to a value larger than the minimum requirements for an environment. On Server Fault I've read that this probably has no impact. So is there any impact of having an shmmax value larger than RAM in a DB server, or does the kernel understand this and handle it appropriately? Thanks
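    As a side note, shmmax is only a ceiling on the size of a single shared-memory segment; nothing is allocated until a process (here, Postgres via its shared_buffers setting) actually requests one. A quick sketch for comparing the configured limit against what is really in use:

        cat /proc/sys/kernel/shmmax   # the configured ceiling, in bytes
        ipcs -m                       # shared-memory segments actually allocated right now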

    Read the article

  • Mail being sent as root on Ubuntu 14.04

    - by Benjamin Allison
    I'm really struggling with this. I'm trying to set up this server to send mail using Gmail's SMTP. Google keeps bouncing the messages, saying that authentication is required: smtp.gmail.com[74.125.196.109]:25: 530-5.5.1 Authentication Required. Learn more at smtp.gmail.com[74.125.196.109]:25: 530 5.5.1 http://support.google.com/mail/bin/answer.py?answer=14257 But it seems my server is trying to send mail as [email protected]. I'm baffled. Here's what I've done so far. Updated main.cf:

        relayhost = [smtp.gmail.com]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_tls_CAfile = /etc/postfix/cacert.pem
        smtp_use_tls = yes

    Created /etc/postfix/sasl_passwd:

        [smtp.gmail.com]:587 [email protected]:password

    Then did the following:

        sudo chmod 400 /etc/postfix/sasl_passwd
        sudo postmap /etc/postfix/sasl_passwd
        cat /etc/ssl/certs/Thawte_Premium_Server_CA.pem | sudo tee -a /etc/postfix/cacert.pem
        service postfix restart

    I can't for the life of me get a mail message to send, or change the default mail user from [email protected] to [email protected] (FWIW, I'm using Google Apps; that's why it's not a .gmail address).
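    Two things stand out here. The bounce shows Google being contacted on port 25 even though relayhost points at port 587, so it is worth confirming the running Postfix has actually picked up the new settings; and the "sending as root" part is normally handled by rewriting local sender addresses with a generic map. A sketch, where the hostname and target address below are placeholders rather than your real values:

        # confirm the live configuration and reload it if needed
        postconf relayhost smtp_sasl_auth_enable
        sudo postfix reload

        # rewrite mail from local system users (root, www-data, ...) to the Google Apps address
        echo 'root@myserver.mydomain.com  user@mydomain.com' | sudo tee /etc/postfix/generic
        sudo postmap /etc/postfix/generic
        sudo postconf -e 'smtp_generic_maps = hash:/etc/postfix/generic'
        sudo service postfix restart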

    Read the article

  • Subversion 1.7.x and expat location in configure

    - by ditto
    I am running CentOS 6.3 64-bit and the DirectAdmin control panel. Currently I have Apache Subversion 1.6.19 installed without any problems. I have installed expat, expat-devel and neon-devel using yum. When installing Apache Subversion 1.6.19, this configure command works fine:

        ./configure --prefix=/usr --with-ssl --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    However, when installing Apache Subversion 1.7.7 using the same configure command as above, I get this error after running "make":

        /etc/httpd/lib/libaprutil-1.so: undefined reference to `XML_StopParser'
        collect2: ld returned 1 exit status
        make: *** [subversion/svnadmin/svnadmin] Error 1

    However, I found out I can solve that problem by adding this to the configure command:

        --with-expat=includes:lib_search_dirs:libs

    So it then looks like this:

        ./configure --prefix=/usr --with-ssl --with-expat=includes:lib_search_dirs:libs --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    However, that configure command then gives this warning:

        configure: WARNING: Expat found amongst libraries used by APR-Util, but Subversion libraries might be needlessly linked against additional unused libraries. It can be avoided by specifying exact location of Expat in argument of --with-expat option.

    So I want to solve that. I have experimented a lot, but have not been able to figure out how to specify the exact location of Expat in the configure command, or how to find out what that location should be. After a lot of searching I found this: http://subversion.tigris.org/issues/show_bug.cgi?id=3997 - that is a FreeBSD user saying this:

        Building Subversion 1.7.x on FreeBSD currently requires a configure flag:
        --with-expat=/usr/local/include:/usr/local/lib:expat
        As that is the default location of expat on that platform, it would be nice if configure detected it automatically.

    However, I am not using FreeBSD; I am running CentOS 6.3 64-bit. Also, remember I said I have installed expat, expat-devel and neon-devel using yum. I tried the expat path posted by the FreeBSD user anyway, and it seems to work: it does not give errors when running configure and does not give errors when running "make". This is what I used:

        ./configure --prefix=/usr --with-ssl --with-expat=/usr/local/include:/usr/local/lib:expat --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    But this is a production server, so I need your advice: is this also correct to run on a CentOS server? Is the following path in the --with-expat option correct on CentOS?

        --with-expat=/usr/local/include:/usr/local/lib:expat

    If not, please advise what it should be changed to. Thanks in advance for any confirmation or help on this!
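    Not an authoritative answer, but on a stock CentOS 6 x86_64 box the yum expat-devel package puts its header under /usr/include and its library under /usr/lib64, not under /usr/local, so the FreeBSD paths most likely "work" only because configure falls back to the APR-Util copy again. A sketch of checking the real locations and feeding them to --with-expat:

        # confirm where yum actually put Expat
        rpm -ql expat-devel | grep -E 'expat\.h|libexpat'

        # then point configure at those locations (paths assume a 64-bit system)
        ./configure --prefix=/usr --with-ssl \
            --with-expat=/usr/include:/usr/lib64:expat \
            --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config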

    Read the article

  • Using Postgres on Volusion site

    - by Sean
    Okay, I apologize if this is so basic that I should know the answer, but I'm not sure where else to go for the solution. I would like to start a small store site using Volusion, and I would like some custom ASP code to query data that I currently have in a Postgres database. I would like to be able to just move the database file(s) onto the Volusion server via FTP and access them from my store site (via the custom ASP). Do I need to install Postgres onto the server to do this, or can I just FTP my database file(s) and access them with the ASP code? I think I need to install Postgres, but would like to avoid such an installation if possible.

    Read the article

  • Configure iptables with a bridge and static IPs

    - by Andrew Koester
    I have my server set up with several public IP addresses, with a network configuration as follows (with example IPs):

        eth0
         \- br0 - 1.1.1.2
            |- [VM 1's eth0]
            |   |- 1.1.1.3
            |   \- 1.1.1.4
            \- [VM 2's eth0]
                \- 1.1.1.5

    My question is, how do I set up iptables with different rules for the actual physical server as well as the VMs? I don't mind having the VMs doing their own iptables, but I'd like br0 to have a different set of rules. Right now I can only let everything through, which is not the desired behavior (as br0 is exposed). Thanks!
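    One way to think about it: traffic addressed to 1.1.1.2 terminates on the host and goes through the INPUT chain, while traffic bridged through br0 to the VMs goes through FORWARD (provided bridge-netfilter is enabled), so each can get its own rules keyed on destination address. A rough sketch; the specific ports are just examples:

        # make bridged traffic visible to iptables (bridge-netfilter)
        sysctl -w net.bridge.bridge-nf-call-iptables=1

        # rules for the host itself (br0, 1.1.1.2)
        iptables -A INPUT -d 1.1.1.2 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -d 1.1.1.2 -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -d 1.1.1.2 -j DROP

        # traffic bridged to the VMs crosses FORWARD and can be filtered per VM address
        iptables -A FORWARD -d 1.1.1.3 -p tcp --dport 80 -j ACCEPT
        iptables -A FORWARD -d 1.1.1.5 -p tcp --dport 443 -j ACCEPT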

    Read the article

  • HTTPS version of page throws 404, regular HTTP appears fine?

    - by Ryan
    I'm having a strange issue with a website in IIS on Windows Server 2003. It has a valid wildcard certificate on it, but when I use HTTPS on the page I get a 404 Not Found; without HTTPS it shows up fine. Also, if I go to the domain root of the site over HTTP, the homepage shows up, but over HTTPS it redirects me to a totally different website installed on the same IIS server. I am quite confused. I tried giving each site a unique IP address but it didn't change anything, and I also tried changing the SSL ports; no luck. This IIS server is also set up to run PHP. What could I check to resolve this?

    Read the article

  • Updating to Exchange 2013 - any way to do it now?

    - by TomTom
    Exchange 2013 is out, and already available to some people. Got it from the VLSC; now trying to find an upgrade path that works for some customers. Problem: there is no in-place upgrade. It is "install on a new server, move mailboxes", which means coexistence with Exchange 2010 for as long as the mailbox moves take. Sadly the only compatible version is Exchange 2010 SP3 - which is not going to be out for quite some time. Any way to still do an update? Backup and restore to the new server? Any beta of the SP that is good enough to ONLY move the mailboxes? I do not care about the rest - this really is "install Exchange 2013, move mailboxes, UNINSTALL 2010". I am quite - ah - unhappy that, for now, the only ones who will be able to install 2013 are new companies.

    Read the article

  • How do I stop someone from saturating my line & wasting CPU cycles

    - by JoshRibs
    My web host shows inbound & outbound traffic with MRTG. I have a steady 3.5 Mbps of inbound traffic from Nigeria. Even assuming the source IPs & destination ports are blocked with iptables, and after verifying nothing is listening on those ports, will the traffic still always pass through the switch & "get" to my server (where my server wastes CPU cycles "dropping" the packets)? Assuming I was set up with a hardware firewall, the traffic would still show in MRTG assuming the firewall is behind the switch? So is there any way to stop someone from saturating your 100 Mbps line, if they also have a 100 Mbps line? Other than filing an abuse complaint with the kind folks in Nigeria?

    Read the article

  • IP-restricted port forwarding with iptables

    - by Tom
    As an example, I have two authorized client computers, 1.1.1.1 and 2.1.1.1. My server running iptables is 3.1.1.1 and my firewalled web server is 4.1.1.1. When one of the authorized client IPs connects to 3.1.1.1 on port 80, I would like the connection to be forwarded to 4.1.1.1 on port 8888. If any other IP attempts to connect, I would like it to refuse/drop the connection. What iptables config would accomplish this? Is there something more specific out there that would be better suited for this job?
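    A rough sketch of one way to do this with DNAT; existing rules and the default FORWARD policy aren't shown, and the final MASQUERADE is only needed if 4.1.1.1 would not otherwise route its replies back through 3.1.1.1:

        # forward port 80 on 3.1.1.1 to 4.1.1.1:8888, but only for the two authorized clients
        iptables -t nat -A PREROUTING -p tcp -s 1.1.1.1 -d 3.1.1.1 --dport 80 \
                 -j DNAT --to-destination 4.1.1.1:8888
        iptables -t nat -A PREROUTING -p tcp -s 2.1.1.1 -d 3.1.1.1 --dport 80 \
                 -j DNAT --to-destination 4.1.1.1:8888

        # let the rewritten connections through; anyone else hitting port 80 lands on
        # the local INPUT chain (no DNAT matched) and is dropped there
        iptables -A FORWARD -p tcp -s 1.1.1.1 -d 4.1.1.1 --dport 8888 -j ACCEPT
        iptables -A FORWARD -p tcp -s 2.1.1.1 -d 4.1.1.1 --dport 8888 -j ACCEPT
        iptables -A INPUT   -p tcp --dport 80 -j DROP

        # routing must be enabled, and replies must come back through this box
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -p tcp -d 4.1.1.1 --dport 8888 -j MASQUERADE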

    Read the article

  • Outlook + Exchange 2007: is it possible to get rid of local OST files?

    - by kdl
    I am looking for a solution which would allow me to use the convenience of Outlook as a mail client while at the same time having no PST or OST files on the local computer. Even in non-cached mode, Outlook creates an OST file where it downloads everything from the Exchange server. OWA does not create any local files (except cookies, I believe) but lacks some of the nice features Outlook has. Would it be feasible to place OST files on a network share? Maybe a solution exists for some other client + server pair?

    Read the article

  • Forcing rsync to convert file names to lower case

    - by SvrGuy
    We are using rsync to transfer some (millions of) files from a Windows (NTFS/Cygwin) server to a Linux (RHEL) server. We would like to force all file and directory names on the Linux box to be lower case. Is there a way to make rsync automagically convert all file and directory names to lower case? For example, let's say the source file system had a file named:

        /foo/BAR.gziP

    Rsync would create (on the destination system):

        /foo/bar.gzip

    Obviously, with NTFS being a case-insensitive file system, there cannot be any conflicts... Failing the availability of an rsync option, is there an enhanced build or some other way to achieve this effect? Perhaps a mount option on Cygwin? Perhaps a similar mount option on Linux? It's RHEL, in case that matters.
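    As far as I know rsync itself has no case-conversion option, so the usual workarounds are either to lowercase the names on the Cygwin side before transferring or to run a rename pass on the RHEL side after each sync. A sketch of the latter (the destination path is a placeholder); note the caveat that subsequent rsync runs will see the renamed files as missing and re-transfer them unless the source names are lowercased too:

        #!/bin/bash
        DEST=/data/incoming   # placeholder destination path

        # -depth renames children before their parent directories
        find "$DEST" -depth -name '*[A-Z]*' -print0 |
        while IFS= read -r -d '' path; do
            dir=$(dirname "$path")
            base=$(basename "$path")
            lower=$(printf '%s' "$base" | tr '[:upper:]' '[:lower:]')
            if [ "$base" != "$lower" ]; then
                mv -n -- "$path" "$dir/$lower"   # -n: never overwrite an existing name
            fi
        done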

    Read the article

  • Cannot connect to HTTPS port on Ubuntu

    - by Simpleton
    I've installed a new SSL certificate and set up nginx to use it, but requests time out when trying to hit HTTPS on the site. When I telnet to my domain on port 80 it connects, but it times out on port 443. I'm not sure if there's some default on Ubuntu preventing the connection. UFW status shows:

        443    ALLOW    Anywhere

    netstat -a shows:

        tcp    0    0 *:https    *:*    LISTEN

    nmap localhost shows:

        443/tcp open  https

    The relevant block in the nginx config is:

        server {
            listen 443;
            listen [::]:80 ipv6only=on;
            listen 80;
            root /path/to/app;
            server_name mydomain.com;
            ssl on;
            ssl_certificate /etc/nginx/ssl/ssl-bundle.crt;
            ssl_certificate_key /etc/nginx/ssl/server.key;
            location / {
                proxy_pass http://mydomain.com;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }
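    Since the socket is listening and UFW allows 443, a useful next step is to separate "nginx isn't answering TLS" from "something in front of the box is dropping 443". A rough sketch:

        # on the server itself: does nginx complete a TLS handshake locally?
        openssl s_client -connect 127.0.0.1:443 -servername mydomain.com </dev/null | head

        # the same test from an outside machine; if this one hangs while the local test
        # works, the likely culprit is the hosting provider's filtering in front of the
        # VPS rather than Ubuntu or nginx
        openssl s_client -connect mydomain.com:443 -servername mydomain.com </dev/null | head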

    Read the article

  • How to force or redirect to SSL in nginx?

    - by Callmeed
    I have a signup page on a subdomain, like: https://signup.mysite.com It should only be accessible via HTTPS, but I'm worried people might somehow stumble upon it via HTTP and get a 404. My http/server block in nginx looks like this:

        http {
            server {
                listen 443;
                server_name signup.mysite.com;
                ssl on;
                ssl_certificate /path/to/my/cert;
                ssl_certificate_key /path/to/my/key;
                ssl_session_timeout 30m;
                location / {
                    root /path/to/my/rails/app/public;
                    index index.html;
                    passenger_enabled on;
                }
            }
        }

    What can I add so that people who go to http://signup.mysite.com get redirected to https://signup.mysite.com? (FYI, I know there are Rails plugins that can force SSL, but I was hoping to avoid that.)

    Read the article

  • Force .js files saved in ANSI encoding to show in UTF-8 on IIS 7.5

    - by Xcarpa
    I'm migrating a web system that currently runs on Windows Server 2003 with IIS 6 to IIS 7.5 on Windows Server 2008. This system generates JavaScript files with accented characters in ANSI encoding (Portuguese - Brazil). These scripts show, for example, alert messages. In IIS 6 I have no problem with that, but on IIS 7.5, if those files are not in UTF-8, the accented characters do not appear correctly. Is there any way to force these files, even in ANSI, to be processed by IIS 7.5 as UTF-8? Thank you! Cheers, Xcarpa

    Read the article

  • Recurring Apache 2.0.52 error on CentOS 4 - 'could not create `rewrite_log_lock`'

    - by warren
    I have been seeing a recurring issue on my web server:

        [Sun May 16 03:10:19 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock
        Configuration Failed
        [Sun May 16 04:10:05 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock
        Configuration Failed
        [Sun May 16 05:10:04 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock
        Configuration Failed
        [Sun May 16 05:17:13 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock
        Configuration Failed

    So far, the only fix I have found to this when it happens is to reboot my server. This is non-ideal :-\ Restarting httpd does not clear the error. df indicates I have 20+ gigs free, and top and free both report 800+ megs (or 1.2 gigs) free:

        > df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/simfs             40G   18G   23G  44% /

        > free
                     total       used       free     shared    buffers     cached
        Mem:       1474560     300832    1173728          0          0          0
        -/+ buffers/cache:      300832    1173728

    Any ideas on why this would recur, and how to prevent/fix it?
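    In this message the "No space left on device" almost always refers to SysV semaphores rather than disk space (which matches df showing plenty free): mod_rewrite needs a semaphore for its lock, and the system-wide semaphore table has filled up, typically with stale arrays left behind by crashed httpd children. A sketch of recovering without a reboot; check the owner column from ipcs before removing anything:

        # list current semaphore arrays and their owners
        ipcs -s

        # remove stale arrays owned by the web server user (often 'apache' on CentOS)
        for id in $(ipcs -s | awk '$3 == "apache" {print $2}'); do
            ipcrm -s "$id"
        done

        # current kernel limits (raise via the kernel.sem sysctl if this keeps recurring)
        cat /proc/sys/kernel/sem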

    Read the article
