
  • Slow website load with CNAME, fast when using IP

    - by Nate Strandberg
    I set up two DNS servers on my network: ns1.byte-werx.com and ns2.byte-werx.com. I can ping the DNS servers and get a fairly good response time, and when I dig them I also get a fairly reasonable response, but any website I resolve through them is painfully slow (upwards of 20 seconds), verifiable by performing a tracert or attempting to access the URL in a browser. The DNS servers are running CentOS 6.3 and BIND9 with 500 MB of memory (I figure that should be more than enough?). I have a reverse lookup zone (1.168.192) along with two website zones (www.byte-werx.com and www.stayhomedental.com). If I access the websites by their IP, the pages load nearly instantly, so I do not believe the issue is with the hosting server; that is running Ubuntu Server 12.04 and Apache2 with 12 GB of memory. Any thoughts? I do not have the named.conf file in front of me, but I can edit this post to include it if you feel it would be useful. Thanks for any advice!
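
    One hedged way to separate resolution latency from web latency is to time each step on its own (any record the servers are authoritative for will do):

        # resolution time appears in dig's "Query time" footer
        dig @ns1.byte-werx.com www.byte-werx.com A

        # curl can split a fetch into name-lookup time vs. total time
        curl -o /dev/null -s -w 'dns: %{time_namelookup}s  total: %{time_total}s\n' http://www.byte-werx.com/

    If time_namelookup dominates, the problem is in BIND or the resolver path; if not, it is somewhere else.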


  • Redirect domains with lighttpd?

    - by matt
    I'm working on getting a WordPress MU install running on my VPS. I enabled the 'simple-vhost' mod and can access the site fine. The problem is I can only get to it from domain.com; if I try www.domain.com I get shown the default lighttpd page. I'd like to get everything pointing to one place. My DNS records look like this:

        *.domain.ORG     300  A      xx.xx.xx
        domain.ORG       300  A      xx.xx.xx
        WWW.domain.ORG   300  CNAME  domain.ORG
        domain.ORG       300  MX     domain.ORG

    What is happening? Thanks
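
    For what it's worth, simple-vhost maps each hostname to its own document root, so a hostname without a matching directory falls back to the configured default host, which would explain the stock page for www. One hedged fix (a sketch using mod_redirect, which must be enabled; untested against this setup) is to canonicalize the host before simple-vhost sees it:

        $HTTP["host"] =~ "^www\.(.*)$" {
            url.redirect = ( "^/(.*)" => "http://%1/$1" )
        }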


  • How do I associate server traffic to a domain hosted on that server?

    - by morley
    I have three or four Linux servers, each of which hosts anywhere from 5 to 50 domains. Each domain has its own folder, /www/projectname/web/, and logs go in /www/projectname/log. However, if there's a traffic spike (or, as I see it on my end, a memory usage spike), I'm not sure how to figure out which domain is responsible for the traffic without running tail -f on each of the projects and making an educated guess based on how fast things scroll. There's got to be a better way! There probably is, but I haven't seen it. And the last time I checked, bandwidth monitors only report system-wide load. So if anyone knows how to do this the right way, please let me know. Thanks!
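
    Assuming the sites are served by Apache (the question doesn't say), the usual approach is a separate access log per virtual host; a per-domain spike then shows up as one log growing faster than the rest. A minimal sketch:

        <VirtualHost *:80>
            ServerName projectname.example
            DocumentRoot /www/projectname/web
            # per-domain logs make "which site is spiking?" answerable per file
            CustomLog /www/projectname/log/access.log combined
            ErrorLog  /www/projectname/log/error.log
        </VirtualHost>

    Something like du -s /www/*/log, or a log analyzer pointed at each file, then attributes the traffic.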


  • Where to find a list of online TV/video/webcam sources?

    - by Frank
    I know there are lots of web sites that offer online TV/stream viewing, such as http://tvunetworks.com and http://www.hulu.com/, but the sources of their streams are usually well hidden. I wonder if there is any open source project that collects online TV/video/webcam sources, so that TV stations and individuals can publicly list their stream sources in the following format (you can copy the URLs below into a browser and start watching):

        Greek TV|mms://eu02.egihosting.com/938657?MSWMExt=.asf
        Turkish TV|http://www.bizidinle.com/player/SAlone.asp?id=7

    Even if there is no public open source project, is there anywhere that I can find such a list so that I can get to the stream URLs?


  • Transfer code from one server to another

    - by Kamlesh Bhure
    I want to transfer new code to my new production server. I have the code files on my development server. Instead of uploading files using FTP from my local machine, is there another way to transfer code from one server to the other? What I am thinking: I will make an archive of the whole code tree and place it in the webroot, so that it is accessible over the internet at a link like http://www.mydomain.com/code.tar.gz. Then, on the other server, I will just run:

        wget http://www.mydomain.com/code.tar.gz

    Will this transfer be done in a few seconds? And may I know whether this is a correct approach?
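
    The wget-from-webroot approach does work, but it briefly exposes the whole codebase to anyone who guesses the URL. A hedged sketch of the same transfer over SSH instead (host and paths are placeholders):

        # one-shot copy of an archive
        tar czf /tmp/code.tar.gz -C /var/www/myproject .
        scp /tmp/code.tar.gz deploy@production:/tmp/

        # or skip the archive and sync the tree directly
        rsync -avz /var/www/myproject/ deploy@production:/var/www/myproject/

    Either way the transfer is server-to-server, so it should be about as fast as the wget approach.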


  • Authentication issues setting up iRedMail on Debian

    - by Sergio Rinaudo
    I'm setting up a mail exchange server using iRedMail. Following the official iRedMail installation guide (http://www.iredmail.org/install_iredmail_on_debian.html) and the Digital Ocean guide (https://www.digitalocean.com/community/articles/how-to-install-iredmail-on-ubuntu-12-04-x64) I was able to install iRedMail without any problems, so I have all the services up and running. I can configure domains and emails using iRedAdmin, BUT I have problems both sending and receiving email: what I get from Roundcube is 'Authentication error' when trying to send an email, and I can't receive anything either. I also tried to connect to the MX server using telnet; it connects, but after the STARTTLS command, when I start to write "MAIL FROM:", the connection is lost. Something in the configuration is not working (at the moment I have the configuration written by the iRedMail installation), but I do not know where. I hope someone can enlighten me! Thank you
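
    One hedged note on the telnet test: after STARTTLS the server expects a TLS handshake, so typing further plain-text commands (like MAIL FROM:) drops the connection regardless of how the mail system is configured. To test the encrypted dialogue end to end, openssl can drive the handshake (hostname is a placeholder):

        openssl s_client -starttls smtp -connect mx.example.com:25
        # once connected, continue by hand:
        #   EHLO client.example.com
        #   MAIL FROM:<[email protected]>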


  • Best practice, or generally the best way, to set up a web-hosting server, permissions, etc.

    - by Jagot
    Hi, I'm about to set up a server upon which a friend and I will be hosting web sites, and I'll be using Debian. I've set up a LAMP stack many times for local testing purposes, but never for actual production use. I was wondering what the best practices are in terms of setting the server up, specifically with reference to accessing the web root directory. A couple of the options I have seen:

    1. Set up a single user account on the server for us both to use, and use a virtual host to point to somewhere in the home directory, e.g. /home/webdev/www.

    2. Set each of us up with a user account, and grant permissions in some way to /var/www (what would be the best way? set up a new group? see the sketch below).

    I want to get this right when I first set this up, as there won't be any going back for a while once our first site is up and running. I appreciate any guidance in advance.
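
    For the group-based option, a common hedged sketch (user names are placeholders) is a shared group plus the setgid bit, so files either of you creates stay group-writable:

        sudo groupadd webdev
        sudo usermod -aG webdev alice
        sudo usermod -aG webdev bob
        sudo chgrp -R webdev /var/www
        sudo chmod -R 2775 /var/www   # leading 2 = setgid: new files inherit the webdev group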


  • Remove trailing slash using redirect directive in vhost

    - by Choy
    I have an issue where URLs that end in a "/" after a file name cause CSS/JS to break:

        http://www.mysite.com/index.php/   <-- breaks
        http://www.mysite.com/             <-- OK; it only breaks after file names

    To fix this, I tried adding a Redirect 301 directive in the vhost file, checking whether there's an extension with a slash after it:

        <VirtualHost *:80>
            ServerName mysite.com
            Redirect 301 ^(.*?\..+)/$ http://mysite.com/$1
        </VirtualHost>

    The redirect appears to do nothing. Is this an issue with my implementation, or is what I'm trying to accomplish not possible with a Redirect 301 in the vhost file?
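
    A hedged pointer: mod_alias's Redirect matches a literal URL-path prefix, not a regular expression, so a pattern like the one above is silently treated as a (never-matching) path. RedirectMatch is the regex-aware variant; a minimal sketch:

        <VirtualHost *:80>
            ServerName mysite.com
            # strip a trailing slash that follows a file extension
            RedirectMatch 301 ^(/.*\.[^/.]+)/$ http://mysite.com$1
        </VirtualHost>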


  • Existing Laravel 4 project gives 404 in browser

    - by Richard A
    I'm trying to set up a development environment on a virtual machine running Ubuntu 14.04 LTS using Nginx and HHVM. To do this, I followed the tutorial here. This goes well with a new installation of Laravel. But when I import an existing Laravel 4 project and try to open that on my actual machine (which will serve as the client, running Windows 7), I'm getting a 404 File Not Found error on the screen while connecting to http://sav.savrichard.dev. I did add this to the hosts file with the correct IP address. The virtual machine is receiving the request and responds with a 404 error. How do I solve this error? I'm pretty new to Ubuntu, so I'm not exactly sure what's wrong. The project is located at /var/www/sav.savrichard.net. The server configuration is as follows:

        server {
            listen 80 default_server;

            root /var/www/sav.savrichard.net/public;
            index index.html index.htm index.php;

            server_name sav.savrichard.dev;

            access_log /var/log/nginx/localhost.sav.savrichard.dev-access.log;
            error_log  /var/log/nginx/localhost.sav.savrichard.dev-error.log error;

            charset utf-8;

            location / {
                try_files $uri $uri/ /index.php?$query_string;
            }

            location = /favicon.ico { log_not_found off; access_log off; }
            location = /robots.txt  { log_not_found off; access_log off; }

            error_page 404 /index.php;

            include hhvm.conf;

            # Deny .htaccess file access
            location ~ /\.ht {
                deny all;
            }
        }

    And the hhvm.conf file is:

        location ~ \.(hh|php)$ {
            fastcgi_keep_conn on;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
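
    A hedged first step is to work out which layer emits the 404. With error_page 404 /index.php in place, a Laravel route miss and an nginx file miss look identical in the browser, so the logs are the quickest way to tell them apart (log path is the one from the config above; www-data as the service user is an assumption on Ubuntu):

        # nginx's own error log usually says whether try_files missed
        tail -n 50 /var/log/nginx/localhost.sav.savrichard.dev-error.log

        # confirm the imported project is readable by the web user
        sudo -u www-data ls -l /var/www/sav.savrichard.net/public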


  • Apache: How can I make my localhost on 192.168.1.101 visible from 192.168.1.102?

    - by takpar
    Hi, I've set up an Apache web server on Ubuntu Linux. I can see http://localhost fine, but I can't reach it from other machines in my network using the IP address http://192.168.1.101. I added the line below to my Apache conf:

        Allow from 192.168.1

    but it did not work; the browser says "the connection has timed out". What should I do? PS:

        adp@adp-desktop:~$ sudo netstat -ap | grep apache
        tcp   0   0 *:www           *:*               LISTEN        10581/apache2
        tcp   0   0 localhost:www   localhost:46017   ESTABLISHED   10586/apache2
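
    Two hedged observations: the netstat output shows Apache already listening on all interfaces, and a timeout (rather than a 403 Forbidden) usually points at a firewall rather than Apache's access control. Also, Allow from only takes effect inside a matching container together with an Order directive. A sketch, assuming Apache 2.2-style access control and ufw:

        <Directory /var/www/>
            Order allow,deny
            Allow from 192.168.1
        </Directory>

        # if a firewall on the server is dropping port 80:
        sudo ufw allow 80/tcp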


  • PHP-FPM performing worse than mod_php

    - by lordstyx
    Recently the website I maintain has been growing a lot, and I saw the point coming where I'd want to switch from Apache to nginx, because I kept on reading that it performs way better. Now I've done the switch, and I have to say, nginx is keeping up just fine. However, php-fpm is becoming a problem. Where the PHP pages used to take 0.1 seconds to generate, under the same load they now take around 3 seconds! Furthermore, the error.log from nginx is being spammed with errors like:

        upstream timed out (110: Connection timed out) while connecting to upstream, client: ...

    I also tried using unix sockets instead, but those would complain about the following:

        connect() to unix:/tmp/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream

    I've fiddled with settings here and there but nothing seems to work. Changing the amount of pm.max_children doesn't seem to help a lot either, but with its current value of 350 it seems to be the lesser of all evils. The server that's being used has 3 GB RAM (not all of it free, due to a MySQL server also running) along with two dual-core processors (4 cores in total). Am I doing something majorly wrong with the settings here, or is the server simply not capable enough? EDIT: Here is the nginx server block:

        server {
            listen 80;
            listen [::]:80 default ipv6only=on;

            root /var/www;
            index index.php index.html index.htm;

            server_name localhost;

            location / {
                try_files $uri $uri/ /index.html;
            }

            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                deny all;
            }

            location = /50x.html {
                root /usr/share/nginx/www;
            }

            location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                try_files $uri = 404;
                # With php5-cgi alone:
                fastcgi_pass 127.0.0.1:9000;
                # With php5-fpm:
                #fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    And the php-fpm pool:

        [www]
        user = www-data
        group = www-data
        listen = 127.0.0.1:9000
        ;listen = /tmp/php5-fpm.sock
        listen.backlog = -1
        pm = dynamic
        pm.max_children = 350
        pm.start_servers = 200
        pm.min_spare_servers = 10
        pm.max_spare_servers = 350
        pm.max_requests = 1536
        rlimit_files = 65536
        rlimit_core = unlimited
        chdir = /
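
    A hedged sanity check on those pool numbers: 350 workers only fit in the RAM left after MySQL if each one stays very small; at a typical 30-40 MB resident per PHP worker, 350 children would need several times the available memory, and a swapping box would produce exactly these upstream timeouts. One way to measure the real average (the process name php5-fpm is an assumption):

        ps --no-headers -o rss -C php5-fpm | awk '{sum+=$1; n++} END { if (n) printf "%.1f MB avg over %d workers\n", sum/n/1024, n }'

    Dividing the memory you can actually spare for PHP by that figure gives a defensible pm.max_children.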


  • Lighttpd mod_rewrite conversion from .htaccess format

    - by hoball
    Hello, I am using lighttpd as the webserver and am having an issue with mod_rewrite. Currently I have a set of Apache .htaccess rewrite rules from a PHP script:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-l
        RewriteRule ^(.*)$ index.php [QSA,L]

    In my understanding, if the requested URI is not a file/directory/symlink, it is appended to index.php, e.g. www.a.com/hello/world --> www.a.com/index.php/hello/world. I attempted to convert this to the lighttpd equivalent:

        url.rewrite-if-not-file = ( "^(.*)$" => "index.php/$1" )

    However, it doesn't work. I suspect that is due to misuse of $1. I tried to use $0/%0 or something else, but they fail. Would you please give me a hint on making the syntax work? Thank you!
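
    For what it's worth, lighttpd matches rewrite patterns against the full request URL, which begins with a slash, so both sides usually need leading slashes. A hedged sketch of the form commonly reported to work (untested against this setup):

        url.rewrite-if-not-file = ( "^/(.*)$" => "/index.php/$1" )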


  • Troubleshooting a slow database server with no load

    - by user1721724
    I'm getting ready to soft-launch my website and I've run into some problems with what I think is my MySQL database running on Fedora. All websites run fine, just as I'd expect, but any page that establishes a database connection hangs until the connection is established, and then, bang, the site loads as it should. For example, my landing page (http://www.thrusong.com) doesn't make a database connection and loads quickly, while user profile pages (http://www.thrusong.com/john) make a database connection and load slowly, even though most of the data comes from memcached and the database currently has no load on it. This problem just came up yesterday, when my router died and I began using my Pace 2Wire modem with its built-in router. Before, my old router was set to handle everything. My ISP says the settings in the modem are correct. Any ideas? Thanks in advance.
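
    One hedged guess, since the slowness appeared exactly when the router changed: by default MySQL does a reverse-DNS lookup on every new client connection, and if the new modem/router answers those lookups slowly (or not at all), each connect blocks for several seconds before succeeding. Ruling it out is one line in my.cnf, provided no GRANTs rely on hostnames:

        [mysqld]
        skip-name-resolve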


  • Rsync: execute permission required

    - by user651488
    I'm using rsync between two servers to transfer files. The problem is that some files are not transferred; I get this error:

        rsync: readlink "/var/www/index.html" failed: Permission denied (13)

    So I checked the permissions on the server and, after some tests, I noticed a file is transferred only if it has these permissions: R-W! If the file has these permissions: R--, rsync can't download it!? The command:

        /usr/bin/rsync -avzr -e "/usr/bin/ssh -i /home/replication/thishost-rsync-key" [email protected]:/var/www/index.html ./

    Is this a bug in rsync? I can't find any information about this problem. Thanks for your help. Debian Etch 2.6.30, rsync 2.6.9, protocol version 29.
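
    A hedged check before calling it a bug: "Permission denied" on the remote side usually means the login user can't read the file or can't traverse (execute) a directory on the path, and group/other bits matter because that user isn't root. The user name below is inferred from the key path and may be wrong:

        # every directory on the path needs x for the rsync user; the file needs r
        ls -ld /var /var/www /var/www/index.html

        # quick read test as the login user
        sudo -u replication head -c 1 /var/www/index.html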


  • Help on using mod_rewrite to serve an I18N static site

    - by Guandalino
    My static site www.example.com is translated into different languages, and the files are organized in this hierarchy:

        /
          de/
            index.html
            seite-1.html
          en/
            index.html
            page-1.html
          it/
            index.html
            pagina-1.html

    The root contains no files, just one subdirectory for each language the site is translated into, while the subdirectories contain pages translated (both content and file name) into the language corresponding to the subdirectory name: de, en, it, etc. The question is: how do I configure mod_rewrite so that when a client visits www.example.com it is taken to the correct version of the site, falling back to the English version if the required locale is not supported (i.e. the Accept-Language header doesn't exist, or specifies a language for which the site is not available, e.g. fr)? Thanks for any pointer; I'm here to provide further details or feedback!
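
    A simplified sketch of one way to do this; it only honors the first language tag in Accept-Language and ignores q-values, which is often good enough:

        RewriteEngine On
        # German and Italian visitors go to their trees
        RewriteCond %{HTTP:Accept-Language} ^de [NC]
        RewriteRule ^/?$ /de/ [R=302,L]
        RewriteCond %{HTTP:Accept-Language} ^it [NC]
        RewriteRule ^/?$ /it/ [R=302,L]
        # everything else (no header, fr, ...) falls back to English
        RewriteRule ^/?$ /en/ [R=302,L]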


  • MySQL/Apache: Replace spaces with underscores only in certain URLs

    - by javipas
    I'm having a problem with some images I'm using on my WordPress blog. After a migration I renamed every image, replacing spaces with underscores, so HIDDEN_264_4062_FOTO_IDF los MID.jpg was renamed to HIDDEN_264_4062_FOTO_IDF_los_MID.jpg. But although the trick was necessary and worked for most of the posts, some of them still try to find the old image, with spaces. This is not found:

        http://www.example.com/files/HIDDEN_264_4062_FOTO_IDF%20los%20MID.jpg

    and this would be the right URL:

        http://www.example.com/files/HIDDEN_264_4062_FOTO_IDF_los_MID.jpg

    Careful, though, because the "%20" is only shown in the browser: the text in the database contains spaces, not "%20". I'd like to know if I could run a SQL query on my WordPress MySQL database that replaces spaces in .jpg file names with underscores. The path of the images is always the same, so the rule should transform this:

        /files/HIDDEN_264_4062_FOTO_IDF los MID.jpg

    into this:

        /files/HIDDEN_264_4062_FOTO_IDF_los_MID.jpg

    The "/files/HIDDEN_264_" part is always the same, but the rest varies. Is there some way to perform this? Maybe a rewrite rule in Apache (our current webserver)?
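
    On the Apache side, a hedged sketch: mod_rewrite decodes %20 back to a space before matching, and the [N] flag restarts the ruleset, so one rule can convert one space per pass until none remain (untested against this site; [N] loops need care if other rules touch the same URLs):

        RewriteEngine On
        # one space in /files/... becomes an underscore, then the ruleset re-runs
        RewriteRule "^(/files/[^ ]*) (.*)$" "$1_$2" [N]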


  • Rsync and Windows 7

    - by Nate
    Can someone give me any tips on setting up some sort of rsync server/client on Windows 7, to run rsync between my web hosting server and a backup server that I have running Ubuntu? I've tried setting it up with this tutorial: http://www.youtube.com/watch?v=CvwdkZLNtnA using copssh and cwRsync. I ran into all sorts of trouble, including not being able to get cwRsync to run (it installs properly, but never starts up) and copssh not generating the keys at all. The guy was running Windows Server 2003, though, so I'm guessing the problems could just be because I'm running Windows 7. I've been trying to set it up with my Windows machine being the rsync server, and then Ubuntu and my web-hosting VPS as the clients, but I realize it may be easier (and make more sense) to just set up the rsync server on Ubuntu, and then an rsync client on Windows 7. Can anyone point me in the right direction? I'm thinking of using this guide: http://www.gaztronics.net/rsync.php It seems a bit outdated, though.
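
    One hedged simplification for the Ubuntu-as-server direction: the Windows side then needs only the cwRsync client binaries (no Windows service at all), pulling or pushing over plain SSH. A sketch, with host and paths as placeholders and cwRsync's cygwin-style drive paths:

        rsync -avz -e ssh backupuser@ubuntu-host:/var/backups/site/ /cygdrive/c/Backups/site/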


  • Lighttpd with FastCGI configuration running ViewVC - rewrite problems

    - by 0xC0000022L
    At the moment I am struggling with the configuration of lighttpd together with ViewVC. The configuration was ported from Apache 2.2.x, which is still running on the machine, serving the WebDAV/SVN stuff, being proxied through. Now, the problem appears to be with the rewrite rules, and I'm not really sure what I am missing here. Here's my configuration (slightly condensed to keep it concise):

        var.hgwebfcgi     = "/var/www/vcs/bin/hgweb.fcgi"
        var.viewvcfcgi    = "/var/www/vcs/bin/wsgi/viewvc.fcgi"
        var.viewvcstatic  = "/var/www/vcs/templates/docroot"
        var.vcs_errorlog  = "/var/log/lighttpd/error.log"
        var.vcs_accesslog = "/var/log/lighttpd/access.log"

        $HTTP["host"] =~ "domain.tld" {
            $SERVER["socket"] == ":443" {
                protocol = "https://"
                ssl.engine = "enable"
                ssl.pemfile = "/etc/lighttpd/ssl/..."
                ssl.ca-file = "/etc/lighttpd/ssl/..."
                ssl.use-sslv2 = "disable"
                setenv.add-environment = ( "HTTPS" => "on" )
                url.rewrite-once += ( "^/mercurial$" => "/mercurial/" )
                url.rewrite-once += ( "^/$" => "/viewvc.fcgi" )
                alias.url += ( "/viewvc-static" => var.viewvcstatic )
                alias.url += ( "/robots.txt" => var.robots )
                alias.url += ( "/favicon.ico" => var.favicon )
                alias.url += ( "/mercurial" => var.hgwebfcgi )
                alias.url += ( "/viewvc.fcgi" => var.viewvcfcgi )
                $HTTP["url"] =~ "^/mercurial" {
                    fastcgi.server += ( ".fcgi" => ( (
                        "bin-path" => var.hgwebfcgi,
                        "socket" => "/tmp/hgwebdir.sock",
                        "min-procs" => 1,
                        "max-procs" => 5
                    ) ) )
                } else $HTTP["url"] =~ "^/viewvc\.fcgi" {
                    fastcgi.server += ( ".fcgi" => ( (
                        "bin-path" => var.viewvcfcgi,
                        "socket" => "/tmp/viewvc.sock",
                        "min-procs" => 1,
                        "max-procs" => 5
                    ) ) )
                }
                expire.url = ( "/viewvc-static" => "access plus 60 days" )
                server.errorlog = var.vcs_errorlog
                accesslog.filename = var.vcs_accesslog
            }
        }

    Now, when I access domain.tld, I correctly see the index of the repositories. However, when I look at the links for each respective repository (or click them, for that matter), they are of the form https://domain.tld/viewvc.fcgi/reponame instead of the intended https://domain.tld/reponame. What do I have to change/add to achieve this? Do I have to "abuse" the index-file mechanism somehow? The goal is to keep the /mercurial alias functional. So far I've tried sifting through the lighttpd book from Packt again, and also through the lighttpd documentation, but found nothing that seemed to match the problem.
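
    One hedged direction (untested here): two separate things have to change. First, requests for /reponame must reach the FastCGI script; lighttpd patterns are PCRE, so a negative lookahead can funnel everything except the existing aliases into ViewVC:

        url.rewrite-once += ( "^/(?!mercurial|viewvc\.fcgi|viewvc-static|robots\.txt|favicon\.ico)(.+)$" => "/viewvc.fcgi/$1" )

    Second, ViewVC builds its links from the SCRIPT_NAME it sees, so the FastCGI environment (or ViewVC's own configuration) would additionally need to present a script name without the viewvc.fcgi component; which knob does that in this ViewVC version is an assumption worth verifying.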


  • Cron prepending filename to script output

    - by Caitifty
    I'm having an issue with unwanted lines being added to files output by a cron job. I have a script in /etc/cron.hourly which selects some data from a MySQL database and saves it in a text file in /var/www. When I run the script as root, it does exactly what I expect it to do. When the script is executed by cron, it creates the same file, but prepends the following three lines at the top of the output file:

        ::::::::::::::
        /var/www/outputfilename
        ::::::::::::::

    I can't for the life of me work out how to stop this unwanted behavior. The line in /etc/crontab for cron.hourly is the default:

        44 * * * * root cd / && run-parts --report /etc/cron.hourly

    If I use su to become root and do cd / && run-parts --report /etc/cron.hourly, the script runs as expected and the output doesn't have the mysterious additional three lines. I've also tried removing the --report flag from the run-parts command in case that was somehow connected, but no joy. Finally, perusing the cron log output in /var/log/syslog just says cron.hourly ran, without giving any additional information. Any suggestions on solving this weird problem are most welcome.
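
    A hedged observation: that three-line banner is exactly the header more(1) prints when it is handed more than one file (and it still emits headers when its output is not a terminal, as under cron). So one place to look is whether the script's pipeline involves a pager, or a glob that expands to several files under cron's different environment:

        grep -nE '\bmore\b' /etc/cron.hourly/*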


  • Mercurial says "nothing changed", but it did. Sometimes my software is too clever.

    - by user12608033
    It seems I have found a "bug" in Mercurial. It takes a shortcut when checking for differences in tracked files: if the file's size and modification time are unchanged, it assumes its contents are unchanged:

        $ hg init .
        $ cp -p .sccs2hg/2005-06-05_00\:00\:00\,nicstat.c nicstat.c
        $ ls -ogE nicstat.c
        -rw-r--r-- 1 14722 2012-08-24 11:22:48.819451726 -0700 nicstat.c
        $ hg add nicstat.c
        $ hg commit -m "added nicstat.c"
        $ cp -p .sccs2hg/2005-07-02_00\:00\:00\,nicstat.c nicstat.c
        $ ls -ogE nicstat.c
        -rw-r--r-- 1 14722 2012-08-24 11:22:48.819451726 -0700 nicstat.c
        $ hg diff
        $ hg commit
        nothing changed
        $ touch nicstat.c
        $ hg diff
        diff -r b49cf59d431d nicstat.c
        --- a/nicstat.c Fri Aug 24 11:21:27 2012 -0700
        +++ b/nicstat.c Fri Aug 24 11:22:50 2012 -0700
        @@ -2,7 +2,7 @@
          * nicstat - print network traffic, Kb/s read and written. Solaris 8+.
          * "netstat -i" only gives a packet count, this program gives Kbytes.
          *
        - * 05-Jun-2005, ver 0.81 (check for new versions, http://www.brendangregg.com)
        + * 02-Jul-2005, ver 0.90 (check for new versions, http://www.brendangregg.com)
        * [...]

    Now, before you agree or disagree with me on whether this is a bug, I will also say that I believe it is a feature. Yes, I feel it is an acceptable shortcut, because in "real" situations an edit to a file will change the modification time by at least one second (the resolution that hg diff or hg commit is looking for). The benefit of the shortcut is greatly improved performance of operations like "hg diff" and "hg status", particularly where your repository contains a lot of files. Why did I have no change in modification time? Well, my source file was generated by a script that I have written to convert SCCS change history to Mercurial commits. If my script can generate two revisions of a file within a second, and the files are the same size, then I run afoul of this shortcut. Solution: I will just change my script to apply the modification time from the SCCS history to the file prior to commit. A "touch -t " will do that easily.
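
    Per the fix described above, a minimal sketch of stamping the file with the SCCS delta's time before committing (the timestamp value below is just the example revision's date; the format is [[CC]YY]MMDDhhmm[.SS]):

        touch -t 200507020000 nicstat.c
        hg commit -m "02-Jul-2005, ver 0.90"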


  • Application outside document root in Apache/CentOS

    - by liz
    I have a PHP application running under Apache on CentOS 6. The document root points to a specific app folder:

        /var/www/my-project/app

    I'm trying to get phpMyAdmin running on the same server, but I don't want to put it in the application folder. Instead I'd like to put it here:

        /var/www/apps/phpmyadmin

    I'm using a subdomain for the server. What's the easiest way for me to get access to phpMyAdmin? Another subdomain? A sub-subdomain? Redirect a folder?
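
    The usual lightweight option is an Alias, which serves a directory outside the document root under the existing subdomain with no DNS changes; a sketch, assuming the Apache 2.2-style access control that ships with CentOS 6:

        Alias /phpmyadmin /var/www/apps/phpmyadmin
        <Directory /var/www/apps/phpmyadmin>
            Order allow,deny
            Allow from all
        </Directory>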


  • Is there an easy way to always redirect facebook.com to facebook.com/events/list? (Firefox, OS X)

    - by Tor Thommesen
    I often use Facebook for communication while working. When I do, I'd rather not see the Facebook news feed. Unfortunately it's hard to remember not to go to the front page and to use facebook.com/messages or facebook.com/events instead. Is there any way to always redirect from the URL https://www.facebook.com/ to https://www.facebook.com/events/list? I use Firefox on OS X. I have tried the Redirector addon and I'm not able to make it work; not sure if I'm missing something obvious or if it's outdated/buggy.
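
    In case the rule shape is the sticking point: the Redirector include pattern has to match only the bare front page, or the rule will loop on the events page itself. A hedged sketch of one rule in regex mode (field names as the addon presents them):

        Include pattern: ^https?://www\.facebook\.com/$
        Redirect to:     https://www.facebook.com/events/list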


  • Chrome: automatically redirect me to the highest-ranking search result, like Firefox does

    - by Siim K
    How do I emulate the Firefox (I'm using v3.6) address bar search redirection in Google Chrome? For example, if I type imdb moon into the address bar and press Return, Firefox redirects me straight to http://www.imdb.com/title/tt1182345/ (and I've not visited the page before). When I try this in Chrome, I just get the Google search page: http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=imdb+moon. So it seems Firefox redirects automatically to the highest-ranking search result URL. Is there a setting or add-on for Chrome to achieve the same behaviour?
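
    One hedged workaround is a custom search engine in Chrome built on Google's "I'm Feeling Lucky" parameter, triggered by a keyword (whether btnI keeps being honored is up to Google):

        Name:    I'm Feeling Lucky
        Keyword: lucky
        URL:     http://www.google.com/search?q=%s&btnI

    Typing "lucky imdb moon" in the address bar would then jump straight to the top result.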


  • Server speed: sharing one script.php or using many copies of the same script.php

    - by Marco Demaio
    Let's assume I have thousands of domains on the same Apache server. Each domain is in a folder under the server's public_html document folder, so it can be accessed by calling www.somedomain.com or www.serverdomain.com/somedomain_folder. In each domain there is a website which needs a certain script.php (identical for each domain). From a coding point of view, it's obvious that it's better to use a single script.php: when I update it with new features/bug fixes etc., I need to update only one file on the server and it will work for all domains. But from a server point of view? If I use a single script, all domains will access it at the same time. Will the server run slower compared to the situation where each domain calls its own copy of the script?
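
    For what it's worth, a hedged middle ground keeps a single copy on disk while every domain folder exposes its own path, via symlinks (this requires Options FollowSymLinks; paths are placeholders):

        ln -s /var/www/shared/script.php /var/www/public_html/somedomain_folder/script.php

    A single shared file also tends to help rather than hurt performance, since the OS page cache and any PHP opcode cache then hold one copy instead of thousands.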


  • Overriding Apache auth directive

    - by Machine
    Hi! I'm trying to allow public access to a method that generates a WSDL file for our API. The rest of the site is behind basic-auth protection. Can you take a look at the following virtual-host configuration and see why the override does not take place?

        <VirtualHost *:80>
            ServerName xyz.mydomain.com
            DocumentRoot /var/www/dev/public

            <Directory /var/www/dev/public>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
                SetEnv APPLICATION_ENV testing
            </Directory>

            <Location />
                AuthName "XYZ Development Server"
                AuthType Basic
                AuthUserFile /etc/apache2/xyz.passwd
                Require valid-user
            </Location>

            <Location /api/soap/wsdl>
                Satisfy Any
                allow from all
            </Location>
        </VirtualHost>
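
    A quick hedged check of what the server actually answers for the open path (expecting 200 if the Satisfy Any override took effect, 401 if the <Location /> auth still wins):

        curl -sI http://xyz.mydomain.com/api/soap/wsdl | head -n 1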

