Search Results

Search found 21550 results on 862 pages for 'www jacob'.

  • How to register a .cn domain

    - by user359650
    I would like to register a .cn domain. I found the pages below, which list the officially accredited registrars:

        based in China: http://www.cnnic.net.cn/html/Dir/2007/06/05/4635.htm
        based outside China: http://www.cnnic.net.cn/html/Dir/2007/06/25/4671.htm

    Needless to say, the registrars based in China have their websites in Chinese, which effectively prevents me from using them. There are 11 overseas registrars, and I'm wondering which one I should be using. If you look at the big names, they all have their .cn registered (facebook.cn, microsoft.cn...), and whois only shows a Sponsoring Registrar, which doesn't seem to offer domain registration services directly to consumers:

        $ whois facebook.cn
        Domain Name: facebook.cn
        ROID: 20050304s10001s04039518-cn
        Domain Status: ok
        Registrant ID: tuv3ldreit6px8c7
        Registrant Organization: Facebook Inc.
        Registrant Name: Facebook, Inc.
        Registrant Email: [email protected]
        Sponsoring Registrar: Tucows, Inc.

    http://www.tucowsdomains.com/ only seems to offer domain-related help, not registration.

        $ whois microsoft.cn
        Domain Name: microsoft.cn
        ROID: 20030312s10001s00043473-cn
        Domain Status: clientDeleteProhibited
        Domain Status: clientUpdateProhibited
        Domain Status: clientTransferProhibited
        Registrant ID: mmr-44297
        Registrant Organization: Microsoft Corporation
        Registrant Name: Domain Administrator
        Registrant Email: [email protected]
        Sponsoring Registrar: MarkMonitor, Inc.

    https://www.markmonitor.com/ seems to offer registration, but only to "big" customers, and definitely not to consumers like me via a web portal. Q: How do big companies register their .cn domains? And how should consumers like us do it?

  • WordPress redirects to itself endlessly

    - by iTayb
    I've just upgraded to the latest version (2.9.1) from a rather old version (2.2.1). After the upgrade I realized that WordPress cannot be accessed from my .com domain, although it works via another subdomain: db-he.110mb.com works fine while http://www.db-he.com doesn't. Both point to the same server, and the configuration looks fine. However, you cannot reach index.php (which is WordPress's): www.db-he.com/index.php issues a permanent redirect to www.db-he.com/index.php for some reason, so it redirects to itself endlessly. The problem is with WordPress only; all other files work fine. For example, changes.txt can be accessed from both links:

        www.db-he.com/changes.txt
        db-he.110mb.com/changes.txt

    For some reason, it seems more a server problem than a WordPress problem. What can I do?
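
    A likely culprit, for what it's worth: WordPress builds its canonical redirects from the siteurl and home options stored in the database, and a stale value left over from the upgrade produces exactly this kind of self-redirect. A hedged sketch of forcing the correct domain from wp-config.php (values taken from the post):

        // untested sketch: override the stored URLs without touching the database
        define('WP_HOME',    'http://www.db-he.com');
        define('WP_SITEURL', 'http://www.db-he.com');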

  • Git clone on an OVH host server

    - by newben
    I want to do a git clone over an SSH connection to an OVH host server, but it does not work. Here's the command I entered:

        git clone ssh://[email protected]/www/

    (and all variations: /.git, /www/.git, /www/.git/, ...). This is the message I invariably get:

        fatal: '/www': unable to chdir or not a git archive
        fatal: The remote end hung up unexpectedly

    Moreover, the command

        git clone "ssh://[email protected]/~/forumdesthinktanks.git"

    responded with:

        Permission denied, please try again.
        [email protected]'s password:

    while the FTP password is correct. Finally, the commands

        git clone ssh://[email protected]/.git
        git clone ssh://[email protected]/~/forumdesthinktanks.git

    do not work either (they hang until the terminal times out). I'm using a terminal on my Mac.
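
    For reference, the spaces in the commands above would themselves break the URL; the two SSH URL forms git accepts are sketched below (assuming the repository really lives at ~/forumdesthinktanks.git on the server):

        # scp-like syntax: the path is relative to the login user's home directory
        git clone [email protected]:forumdesthinktanks.git

        # full ssh:// syntax: the path is absolute, so ~ must be spelled out
        git clone "ssh://[email protected]/~/forumdesthinktanks.git"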

  • How to configure G-WAN to use virtual hosts?

    - by Jan
    Say I have a domain foo.com and a server accessible at 50.60.70.80. I have configured the DNS entries so that foo.com and www.foo.com point to 50.60.70.80, and G-WAN is running on the web server. Now I want to host different web sites on foo.com and on www.foo.com. According to the documentation I have to configure a root host and, optionally, some virtual hosts. So I chose foo.com to be the root host; www.foo.com is a virtual host. My problem is that G-WAN seems to ignore my virtual host: no matter whether I use foo.com or www.foo.com, G-WAN always serves the foo.com content. This is my G-WAN "config":

        /gwan/0.0.0.0_80/#movq.org
        /gwan/0.0.0.0_80/$www.movq.org

    What am I doing wrong here?
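
    One thing worth checking, going only by the naming convention visible above: G-WAN matches the #root and $virtual directory names against the Host header, so for the domains described the listener directories would presumably need to be named after foo.com:

        /gwan/0.0.0.0_80/#foo.com
        /gwan/0.0.0.0_80/$www.foo.com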

  • Apache permission problems

    - by swg1cor14
    OK, all my files and folders are owned by vsftpd:nogroup. The FTP program can upload, create, and do everything. But when I use the PHP command mkdir, I get Permission Denied, even though the folder it's creating in is set to chmod 777. If I set the base folder to user www-data and group www-data, PHP's mkdir will work; however, I then can't use FTP to delete from or upload to that folder. /uploads is the base folder. I use PHP's mkdir to create a directory in there:

        if (!is_dir($_SERVER['DOCUMENT_ROOT'] . "/uploads/" . $_REQUEST['clientID'] . '/video/')) {
            @mkdir($_SERVER['DOCUMENT_ROOT'] . "/uploads/" . $_REQUEST['clientID'] . '/video/', 0777);
        }

    If /uploads is vsftpd:nogroup, PHP's mkdir gives a Permission Denied error. If /uploads is www-data:www-data, PHP's mkdir works, but I can't FTP anything into the folder that was just created. If /uploads is vsftpd:www-data, PHP's mkdir gives a Permission Denied error. How can I create a directory with PHP and still be able to access it via FTP?
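
    A common pattern for this situation (an untested sketch; the path is assumed to be the document root's uploads folder): give the tree a group both users share, make it group-writable, and set the setgid bit so new directories inherit the group:

        # let the FTP user share the web server's group
        usermod -a -G www-data vsftpd
        chown -R vsftpd:www-data /var/www/html/uploads
        # 2775 = group-writable plus setgid, so new dirs keep group www-data
        chmod -R 2775 /var/www/html/uploads

    Note that PHP's mkdir mode is filtered by the process umask, so the script should chmod the new directory (e.g. to 2775) right after creating it.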

  • Mounted HDD not having enough permissions from Apache/PHP

    - by Dan
    Piwigo gallery, on Apache and PHP, CentOS 6. The root filesystem is on a 128GB RAID, and /var/www/html is on it. I mounted the 320GB HDD to /var/www/html/320 using default options; it's an ext4 filesystem. I put a symlink to it at /var/www/html/galleries, which is read by the gallery script, so I can upload images there and then click sync. It gives me the error:

        [./galleries/] PWG-ERROR-NO-FS (File/directory read error)
        PWG-ERROR-NO-FS: The file or directory cannot be accessed (either it does not exist or the access is denied)

    chmod 777 is set on /dev/sdb1, /var/www/html, and /var/www/html/320, as well as the galleries symlink, all recursive. chown apache:apache on everything too. PHP just can't read/write to it. I tried with and without the symlink; I've tried everything I can think of. Nothing. Any ideas how I can give Apache/PHP permission to read/write to this drive? With 777 permissions all around it should already be able to.
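
    Since 777 everywhere still fails, SELinux is the usual suspect on CentOS: a freshly mounted filesystem doesn't carry the labels the confined httpd process is allowed to touch. A hedged sketch of checking and relabeling (httpd_sys_rw_content_t is the stock type for writable web content):

        getenforce                    # "Enforcing" means SELinux is active
        ls -Zd /var/www/html/320      # inspect the current label
        # relabel the mount so httpd can read and write it, persistently
        semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/html/320(/.*)?"
        restorecon -R /var/www/html/320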

  • Creating 2 IIS ASP.NET applications and making 1 the root

    - by Shawn Mclean
    I'm using GoDaddy. I have two applications I want to install on the server, one named en and the other fr (English and French). On the en application, I checked "Make Application Root". I assumed I could then go to www.mysite.com and it would automatically load www.mysite.com/en/, but I have to go to the directory manually. How do I fix this? My file structure is www.mysite.com/en for the en app and www.mysite.com/fr for the fr app.
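
    If the host won't let you re-root the site onto /en, one workaround is a redirect from the root; a hedged sketch using IIS7's httpRedirect (the child application then needs the redirect switched off in its own web.config, or the redirect would loop):

        <!-- web.config at the site root -->
        <configuration>
          <system.webServer>
            <httpRedirect enabled="true" destination="/en/" httpResponseStatus="Permanent" />
          </system.webServer>
        </configuration>

        <!-- web.config inside /en -->
        <configuration>
          <system.webServer>
            <httpRedirect enabled="false" />
          </system.webServer>
        </configuration>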

  • sftp chroot access via SSH

    - by Cudos
    Hello. I have this setup in sshd_config:

        AllowUsers test1 test2

        Match group sftpgroup
            ChrootDirectory /var/www
            X11Forwarding no
            AllowTcpForwarding no
            ForceCommand internal-sftp

        Match user test2
            ChrootDirectory /var/www/somedomain.dk
            X11Forwarding no
            AllowTcpForwarding no
            ForceCommand internal-sftp

    I am trying to restrict test2 to /var/www/somedomain.dk only. For some reason, when I try to log in with FileZilla on account test2, I get this error: "Server unexpectedly closed network connection". The users are created and work, and the SSH service has been stopped and started. test1 works when using e.g. FileZilla, and the root of the connection is /var/www. What am I doing wrong?
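
    Two things are worth checking here (hedged, since the post shows neither ownership nor the server-side log). First, sshd keeps the first value it obtains for each option, so if test2 is also a member of sftpgroup, the Match group block wins and the Match user block is ignored; putting Match user test2 first avoids that. Second, every component of a ChrootDirectory path must be owned by root and not group- or world-writable, or sshd closes the connection exactly as described:

        # chroot targets must be root-owned and not writable by group/other
        chown root:root /var/www /var/www/somedomain.dk
        chmod 755 /var/www/somedomain.dk
        # give test2 a writable directory inside the chroot (name is just an example)
        mkdir -p /var/www/somedomain.dk/upload
        chown test2:sftpgroup /var/www/somedomain.dk/upload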

  • Proper web server setup

    - by DMin
    I just got myself a Slicehost basic slice to play around with so I can learn how to set up web servers. I have Ubuntu 10.04.2 installed on the server, and I was able to get it up and running from scratch by following this tutorial. I know this is probably just a starter tutorial, so I was wondering if you guys can tell me what you like to do while setting up production servers. These are the steps that were followed:

        1. Update and upgrade Ubuntu
        2. sudo apt-get install apache2 php5-mysql libapache2-mod-php5 mysql-server
        3. Back up apache2.conf, then edit it: set 'ServerTokens Full' to 'ServerTokens Prod' and 'ServerSignature On' to 'ServerSignature Off'
        4. Back up php.ini, then change "expose_php = On" to "expose_php = Off"
        5. Restart Apache
        6. Install the Shorewall firewall
        7. Configure Shorewall to only accept HTTP and SSH connections (in the rules file)
        8. Enable Shorewall on startup
        9. Add the website to the server:
               sudo usermod -g www-data root
               sudo chown -R www-data:www-data /var/www
               sudo chmod -R 775 /var/www

    I want to make this CommunityWiki but can't seem to find the option to do it. Please feel free to add any feedback on the processes and things I am doing right/wrong. Much appreciated, thanks! :)

  • Why is this setting for Name-based Virtual Host settings not working?

    - by Kave
    I have two domains (siteA.com & siteB.com) that point to the same web server, and I would like to show different web pages for each. The steps I have taken so far are to copy the default site (siteA) to siteB:

        sudo cp /etc/apache2/sites-available/default /etc/apache2/sites-available/siteB
        sudo vim /etc/apache2/sites-available/siteB

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/siteB
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/siteB>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride FileInfo Indexes
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    Then I created /var/www/siteB and put a sample index.html in there. However, when I load my domain siteB.com I still get directed to /var/www/siteA. Why is that? Do I have to rename /etc/apache2/sites-available/default to /etc/apache2/sites-available/siteA as well?

    UPDATE: Thanks to the answer below, it seems that, next to enabling the site, I had forgotten another entry:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName siteB.com
            ServerAlias www.siteB.com
        </VirtualHost>

    In order to include all subdomains as well, do:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName siteB.com
            ServerAlias *.siteB.com
        </VirtualHost>

    The same goes for siteA.
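
    For completeness, the "enabling the site" step referred to above, assuming a stock Debian/Ubuntu Apache layout:

        sudo a2ensite siteB
        sudo /etc/init.d/apache2 reload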

  • mod_rewrite help

    - by Benny B
    OK, I don't know regex very well, so I used a generator to help me make a simple mod_rewrite rule that works. Here's my full URL:

        https://www.huttonchase.com/prodDetails.php?id_prd=683

    To make sure I CAN use this, I tested with:

        RewriteRule prodDetails/(.*)/$ /prodDetails.php?id_prd=$1

    so I can use the URL http://www.huttonchase.com/prodDetails/683/. If you click it, it works, but it completely messes up the relative paths. There are a few workarounds, but I want something a little different:

        https://www.huttonchase.com/prod_683_stainless-steel-flask

    I want it to see that 'prod' tells it which rule it's matching, 683 is the product number to look up in the database, and I want it to just IGNORE the last part; it's there only for SEO and to make the link mean something to customers. I'm told this should work, but it's not working:

        RewriteRule ^prod_([^-]*)_([^-]*)$ /prodDetails.php?id_prd=$1 [L]

    Once I get the first one to work I'll write one for categories:

        https://www.huttonchase.com/cat_11_drinkware

    and database-driven text pages:

        https://www.huttonchase.com/page_44_terms-of-service

    By the way, I can flip around my use of dash and underscore if need be. Also, is it better to end the URLs with a slash or without? Thanks!
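
    The likely culprit in the non-working rule: [^-]* forbids hyphens, but the slug (stainless-steel-flask) is full of them, so the pattern can never match the whole URL. A hedged rewrite that captures only the numeric id and ignores the slug entirely:

        # digits for the id, anything except a slash for the SEO slug
        RewriteRule ^prod_(\d+)_([^/]+)$ /prodDetails.php?id_prd=$1 [L]

    The cat_ and page_ rules would follow the same shape. As for the trailing slash: without one, relative paths inside the page resolve against the site root, which sidesteps the breakage seen with the /prodDetails/683/ form.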

  • Using pscp and getting permission denied

    - by Espen
    I'm using pscp to transfer files to a virtual Ubuntu server with this command:

        pscp test.php user@server:/var/www/test.php

    and I get the error "permission denied". If I transfer to the folder /home/user/ I have no problems, so I guess this has to do with the fact that the user I'm using doesn't have write access to /var/www/. When I use SSH I have to use sudo to get access to the /var/www/ path, and I do. Is it possible to make pscp "sudo" transfers to the server, so I can get access to the /var/www/ path and actually be able to transfer files to this folder?
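
    pscp itself has no sudo option, so the usual workaround is a two-step copy (a sketch; plink is pscp's companion in the PuTTY suite, and -t allocates the terminal sudo needs for its password prompt):

        pscp test.php user@server:/tmp/test.php
        plink -t user@server "sudo mv /tmp/test.php /var/www/test.php"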

  • Virtual hosting all resolving to the same files

    - by nona urbiz
    I'm trying to set up virtual hosts on my VPS (CentOS). I set both domains' nameservers to fns1.dnspark.net and fns2.dnspark.net, and set an A record there for each domain pointing to my IP address, 50.16.219.8. Both domains currently resolve to the first virtual host. What am I doing wrong? Thanks!

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/root/dylanstestserver.com
            ServerName dylanstestserver.com
            ServerAlias www.dylanstestserver.com
            ErrorLog logs/dylanstestserver.com-error-log
            CustomLog logs/dylanstestserver.com-access_log common
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/root/repthis.info
            ServerName repthis.info
            ServerAlias www.repthis.info
            ErrorLog logs/repthis.info-error-log
            CustomLog logs/repthis.info-access_log common
        </VirtualHost>
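
    The blocks shown look plausible, so a sensible first diagnostic (hedged) is to ask Apache how it actually parsed the virtual hosts; if the second host is missing from the output, the file containing it probably isn't being included at all:

        # dump the parsed virtual-host table
        httpd -S          # on CentOS; apachectl -S elsewhere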

  • RapidSSL not trusted according to the "Why No Padlock" check

    - by Rippo
    While testing the following URL on http://www.whynopadlock.com/check.php:

        https://www.bobclubs.com/pay

    I get this message:

        ERROR: cannot verify www.bobclubs.com's certificate, issued by
        `/C=US/O=GeoTrust, Inc./CN=RapidSSL CA':
        Unable to locally verify the issuer's authority.

    I am not 100% sure why this is, as the issuer looks OK, all items are secure, and I get a padlock in all browsers. Can anyone shed some light on this?
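
    That wording usually points at a missing intermediate certificate: browsers that already cache the RapidSSL CA still show a padlock, while strict clients cannot build the chain. A hedged check and fix for Apache (the bundle's file name here is an assumption):

        # see exactly which certificates the server sends
        openssl s_client -connect www.bobclubs.com:443 -showcerts

        # Apache: serve the intermediate alongside the server certificate
        SSLCertificateChainFile /etc/ssl/certs/rapidssl_intermediate.pem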

  • Where is the IIS7 redirect occurring?

    - by neildeadman
    I have a site set up in IIS7 that is working, although I don't understand how. The binding is www.mysite.fr on port 80. In the root of the site there are no files, only several folders, yet the site loads fine; I don't understand how it is loading. If I go to https://www.mysite.fr/ it redirects to https://www.mysite.fr/fr/, which redirects back to https://www.mysite.fr/, so we end up in an endless loop that fails. According to Fiddler it is a 301 redirect, but I have no idea where this is set! I guess it might be in the website code, but as I don't know which file is loaded first, I don't know where to look. Any ideas? All other sites are working fine...
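
    Two places worth checking (hedged suggestions; the site name and physical path below are assumed from the binding): IIS's own httpRedirect section, which appcmd can dump per site, and any web.config shipped with the application code, which findstr will turn up:

        %windir%\system32\inetsrv\appcmd list config "www.mysite.fr" -section:httpRedirect
        findstr /s /i "httpRedirect" C:\inetpub\www.mysite.fr\*.config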

  • How can I use wildcards in an Nginx map directive?

    - by Ian Clelland
    I am trying to use Nginx to serve cached files produced by a web application, and have spotted a potential problem: the URL space is wide, and will exceed the ext3 limit of 32,000 subdirectories. I would like to break up the subdirectories, making, say, a two-level filesystem cache. So, where I currently cache a file at

        /var/cache/www/arbitrary_directory_name/index.html

    I would instead store it at something like

        /var/cache/www/a/r/arbitrary_directory_name/index.html

    My trouble is that I can't get try_files, or even rewrite, to make that mapping. My searching on the subject leads me to believe that I need to do something like this (heavily abbreviated):

        http {
            map $request_uri $prefix {
                /aa*  a/a;
                /ab*  a/b;
                /ac*  a/c;
                ...
                /zz*  z/z;
            }

            location / {
                try_files /var/cache/www/$prefix/$request_uri/index.html @fallback;
                # or
                # if (-f /var/cache/www/$prefix/$request_uri/index.html) {
                #     rewrite ^(.*)$ /var/cache/www/$prefix/$1/index.html;
                # }
            }
        }

    But I can't get the /aa* pattern to match the incoming URI. Without the *, it will match an exact URI, but I can't get it to match just the first two characters. The Nginx documentation suggests that wildcards should be allowed, but I can't see a way to get them to work. Is there a way to do this? Am I missing something simple? Or am I going about this the wrong way?
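
    For what it's worth, map's * wildcards are hostname masks, so they won't slice a URI. But the two-level split doesn't need map at all: a regex location with named captures (available in reasonably modern nginx) exposes the first two characters as variables. A hedged sketch against the layout described:

        location ~ "^/(?<a>[a-z])(?<b>[a-z])" {
            root /var/cache/www;
            # checks /var/cache/www/<a>/<b>/<original uri>/index.html
            try_files /$a/$b$uri/index.html @fallback;
        }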

  • Storing secure keys on Ubuntu web server

    - by Sencha
    I'm running Ubuntu 12.04 Precise with a DUNG (Django, Unix, Nginx & Gunicorn) environment, and my app (as well as various config files) is stored in a Python virtual environment inside /srv, which the www-data user has access to. The nginx and gunicorn processes all run as www-data. My web app requires secure credentials, which I am storing in an environment.sh file. This file contains various exports and is sourced before the gunicorn processes execute. My concern is the location of the environment.sh file and its permissions. Will it be okay storing this file inside the /srv folder where www-data has access to it? Or should it be stored somewhere else and owned by root, such as /var/myapp/environment.sh? Also, regarding the www-data user: if any of my web processes (which run as www-data) are compromised and someone gains access to them, does that mean the attacker could potentially read any file on the system, even if they can't write? Including my secure keys?
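
    On the last question: a compromised www-data process can read what www-data can read, meaning any world-readable file plus whatever its groups grant, not every file on the system. Since gunicorn has to read the file at startup anyway, one conventional arrangement (a sketch; the exact path is illustrative) is root ownership with group read only:

        sudo chown root:www-data /srv/myapp/environment.sh
        sudo chmod 640 /srv/myapp/environment.sh   # root rw, group read, others nothing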

  • How do you configure IIS 7 to use a subdirectory as the default document?

    - by Mark Rogers
    So I have a website running on a discount ASP.NET account, and I put an ASP.NET MVC app in a subdirectory. If my URL is 'www.website.com' and my app is in directory 'sample', then 'www.website.com/sample' will execute the MVC app. My problem is that I want the app to be shown when you go to 'www.website.com', not just 'www.website.com/sample'. I have access to the IIS Manager, and I'm sure there are many ways to do this. What's the best way?
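
    Short of re-rooting the site onto the subdirectory, a minimal stopgap (a hedged sketch, certainly not the only way) is a Default.aspx at the root that forwards every visitor to the app:

        <%@ Page Language="C#" %>
        <script runat="server">
            // root page whose only job is to forward to the MVC app
            void Page_Load(object sender, EventArgs e)
            {
                Response.Redirect("~/sample/");
            }
        </script>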

  • Why are my backlinks not showing on Google for this ASP.NET website, with all I've done?

    - by Jason Weber
    I recently implemented many SEO techniques for a company on their ASP.NET website; in 6 months we jumped from a PR1 to a PR3. But I'm having issues with Google backlinking. Here are some of the things I've done:

        - Set up their own Google+ page 6 months ago, updated pretty much daily with links, pictures, etc.; I also blog about it on my own personal Google+ page and post links there. Their Twitter, Facebook, and YouTube accounts are all updated almost daily.
        - Listed them in as many quality, relevant directories as possible 6 months ago; I've avoided link farms.
        - The site is solid SEO-wise: key-phrase-rich URLs, schema.org and rich snippets, no duplicate content (www vs. non-www 301s, trailing slashes, and so on are all taken care of), and probably a ton of other things. Basically, the site is all set, SEO-wise.

    Here's what's confounding: when I do a link:www.example.com on Bing/Yahoo, it shows many backlinks; when I do a link:www.example.com on Google, it shows 0 links. Likewise, a site ranker like Web Site Rank Tool shows 0 backlinks from Google. Any suggestions would be appreciated!

  • Groups and Symlinks, is this safe?

    - by sjohns
    Hi, I'm trying to serve similar content over two websites, but don't want to keep two copies of each file, especially as they are growing. The basics: I'm running CentOS with cPanel. Is it safe to do the following? I have the folder downloads1 in /home/user1/www/downloads1/ and I have user2. Can I make a group:

        groupadd sharedfiles

    add both users to the group:

        useradd -g sharedfiles user1
        useradd -g sharedfiles user2

    then:

        chown -R -v user1:sharedfiles downloads1/

    For user2, I want to have /home/user2/www/downloads1, but I want it to be a symlink, like:

        ln "downloads1" "/home/user1/www/downloads1/"
        lrwxrwxrwx 1 user2 sharedfiles 11 May 9 14:20 downloads1 -> /home/user1/www/downloads1/

    Is this a safe practice? Or is there a better way to do this if I want them both to be able to share the files for distribution over Apache? Are there any drawbacks? Thanks in advance for any light shed on this. I'm not 100% sure whether this should have gone here or on Server Fault.
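
    The approach is workable, with two hedged corrections: useradd creates accounts, so for existing users the group is attached with usermod (-a -G, which leaves the primary group alone), and the symlink needs ln -s with the arguments in target-then-link order:

        groupadd sharedfiles
        usermod -a -G sharedfiles user1   # append to supplementary groups
        usermod -a -G sharedfiles user2
        chown -R user1:sharedfiles /home/user1/www/downloads1
        chmod -R g+rX /home/user1/www/downloads1
        ln -s /home/user1/www/downloads1 /home/user2/www/downloads1

    One Apache caveat: the directory serving user2's copy needs Options FollowSymLinks rather than SymLinksIfOwnerMatch, since the link and its target are owned by different users.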

  • Using Mozilla Firefox with UTF-8 addresses (in Greek) on Mac

    - by Panagiotis
    Very often when I use Firefox (any version from 10+) and type one of my UTF-8 SEO URLs (in Greek), it behaves strangely: it randomly cuts the URL and prepends the whole URL again, like this:

        http://www.mysite.com/????G????S/????

    becomes

        http://www.mysite.com/????G???http://www.mysite.com/????G????S/????

    resulting in the URL being converted to URL-encoded letters, and 404 errors. I am using Lion with the latest Firefox (yes, I have uninstalled it once and reinstalled it).

  • Is rsync like Subversion, but for a server?

    - by johnlai2004
    I'm trying to learn how to use rsync to create daily backups of my production server. Right now I run the command:

        rsync -azr /var/www/* [email protected]:/var/www

    Now let's say that one day I want to roll back the /var/www/ directory on my production server to last month's version. How do I tell rsync to retrieve version N? Having read that rsync only copies differences between src and dest, I assumed rsync works like Subversion, where you commit changes to a destination and it keeps track of every version, with the option to check out any version at any time. Is that the way rsync works? Like Subversion, but for an entire server? That would be great, because then it means I don't have to do full SSH copies for my nightly backups.
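
    It doesn't: plain rsync keeps only the latest copy, so the command above silently overwrites the previous backup. What gives Subversion-like history is --link-dest, which hard-links unchanged files against the previous snapshot; every dated directory then looks like a full tree, but only changed files consume space. A sketch (the dates and backup paths are illustrative):

        # today's snapshot, hard-linking unchanged files from yesterday's tree
        rsync -az --delete \
            --link-dest=/backups/2010-05-08 \
            /var/www/ [email protected]:/backups/2010-05-09/

    Rolling back to "version N" is then just rsyncing the dated directory back.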

  • nginx: Rewrite PHP does not work

    - by Ton Hoekstra
    I have a suffix proxy installed and I'm using the following rewrite, with wildcard subdomain DNS on:

        location / {
            if (!-e $request_filename) {
                rewrite ^(.*)$ /index.php last;
                break;
            }
        }

    My suffix proxy has the following URL format: (subdomain and/or domain + domain extension to proxy).proxy.org/(request-uri to proxy). I have this PHP code in my index.php:

        if (preg_match('#([\w\.-]+)\.example\.com(.+)#', $_SERVER['SERVER_NAME'].$_SERVER['REQUEST_URI'], $match)) {
            header('Location: http://example.com/browse.php?u=http://'.$match[1].$match[2]);
            die;
        }

    But when a page with a .php extension is requested, I get a 404 Not Found error:

        http://www.php.net.proxy.org/docs.php - HTTP/1.1 404 Not Found
        http://www.utexas.edu.proxy.org/learn/php/ex3.php - HTTP/1.1 404 Not Found

    Everything else works (index.php as well):

        http://php.net.proxy.org/index.php - HTTP/1.1 200 OK
        http://www.php-scripts.com.proxy.org/php_diary/example2.php3 - HTTP/1.1 200 OK
        http://www.utexas.edu.proxy.org/learn/php/ex3.phps - HTTP/1.1 200 OK
        http://www.w3schools.com.proxy.org/html/default.asp - HTTP/1.1 200 OK

    Does somebody have an answer? I don't know why it's not working; on Apache it works fine. Thanks in advance.

    Update: I've removed the location block and now it's working perfectly:

        if (!-e $request_filename) {
            rewrite ^(.*)$ /index.php last;
            break;
        }
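
    The why, for anyone hitting the same wall: server-level rewrite directives run before nginx picks a location, whereas the original location / block never saw the .php requests at all; presumably they were claimed by a separate location ~ \.php$ block (an assumption here, since the full config isn't shown) and 404'd there. A rough if-free equivalent of the working fix, also placed at server level:

        server {
            ...
            # fall back to the proxy front controller when no file/dir matches
            try_files $uri $uri/ /index.php;
        }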

  • Including files in a symlinked directory when backing up with Duplicity

    - by Rob
    I'm backing up using Duplicity, a great tool. However, I'm unable to include files in the backup that are inside a directory which is a symlink. Using the following:

        duplicity <dup args> --include /var/www/**/current --exclude '**'

    duplicity will only back up the symlink itself. I've tried:

        duplicity <dup args> --include /var/www/**/current/* --exclude '**'
        # and
        duplicity <dup args> --include /var/www/**/current/** --exclude '**'

    and then not even the symlink is backed up. The "current" directory links to a directory like:

        /var/www/host.com/de9f2c7fd25e1b3afad3e85a0bd17d9b100db4b3

    which contains a few static HTML & CSS files. I want those files to be backed up, regardless of which SHA'd directory "current" points to. Any help appreciated.
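
    Duplicity doesn't follow symlinks, so one workaround (a hedged sketch reusing the paths from the post) is to make current a real directory and bind-mount the active release into it; the backup then sees ordinary files no matter where the release pointer moves:

        # replace the symlink with a mount point
        rm /var/www/host.com/current
        mkdir /var/www/host.com/current
        # bind-mount the current release; repeat (or script) this on each deploy
        mount --bind /var/www/host.com/de9f2c7fd25e1b3afad3e85a0bd17d9b100db4b3 \
                     /var/www/host.com/current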
