Search Results

Search found 24334 results on 974 pages for 'directory loop'.

Page 851 of 974

  • Apache local versus external (domain)

    - by Jessy Houle
    I have an Apache server running on Ubuntu Server 10, using Passenger for Ruby on Rails. I have configured my site under Apache's sites-enabled directory and can hit the server with an internal IP address (192.168.X.X); the site comes back as expected. However, whenever I try to hit the site externally, either through the domain name or the IP address tied to the domain name, the site will not come back. I have a router in the middle with a static IP address and port forwarding turned on (forwarding 80/443) to the server, and I'm quite confident the issue isn't there. In fact, I even DMZed the Ubuntu server just to make sure, and all router firewall options have been turned off. So here is the question: is there something else I have to do on Ubuntu to allow externally requested port 80 traffic? Or are there settings in Apache that need to be set to let domain or external-IP port 80 traffic through? I'm pretty new to Apache, so please take it a bit easy on me :-) Thank you for your responses. -Jessy Houle (A short troubleshooting sketch follows this entry.)

    Read the article
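
    A few quick checks for the question above - nothing here is specific to this particular box, just standard commands for confirming that Apache is listening on all interfaces and that no host firewall is dropping the forwarded traffic:

      # is Apache bound to 0.0.0.0:80 / :443 rather than only the LAN address?
      sudo netstat -tlnp | grep -E ':80|:443'
      grep Listen /etc/apache2/ports.conf

      # any host-level firewall rules in the way?
      sudo ufw status verbose
      sudo iptables -L -n -v

      # test from outside the LAN (a phone off wifi, or a remote shell); hitting the
      # public IP from inside the LAN only works if the router supports NAT loopback
      curl -I http://YOUR.PUBLIC.IP/

    It is also worth checking whether the ISP blocks inbound port 80, which is common on residential connections.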

  • Problems with Ubuntu 10.10 running from a USB drive

    - by Surjya Narayana Padhi
    I recently downloaded Ubuntu 10.10, created a USB drive with it, and started running Ubuntu from that drive, but I am running into a lot of problems. Nothing seems as easy as it is in Windows; I always hit an error message or have to install something first. This time I am trying to download and install aircrack-ng, so I used the command sudo apt-get install aircrack-ng, but the installation stops with the following errors:
      update-initramfs: deferring update (trigger activated)
      cp: cannot stat `/vmlinuz': No such file or directory
      dpkg: error processing bcmwl-kernel-source (--configure):
       subprocess installed post-installation script returned error exit status 1
      Errors were encountered while processing:
       initramfs-tools
       bcmwl-kernel-source
      E: Sub-process /usr/bin/dpkg returned an error code (1)
    I don't even have the aptitude command installed yet. Are all these errors happening because I am running Ubuntu from a USB drive? Is there a simple way to go to Ubuntu Software Center, download all the required essentials in one go, and then install aircrack-ng? I could not find aircrack-ng in Ubuntu Software Center. Can anybody give me detailed steps to solve the problems above? I am frustrated with chasing updates and installations, where some things work and some do not. Can anybody suggest how I should proceed after installing Ubuntu to run from a USB drive, so that I can use the OS like Windows - software downloads, wireless driver, sound, video, documents, and my drives (C:, D:) all available? Please help. (A hedged workaround sketch follows this entry.)

    Read the article
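
    The cp: cannot stat `/vmlinuz' failure happens because the Broadcom package's post-install step expects a kernel image at /vmlinuz, which a live/USB session usually doesn't have. A rough, unverified workaround is to point /vmlinuz at whatever kernel image the session actually uses - the paths below are assumptions, so check with ls first:

      # see where a kernel image actually lives on this session
      ls /boot/vmlinuz-$(uname -r) /cdrom/casper/vmlinuz 2>/dev/null

      # create the missing symlink to whichever path exists, then retry
      sudo ln -s /cdrom/casper/vmlinuz /vmlinuz    # adjust to the path found above
      sudo apt-get -f install
      sudo apt-get install aircrack-ng

    Note that changes made in a plain live-USB session are lost on reboot; a full install to the USB stick (or a stick created with persistence enabled) behaves much more like the Windows-style setup described.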

  • Log file deleted on an Oracle database - how do I re-create it?

    - by Daniel
    For my database assignment we were looking into 'database corruption', and I was asked to delete the second redo log file, which I did with rm log02a.rdo in the $HOME/ORADATA/u03 directory. I then started the database with startup pfile=$PFILE nomount and mounted it with alter database mount; but when I try alter database open; it gives me the error:
      ORA-03113: end-of-file on communication channel
      Process ID: 22125  Session ID: 25  Serial number: 1
    I assume this is because the second redo log file is missing; log01a.rdo is still there, just not the one I deleted. How can I recover from this so that I can open the database again? The database creation scripts specify log02a.rdo as size 10M and part of group 2. If I run select group#, member from v$logfile; I get:
      1  /oradata/student_db/user06/ORADATA/u03/log01a.rdo
      2  /oradata/student_db/user06/ORADATA/u03/log02a.rdo
      3  /oradata/student_db/user06/ORADATA/u03/log03a.rdo
      4  /oradata/student_db/user06/ORADATA/u03/log04a.rdo
    so it is indeed part of group 2. If I try to add log02a.rdo again I get "already part of the database". If I drop group 2 and add it again with ALTER DATABASE ADD LOGFILE GROUP 2 ('$HOME/ORADATA/u03/log02a.rdo') SIZE 10M; the statement supposedly alters the database, but it still won't open. Any ideas what I can do to re-create this log file and open my database again? (A recovery sketch follows this entry.)

    Read the article
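
    If the deleted log belonged to an INACTIVE group (i.e. its redo was already checkpointed), the usual way out is to clear the group while the database is mounted and then open it. A generic sketch, not specific to this assignment's setup:

      sqlplus / as sysdba
      STARTUP MOUNT PFILE=$PFILE
      ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
      ALTER DATABASE OPEN;

    If group 2 was CURRENT or ACTIVE when the file was removed, clearing is not enough and incomplete recovery (RECOVER DATABASE UNTIL CANCEL followed by ALTER DATABASE OPEN RESETLOGS) is typically needed.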

  • Why is my global security group being filtered out of my logon token?

    - by Jay Michaud
    While investigating the effects of filtered tokens on my file permissions, I noticed that one of my global security groups is being filtered in addition to the regular system-defined filtered groups. My Active Directory environment is a single-domain forest on the Windows Server 2003 functional level. I'll call the domain "mydomain.example.com". I am logged onto a Windows Server 2008 Enterprise Edition machine (not a domain controller) as a member of the "MYDOMAIN\Domain Admins" group and the "MYDOMAIN\MySecurityGroup" global security group (among others). When I run "whoami /groups" from an elevated command prompt, I see the full list of groups to which my account belongs as expected. When I run "whoami /groups" from a regular, non-elevated command prompt, I see the same list of groups, but the following groups are described as "Group used for deny only". BUILTIN\Administrators MYDOMAIN\Schema Admins MYDOMAIN\Offer Remote Assistance Helpers MYDOMAIN\MySecurityGroup Numbers 1 through 3 above are expected based on Microsoft documentation; number 4 is not. The "MYDOMAIN\MySecurityGroup" global security group is a group that I created. It contains three non-built-in global security groups, and these security groups contain only non-built-in user accounts. (That is, I created all of the accounts and groups that are members of the "MYDOMAIN\MySecurityGroup" global security group.) There are other, similar groups of which my account is a member that are not being filtered out of my logon token, and this group is not granted any specific user rights in the security settings of this computer or in Group Policy. What would cause this one group to be filtered out of my logon token?

    Read the article
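
    UAC builds a deny-only entry for any group that makes the token "administrative", so the usual suspect for the question above is that MYDOMAIN\MySecurityGroup is nested - possibly several levels deep - inside Administrators or another protected group. A rough way to check from the 2008 box (the distinguished name below is a made-up example):

      :: direct members of the machine's local Administrators group
      net localgroup Administrators

      :: expand everything MySecurityGroup belongs to, transitively (needs the AD DS admin tools)
      dsget group "CN=MySecurityGroup,OU=Groups,DC=mydomain,DC=example,DC=com" -memberof -expand

    If an administrative or otherwise protected group shows up in the expanded list, that nesting is what drags the group into the filtered (deny-only) set.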

  • Redirect from folder containing website

    - by Sam
    I have a website reached from this URL: http://www.mysite.com/cms/index.php being served from this directory: public_html/cms/index.php. In public_html I have this .htaccess:
      RewriteRule (.*) cms/$1 [L]
    which lets me get to the site like this: http://www.mysite.com/index.php. But now if I reference the 'old' address, I'd like to redirect to the rewritten address with a permanent redirect code. For example, http://www.mysite.com/cms/?q=node/1 should be redirected to http://www.mysite.com/?q=node/1. How can I make this happen? (A redirect sketch follows this entry.) EDIT: The .htaccess file supplied with Drupal (the CMS) also contains the following. I've tried enabling it, but it doesn't seem to have any effect.
      # Modify the RewriteBase if you are using Drupal in a subdirectory or in a
      # VirtualDocumentRoot and the rewrite rules are not working properly.
      # For example if your site is at http://example.com/drupal uncomment and
      # modify the following line:
      # RewriteBase /drupal
    EDIT: Including more of my .htaccess file - it seems relevant.
      # Block access to "hidden" directories whose names begin with a period.
      RewriteRule "(^|/)\." - [F]
      # Strip cms folder from url
      RewriteRule (.*) cms/$1
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_URI} !=/favicon.ico
      RewriteRule ^ index.php [L]
      # Rules to correctly serve gzip compressed CSS and JS files.
      # Requires both mod_rewrite and mod_headers to be enabled.
      <IfModule mod_headers.c>
        # Serve gzip compressed CSS files if they exist and the client accepts gzip.
        RewriteCond %{HTTP:Accept-encoding} gzip
        RewriteCond %{REQUEST_FILENAME}\.gz -s
        RewriteRule ^(.*)\.css $1\.css\.gz [QSA]
        # Serve gzip compressed JS files if they exist and the client accepts gzip.
        RewriteCond %{HTTP:Accept-encoding} gzip
        RewriteCond %{REQUEST_FILENAME}\.gz -s
        RewriteRule ^(.*)\.js $1\.js\.gz [QSA]
        # Serve correct content types, and prevent mod_deflate double gzip.
        RewriteRule \.css\.gz$ - [T=text/css,E=no-gzip:1]
        RewriteRule \.js\.gz$ - [T=text/javascript,E=no-gzip:1]
        <FilesMatch "(\.js\.gz|\.css\.gz)$">
          # Serve correct encoding type.
          Header append Content-Encoding gzip
          # Force proxies to cache gzipped & non-gzipped css/js files separately.
          Header append Vary Accept-Encoding
        </FilesMatch>

    Read the article
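
    A minimal sketch of the usual pattern: redirect only requests that literally contain /cms/ in the raw request line, so the external 301 doesn't fight the internal rewrite. Untested against this exact Drupal setup:

      RewriteEngine On

      # externally requested /cms/... URLs get a permanent redirect to the root
      RewriteCond %{THE_REQUEST} \s/cms/
      RewriteRule ^cms/(.*)$ /$1 [R=301,L]

      # existing internal rewrite stays as it was
      RewriteRule (.*) cms/$1 [L]

    Matching on %{THE_REQUEST} (the request line as the browser sent it) is what prevents a loop, because the internal cms/$1 rewrite never shows up there; the query string is carried over automatically.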

  • Restrict access to one SVN repository (overwrite default)

    - by teel
    I'm trying to set up our SVN server so that by default the developers group has access to all repositories, but I want to override that on certain repositories and allow access only to specific users (or separate groups). The current configuration is SVN + WebDAV on Apache2, and all my repositories are located under /var/lib/svn/. In dav_svn.authz I currently have:
      [/]
      @developers = rw
      @users = r
    Now I want to add one repository (let's call it secret_repo) that allows access to only one user, who is also a member of the developers group. I tried:
      [secret_repo:/]
      * =
      secret_user = rw
    where secret_user is the user I'd like to give access to the repository, but it doesn't seem to work. The server uses Apache's LDAP module to authenticate users from our Active Directory domain, and I'd like to keep it that way if possible. Also, I seem to be able to browse all my repos freely with any web browser, which I'd like to block. The second problem is that I have WebSVN on the server, also using Apache's LDAP authentication. Everyone who is a member of our domain can access it, so I'd like to hide secret_repo from the WebSVN listing. It is currently configured with parentPath("/var/lib/svn");. Do I really need to remove that and add every repository separately, except the ones I want to hide? (An Apache <Location> sketch follows this entry.)

    Read the article
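
    The [secret_repo:/] syntax above is the right idea, so the more likely gap is that Apache never consults the authz file at all - which would also explain the open anonymous browsing. A sketch of the relevant <Location> block; the LDAP URL is a placeholder, not the real directory:

      <Location /svn>
        DAV svn
        SVNParentPath /var/lib/svn
        # without this directive the rules in dav_svn.authz are ignored
        AuthzSVNAccessFile /etc/apache2/dav_svn.authz

        AuthType Basic
        AuthName "Subversion repositories"
        AuthBasicProvider ldap
        AuthLDAPURL "ldap://dc.example.com/DC=example,DC=com?sAMAccountName" NONE
        Require valid-user
      </Location>

    With Require valid-user (and no Satisfy Any) anonymous browser access stops, and secret_repo can then be hidden from WebSVN with its own per-repository exclude options rather than abandoning parentPath().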

  • Play back random sections from multiple videos, changing every 5 minutes

    - by brian
    I'm setting up the A/V for a holiday party this weekend and have an idea that I'm not sure how to implement. I have a bunch of Christmas movies (Christmas Vacation, A Christmas Story, It's a Wonderful Life, etc.) that I'd like to have running continuously at the party. The setup is a MacBook connected to a projector. I want this to be a background-ambiance thing, so it will be running silently. Since all of the movies are over an hour long, instead of playing them through from beginning to end, I'd like to show small samples randomly and continuously. Ideally, every five minutes it would choose a random 5-minute sample from a random movie in the playlist/directory and display it. Once that clip is over, it should choose another clip and do some sort of fade/wipe to it. I'm not sure where to start, beginning with which player I should use. Can VLC do something like this? MPlayer? If there's a player with an extensive scripting language (support for rand(), video length discovery, random access), I can probably RTFM and get it working; it's just that I don't have time to backtrack from dead ends, so I'd like to start off on the right track. Any suggestions? (A rough script sketch follows this entry.)

    Read the article
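
    A rough bash sketch of the idea, assuming mplayer and ffprobe are installed (on a MacBook, shuf comes from GNU coreutils and may be named gshuf); it picks a random file, a random start point, and plays five silent minutes full-screen, forever. Crossfades would need something more elaborate:

      #!/usr/bin/env bash
      MOVIE_DIR="$HOME/Movies/christmas"   # assumed location
      CLIP=300                             # clip length in seconds

      while true; do
          # pick a random movie file
          file=$(find "$MOVIE_DIR" -type f \( -name '*.mp4' -o -name '*.mkv' -o -name '*.avi' \) | shuf -n1)
          # ask ffprobe for its duration in seconds
          dur=$(ffprobe -v error -show_entries format=duration \
                        -of default=noprint_wrappers=1:nokey=1 "$file")
          dur=${dur%.*}
          [ -n "$dur" ] && [ "$dur" -gt "$CLIP" ] || continue
          # random start point that leaves room for a full clip
          start=$(( RANDOM % (dur - CLIP) ))
          mplayer -really-quiet -fs -nosound -ss "$start" -endpos "$CLIP" "$file"
      done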

  • How to recover deleted files on ext3 fs

    - by Mike
    I have a drive which was using the ext3 filesystem. I am told that about 10 GB of data was deleted off the drive (probably via rm). The drive is currently mounted read-only to preserve all data. Does anyone know of a method to restore some or all of the data? If it helps, the OS was Fedora, and I've been told the data is mostly ASCII Fortran source code and MATLAB files. Conclusion: I finally managed to get the data back, and by the simplest means ever! After weeks of trying and failing to bring back much of any data, I brought someone in today to take a look and offer suggestions; he simply cd'd to the directory and everything was there! It was never lost in the first place! Needless to say I feel really dumb right now, but I learned quite a lot from this whole fiasco. At any rate, while I was looking through data-forensics solutions, I found that Autopsy, or more specifically The Sleuth Kit, was the most helpful, so I will accept that as the final answer. I would also like to note, for anyone who comes across this later, that the currently most up-voted answer by sekenre was also helpful and I learned a lot, but ultimately it did not help with the type of files I was dealing with (very many, and some very large). So thanks to all of you who provided suggestions - I wish you all the best! (A brief SleuthKit sketch follows this entry.)

    Read the article
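
    For anyone landing here with files that really were deleted: on an unmounted (or read-only) ext3 volume, The Sleuth Kit can at least enumerate and pull back whatever is recoverable. A generic sketch, with /dev/sdb1 standing in for the affected partition:

      # list deleted entries, recursively
      sudo fls -rd /dev/sdb1

      # recover a single file by the inode number fls reported
      sudo icat /dev/sdb1 123456 > recovered.f90

      # or attempt a bulk restore with extundelete
      sudo extundelete /dev/sdb1 --restore-all

    ext3 clears block pointers on delete, so recovery usually relies on the journal or on carving, and results vary a lot - which is why working on an unmounted copy of the device is the safest starting point.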

  • Trouble with Apache and virtual hosts

    - by xZero
    I have a big problem. I have a VPS running Debian with a fresh LAMP install, and I use Webmin as the control panel. I am trying to set up multiple subdomains on the server using Webmin, for example downloads.my-domain.com, cpanel.my-domain.com and forum.my-domain.com. The problem is this: with no virtual hosts defined, everything works perfectly when I access my-domain.com, but as soon as I add a virtual host I can access the new host while my-domain.com becomes unavailable, because it redirects to the virtual host I added. With more than two virtual hosts the problem is still there; also, when I try to access one virtual host, for example downloads.my-domain.com, it redirects to cpanel.my-domain.com. When I delete the virtual hosts, access to my-domain.com works again. What I know:
    - It is not a problem with my domain provider; I added the subdomains correctly and pointed the host records at my VPS IP.
    - I gave every virtual host a unique name.
    - No two virtual hosts are the same.
    - Every virtual host has its own directory; for example, downloads.my-domain.com has its own WWW dir, /var/downloads.
    Can somebody help me? Thanks. (A name-based vhost sketch follows this entry.)

    Read the article
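
    The symptoms above (every name landing on the same site, the bare domain disappearing) are the classic signs of missing name-based virtual host configuration. A minimal Apache 2.2 sketch, with document roots assumed:

      # ports.conf (or the top of the vhost configuration)
      NameVirtualHost *:80

      # the first vhost doubles as the default for any unmatched Host header
      <VirtualHost *:80>
          ServerName my-domain.com
          ServerAlias www.my-domain.com
          DocumentRoot /var/www
      </VirtualHost>

      <VirtualHost *:80>
          ServerName downloads.my-domain.com
          DocumentRoot /var/downloads
      </VirtualHost>

      <VirtualHost *:80>
          ServerName cpanel.my-domain.com
          DocumentRoot /var/cpanel
      </VirtualHost>

    Without NameVirtualHost *:80 (and a matching *:80 on each <VirtualHost>), Apache 2.2 serves the first virtual host it finds for every name, which matches the behaviour described.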

  • How to delete a file that contains a backslash in the name under Windows 7?

    - by espinchi
    I want to delete a file named workspaces\google-gson-1.7.1-release.zip Yep, it contains a backslash in the name. Here it is:
      G:\>dir Z_DRIVE
       Volume in drive G is samsung
       Volume Serial Number is 48B9-7E1D
       Directory of G:\Z_DRIVE
      04/06/2012  08:09 PM    <DIR>          .
      04/06/2012  08:09 PM    <DIR>          ..
      05/01/2011  02:21 PM           528,016 workspaces\google-gson-1.7.1-release.zip
                     1 File(s)        528,016 bytes
                     2 Dir(s)  88,400,478,208 bytes free
    The first attempt is to just delete it from the Windows Explorer, but it says it can't find the file. Then, I tried from the command line:
      G:\>del Z_DRIVE\workspaces\google-gson-1.7.1-release.zip
      The system cannot find the file specified.
    And, after researching a bit in the internets, I also tried the following, with no luck:
      G:\>del \\?\G:\Z_DRIVE\workspaces\google-gson-1.7.1-release.zip
      The system cannot find the file specified.
    Other than booting from some Linux CD, is there a way to get rid of this file? Update: also tried the following combinations, but the error is the same:
      G:\>del "\\?\G:\Z_DRIVE\workspaces\google-gson-1.7.1-release.zip"
      G:\Z_DRIVE>del workspaces\google-gson-1.7.1-release.zip
      G:\Z_DRIVE>del "workspaces\google-gson-1.7.1-release.zip"
      G:\Z_DRIVE>del workspaces*google-gson-1.7.1-release.zip

    Read the article
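
    One more trick that often works without a Linux CD: delete the file by its 8.3 short name, which contains no backslash. The short name shown below is only an example - use whatever dir /x actually prints, and note that short-name generation can be disabled on a volume:

      G:\Z_DRIVE>dir /x
      05/01/2011  02:21 PM        528,016  WORKSP~1.ZIP  workspaces\google-gson-1.7.1-release.zip

      G:\Z_DRIVE>del WORKSP~1.ZIP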

  • nginx giving 404 when using set in an if-block

    - by ba
    I've just started using nginx and I'm trying to make it play nice with the WordPress plugin WP Super Cache, which writes static copies of my blog posts. To serve the static file I need to make sure certain cookies aren't set, that it's not a POST request, and that the cached/static file exists. I found this guide and it seems like a good fit, but I've noticed that as soon as I try to set something inside an if, my site starts giving 404s on a URL that isn't rewritten. The location block of the configuration:
      location /blog {
        index index.php;
        set $supercache_file '';
        set $supercache_ok 1;
        if ($request_method = POST) { set $supercache_ok 0; }
        if ($http_cookie ~* "(comment_author_|wordpress|wp-postpass_)") { set $supercache_ok '0'; }
        if ($supercache_ok = '1') {
          set $supercache_file '$document_root/blog/wp-content/cache/supercache/$http_host/$1/index.html.gz';
        }
        if (-f $supercache_file) { rewrite ^(.*)$ $supercache_file break; }
        try_files $uri $uri/ @wordpress;
      }
    The above doesn't work, and if I remove all the ifs and add only
      if ($http_host = 'mydomain.tld') { set $supercache_ok = 1; }
    I get the exact same message in the error log, namely:
      2010/05/12 19:53:39 [error] 15977#0: *84 "/home/ba/www/domain.tld/blog/2010/05/blogpost/index.php" is not found (2: No such file or directory), client: <ip>, server: domain.tld, request: "GET /blog/2010/05/blogpost/ HTTP/1.1", host: "domain.tld", referrer: "http://domain.tld/blog/"
    Remove the if and everything works as it should. I'm stymied - no idea at all where I should start searching. =/
      ba@cell: ~> nginx -v
      nginx version: nginx/0.7.65
    (A common if-free recipe is sketched after this entry.)

    Read the article
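
    For reference, the widely circulated if-free variant of this recipe sidesteps the capture problem entirely ($1 is empty here because location /blog is not a regex location) by building a cache URI in variables at server level and letting try_files do the file check. A sketch, reusing the @wordpress named location already present in the configuration above:

      set $cache_uri $request_uri;

      if ($request_method = POST)  { set $cache_uri 'null'; }
      if ($query_string != "")     { set $cache_uri 'null'; }
      if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass") {
          set $cache_uri 'null';
      }

      location /blog {
          index index.php;
          try_files /blog/wp-content/cache/supercache/$http_host$cache_uri/index.html
                    $uri $uri/ @wordpress;
      }

    When any of the conditions trip, $cache_uri becomes 'null', the cache path can't exist, and try_files falls straight through to the normal WordPress handling - no rewrite inside an if needed.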

  • Nginx server block not working - other vhosts run fine, just this one fails

    - by daveaspinall
    I'm running a Debian 6 LEMP server with multiple virtual hosts, and everything has been fine for five or so sites. I've just tried adding another, but for some reason it's just not working. By not working I mean that in Chrome I get the "Oops! Google Chrome could not connect to subdomain.domain.net" error. I've changed the domain to subdomain.example.com for security, and the IP is masked. Hosts file (I have multiple subdomains):
      xxx.xxx.xx.xxx *.example.com *.example
    Server block:
      server {
        listen 80;
        server_name subdomain.example.com;
        access_log /srv/www/subdomain.example.com/logs/access.log;
        error_log /srv/www/subdomain.example.com/logs/error.log;
        root /srv/www/subdomain.example.com/public_html;
        location / {
          index index.html index.htm index.php;
        }
        location ~ \.php$ {
          include fastcgi_params;
          fastcgi_pass 127.0.0.1:9000;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
      }
    I've created the symlink to the file in the /etc/nginx/sites-enabled/ directory and restarted/reloaded nginx. DNS seems fine:
      # ping -c 2 subdomain
      PING subdomain.example.com (xxx.xxx.xx.xxx) 56(84) bytes of data.
      64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=1 ttl=64 time=0.035 ms
      64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=2 ttl=64 time=0.048 ms
    Checking the file with cURL works:
      # curl http://subdomain.example.com
      HTML - OK
    I've emptied the browser cache, but still no dice. Anything I'm missing? As I mentioned, I have a few sites running fine on the server, so php-fpm etc. are working. Any help would be much appreciated! Cheers, Dave (A couple of checks are sketched after this entry.)

    Read the article
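
    Two things worth double-checking for the setup above, starting with the fact that /etc/hosts does not support wildcards - a *.example.com entry only matches a host literally named "*.example.com":

      # list every name explicitly instead of a wildcard
      xxx.xxx.xx.xxx  example.com www.example.com subdomain.example.com

      # then confirm the new vhost actually loaded and nginx is still listening
      nginx -t
      /etc/init.d/nginx reload
      netstat -tlnp | grep nginx

    Since curl on the server itself returns the page, the server block is evidently fine; a "could not connect" from Chrome usually means the client machine resolves the name to a different address (public DNS not yet propagated, or its own hosts file) or a firewall sits in between, so checking what the browser's machine resolves subdomain.example.com to is the other half of the diagnosis.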

  • Serving static content with Apache web server and Tomcat

    - by Hunter
    I've configured the Apache web server and Tomcat like this: I created a new file in apache2/sites-available named "myDomain" with this content:
      <VirtualHost *:80>
        ServerAdmin [email protected]
        ServerName myDomain.com
        ServerAlias www.myDomain.com
        ProxyPass / ajp://localhost:8009
        <Proxy *>
          AllowOverride AuthConfig
          Order allow,deny
          Allow from all
          Options -Indexes
        </Proxy>
      </VirtualHost>
    Then I enabled mod_proxy and the myDomain site:
      a2enmod proxy_ajp
      a2ensite myDomain
    and edited Tomcat's server.xml (inside the Engine tag):
      <Host name="myDomain.com" appBase="webapps/myApp">
        <Context path="" docBase="."/>
      </Host>
      <Host name="www.myDomain.com" appBase="webapps/myApp">
        <Context path="" docBase="."/>
      </Host>
    This works great. But I'd rather not put static files (HTML, images, videos, etc.) into subfolders of {tomcat home}/webapps/myApp; instead I'd like to put them in subdirectories of the Apache web server's WWW root and have Apache serve those files alone. How could I do this, so that all incoming requests are forwarded to Tomcat except those asking for a static file? (A sketch with ProxyPass exclusions follows this entry.)

    Read the article
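
    A common way to do this with mod_proxy_ajp is to declare exclusions before the catch-all ProxyPass, so chosen URL prefixes never reach Tomcat and are served from Apache's DocumentRoot instead. Paths below are assumptions:

      <VirtualHost *:80>
          ServerName myDomain.com
          ServerAlias www.myDomain.com
          DocumentRoot /var/www/myDomain

          # exclusions must come before the general ProxyPass
          ProxyPass /static !
          ProxyPass /images !
          Alias /static /var/www/myDomain/static
          Alias /images /var/www/myDomain/images

          ProxyPass / ajp://localhost:8009/
          ProxyPassReverse / ajp://localhost:8009/
      </VirtualHost>

    Anything under /static or /images is then served straight from disk by Apache, and every other request still goes to Tomcat over AJP.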

  • Why is XSP/Mono giving me file/write errors with Lucene?

    - by acidzombie24
    I am using Mono 2.6.7 and XSP 2.8.? (I'm not sure of the exact version, or what else is involved); I'll try downgrading XSP and whatever else. Today I noticed my server randomly returns HTTP Error 500, internal server error. I looked in my log files http://www.pastie.org/1426236 and it appears the problem is Lucene's locking, which it does by creating a file called write.lock. This used to work with no problem, but now it gives me pain. It will fix itself if I refresh the page several times, but it breaks just as easily. My other site stops working every time I restart Apache, and I need to delete the lock file by hand (at least it doesn't get random 500 errors). However, now it just says
      Lock obtain timed out: NativeFSLock@/var/www/SITE_2/App_Data/LuceneIndex_a/write.lock
    no matter what. I even ran chmod -R 777 on the directory and still no luck - write.lock doesn't even exist. I have no clue what's going on. Any ideas?

    Read the article

  • How do I troubleshoot nginx not recognizing passenger?

    - by Jade
    Issue: nginx does not seem to recognize my rails application Symptoms: When the server starts up, it shows the "Welcome to nginx!" message instead of my Rails application. Nginx seems to be using the local nginx path instead of the Rails root I specified:
      2010/04/18 06:29:06 [error] 783#0: *1 "/usr/local/nginx/html/blog/index.html" is not found (2: No such file or directory), client: 1.2.3.4, server: www.farmerjade.com, request: "GET /blog/ HTTP/1.1", host: "www.farmerjade.com"
    I used [RVM and Passenger Setup on NGINX][1] to install nginx and passenger on a virtual machine. Here is my nginx configuration:
      user farmerjade;
      worker_processes 1;
      ...
      http {
        include mime.types;
        default_type application/octet-stream;
        passenger_ruby /home/farmerjade/.rvm/bin/passenger_ruby;
        passenger_root /home/farmerjade/.rvm/gems/ree-1.8.7-head/gems/passenger-2.2.11;
        ...
        server {
          listen 80;
          server_name www.farmerjade.com;
          root /home/farmerjade/farmerjade/public;
          passenger_enabled on;
          rails_env development;
          ...
    I'd appreciate any help anyone has to offer -- I'm quite new to nginx.

    Read the article
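
    The "Welcome to nginx!" page plus the /usr/local/nginx/html path in the error strongly suggest the request is being answered by the stock default server block rather than the Rails one. A sketch for nginx 0.7.x - either delete the stock block from nginx.conf or make the Rails vhost the default and cover both host spellings:

      server {
          listen 80 default;            # spelled "default_server" in newer nginx
          server_name farmerjade.com www.farmerjade.com;
          root /home/farmerjade/farmerjade/public;
          passenger_enabled on;
          rails_env development;
      }

    If the stock server { listen 80; server_name localhost; root html; ... } block is still present, removing it (or at least its location / handler) is usually enough for the Passenger-enabled block to take over.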

  • Facter - custom fact, returns empty data set when invoked by Puppet agent

    - by user3684494
    According to this Puppet Labs article, I can create custom facts from shell scripts. I have created a bash script that returns a single fact; it is packaged in a module's facts.d directory. The module is included on the target system via an ENC class. When invoked by the puppet agent on the target it returns an empty set; when run by hand on the agent it correctly returns the fact. The script has execute permission on the master, but does not have it on the agent. I saw a bug report related to permissions and file types, but that was Windows and supposedly fixed in Puppet version 3. What am I doing wrong?
    ENC definition:
      ---
      classes:
        facttest:
    Shell script:
      #!/bin/bash
      echo "test_fact1=$(hostname)"
    Permissions:
      master: -rwxr-xr-x 1 root root ... modules/facttest/facts.d/testfact.sh
      agent:  -rw-r--r-- 1 root root ... /var/lib/puppet/facts.d/testfact.sh
    Agent message:
      Fact file /var/lib/puppet/facts.d/testfact.sh was parsed but returned an empty data set
    Version information:
      Puppet master: 3.5.1 (Debian), Facter 2.0.1
      Puppet agent: 3.6.1 (OpenSUSE), Facter 2.0.1
    (A possible fix is sketched after this entry.)

    Read the article
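
    Facter treats a non-executable file in facts.d as a static fact file (key=value, JSON or YAML), so a bash script that loses its execute bit on the way to the agent gets "parsed" and yields exactly this empty-data-set warning. One hedged workaround is to ship the script with an ordinary file resource so the mode is enforced; the module layout and paths below are assumptions:

      class facttest {
        file { '/var/lib/puppet/facts.d/testfact.sh':
          ensure => file,
          owner  => 'root',
          group  => 'root',
          mode   => '0755',
          # assumes the script is also placed under facttest/files/
          source => 'puppet:///modules/facttest/testfact.sh',
        }
      }

    Alternatively, an external fact doesn't have to be a script at all: a plain testfact.txt containing test_fact1=somehost parses fine without any execute bit, though it can't call hostname dynamically.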

  • Map Linux drives to Windows 7 for media streaming over the internet

    - by Ortix92
    I'm trying to map a Linux network drive to my Windows 7 laptop, but the laptop is not on the LAN. At home I simply use Samba, but that obviously won't work over the internet. I'm trying to avoid a VPN, so if there are other solutions I would like to know about them. I ask because my university does this as well - we can map folders to our computers without VPN connections, though I'm not sure what they run as servers. The main goal is to be able to access the files stored on my home server wherever I go. They are located in the /home/ folder (videos, music and pictures folders); I'm trying to keep my websites and media separate from each other. I wouldn't mind accessing them from a web interface either, but I would like to keep the directory structure intact. I remember an app like that which came with Winamp, running on my Windows PC as the server; unfortunately it doesn't work for Linux. Any ideas on what I could use? Would XBMC be able to help me out with this? I did some research but couldn't find any concrete answers. (A WebDAV sketch follows this entry.)

    Read the article
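
    WebDAV over HTTPS is one of the VPN-free options that Windows 7 can map natively (and quite possibly what the university uses); SSHFS via win-sshfs/Dokan is the other common route. A rough Apache mod_dav sketch - the module names are standard, but every path and account below is an assumption:

      # a2enmod dav dav_fs ssl
      DavLockDB /var/lock/apache2/DavLock

      Alias /media /home/ortix/media
      <Location /media>
          Dav On
          AuthType Basic
          AuthName "home media"
          AuthUserFile /etc/apache2/webdav.passwd
          Require valid-user
      </Location>

    On the Windows 7 side, Map Network Drive with https://yourhost.example.org/media (or net use Z: https://yourhost.example.org/media) keeps the directory structure intact, though the built-in WebDAV client is noticeably slower than SMB on a LAN.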

  • Using Truecrypt to secure mySQL database, any pitfalls?

    - by Saul
    The objective is to secure my database data from server theft, i.e. the server is at a business office location with normal premises lock and burglar alarm, but because the data is personal healthcare data I want to ensure that if the server was stolen the data would be unavailable as encrypted. I'm exploring installing mySQL on a mounted Truecrypt encrypted volume. It all works fine, and when I power off, or just cruelly pull the plug the encrypted drive disappears. This seems a load easier than encrypting data to the database, and I understand that if there is a security hole in the web app , or a user gets physical access to a plugged in server the data is compromised, but as a sanity check , is there any good reason not to do this? @James I'm thinking in a theft scenario, its not going to be powered down nicely and so is likely to crash any DB transactions running. But then if someone steals the server I'm going to need to rely on my off site backup anyway. @tomjedrz, its kind of all sensitive, individual personal and address details linked to medical referrals/records. Would be as bad in our field as losing credit card data, but means that almost everything in the database would need encryption... so figured better to run the whole DB in an encrypted partition. If encrypt data in the tables there's got to be a key somewhere on the server I'm presuming, which seems more of a risk if the box walks. At the moment the app is configured to drop a dump of data (weekly full and then deltas only hourly using rdiff) into a directory also on the Truecrypt disk. I have an off site box running WS_FTP Pro scheduled to connect by FTPs and synch down the backup, again into a Truecrypt mounted partition.

    Read the article

  • Server 2012 - transparent SMB failover without shared discs, possible?

    - by TomTom
    Here is the scenario: there is a small set of data (around 200 GB) that I HAVE to keep available. These are basically shared VHD images that serve as master images for a lot of our VMs, which then run on differencing disks off those. The whole set is "mostly read only". In more detail: a file that IS there and IS used will NEVER change. I may delete files (when absolutely not in use) and add new files, but a file that is there once gets read protection set, and keeps it until it is retired. Obviously, I need as much uptime as possible. So far we have handled this by keeping the directory local on every Hyper-V server; now I'm thinking of moving it into our storage fabric. Because of the "it HAS to be there" requirement I pretty much want a share-nothing architecture. DFS would be perfect for this - a file never changes, so replication would work nicely. Folders could be replicated to a number of servers, and everything would reference them from there. Now that Hyper-V supports SMB, it could be a good idea to isolate these on a number of servers - we are trying to move to a scenario where storage is more centralized. Server 2012 supports continuously available ("always on") shares, but it seems this only works with a clustered disk behind them. Is there any way around this for read-only file stores? All documentation points to things like a shared JBOD, but that would leave me open to file system corruption. I really plan to go quite separate here, vertically: two servers, both with SSD only for this, both with their own separate 2000 W UPS, both with enough bandwidth to handle everything thrown at them (note to everyone thinking this is 10G: that would be slow and expensive compared to a nice InfiniBand backbone). The real crux is that this is obviously an edge case, as the files are read only once in use. (A DFS namespace sketch follows this entry.)

    Read the article
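
    Since the files are write-once and read-mostly, the DFS route mentioned above really is the usual share-nothing answer: DFS Replication keeps full copies on both boxes and a domain-based DFS Namespace gives clients one UNC path with two targets. A PowerShell sketch of the namespace half on Server 2012 (server and path names are invented); replication itself would then be configured per folder in DFS Management:

      New-DfsnRoot -Path '\\corp.example.com\Masters' -Type DomainV2 -TargetPath '\\HV1\Masters'
      New-DfsnRootTarget -Path '\\corp.example.com\Masters' -TargetPath '\\HV2\Masters'

    The caveat: DFS gives redundancy but not transparent failover of open handles - a host with a VHD open against HV1 has to reconnect if HV1 dies - so for true transparent SMB failover of in-use files, clustered storage behind a Scale-Out File Server remains the supported design.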

  • Nginx + PHPBB3 reverse proxy images problem

    - by siberiano
    Hello all, I have a problem with my nginx frontend + Apache2 backend + phpBB3 setup: it doesn't load the CSS or the images. I get constant errors like these:
      2010/04/14 16:57:25 [error] 13365#0: *69 open() "/var/www/foo/styles/styles/coffee_time/theme/large.css" failed (2: No such file or directory), client: 83.44.175.237, server: www.foo.com, request: "GET /styles/coffee_time/theme/large.css HTTP/1.1", host: "www.foo.com", referrer: "http://www.foo.com/viewforum.php?f=43"
    This is my config for the site:
      server {
        listen 80;
        server_name www.foo.com;
        access_log /var/log/nginx/foo.access.log;
        # serve static files directly
        location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico)$ {
          access_log off;
          expires 30d;
          root /var/www/trasteando/;
        }
        location / {
          root /var/www/foo/;
          index /var/www/foo/index.php;
        }
        # proxy the PHP scripts to the predefined upstream "apache"
        location ~ .php$ {
          proxy_pass http://apache;
        }
        location /styles/ {
          root /var/www/foo/styles/;
        }
      }
    (A corrected config sketch follows this entry.)

    Read the article
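
    The doubled styles/styles in the error comes from the location /styles/ block, whose root already ends in /styles/ (nginx appends the full URI to root; it does not strip the location prefix), and the static-files regex points at a different tree (/var/www/trasteando/) than the proxied site. A corrected sketch using a single root:

      server {
          listen 80;
          server_name www.foo.com;
          root /var/www/foo;                 # one root for the whole vhost
          access_log /var/log/nginx/foo.access.log;

          # static files served directly by nginx
          location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {
              access_log off;
              expires 30d;
          }

          # PHP goes to the Apache backend
          location ~ \.php$ {
              proxy_pass http://apache;
          }

          location / {
              index index.php;
          }
      }

    With a single root and no extra /styles/ block, /styles/coffee_time/theme/large.css resolves to /var/www/foo/styles/... as phpBB expects.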

  • How can I create a simple Exchange 2010 backup solution?

    - by bduncanj
    I'm sure this question's been asked a dozen times in one form or another, however after much searching, there doesn't appear to be an obvious simple recovery solution for a single Exchange box. We're using Exchange 2010 on a single server, the server hosts the AD and nothing else on the network uses the AD. The intent is to run this server as you would an externally hosted Exchange server - access only via HTTP (RPC mode or OWA) - all other ports blocked. I've a daily backup running, using Windows Server 2008 volume shadow service to backup the Exchange data to an external hard disk. My question is, how do I perform a bare metal recovery of this server? 1) Do I need to be explicitly including the active directory information in this nightly backup, or will it be there by virtue of the fact that this system is the primary AD server and the Windows backup service knows this? 2) I understand I can re-install Server 2008 onto my new hardware (in the case of hardware failure) and then run Exchange 2010 setup.exe with a /recover argument, referencing the backup volume. 3) It is acceptable to have some downtime during this recovery process. But is there anything else I should be aware of? Thanks! Duncan

    Read the article
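
    For a single all-in-one box like the one above, Windows Server Backup driven by wbadmin is usually sufficient, as long as the backup is VSS-full so the Exchange writer truncates its logs; -allCritical pulls in the system state components (and with them the AD database on this server). A sketch - the target drive letter is an assumption:

      REM nightly full, Exchange-aware backup to an external disk
      wbadmin start backup -backupTarget:E: -include:C: -allCritical -vssFull -quiet

      REM optional separate system-state (AD) backup
      wbadmin start systemstatebackup -backupTarget:E:

    For bare-metal recovery the usual sequence is to reinstall Server 2008 R2 (or boot the install media's recovery environment) and restore the wbadmin backup, then run Exchange 2010 setup with /m:RecoverServer; the databases come back from the restored volumes rather than from a separate Exchange-only backup.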

  • Best method to redirect internal DNS to external website?

    - by ProfessionalAmateur
    We host several web based applications outside of intranet. The URL's to these applications are long, complex and overall not user friendly. Ex: http://hostingsite:port/approot/folder/folder/login.aspx <-- (production) http://hostingsite:port22/approot/folder/folder/login.aspx <-- (dev) http://hostingsite:port33/approot/folder/folder/login.aspx <-- (test) I'd like to create an internal DNS entry to allow users to access these sites with ease. Ex: http://prod --> http://hostingsite:port/approot/folder/folder/login.aspx http://dev --> http://hostingsite:port22/approot/folder/folder/login.aspx I'm not familiar with the DNS process and setup, as far as I know a DNS can only be redirected to an IP, but not to subdomains for directory paths as described above? Is this a correct assumption? I am thinking for throwing up an internal webserver that will listen to the internal DNS entries and redirect to the external sites. http://prod --> [internal webserver] --> redirect --> http://hostingsite:port/approot/folder/folder/login.aspx Is there a better way to do this?

    Read the article
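
    The assumption in the question is right: DNS maps names to addresses only, so the path and port have to come from a small internal web server that answers to the short names and issues redirects. An Apache sketch - the short names need internal A or CNAME records pointing at this box, and the target URLs keep the placeholders used above:

      <VirtualHost *:80>
          ServerName prod
          Redirect permanent / http://hostingsite:port/approot/folder/folder/
      </VirtualHost>

      <VirtualHost *:80>
          ServerName dev
          Redirect permanent / http://hostingsite:port22/approot/folder/folder/
      </VirtualHost>

    Any lightweight listener works the same way - an nginx return 301, IIS HTTP Redirect, even a one-page meta refresh - while the DNS entries themselves stay plain host records.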

  • iptables : how to allow incoming ftp traffic?

    - by logansama
    Hi, I'm still fighting my way through the jungle called iptables. I have managed to allow FTP access out of our LAN; both of these rules work (note: eth0 is the LAN interface and eth1 is the WAN interface):
      iptables -t filter -A FORWARD -i eth0 -p tcp --dport 20:21 -j ACCEPT
    or
      iptables -A FORWARD -i eth0 -o eth1 -p tcp --sport 20:21 --dport 1024:65535 -j ACCEPT
    But when I connect to an external FTP server I manage to log in, and all is fine until it tries to list the directory contents. Then nothing happens, because the data connection is blocked - I do not have a rule set up to allow it (my last rule on the FORWARD chain blocks all traffic). I have tried a gazillion rules (many of which I did not understand) to try to allow the FTP traffic back through my server. One such rule, for example, was:
      iptables -A FORWARD -i eth1 -o eth0 -p tcp --sport 20:21 --dport 1024:65535 -j ACCEPT
    but I cannot get the listing to work; it just times out after a while. Would anyone know how to build a rule that allows the directory listing / lets such traffic back in, or have a link to sources I could work through? Thank you. (A conntrack-based sketch follows this entry.)

    Read the article
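
    The missing piece is usually connection tracking: with the FTP helper module loaded, the data connection is marked RELATED and a single state rule lets it back in, for both active and passive mode. A generic sketch:

      # load the FTP connection-tracking helper (older kernels call it ip_conntrack_ftp)
      modprobe nf_conntrack_ftp

      # let replies and related data connections back through the FORWARD chain
      iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

      # load the helper at boot on Debian-style systems (path is an assumption)
      echo nf_conntrack_ftp >> /etc/modules

    The ESTABLISHED,RELATED rule has to sit before the final drop rule on the FORWARD chain; without the helper, passive-mode data connections land on random high ports and never match anything.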

  • How do I change the default ftp folder in MacOS X 10.6?

    - by Wild_Eep
    I'm running WordPress 2.9.1 from a Mac running 10.6.3. WordPress is installed to the /Library/WebServer/Documents folder. WordPress has a feature called AutoUpdate. Clicking an autoupdate button will download and install updated versions of the WordPress software, or third-party plugin tools. It's a convenient way to keep things up to date. WordPress uses FTP to download the files. I've enabled FTP and set up a user account and opened the requisite ports in my firewall for FTP traffic. This doesn't seem to be enough for my self-hosted installation, though. I'm sure this feature was originally designed for someone who has access to a remote shared webserver, and that it's merely a configuration challenge related to the FTP setup. I feel that if I can adjust the initial directory that the FTP service presents to the AutoUpdate feature, everything else will work properly. So, my question is, how do I adjust what folder is presented when a given user connects to a Mac running 10.6.3 via FTP?

    Read the article
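
    Mac OS X's bundled ftpd drops users into their account's home directory, so the usual approach is either to give the FTP account a home directory inside the WordPress root, or to tell WordPress where the site lives relative to the FTP root. Both sketches below use made-up account names and assume the /Library/WebServer/Documents install path from the question:

      # point a dedicated FTP user's home at the WordPress directory
      sudo dscl . -create /Users/wpftp NFSHomeDirectory /Library/WebServer/Documents

      # or, in wp-config.php, spell out the paths as seen over FTP:
      #   define('FTP_BASE',        '/Library/WebServer/Documents/');
      #   define('FTP_CONTENT_DIR', '/Library/WebServer/Documents/wp-content/');
      #   define('FTP_PLUGIN_DIR',  '/Library/WebServer/Documents/wp-content/plugins/');

    Either way, when AutoUpdate logs in over FTP it then sees the WordPress files at the expected location instead of the user's normal home folder.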

  • How to resolve 'No internet connectivity' issues with a virtualised 2008 R2 server running Forefront UAG

    - by user684589
    I have spent some considerable time reading up on as many possible blogs and articles as I can to help me solve why my VM (Running on Hyper-V) for DirectAccess has suddenly stopped being able to access the internet. The VM setup shares the same internet connection on which I have written and submitted this question so I know that the actual underlying internet connection is fully functional. Previous to last week the DirectAccess was fully functional and had no issues. This is a recent problem which was led up to by a number of consistent crashes on the DA machine when access was attempted. Upon reboot all seemed well until recently. I am not certain whether it is relevant, but previously to this I had a number of power issues where the entire VM host shutdown unexpectedly leaving around 8 VM's in a bad way. Upon restart, the UAG DirectAccess machine was unable to access its configuration service (although the service was started) but this seemed to relate to the Light-Weight Active Directory Service AD LDS which had a corrupted database. Having repaired this database, I restarted the service and could subsequently reconnect to the configuration service again. For good measure I re-bound the network adapters (virtualised through Hyper-V) and DirectAccess claimed to be all happy again. However as it stands my machine is still unable to access the internet showing the "No internet connectivity" exclamation mark for the external facing NIC. I have also tried removing the adapters, disabling, re-enabling and the problem persists. The intranet part of the VM CorpNet seems to be fully functional as before and I'm running out of ideas. Any input would be greatly appreciated. I am not an advanced Domain Administrator so please be gentle.

    Read the article
