Search Results

Search found 16086 results on 644 pages for 'mod include'.


  • A brand new FS based on a database, without using FUSE

    - by Devrim
    Hi all. To serve millions of files out of a single directory, to be able to connect to a drive from hundreds of endpoints, and for some other reasons (to avoid gluster/NFS/all FS-based networking solutions), I want to evaluate the possibility of building a filesystem backed by MongoDB (or any other database). Basically it works like FUSE: every single file is kept in Mongo GridFS. In theory, I do:

        mount mongodbfs /mountPoint mongodb://localhost

    and then when I say:

        touch /mountPoint/test.txt

    this file is inserted into MongoDB. This FS will also store uid/gid and perms with the file, so we can throw hundreds of servers at it and no useradd will be necessary. I'm not planning to include all the features of an FS, just the ones we need. My question is: how do I start my quest in finding resources, books, links, people, and developers who'd help me implement this, at least as a proof of concept? Is it feasible? What should I expect as a timeline for such an undertaking? Please only think about a gazillion small files and folders.
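
    A minimal sketch of the storage layer (an illustration, not the asker's code): Python with pymongo's GridFS, keeping the file bytes plus uid/gid/mode metadata, which is the part such a filesystem would wrap syscalls around.

        # Sketch: file payloads in GridFS, POSIX-ish metadata alongside.
        # Assumes pymongo is installed and mongod runs on localhost.
        from pymongo import MongoClient
        import gridfs

        db = MongoClient("mongodb://localhost")["mongodbfs"]
        fs = gridfs.GridFS(db)

        def touch(path, uid=0, gid=0, mode=0o644, data=b""):
            # One GridFS document per file; path and perms travel as metadata.
            return fs.put(data, filename=path, uid=uid, gid=gid, mode=mode)

        def read(path):
            return fs.get_last_version(filename=path).read()

        file_id = touch("/mountPoint/test.txt")
        print(read("/mountPoint/test.txt"))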

    Read the article

  • How can I prevent my web browser from redirecting to localhost when using WAMP on Windows 7?

    - by Josh
    I'm currently using Windows 7 with WAMP to try and work on some software, but my web browsers will not accept cookies from the "localhost" domain. I tried creating a few bogus domains in my hosts file by pointing them to 127.0.0.1, but when I type them in I am automatically redirected back to localhost. I have also configured virtual hosts in Apache to correspond with the domains I added to the hosts file, and it still redirects back to localhost. Is there anything special I must do on Windows 7 to get around this localhost redirect? Thanks for looking :) I'll include my hosts file here:

        # Copyright (c) 1993-2009 Microsoft Corp.
        #
        # This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
        #
        # This file contains the mappings of IP addresses to host names. Each
        # entry should be kept on an individual line. The IP address should
        # be placed in the first column followed by the corresponding host name.
        # The IP address and the host name should be separated by at least one
        # space.
        #
        # Additionally, comments (such as these) may be inserted on individual
        # lines or following the machine name denoted by a '#' symbol.
        #
        # For example:
        #
        #      102.54.94.97     rhino.acme.com          # source server
        #       38.25.63.10     x.acme.com              # x client host

        # localhost name resolution is handled within DNS itself.
        #    127.0.0.1       localhost
        #    ::1             localhost

        127.0.0.1    magento.localhost.com www.localhost.com
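
    One thing worth checking (a hedged sketch, not taken from the question): each bogus domain needs its own ServerName/ServerAlias in a name-based virtual host, and name-based hosting must be enabled, or every request falls through to the first (localhost) vhost.

        # Sketch for an Apache 2.2-era WAMP httpd-vhosts.conf; the DocumentRoot
        # path is an example. One such block per bogus domain.
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName magento.localhost.com
            ServerAlias www.localhost.com
            DocumentRoot "C:/wamp/www/magento"
        </VirtualHost>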

    Read the article

  • Manual NAT on Checkpoint (Redirect all http requests to a local web server)

    - by kulakli
    Hi, we have a proxy server on the internal network, and I want to redirect all internet HTTP requests to a web server on the local network. It'll be like a network billboard that says "No direct connection is available. Set up your proxy, etc." For example:

    1. A user starts the computer and opens the browser.
    2. They try to open www.google.com and should see the local web server's output instead.
    3. They try another internet web site and should again see the local web server's output.
    4. They set up the proxy, try to connect to a web site, and the site should load normally.

    I have added a simple manual NAT rule for address translation in the Checkpoint firewall, but it simply does not work. Here is my address translation rule:

        Source   Destination   Service   T.Source   T.Destination   T.Service
        MY_PC    A_GOOGLE_IP   ALL       ORIGINAL   INT_WEB_SRV     ORIGINAL

    When I ping A_GOOGLE_IP, replies come from INT_WEB_SRV, as I expected. However, when I try to connect to A_GOOGLE_IP from a browser (http://A_GOOGLE_IP), no replies come back; the connection sits in SYN_SENT and falls into a timeout. When I look at the firewall log of INT_WEB_SRV, I can see the incoming connection requests from MY_PC are accepted, with no denies. By the way, there is no problem browsing to INT_WEB_SRV directly (http://INT_WEB_SRV). My understanding is that my NAT rule on the Checkpoint NGX R60 does not handle the return packets. I definitely need some help. Regards, Burak

    Read the article

  • Fast extraction of a time range from syslog logfile?

    - by mike
    I've got a logfile in the standard syslog format. It looks like this, except with hundreds of lines per second:

        Jan 11 07:48:46 blahblahblah...
        Jan 11 07:49:00 blahblahblah...
        Jan 11 07:50:13 blahblahblah...
        Jan 11 07:51:22 blahblahblah...
        Jan 11 07:58:04 blahblahblah...

    It doesn't roll at exactly midnight, but it'll never have more than two days in it. I often have to extract a timeslice from this file. I'd like to write a general-purpose script for this that I can call like:

        $ timegrep 22:30-02:00 /logs/something.log

    ...and have it pull out the lines from 22:30, onward across the midnight boundary, until 2am the next day. There are a few caveats:

    - I don't want to have to bother typing the date(s) on the command line, just the times. The program should be smart enough to figure them out.
    - The log date format doesn't include the year, so it should guess based on the current year, but nonetheless do the right thing around New Year's Day.
    - I want it to be fast -- it should use the fact that the lines are in order to seek around in the file and use a binary search.

    Before I spend a bunch of time writing this, does it already exist?
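
    For the "seek and binary search" part, a rough sketch of the core idea (my own illustration, not an existing tool): bisect on byte offsets, realigning to the next newline after each seek, and compare only the "MMM dd HH:MM:SS" prefix. Date rollover and the end bound are left out for brevity.

        #!/usr/bin/env python
        # Sketch: find the first byte offset whose log line is >= the target time.
        import os, sys, time

        def stamp(line, year):
            # "Jan 11 07:48:46 ..." -> comparable struct_time
            return time.strptime("%d %s" % (year, line[:15]), "%Y %b %d %H:%M:%S")

        def first_offset(path, target, year):
            lo, hi = 0, os.path.getsize(path)
            with open(path, "rb") as f:
                while lo < hi:
                    mid = (lo + hi) // 2
                    f.seek(mid)
                    f.readline()              # realign to the next full line
                    line = f.readline().decode("ascii", "replace")
                    if not line or stamp(line, year) >= target:
                        hi = mid              # answer is at or before mid
                    else:
                        lo = mid + 1
            return lo

        year = time.localtime().tm_year
        target = time.strptime("%d Jan 11 22:30:00" % year, "%Y %b %d %H:%M:%S")
        print(first_offset(sys.argv[1], target, year))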

    Read the article

  • Preserve embedded album art when converting from .flac to .ogg

    - by Profpatsch
    I want to convert my archived .flac library to .ogg for daily use. Using

        find ./ -iname '*.flac' -print0 | xargs -0 -n1 oggenc -q6

    on the root music folder and then deleting every .flac (having copies of them in archive) seems straightforward. After trying it with one file it worked, and all of the tags were transferred too, except for one: embedded album art! I always prefer embedded covers over folder images, since I have some albums with varying covers. One possible solution is discussed here, but the script only works if the image is already extracted: Embed album art in OGG through command line in linux. One possible solution I thought about was extracting the album art from every song (not every song has one, though, and some even have 2 or 3!), temporarily saving it, and then using the script to include it in the finished .ogg. But then I want to increase the number of processes xargs runs simultaneously to save time, so the temp images need to have distinct names. Is there a (Linux) program that knows how to handle this? Or is there a finished script floating around somewhere? It would be nice if oggenc supported adding embedded cover art; it really is a shame, since these two formats should (in theory) share the same tag format. Edit: 15 days and no one has even tried to answer. It's funny, most of my questions don't get answered. Too hard? Wrong SE site?
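
    A partial sketch of the extraction half (hedged: the embedding step still belongs to the linked script, and metaflac/mktemp availability is assumed): exporting each file's picture under a unique temp name sidesteps the collision problem when xargs runs several jobs at once.

        # For each .flac, export its embedded picture to a unique temp file;
        # files without art simply produce no output. Embedding the picture
        # into the finished .ogg is left to the script from the linked question.
        find . -iname '*.flac' -print0 | while IFS= read -r -d '' f; do
            art=$(mktemp /tmp/cover.XXXXXX)
            if metaflac --export-picture-to="$art" "$f" 2>/dev/null; then
                echo "$f -> $art"
            else
                rm -f "$art"
            fi
        done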

    Read the article

  • Connection timed out on Node.js app running under CentOS

    - by ss1271
    I followed this tutorial to create a simple node.js app on my CentOS box. The node.js version is:

        $ node -v
        v0.10.28

    Here's my app.js:

        // Include http module,
        var http = require("http"),
        // And url module, which is very helpful in parsing request parameters.
            url = require("url");

        // show message at console
        console.log('Node.js app is running.');

        // Create the server.
        http.createServer(function (request, response) {
            request.resume();
            // Attach listener on end event.
            request.on("end", function () {
                // Parse the request for arguments and store them in _get variable.
                // This function parses the url from request and returns object representation.
                var _get = url.parse(request.url, true).query;
                // Write headers to the response.
                response.writeHead(200, {
                    'Content-Type': 'text/plain'
                });
                // Send data and end response.
                response.end('Here is your data: ' + _get['data']);
            });
        // Listen on the 8080 port.
        }).listen(8080);

    However, when I uploaded this app onto my remote server (assume the address is 123.456.78.9), I couldn't get access to it in my browser at http://123.456.78.9:8080/?data=123 -- the browser returned Error code: ERR_CONNECTION_TIMED_OUT. The same app.js code runs fine on my local machine, so is there anything I am missing? I tried to ping the server and its address was reachable. Thanks.
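
    A connection timeout (rather than a refusal) usually points at a packet filter, so one hedged first check -- assuming a stock CentOS 6-style iptables setup, since the question doesn't say -- is whether the port is listening and whether the firewall lets TCP 8080 through:

        # Is node actually listening on 8080 on all interfaces?
        netstat -tlnp | grep 8080

        # Does iptables drop it? (The CentOS default policy often does.)
        iptables -L -n | grep 8080

        # Temporarily open the port to test, then persist it if it works.
        iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
        service iptables save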

    Read the article

  • Hide/Replace Nginx Location Header?

    - by Steven Ou
    I am trying to pass a PCI compliance test, and I'm getting a single "high risk vulnerability". The problem is described as: "Information on the machine on which a web server is located is sometimes included in the header of a web page. Under certain circumstances that information may include local information from behind a firewall or proxy server, such as the local IP address." It looks like Nginx is responding with:

        Service: https
        Received: HTTP/1.1 302 Found
        Cache-Control: no-cache
        Content-Type: text/html; charset=utf-8
        Location: http://ip-10-194-73-254/
        Server: nginx/1.0.4 + Phusion Passenger 3.0.7 (mod_rails/mod_rack)
        Status: 302
        X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 3.0.7
        X-Runtime: 0
        Content-Length: 90
        Connection: Close

        <html><body>You are being <a href="http://ip-10-194-73-254/">redirected</a>.</body></html>

    I'm no expert, so please correct me if I'm wrong, but from what I gathered I think the problem is that the Location header is returning http://ip-10-194-73-254/, which is a private address, when it should be returning our domain name (which is ravn.com). So I'm guessing I need to either hide or replace the Location header somehow? I'm a programmer and not a server admin, so I have no idea what to do... Any help would be greatly appreciated! Also, might I add that we're running more than one server, so the configuration would need to be transferable to any server with any private address.
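
    One hedged guess at a fix (an assumption about the cause, not a confirmed diagnosis: the app appears to build the redirect from the Host header, so a scan that connects by bare IP gets the machine's own hostname back): add a catch-all server block that canonicalizes the name before the request ever reaches the application. Being host-independent, it would transfer to any server.

        # Sketch for nginx 1.0.x; ravn.com is taken from the question, the
        # rest is assumption. Requests for any unknown Host get redirected.
        server {
            listen 80 default_server;
            server_name _;
            rewrite ^ http://ravn.com$request_uri? permanent;
        }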

    Read the article

  • How to configure default text selection behavior in Windows XP, 7? (eg. mouse click selects entire word vs. mouse click inserts an active cursor)

    - by Mouse of Fury
    I find the mouse click behavior of Windows XP and Windows 7 annoying and intrusive. I don't remember Windows NT being quite this bad, or Mac OS 7-10, which I used in the nineties. When I'm using a browser and I click on a text field -- for example, the address bar, or a search box -- the first thing that happens is the entire field is selected. Subsequent clicks seem to select parts of words, often deciding arbitrarily to exclude or include adjacent punctuation. The same happens in Excel and other apps, and when trying to rename files, so I'm assuming this behavior comes from a system-wide text handling routine. I frequently want to edit text, cut out or replace odd parts of the insides of words or chunks of sentences, and often find that to get a simple cursor to insert I have to click the mouse up to 4 times in succession. I've had to do a lot of this recently and it has been driving me insane. Is there a place at the system level where this can be configured? In a perfect world, I'd like a single click on a new text area to insert a cursor point, and a rapid double click to select the entire area. Words or text within the area could be selected by inserting a cursor, holding down the mouse button, and dragging to the exact point where I want the selection to end -- even if that's in the middle of a word. No, I don't need or want Windows to "smart select" a word or sentence for me. I've looked in the Mouse and Accessibility Options control panels (Windows XP) and haven't found anything even close. Thanks -

    Read the article

  • How do you set up CRON?

    - by user1723760
    I have never used cron before, but I want to use it to perform scheduled jobs for a PHP script. The PHP script is called "inactivesession.php" and contains this code:

        <?php
        include('connect.php');

        $createDate = mktime(0, 0, 0, 10, 25, date("Y"));
        $selectedDate = date('d-m-Y', $createDate);

        $sql = "UPDATE Session SET Active = ? WHERE DATE_FORMAT(SessionDate, '%Y-%m-%d') <= ?";
        $update = $mysqli->prepare($sql);
        // bind_param() needs variables passed by reference, not a literal 0
        $active = 0;
        $update->bind_param("is", $active, $selectedDate);
        $update->execute();
        ?>

    What I want is that when the above date is reached (25th Oct), the script performs the UPDATE statement. But my question is: how do I use cron to do this? The server I am using is the university's server, known as helios. Does cron need to be set up on helios (do I have to call the admin for this), or is it something else which uses cron? I have never used cron before, so can you explain how it can be set up for the example above on the server I am using? Thanks. UPDATE: I think the name of the server is actually helios, but I am not sure; there is a Wikipedia page on this here. I will read the crontab Wikipedia page and see what I can find, but my question remains: is cron already set up? Do I just go right ahead and use it, or do I need to set it up first?
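
    As a hedged illustration of what the crontab entry itself could look like (paths are made up; on a shared host like helios you would typically run crontab -e as your own user, assuming the admins allow it):

        # m h dom mon dow  command
        # Run inactivesession.php at 00:05 on the 25th of October each year.
        5 0 25 10 * /usr/bin/php /home/user1723760/inactivesession.php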

    Read the article

  • Is Ubuntu a viable replacement of Windows XP for small enterprise environments?

    - by Alex. S.
    Hi all, I'm a newbie systems administrator, so any advice would be great. I would like to set up Ubuntu 8.04 LTS in a small management-consulting office (around 50 workstations) instead of Windows XP. I would install MS Office 2007 via WINE (*). It would be a fresh installation, so the migration would be less of a pain. The new setup would also include a small server as a document repository and a backup server for now. Later, I would install other goodies like an IM server, a document management solution, and whatever collaborative tool. What do you advise in this scenario? Do you think it is viable? Should I try to convince my managers this is a good idea? I consider myself a fairly experienced user of both systems, and I'm the only guy in charge of everything. I need to cut costs down, and I think that antivirus and antimalware software are a waste of money and time. Is this a good idea, or should I resign myself to locking down the Windows systems and installing AV software? Is there anything else in this setup I'm not foreseeing? (*) The only catch on my test machine so far has been that Office SmartArt doesn't work properly; the rest of Office 2007 seems OK.

    Read the article

  • Multiple Rails apps on same subdomain?

    - by Derek
    I recently decided to try out Rails. When working with PHP, I simply had all of my PHP projects in the same directory. For example, I might have http://ubuntu/app1, http://ubuntu/app2, etc. I created a subdomain for Rails (http://ruby.ubuntu), installed Rails and Passenger, and everything is working. However, I may be wrong, but it looks like I can only have one Rails app per subdomain? My VirtualHost is as follows:

        <VirtualHost *:80>
            ServerName ruby.ubuntu
            ServerAdmin webmaster@localhost

            DocumentRoot /var/www/ruby/blog/public
            <Directory /var/www/ruby/blog/public>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
                RailsEnv development
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    All of my PHP and miscellaneous files are stored in /var/www/main. I want to be able to store all of my Rails apps in /var/www/ruby. I tried changing DocumentRoot to /var/www/ruby, but I don't think it's as simple as that. When I browse to a Rails app's Welcome Aboard page and click on "About my application's environment," I get a 404 page, but when the DocumentRoot is set to the public directory, I get the expected result. I don't want to have to create a new subdomain every time I create a new project. Is there any way I can make it so I can store all of my apps in /var/www/ruby, and browsing to http://ruby.ubuntu will let me access all of my Rails apps there? That way if I want to create a new app, all I have to do is rails new app -- no Apache .htaccess or VirtualHost configuration required.
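
    Passenger does support serving several apps beneath one virtual host via sub-URIs; a hedged sketch of the usual pattern (app and path names are examples), where each app's public/ directory is symlinked into a shared DocumentRoot:

        # ln -s /var/www/ruby/blog/public  /var/www/ruby/shared/blog
        # ln -s /var/www/ruby/store/public /var/www/ruby/shared/store
        DocumentRoot /var/www/ruby/shared
        <Directory /var/www/ruby/shared>
            Allow from all
            Options -MultiViews
        </Directory>
        RailsBaseURI /blog
        RailsBaseURI /store

    Each new app would still need one symlink and one RailsBaseURI line, but no new subdomain or VirtualHost.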

    Read the article

  • WSUS performance for unneeded updates

    - by mhouston100
    We have a WSUS server serving around 300 PCs and a couple of dozen servers, and a discussion came up at work as to which products to include. We have a single SQL 2005 instance on one of the servers and it has NEVER been updated. My first thought was to just tick the box for SQL 2005 and let WSUS do its thing to upgrade to the latest service pack at least. One of the other guys here is of the opinion that having updates that are relevant to only a small selection of hosts would affect the performance of WSUS as a whole, claiming that each update does a 'check' against all the hosts or something similar. My argument is that manually updating these servers is obviously not working, as the admins are not paying attention to what is needed. So my questions are: Do updates that only affect a subset of the hosts affect the overall performance of the WSUS server in relation to ALL the hosts? (Disk space is not an issue at this point.) Is there any performance justification for or against manually updating small numbers of products? Basically I'm needing a rebuttal against his argument, and I'm unable to find any concrete documentation to prove him wrong.

    Read the article

  • How to access the original files from before a symlink was updated, after they have been moved to another dir

    - by Luke Cousins
    We have a website, and our deployment process goes somewhat like the following (with lots of irrelevant steps excluded):

        echo "Remove previous, if it exists, we don't need that anymore"
        rm -rf /home/[XXX]/php_code/previous

        echo "Create the current dir if it doesn't exist (just in case this is the first deploy to this server)"
        mkdir -p /home/[XXX]/php_code/current

        echo "Create the var_www dir if it doesn't exist (just in case this is the first deploy to this server)"
        mkdir -p /home/[XXX]/var_www

        echo "Copy current to previous so we can use temporarily"
        cp -R /home/[XXX]/php_code/current/* /home/[XXX]/php_code/previous/

        echo "Atomically swap the symbolic link to use previous instead of current"
        ln -s /home/[XXX]/php_code/previous /home/[XXX]/var_www/live_tmp && mv -Tf /home/[XXX]/var_www/live_tmp /home/[XXX]/var_www/live

        # Rsync latest code into the current dir, code not shown here

        echo "Atomically swap the symbolic link to use current instead of previous"
        ln -s /home/[XXX]/php_code/current /home/[XXX]/var_www/live_tmp && mv -Tf /home/[XXX]/var_www/live_tmp /home/[XXX]/var_www/live

    The problem we are having, and would like help with, is that the first thing any page load does is work out the base dir of the application and define it as a constant (we use PHP). If a deployment occurs during that page load, the system tries to include() a file using the original full path and gets the new version of that file. We need it to get the old one from the old dir, which has now moved, as in:

    1. The system starts a page load and determines the SYSTEM_ROOT_PATH constant to be /home/[XXX]/var_www/live, or, by using PHP's realpath(), /home/[XXX]/php_code/current.
    2. The symlink /home/[XXX]/var_www/live gets updated to point to /home/[XXX]/php_code/previous instead of /home/[XXX]/php_code/current as it did originally.
    3. The system tries to load /home/[XXX]/var_www/live/something.php and gets /home/[XXX]/php_code/current/something.php instead of /home/[XXX]/php_code/previous/something.php.

    I'm sorry if that is not explained very well. I'd really appreciate some ideas on how to get around this problem if someone can. Thank you.
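
    One hedged way around this (my suggestion, not from the post): the Capistrano-style layout, where every deploy goes into its own timestamped release directory that is never reused or mutated, so an in-flight request that resolved the symlink via realpath() keeps a valid path for as long as that release is kept around.

        # Sketch: deploy into a fresh dir, flip the symlink, prune old releases.
        # [XXX] is the placeholder from the post; the build path is an example.
        REL=/home/[XXX]/php_code/releases/$(date +%Y%m%d%H%M%S)
        mkdir -p "$REL"
        rsync -a /path/to/build/ "$REL"/
        ln -s "$REL" /home/[XXX]/var_www/live_tmp
        mv -Tf /home/[XXX]/var_www/live_tmp /home/[XXX]/var_www/live
        # Keep the last few releases so long-running requests can finish.
        ls -1dt /home/[XXX]/php_code/releases/* | tail -n +4 | xargs rm -rf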

    Read the article

  • Recommended Smartphone for Reading PDFs [closed]

    - by mika
    This is as much a software question as a hardware one. I use a lot of public transport, and perhaps the best way to spend your time there is to read while listening to music. Currently I use a Nokia E90 and Adobe Reader LE 2.5 (full version). I was wondering if there are any better alternatives? Requirements:

    - at least a 640px-wide screen, preferably 800px
    - the physical size of the LCD display matters: it should be large, but the phone itself should be as small as possible (this favors touchscreen models)
    - the PDF reader should be of high quality and should render most PDFs correctly

    Other important features include: full screen mode, keyboard controls for Page Down and page change, multiple zoom levels to adjust to the screen, and opening recent documents at the last page read. Downsides of the E90 + Adobe Reader LE:

    - the phone is large compared to the display
    - it is hard to read the display in sunlight
    - Adobe Reader crashes the phone regularly, zoom could have more levels, and it doesn't remember the last page

    EDIT: Switched to an iPhone and GoodReader. The smaller physical screen width compared to the E90 is a disimprovement, but other than that I'm happy. GoodReader is the highest quality smartphone PDF reader I've seen so far.

    Read the article

  • Removing Paths/ Landing Pages From SharePoint Search Results

    - by j.strugnell
    Hi there, we've been asked by a client to remove a number of pages from being shown in their public website's search results page. I've been into the SSP and created crawl rules to remove these pages. All seemed to have worked OK, but we have an issue in that landing pages are still showing up in their "www.domain.com/sitearea/" form, though not in their "www.domain.com/sitearea/pages/default.aspx" form. For each page of this type we have created one rule to "Exclude" the "aspx" path and another rule to include the "/" path but to "Follow links on the URL without crawling the URL itself". We tried adding rules to exclude the "/" format, but that only resulted in all results underneath it being excluded. Does anybody know how to remove both the "area/pages/default.aspx" and the "area/" paths from search results? I'm not sure if it's the done thing to ask two questions in one, but this is in a similar vein, so it should be OK: I was wondering if anyone knew of a tool (or if it is possible) to allow site admins to exclude pages from search results (not via SSP/crawl rules). I know they can do it at the site level, but I was wondering if anything out there enabled this to be done at the page level through either Page or Site Settings?

    Read the article

  • Telugu (unicode) font rendering in emacs

    - by Prakash K
    [I asked the following question on Stack Overflow, and I have been redirected here. I hope I can get some answers here. My question at Stack Overflow had two small images showing the example rendering of text. As a new user at Super User, I am not being allowed to include them here, nor am I allowed to post more than one hyperlink. And I don't have enough reputation on SO to migrate that question. Please look at the Stack Overflow question for the images. Sorry about the inconvenience.] I sometimes edit text in the Telugu language. However, when I open the file (UTF-8 encoded) in GNU Emacs (version 23.1.50.1 on Ubuntu Jaunty), the text rendering is incorrect. The same text file opened in gedit is rendered correctly. Here's a snippet: ????????? ???? ???? ???????? As rendered in gedit: please see the SO question for the image showing Telugu text rendering in gedit. And the Emacs rendering of the same text: please see the SO question for the image showing Telugu text rendering in Emacs. Wherever glyphs need to be composited (not sure if that's the right word), Emacs (or whatever library it uses) is not doing it right. Is there any way to fix this? Perhaps tuning some setting in my configuration? Any ideas, please?

    Read the article

  • JavaScript doesn't seem to be able to POST form data (nginx server w/ php-fpm)

    - by Jones
    So the situation is like so: I have an nginx server with php-fpm installed. All is well; the site scripts all work perfectly, and I am able to use HTML to POST form data and it works just fine. However, there seems to be some correlation between JavaScript, the POST method, and nothing happening, and I can't seem to determine the issue. Example: I have a user login widget that uses JavaScript to, on submit, collect the fields and POST the data to a backend auth script, which returns a server message that then populates the login box saying something like "Login Successful", followed by reloading the page to properly enable content. Problem is, nothing happens when you hit submit. I do know the setup works, because I had it working on Apache before migrating. Also, if it makes any difference, the server is an Amazon EC2 instance using the Amazon AMI. I really don't know where to start looking on this one, but below is my default.conf for the server:

        upstream backend_get {
            server 127.0.0.1:80 weight=1;
        }
        upstream backend_post {
            server 127.0.0.1:80 weight=1;
        }

        # Main website url
        server {
            listen       80;
            server_name  server.com;

            #charset koi8-r;
            access_log  logs/host.access.log  main;
            error_log   logs/host.error.log;

            location / {
                root   /usr/share/nginx/html;
                index  index.php index.html index.htm;
                if ($request_method = POST) {
                    proxy_pass http://backend_post;
                    break;
                }
            }

            location ~ \.php$ {
                #fastcgi_pass 127.0.0.1:9000;
                fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }
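
    A hedged observation on that config (an assumption about the cause, not a confirmed diagnosis): both upstreams point back at 127.0.0.1:80, i.e. this same server block, so a POST to / is proxied to itself in a loop instead of ever reaching PHP. If the goal is simply for POSTs to reach the PHP handlers, the if/proxy block can likely be dropped; a sketch, assuming the site routes through index.php (otherwise just delete the if-block and keep the original location as-is):

        # Sketch: let nginx route POSTs like any other request; the .php
        # location already forwards the request body to php-fpm.
        location / {
            root  /usr/share/nginx/html;
            index index.php index.html index.htm;
            try_files $uri $uri/ /index.php$is_args$args;
        }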

    Read the article

  • Apache2 403 permission denied on Ubuntu 12.04

    - by skeniver
    I have a sub-directory in my /var/www folder called prod, which is password protected. It was all working fine until I asked my server admin to help me set up "allow all" access to one particular file. Now the entire folder is just giving me a 403 error. This is the sites-enabled file:

        <VirtualHost *:80>
            ServerAdmin [email protected]

            # Server name
            ServerName prod.xxx.co.uk

            DocumentRoot /var/www/prod
            <Directory /var/www/prod>
                Options Indexes FollowSymLinks MultiViews +ExecCGI Includes
                AllowOverride None
                Order allow,deny
                AuthType Basic
                AuthName "Please log in"
                AuthUserFile /home/ubuntu/.htpasswd
                Require valid-user
            </Directory>

            <Directory /var/www/prod/xxx/cgi-bin/api.pl>
                Allow from All
                Satisfy Any
            </Directory>

            ScriptAlias /xxx/cgi-bin/ /var/www/prod/xxx/cgi-bin/

            ErrorLog ${APACHE_LOG_DIR}/prod.xxx.error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/prod.xxx.access.log combined
        </VirtualHost>

    Now he's unsure why this is blocking me out completely. No permissions have been changed, but this is the /var/www/ folder:

        4 drwxr-xr-x 2 root root     4096 Jan  3 21:10 images
        4 drwxr-sr-x 4 root www-data 4096 Mar 31 14:47 jslib
        4 drwxr-xr-x 7 root root     4096 Jun  2 13:00 prod

    When I try to visit http://prod.xxx.co.uk, I don't get asked for the password; I just get 403'd. I hope I've given enough information... Is anyone able to spot something he can't?
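
    A hedged guess at the culprit (worth testing, not certain): in Apache 2.2, "Order allow,deny" with no matching "Allow from all" denies everyone regardless of authentication, and mixing bare Options keywords with +/- prefixed ones ("MultiViews +ExecCGI Includes") is a configuration error. Also, a single file is usually matched with <Files> or <Location> rather than <Directory>. A sketch of the relevant sections:

        <Directory /var/www/prod>
            # All Options must be prefixed consistently when using + or -.
            Options +Indexes +FollowSymLinks +MultiViews +ExecCGI +Includes
            Order allow,deny
            Allow from all          # without this, allow,deny denies all
            AuthType Basic
            AuthName "Please log in"
            AuthUserFile /home/ubuntu/.htpasswd
            Require valid-user
        </Directory>

        # Exempt the one CGI from authentication.
        <Location /xxx/cgi-bin/api.pl>
            Order allow,deny
            Allow from all
            Satisfy Any
        </Location>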

    Read the article

  • Coloring of collapsed threads in mutt

    - by Rich
    I'm trying to figure out the syntax for colouring collapsed threads in the mutt index. The documentation for mutt patterns doesn't seem to include a description of how this works, and so far I've been completely unable to figure it out by trial and error. What I'd like is for collapsed threads that contain any unread (new) messages to always be coloured green. If collapsed threads with no unread messages contain any flagged messages, then I'd like them to be red. So far, every set of patterns I've tried results in threads that contain both flagged and unread messages being coloured red (I want them green). These work:

        color index green default "~N"        # unread messages
        color index green default "~N~F"      # unread flagged messages
        color index red   default "~F"        # flagged messages
        color index green default "~v~(~N)"   # collapsed thread with unread

    But these don't:

        color index green default "~v~(~N~F)"       # attempt to keep threads with unread green
        color index red   default "~v~(~F)"         # colours collapsed threads with flagged and unread red
        color index red   default "~v~(!~N~F)"      # ditto
        color index red   default "~v~(^!~N~F)"     # ditto
        color index red   default "~v~(~F)~(!~N)"   # ditto
        color index red   default "~v~(~F)~v~(!~N)" # ditto

    I've also tried switching the order of the "~v~(~F)" and "~v~(~N)" commands in the file, but the "flagged" rule always seems to take precedence over the "new" rule. Ideally I'd like to understand how the syntax for colouring collapsed threads works, but at this point I'd happily settle for a set of rules that achieves the colourscheme described above.

    Read the article

  • Restricting access to one controller of an MVC app with Nginx

    - by kgb
    I have an MVC app where one controller needs to be accessible only from several IPs (this controller is an OAuth token callback trap, for Google/FB API tokens). My conf looks like this:

        geo $oauth {
            default         0;
            87.240.156.0/24 1;
            87.240.131.0/24 1;
        }

        server {
            listen 80;
            server_name some.server.name.tld default_server;
            root /home/user/path;
            index index.php;

            location /oauth {
                deny all;
                if ($oauth) {
                    rewrite ^(.*)$ /index.php last;
                }
            }

            location / {
                if ($request_filename !~ "\.(phtml|html|htm|jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|xlsx)$") {
                    rewrite ^(.*)$ /index.php last;
                    break;
                }
            }

            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }
        }

    It works, but does not look right. The following seems more logical to me:

        location /oauth {
            allow 87.240.156.0/24;
            deny all;
            rewrite ^(.*)$ /index.php last;
        }

    But this way the rewrite happens all the time, and the allow and deny directives are ignored. I don't understand why...
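
    A hedged explanation of the "why", based on nginx's request-processing phases: rewrite directives run in the rewrite phase, which comes before the access phase where allow/deny are evaluated, so "rewrite ... last" re-routes the request to the php location before the deny can ever fire. Moving the fallback into the content phase avoids that; a sketch:

        # Sketch: access checks run first, then try_files routes to index.php.
        location /oauth {
            allow 87.240.156.0/24;
            allow 87.240.131.0/24;
            deny  all;
            try_files /nonexistent /index.php$is_args$args;
        }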

    Read the article

  • Why won't MediaMonkey add one particular folder of MP3s?

    - by ChrisF
    I'm using the latest and greatest version of MediaMonkey (free version) and it won't find the MP3s in one particular folder of my music tree. It can see all the other files in the tree, and the folder shows up when I click Add/Rescan files to the library. I have full control over the folder and all the files in it. The files play in Windows Media Player, and they play in MediaMonkey if I right-click and play from the context menu. All the tracks are at least 2 minutes long and over 5MB, and MediaMonkey is set to ignore files smaller than 20KB and include all files regardless of length. There was an issue in that the genre of the tracks was set to "Classical", and the option that allows you to browse classical music independently of the other music isn't enabled in the free version; it's a Gold-only option. I hadn't spotted that my other classical music was also missing from the library (I have rather a large library). Once I retagged the music with a different genre and tried to add the files again, it reported that it had added the tracks, but they still don't show up in the library.

    Read the article

  • Having trouble with a workaround for booting from a USB stick, using GRUB and a minimal Linux kernel to load USB drivers

    - by s hanley
    I'm trying to boot from a USB stick. I formatted it to FAT32, and later to ext2, and installed DSL on it using UNetbootin, and later using the USB install guide on the DSL wiki (http://www.damnsmalllinux.org/wiki/index.php/Install_to_USB_From_within_Linux). The BIOS doesn't have a setting for booting from USB. GRUB doesn't "see" the USB drive when I use the root and find commands explained in (http://www.damnsmalllinux.org/wiki/index.php/USB_Booting). This happens even when I set boot from floppy at the top of the boot order. However, my USB keyboard is recognised by the BIOS and by GRUB. How can it recognise the keyboard but not the USB drive? Also, the USB LED does flash even before GRUB starts up, so surely something must be happening USB-wise? I am now following an Ubuntu guide to booting from a USB stick, using a hdd-based, minimal Linux kernel to supply the USB drivers. But I'm having difficulty adapting it to other OSes (Slax/DSL/aptosid). I believe I have to alter the initrd.gz file to include USB drivers and then copy that file along with vmlinuz to a partition on my hdd. But what's the GRUB command for the kernel line supposed to look like? From the Ubuntu example it's:

        title USB FLASH DRIVE
        root (hd0,6)
        kernel /boot/usb-boot/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper noprompt cdrom-detect/try-usb=true persistent
        initrd /boot/usb-boot/initrd.lz
        boot

    Should mine just be:

        title USB FLASH DRIVE
        root (hd0,6)
        kernel /boot/usb-boot/vmlinuz cdrom-detect/try-usb=true
        initrd /boot/usb-boot/initrd.lz
        boot

    Read the article

  • Cannot access new folders created in my Apache2 document-root

    - by user235101
    I have tried to create a new folder 'test' in the document root of my Apache2 installation; however, whenever I try to access it from a web browser it gives me a 403 (Forbidden) error. My virtualhosts file:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName REMOVED
            DocumentRoot /var/www/

            <Directory />
                Options FollowSymLinks
                AllowOverride All
                AuthType Digest
                AuthName "documentroot"
                AuthDigestProvider file
                AuthUserFile /etc/apache2/htpasswd
                Require user REMOVED
                AllowOverride Indexes
            </Directory>

            <Directory /var/www/>
                Options FollowSymLinks
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

            <Directory /var/www/share/>
                Order Deny,Allow
                Allow from all
                Satisfy any
            </Directory>

            <Directory /var/www/REMOVED/>
                Order Deny,Allow
                Allow from all
                Satisfy any
            </Directory>

            <Directory /var/www/stream/>
                Order Deny,Allow
                Allow from all
                Satisfy any
            </Directory>

            <Directory /var/www/test>
                Order Deny,Allow
                Allow from all
                Satisfy any
            </Directory>

            ErrorLog /var/log/apache2/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog /var/log/apache2/access.log combined

            <Directory /var/www/REMOVED>
                AuthType Digest
                AuthName "rutorrent"
                AuthDigestDomain /var/www/REMOVED/ http://46.105.127.19/REMOVED
                AuthDigestProvider file
                AuthUserFile /etc/apache2/htpasswd
                Require valid-user
                SetEnv R_ENV "/var/www/REMOVED"
                AllowOverride Indexes
            </Directory>
        </VirtualHost>

    Image of the permissions: [not reproduced here]. Other information: if I create a new folder (and use chmod --reference to ensure it has the same permissions as an accessible folder), I get a 403 client-side. If I copy the folder 'rapidleech' to the name 'rapidleech1', it will let me access 'rapidleech1' but no longer 'rapidleech', until I delete the copy. In my logs I found nothing logged in error.log, and only that it delivered a 403 in access.log. All the appropriate users are members of www-data.

    Read the article

  • What should I encrypt in Debian during install?

    - by ianfuture
    I have seen various guides and recommendations on the web about how best to do this, but nothing that clearly explains the best way and why. As I understand it, part of Debian needs to stay unencrypted on its own partition during install to allow it to boot; most info I have seen calls this /boot and sets the boot flag. Next, I believe the best approach is to create another partition out of all the remaining disk space, encrypt it, create an LVM on top of it, and then within the LVM create my various partitions, name them, and select their sizes and filesystem types. Can I include swap in the encrypted LVM part? Is this approach sound? If so, what partitions should I use (this is going to be a minimal server install, with a view to installing what I need for a dev server as and when)? Finally, how does the installer know what to put in each partition I define? I appreciate there is more than one question here, but any help and suggestions would be appreciated. If further clarification is needed, please mention it in the comments. Thanks.. Ian
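
    For what the layout looks like outside the installer, a hedged sketch of the same LVM-on-LUKS scheme done by hand (device names and sizes are examples; the Debian installer's guided "encrypted LVM" mode produces an equivalent result, including swap inside the encrypted volume):

        # /dev/sda1 stays plain for /boot; everything else goes inside LUKS.
        cryptsetup luksFormat /dev/sda2
        cryptsetup luksOpen /dev/sda2 cryptroot

        # One encrypted physical volume, carved into logical volumes.
        pvcreate /dev/mapper/cryptroot
        vgcreate vg0 /dev/mapper/cryptroot
        lvcreate -L 10G -n root vg0
        lvcreate -L 2G  -n swap vg0
        lvcreate -l 100%FREE -n home vg0

        mkfs.ext4 /dev/vg0/root
        mkswap    /dev/vg0/swap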

    Read the article
