Search Results

Search found 7850 results on 314 pages for 'except'.

Page 210/314 | < Previous Page | 206 207 208 209 210 211 212 213 214 215 216 217  | Next Page >

  • Why isn't Apache Basic authentication working?

    - by Brad
    I just upgraded Apache from its 2003 build to a squeaky-clean, brand-new 2.4.1 build. All seems pretty good except for one glaring thing. In my httpd.conf file I have the following:

        <Directory />
            AllowOverride none
            Options FollowSymLinks
            AuthType Basic
            AuthName "Enter Password"
            AuthUserFile /var/www/.htpasswd
            Require valid-user
        </Directory>

    This should allow only users in the specified auth file to access the server - just as it had under the older version of Apache. (Right?) However, it's not working: requests are granted with no authentication provided. When I switch logging to LogLevel debug, for the accesses it says:

        [Sat Mar 24 21:32:00.585139 2012] [authz_core:debug] [pid 10733:tid 32771] mod_authz_core.c(783): [client 192.168.1.181:57677] AH01626: authorization result of Require all granted: granted
        [Sat Mar 24 21:32:00.585446 2012] [authz_core:debug] [pid 10733:tid 32771] mod_authz_core.c(783): [client 192.168.1.181:57677] AH01626: authorization result of <RequireAny>: granted

    I really don't know what this means - and, to the best of my knowledge, I don't have any "Require all granted" or "<RequireAny>" statements in any of my files. Any ideas why this isn't working, or where to debug?
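
    The AH01626 lines come from Apache 2.4's rewritten authorization module (mod_authz_core): an explicit "Require all granted" elsewhere in the merged configuration - 2.4's stock httpd.conf ships one inside the DocumentRoot <Directory> block - wins over the authentication set on <Directory />. A sketch of where to look and what the winning block might become; the paths are illustrative, not this server's:

        # Sketch only - paths are illustrative. Find the competing directive first:
        #   grep -rn "Require all granted" /usr/local/apache2/conf /etc/httpd /etc/apache2
        # Then put the authentication on the section that actually serves the content,
        # replacing its "Require all granted":
        <Directory "/var/www/html">
            AuthType Basic
            AuthName "Enter Password"
            AuthUserFile /var/www/.htpasswd
            Require valid-user
        </Directory>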

    Read the article

  • Gaming blew fuse: how to overcome?

    - by George Tomlinson
    I've been gaming for a while now. When playing certain games this PC goes into overdrive: the fans get so busy they start to sound like a jet engine, and I have smelt burning when this has happened. Recently the fuse blew on the 4-socket adapter I was using.

    On the following thread, someone said this could be due to the PSU not being strong enough to handle the load, in what seems to be a related issue (although the person who posted that question did say that blowing a fan on their PC stopped it crashing in their case): http://www.tomshardware.co.uk/answers/id-2047543/gtx-650-overheating-issue.html. This is exactly what they said: "Your GPU isn't overheating. 70+ before it would shutdown and cause a restart. Make sure your PSU is strong enough to handle your new system at load and possibly run Memtest to check your RAM (although not BSOD'ing and just shutting down points to the PSU)."

    The PSU part makes more sense to me than it being to do with dust etc., since it seems a more plausible explanation of why the fuse blew. The PC has no problems except when playing certain games, i.e. TERA Rising and WoW with add-ons (I think WoW is OK as long as I don't have more than one add-on - Healers Have To Die). I'm just wondering if anyone knows or can suggest what I might be able to do to play these games without this problem occurring.

    The PC's spec is:

        Display: NVIDIA GeForce GTX 650
        RAM: 8GB (6GB available)
        Processor: AMD FX(tm)-8120 Eight-Core Processor - 3.1 GHz, 4 cores, 8 logical processors

    I have read on another post that forcing vsync in the NVIDIA Control Panel helped with what seems to be a similar problem, so I plan to see if that solves it, God permitting.

    Read the article

  • Nginx settings are screwing up my Drupal form submissions, how do I fix?

    - by bflora
    How do I tell Nginx to "ignore" specific URLs or pages on my web site? I run a Drupal site where anonymous visitors get served via Nginx while logged-in users get served via Apache. We do this to keep the load down and scale better. It works great, except that since we set up Nginx, a good number of Drupal forms no longer work.

    For example, before installing Nginx, if you created a new article, then clicked "edit" and edited the article, you could click "save" and your changes to the article would be saved. After setting up Nginx, when you make edits and then click "save", the page simply refreshes, but now with "nginx-index.php" inserted into the URL, and your changes to the form are not actually saved to the database. So if you go to edit an article, you'll be on domain.com/node/##/edit or something like that. When you try to save your changes to the form, you'll wind up at domain.com/nginx-index.php?q=node/##/edit, and your changes will not be saved.

    There is a way around this, but only for administrative users. If you go to a form where this problem is happening and toggle three lines in our settings.php file (uncomment them if they're commented, or comment them out if they're uncommented), the form will save properly. Those three lines are:

        // 'cache_form' => array(
        //   'engine' => 'db',
        // ),

    Obviously, this sucks. My friend who set up our server (and then left the country) told me that there are some Nginx settings that can tell it to "ignore" certain URLs or pages, which could work here. How do I do this and where do I do it?
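
    A sketch of the kind of Nginx exception the question is asking about, assuming Nginx fronts the site and the Apache backend that serves logged-in users listens on 127.0.0.1:8080 - the upstream address and the list of form paths are assumptions, not the site's real values:

        # Sketch only - paths and upstream are illustrative.
        # Requests matching these Drupal form URLs bypass the static/rewrite handling
        # and go straight to the Apache backend, so form submissions are not rewritten.
        location ~ ^/(node/.*/edit|comment|user) {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://127.0.0.1:8080;
        }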

    Read the article

  • reverse proxying with NGINX to two back-end servers

    - by aag
    I am trying to learn how to configure the Nginx proxy. All requests from external (www.external.com) should go to internal server 10.10.10.16:2080, except for www.external.com/nagios requests, which should go to internal 10.10.10.18. My location block looks as follows:

        location ~* / {
            proxy_buffers 16 4k;
            proxy_buffer_size 2k;
            proxy_buffering off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Accept-Encoding "";
            proxy_pass http://10.10.10.16:2080;
        }

        # nagios server
        location ~* /nagios/ {
            proxy_buffers 16 4k;
            proxy_buffer_size 2k;
            proxy_buffering off;
            # proxy_set_header Host $host;
            # proxy_set_header X-Real-IP $remote_addr;
            # proxy_set_header Accept-Encoding "";
            proxy_pass http://10.10.10.18;
        }

    The first location seems to work fine. However, any request to www.external.com/nagios sends the browser into the eternal pastures. Of course, 10.10.10.18/nagios was tested and works fine. What am I missing?
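
    The likely culprit is location matching order: with two regex locations, Nginx uses the first regex that matches in file order, so "location ~* /" swallows the /nagios/ requests before the second block is ever considered. A sketch of the same routing using prefix locations instead, assuming the upstreams shown above:

        # Sketch - prefix matching avoids the ordering problem: the longest matching
        # prefix wins, and "^~" stops any regex locations from being checked for it.
        location ^~ /nagios/ {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://10.10.10.18;
        }

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://10.10.10.16:2080;
        }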

    Read the article

  • I've got a very brazen POP3 attack: how do I protect the server?

    - by Ken Tang
    Today I had a brazen attack on my POP3 (Dovecot) server, and the mail log has filled up (over 200MB) with this kind of information:

        Nov 11 09:28:14 lax dovecot: pop3-login: Aborted login (auth failed, 1 attempts): user=<shawn>, method=PLAIN, rip=200.233.152.111, lip=myip
        Nov 11 09:28:14 lax dovecot: pop3-login: Aborted login (auth failed, 1 attempts): user=<shop>, method=PLAIN, rip=200.233.152.111, lip=myip
        Nov 11 09:28:14 lax dovecot: pop3-login: Aborted login (auth failed, 1 attempts): user=<sitetest>, method=PLAIN, rip=200.233.152.111, lip=myip
        Nov 11 09:28:14 lax dovecot: pop3-login: Aborted login (auth failed, 1 attempts): user=<solar>, method=PLAIN, rip=200.233.152.111, lip=myip
        Nov 11 09:28:15 lax dovecot: pop3-login: Aborted login (auth failed, 1 attempts): user=<services>, method=PLAIN, rip=200.233.152.111, lip=myip

    I just blocked the attacker's IP with:

        iptables -A INPUT -s 200.233.152.111 -j DROP

    But this can continue at any time from other IPs. My question is: is there any method to disallow anyone except me from connecting to my POP3 server? My IP is dynamic (assigned by my ISP), so I don't know how to make the POP3 server recognize that it is really me connecting. Thank you in advance!
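
    A sketch of a fail2ban jail for exactly this pattern, assuming fail2ban is installed and ships its stock "dovecot" filter; the log path is a guess for this system. Banning is triggered by repeated auth failures, so it does not depend on knowing your own (dynamic) address:

        # Sketch only - filter name and log path to be checked against the installed fail2ban.
        [dovecot]
        enabled  = true
        port     = pop3,pop3s,imap,imaps
        filter   = dovecot
        logpath  = /var/log/mail.log
        maxretry = 5
        bantime  = 3600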

    Read the article

  • How to fix GMail time stamps in Outlook?

    - by SWB
    One of my email accounts is hosted at an ISP with unreliable IMAP support, and I can't change it. Fortunately, I have my personal email set up on Google Apps for Domains, so I created another GMail account there and turned on GMail's features that allow me to send and receive mail through the ISP account using GMail ("Send mail as" and "Get mail from other accounts" in GMail settings on the Accounts tab). I'm now using Outlook to retrieve mail from the GMail account through IMAP, which in turn is retrieving mail from the ISP account through POP3.

    This basically works great, except for one very significant issue. Prior to setting this up, I already had several months of mail in the ISP account that I had been accessing via IMAP. GMail grabbed all of this mail via POP3 at, let's say, noon on April 5. In GMail's web interface (and on my iPod touch, and in Mozilla Thunderbird), all is well: the messages are all shown with their original time stamps. But when Outlook downloads these messages from GMail via IMAP, the time stamps are all set to noon on April 5 (the time GMail downloaded them from the ISP via POP3). That's not good, especially since we're talking about hundreds of messages here over a time span of several months. How can I fix this and get Outlook to display the original time stamps?

    Read the article

  • Suggestions for Backup solution

    - by jiewmeng
    I am considering between:

        - Windows Home Server
        - a simple NAS
        - extra HDDs in my desktop

    By the way, I will be the main user. I am looking to fulfil the following needs:

        - Reliability (I am thinking RAID 1 or 5).
        - Not so prone to virus/malware infections (will using a separate NAS or home server help? Say, Windows Home Server is still a Windows PC, just separated by the network?)
        - Power efficiency (e.g. spin down when not in use).
        - Downloads (e.g. I may want to download big files/torrents overnight and I may not want to use a full-powered PC for it; does a full PC vs. a NAS make enough difference in power usage to justify the cost of a new system, especially since I am the only user?)
        - Performance (I guess I'd like to write/access my files fast; on second thought, maybe for backup I can forgo this? Maybe a WD Green HDD? But how much slower will it be? Plus, since I am the only user, I think the whole HDD will be mine?)

    Read the article

  • Disable touch pad for mouse button region on new HP Pavilion models?

    - by John
    I bought a new HP Pavilion dv6 series laptop. The laptop itself is fine, but it has the new HP touchpad mouse, which I absolutely hate. It's such a stupid problem to have with a computer: the left and right mouse buttons are themselves part of the touchpad, meaning that if I tap the buttons without actually pressing them down, it registers the same way as the mouse pad (the cursor moves, tap-to-click activates, etc.).

    This is a major annoyance because it prevents you from operating the mouse pad with anything more than a single finger. If, for example, I use my right index finger to move the cursor using the touch pad and rest my left index finger on the left mouse button for more efficient mousing, the mouse will react as if I'm trying to use two fingers to move it, and it will either just sit there or spaz out. The only way this works is if I keep the finger that is resting on the mouse button absolutely still, which is very difficult and therefore very annoying.

    Also, even if I do abide by the arbitrary new decree of single-finger mousepad operation, I still have a problem: when I press down on the left or right click buttons, the cursor moves slightly, the buttons also being part of the touch pad. This would not be that hard to avoid, except that they decided to also make the buttons much harder to press down. Now whenever I go to click something, I press hard on the mouse button, causing my finger to slightly move or roll or flatten out a bit, causing the cursor to move slightly, and causing me to click on something different.

    What I would like to know is whether there is any way I can disable the touchpad functionality on the buttons. I have gone through all of the settings under the Synaptics menus, but I cannot find anything about this. Did I miss something in one of the menus? If not, are there any updated drivers that allow toggling of this function?

    Read the article

  • Unable to resize ec2 ebs root volume

    - by nathanjosiah
    I have followed many of the tutorials, which pretty much all say the same thing, basically:

        1. Stop the instance
        2. Detach the volume
        3. Create a snapshot of the volume
        4. Create a bigger volume from the snapshot
        5. Attach the new volume to the instance
        6. Start the instance back up
        7. Run resize2fs /dev/xxx

    However, step 7 is where the problems start. Running resize2fs always tells me that the filesystem is already xxxxx blocks big and does nothing, even with -f passed. So I continued with tutorials that all basically say the same thing, namely:

        1. Delete all partitions
        2. Recreate them back to what they were, except with the bigger sizes
        3. Reboot the instance and run resize2fs

    (I have tried these steps both from the live instance and by attaching the volume to another instance and running the commands there.)

    The main problem is that the instance won't start back up again, and the system error log provided in the AWS console doesn't show any errors. (It does, however, stop at the GRUB bootloader, which to me indicates that it doesn't like the partitions - and yes, the boot flag was toggled on the partition with no effect.) The other thing that happens, regardless of what changes I make to the partitions, is that the instance the volume is attached to says that the partition has an invalid magic number and the super-block is corrupt. However, if I make no changes and reattach the volume, the instance runs without a problem. Can anybody shed some light on what I could be doing wrong?

    Edit: On my new volume of 20GB with the 6GB image, df -h says:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/xvde1            5.8G  877M  4.7G  16% /
        tmpfs                 836M     0  836M   0% /dev/shm

    And fdisk -l /dev/xvde says:

        Disk /dev/xvde: 21.5 GB, 21474836480 bytes
        255 heads, 63 sectors/track, 2610 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x7d833f39

            Device Boot      Start         End      Blocks   Id  System
        /dev/xvde1               1         766     6144000   83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/xvde2             766         784      146432   82  Linux swap / Solaris
        Partition 2 does not end on cylinder boundary.

    Also, sudo resize2fs /dev/xvde1 says:

        resize2fs 1.41.12 (17-May-2010)
        The filesystem is already 1536000 blocks long.  Nothing to do!
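
    One hedged observation, not a confirmed diagnosis: resize2fs reports 1536000 blocks (roughly 6GB) because the partition itself is still 6GB, and the "invalid magic number" symptom after repartitioning is commonly caused by recreating partition 1 with a different starting sector than the original. A sketch of the usual sequence, assuming the layout shown above; verify the real start sector with fdisk -lu first and keep a snapshot of the volume:

        # Sketch only - sector numbers and device names must be checked against this volume.
        fdisk -u /dev/xvde     # in one session: delete partitions 2 and 1, recreate partition 1
                               #   with the SAME start sector as before and a larger end,
                               #   recreate partition 2 (type 82) at the end of the disk,
                               #   set the boot flag on partition 1, then write the table
        partprobe /dev/xvde    # or detach/reattach the volume so the kernel sees the new table
        e2fsck -f /dev/xvde1   # check the filesystem before growing it
        resize2fs /dev/xvde1   # grow the filesystem to fill the enlarged partition
        mkswap /dev/xvde2      # re-initialize the relocated swap partition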

    Read the article

  • Redirection of outbound UDP port.

    - by pboin
    For my residential service, I changed ISPs to Zoom/Armstrong. Just after that, my NTP daemons stopped working. I dug deep and diagnosed the problem: only unprivileged ports are getting out. When I run 'ntpdate', for example, I go out on a high, unprivileged port and get a response on UDP 123. That's fine. The 'ntpd' daemon, though, expects to go out on 123 and get its reply there as well. This must be a common problem, because it's directly addressed in the NTP troubleshooting guide.

    Just to see what would happen, I wrote a detailed email to the general support address at Armstrong. They replied almost immediately with a complete technical answer! They have everything <1024 blocked, except for a few ports to support outbound VPN.

    So, the question: can I use iptables to essentially rewrite my outbound UDP 123 up to 2123 or something like that? If I do, does there need to be a corresponding 2123-to-123 rule to translate the reply? This seems like NAT, but with ports, not addresses. I tried, but can't seem to get iptables to do what I want. I'm not sure if it's my lack of skill, or if I'm trying the wrong solution. True, I could run ntpdate from cron, but that loses all of the adjustment smarts of NTP.
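
    A sketch of the iptables rule in question, assuming eth0 is the outbound interface and taking 2123 as the substitute port from the question. Connection tracking reverses the mapping on the reply, so no separate 2123-to-123 rule is needed:

        # Sketch - interface name is an assumption. Source-NAT ntpd's outgoing source
        # port 123 to 2123; conntrack translates the reply back automatically.
        iptables -t nat -A POSTROUTING -o eth0 -p udp --sport 123 --dport 123 \
            -j MASQUERADE --to-ports 2123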

    Read the article

  • When to upgrade a server to include more cores, versus more processors, versus an additional server?

    - by gkdsp
    The server hosting market is separated into single, double, quad, etc. processors, where each processor has several cores, or CPUs. My company will offer a Linux-based web application that relies on an Apache web server and a middle tier for business logic. The middle tier is used to crunch math and return results to a client. Many clients may access the application simultaneously. The company will start with one processor having 4 cores. I'm trying to understand how the app uses the cores and then how to scale the application as business grows, in terms of servers/processors/cores. For example, I'd assume initially one core would be used for Apache, and the other 3 used to process clients' requests for math crunching...

    Question 1: Does that mean, with the 3 cores available, I can handle 3 separate client requests simultaneously (e.g. 1 for each of 3 cores)? I mean, except for the shared RAM, is this effectively like having 3 individual machines (from the point of view of processing client requests simultaneously)? Or can only one client's request be processed at any one time, with that request divided up among up to 3 cores, depending on the type of process doing the math crunching and whether or not it can take advantage of multithreading (so the number of cores affects how fast any one client request completes)? I'm confused about what the cores mean to the application here.

    Question 2: As the business grows and more client requests need to be processed, should the server be upgraded to (A) a new machine with more cores, (B) a new machine with two processors, 4 cores each, or (C) keep the original server and add another server with a single processor? Which route provides the most efficient way to scale the application, in terms of processing more client requests per time interval? Is the choice limited, for example, by RAM (when you need more RAM than the box can handle, it's time to add another server), or by something else?

    Question 3: Is the total number of client requests processed simultaneously equal to the number of cores times the number of servers (minus the one core for Apache)?

    Read the article

  • Unable to set initcwnd on a Hetzner server

    - by Sergi
    We just ordered a bunch of Hetzner EX40SSD servers with the minimal Debian install image that they provide, and everything is just fine except that, looking at tcpdumps from various locations while fine-tuning the network, the initcwnd parameter seems to be stuck at 6 no matter how we change it. By default, Debian 3.2 kernels should have that setting at 10, so it's pretty strange. Is it possible that the NIC driver or a custom setting in the Hetzner Debian image is limiting this parameter? Even if we set it to 4, like the old kernel default, it doesn't work. Any ideas would be much appreciated!

    Does anyone know if the NIC drivers provided by default by Debian have some kind of limitation? In a long thread at http://www.webhostingtalk.com/showthread.php?t=1200617&highlight=hetzner they talk about a page, http://wiki.hetzner.de/index.php/Installation_des_r8168-Treibers/en, where Hetzner states that the included Realtek r8168 driver is not working properly, but nowhere do they say that initcwnd could be affected. Tomorrow I will try to install a CentOS image and see if Debian is the problem... The last resort would be to install a custom Debian image, but that is a pain in the ass! Thanks!
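
    For reference, initcwnd is normally set and checked on the route itself; a sketch, with a placeholder gateway address:

        # Sketch - gateway and interface are illustrative, not this server's values.
        ip route show                                    # note the current default route
        ip route change default via 192.0.2.1 dev eth0 initcwnd 10
        ip route show                                    # "initcwnd 10" should now appear on the route
        # The value only applies to new connections; a tcpdump of a fresh transfer is the
        # reliable way to confirm what the kernel is actually sending.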

    Read the article

  • one 16K random read I/O issues 2 scsi I/O (16K and 4K) requests in linux

    - by hiroyuki
    I noticed a weird issue when benchmarking random read I/O for files in Linux (2.6.18). The benchmarking program is my own, and it simply keeps reading 16KB of a file from a random offset. I traced I/O behavior at the system call level and the SCSI level with SystemTap, and I noticed that one 16KB sysread issues 2 SCSI I/Os, as follows:

        SYSPREAD random(8472) 3, 0x16fc5200, 16384, 128137183232
        SCSI random(8472) 0 1 0 0 start-sector: 226321183 size: 4096 bufflen 4096 FROM_DEVICE 1354354008068009
        SCSI random(8472) 0 1 0 0 start-sector: 226323431 size: 16384 bufflen 16384 FROM_DEVICE 1354354008075927
        SYSPREAD random(8472) 3, 0x16fc5200, 16384, 21807710208
        SCSI random(8472) 0 1 0 0 start-sector: 1889888935 size: 4096 bufflen 4096 FROM_DEVICE 1354354008085128
        SCSI random(8472) 0 1 0 0 start-sector: 1889891823 size: 16384 bufflen 16384 FROM_DEVICE 1354354008097161
        SYSPREAD random(8472) 3, 0x16fc5200, 16384, 139365318656
        SCSI random(8472) 0 1 0 0 start-sector: 254092663 size: 4096 bufflen 4096 FROM_DEVICE 1354354008100633
        SCSI random(8472) 0 1 0 0 start-sector: 254094879 size: 16384 bufflen 16384 FROM_DEVICE 1354354008111723
        SYSPREAD random(8472) 3, 0x16fc5200, 16384, 60304424960
        SCSI random(8472) 0 1 0 0 start-sector: 58119807 size: 4096 bufflen 4096 FROM_DEVICE 1354354008120469
        SCSI random(8472) 0 1 0 0 start-sector: 58125415 size: 16384 bufflen 16384 FROM_DEVICE 1354354008126343

    As shown above, one 16KB pread issues 2 SCSI I/Os. (I traced SCSI I/O dispatching with probe scsi.iodispatching; please ignore the values other than start-sector and size.) One SCSI I/O is the 16KB I/O requested by the application, and that's OK. The thing is the other 4KB I/O, which I don't know why Linux issues. Of course, I/O performance is degraded by the weird 4KB I/O, and I am having trouble. I also used fio (the well-known I/O benchmark tool) and noticed the same issue, so it's not coming from my application. Does anybody know what is going on? Any comments or advice are appreciated. Thanks

    Read the article

  • Create custom launchers in GNOME 3

    - by hochl
    I'm using Debian testing, and I was switched to GNOME 3 by the Debian update yesterday. I'm not very comfortable with the UI. I wanted to customize everything like I had it in GNOME 2, but I simply couldn't find any way to change preferences like I'm used to. I've dug around some, but none of the answers I could find helped me achieve my goals. So please, if anyone knows the solution to any of this, I'd be thankful:

    1) I want several launchers that launch terminals, with different arguments and different coloring/titles. I have searched everywhere and there seems to be no menu, no right-click, nothing that is standard in any UI I know. How can I create several launchers in the bar on the left side that launch the same application, just with different parameters? With GNOME 2 this was a piece of cake.

    2) I want to switch between different terminals using ALT-TAB. Right now, I'm always just getting to the same, already-opened terminal. When I open two terminals, creating the second one by issuing xterm &, I still get one Terminal entry with ALT-TAB, and I have to navigate with the cursor keys or the mouse wheel to select one of the two xterms. Instead, I want a new terminal to open when I click the quick-launch terminal icon in the bar on the left side of the screen, and I want to navigate through them like on KDE/GNOME 2/Windows/any reasonable UI. Can this be done?

    3) Is there a trick to make Bluetooth devices work like on GNOME 2? Right now, my BT keyboard won't pair anymore, which, as you can imagine, makes me pretty angry.

    And, if all else fails:

    4) How can I switch back to GNOME 2 again? :-)

    Honestly, who designed this? What were they smoking? I feel like I'm not allowed to do anything except start one of the applications that has an icon, and only with its default parameters. That can't be true, right? I feel massively restrained by this stuff :(
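
    For question 1, GNOME Shell builds its launcher list from .desktop files; a minimal sketch of one such file, assuming xterm and a user-local applications directory (the file name, title and colours are illustrative):

        # Saved as ~/.local/share/applications/xterm-red.desktop (name is illustrative).
        # Once it shows up in the Activities overview it can be pinned to the dash;
        # one file per argument variant gives several launchers for the same program.
        [Desktop Entry]
        Type=Application
        Name=XTerm (red on black)
        Exec=xterm -title "red" -fg red -bg black
        Terminal=false
        Categories=System;TerminalEmulator;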

    Read the article

  • Force request to miss cache but still store the response

    - by Tom Marthenal
    I have a slow web app that I've placed Varnish in front of. All of the pages are static (they don't vary for different users), but they need to be updated every 5 minutes so they contain recent data. I have a simple script (wget --mirror) that crawls the entire website every 15 minutes. Each crawl takes about 5 minutes. The point of the crawl is to update every page in the Varnish cache so that a user never has to wait for the page to generate (since all pages have been generated recently thanks to the spider). The timeline looks like this:

        00:00:00: Cache flushed
        00:00:00: Spider starts crawling to update cache with new pages
        00:05:00: Spider finishes crawling, all pages are updated until 1:15

    A request that comes in between 00:00:00 and 00:05:00 might hit a page that hasn't been updated yet, and will be forced to wait a few seconds for a response. This isn't acceptable.

    What I'd like to do is, perhaps using some VCL magic, always forward requests from the spider to the backend, but still store the response in the cache. This way, a user will never have to wait for a page to generate, since there is no 5-minute window in which parts of the cache are empty (except perhaps at server startup). How can I do this?
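
    A VCL sketch of the "always forward but still store" behaviour, assuming Varnish 3.x and that the spider can be recognised by wget's default User-Agent and its source address (both are assumptions to adjust):

        # Sketch - the ACL address and the User-Agent test are illustrative.
        # hash_always_miss makes the request fetch a fresh copy from the backend while
        # still inserting the new object into the cache for subsequent visitors.
        acl spider {
            "127.0.0.1";
        }

        sub vcl_recv {
            if (req.http.User-Agent ~ "Wget" && client.ip ~ spider) {
                set req.hash_always_miss = true;
            }
        }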

    Read the article

  • Google Apps, SPF, softfail problem (validates with validation tools, but still softfails otherwise)

    - by mq.chen
    Hi, I guess this is probably a commonly asked and boring question, but I'm really at a loss and I don't know what else to do. This might be a duplicate of other questions, but none of the solutions worked for me. I've Googled around and read just about anything I could find, but I'm still puzzled as to why it doesn't work.

    The gist of my problem is that I have set up Google Apps for a client of mine with the domain fintan.dk. Everything works just excellently, except that emails sent from *@fintan.dk (either with the Gmail web interface or a desktop client) to a non-Google-Apps address get a softfail (I have sent to my university email, an email hosted at MediaTemple, and even Hotmail). The emails get a pass when sent to a Google Apps or Gmail address, though... (All emails from that domain are sent via email clients.)

    So this is what I have done so far: I added the SPF record Google recommended (v=spf1 include:_spf.google.com ~all) and waited several days, hoping it was a DNS update delay problem. Now, three days later, there is no change. I have verified the settings in the desktop clients several times. I have validated the records with validation tools like the SPF Query Tool, [email protected] and [email protected]. All of them validate and give a pass, saying there shouldn't be a problem, but strangely there still is. So, I really don't know what else to do. Any help is very much appreciated. Thank you in advance!
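
    Two quick checks worth running from an outside machine, just to see the records recipients actually resolve; a softfail despite a correct-looking record often means a stale second TXT record is still being published alongside the new one, or mail is leaving through a host the include does not cover:

        # Sketch - run from any machine with dig installed.
        dig +short TXT fintan.dk          # should list exactly one SPF string: "v=spf1 include:_spf.google.com ~all"
        dig +short TXT _spf.google.com    # shows the include: netblocks Google currently publishes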

    Read the article

  • Serious 64-bit laptop

    - by Daniel Gehriger
    For the past couple of years, I have been using an IBM ThinkPad T60p for daily work (software development, desktop & embedded). I am extremely satisfied with this machine due to its robustness. It also has a few features I depend on: a high-resolution display (15.0" TFT FlexView, 1600x1200 UXGA), an excellent keyboard, and decent graphics and CPU performance.

    Some of the software I develop benefits from larger amounts of RAM, and 3GB (Windows 7 32-bit) or 4GB (Windows 7 64-bit on the T60p) is no longer sufficient. My customers run desktop computers with 20GB and more, and I need at least 8GB to be able to run reasonable test cases. So I'm shopping around for a new laptop, but I'm struggling to find anything that matches my requirements:

        - must run Windows 7 64-bit Pro or higher;
        - must support at least 8GB of RAM (more is better);
        - high screen resolution! While I prefer 4:3, I can live with widescreen, but I really hope to find something with a vertical screen resolution similar to what I have now;
        - portable, so < 16" but >= 14";
        - I realize that FlexView isn't available anymore, but I'd like to avoid a glossy screen if possible;
        - decent (not more) graphics performance, ideally hybrid (I'm doing a lot of CAD, never games);
        - good keyboard;
        - reasonable CPU - but I'm still fine with my current Core 2 Duo, so that shouldn't be too complicated.

    The T60p fits all those requirements except the 8GB of RAM. Can you help me find a current notebook that would match most of them? I don't mind changing brand. Thanks!

    Read the article

  • Large scale file replication with an option to "unsubscribe" from a replicated file on a given machine

    - by Alexander Gladysh
    I have 100+ GB of files per day incoming on one machine. (File size is arbitrary and can be adjusted as needed.) I have several other machines that do some work on these files. I need to reliably deliver each incoming file to the worker machines. A worker machine should be able to free its HDD from a file once it is done working with it. It is preferable that a file would be uploaded to the worker only once, processed in place, and then deleted, without copying it somewhere else - to minimize the already high HDD load. (The worker itself requires quite a bit of bandwidth.)

    Please advise a solution that is not based on Java. None of the existing replication solutions that I've seen can do the "free the HDD from the file once processed" stuff - but maybe I'm missing something... A preferable solution should work with files (from the point of view of our business logic code), not require the business logic to connect to some queue or other. (Internally the solution may use whatever technology it needs to - except Java.)
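
    One non-Java shape that matches "deliver once, process in place, then delete" is a plain rsync push from the ingest machine; a sketch with made-up hostnames and paths, and a purely illustrative worker-selection rule:

        #!/bin/sh
        # Sketch only - hostnames, paths and the worker-selection rule are assumptions.
        # The ingest box pushes each new file to exactly one worker and removes its own copy;
        # the worker's processing job deletes the file from its HDD when it is done with it.
        INCOMING=/data/incoming
        WORKERS="worker1 worker2 worker3"

        for f in "$INCOMING"/*; do
            [ -f "$f" ] || continue
            # pick a worker by hashing the file name (illustrative round-robin substitute)
            n=$(( $(echo "$f" | cksum | cut -d' ' -f1) % 3 + 1 ))
            w=$(echo $WORKERS | cut -d' ' -f$n)
            rsync -a --remove-source-files "$f" "$w:/data/work/" && echo "delivered $f to $w"
        done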

    Read the article

  • nginx codeigniter rewrite: Controller name conflicts with directory

    - by palerdot
    I'm trying out Nginx and porting my existing Apache configuration to it. I have managed to reroute the CodeIgniter URLs successfully, but I'm having a problem with one particular controller whose name coincides with a directory in the site root. I managed to make my CodeIgniter URLs work as they did in Apache, except that one particular URL, say http://localhost/hello, coincides with a hello directory in the site root. Apache had no problem with this, but Nginx routes to this directory instead of the controller. My rewrite structure is as follows:

        http://host_name/incoming_url => http://host_name/index.php/incoming_url

    All the CodeIgniter files are in the site root. My Nginx configuration (relevant parts):

        location / {
            # First attempt to serve request as file, then
            # as directory, then fall back to index.html
            index index.php index.html index.htm;
            try_files $uri $uri/ /index.php/$request_uri;

            # apache rewrite rule conversion
            if (!-e $request_filename){
                rewrite ^(.*)/?$ /index.php?/$1 last;
            }

            # Uncomment to enable naxsi on this location
            # include /etc/nginx/naxsi.rules
        }

        location ~ \.php.*$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;

            # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini

            # With php5-cgi alone:
            fastcgi_pass 127.0.0.1:9000;
            # With php5-fpm:
            #fastcgi_pass unix:/var/run/php5-fpm.sock;

            fastcgi_index index.php;
            include fastcgi_params;
        }

    I'm new to Nginx and I need help figuring out this directory conflict with the controller name. I pieced this configuration together from various sources on the web, and any better way of writing my configuration is greatly appreciated.
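
    A sketch of one way around the clash, assuming the controller should win over the directory of the same name:

        # Sketch - the "$uri/" element in try_files is what makes /hello resolve to the
        # hello/ directory before the request ever reaches index.php, so the simplest
        # change is to drop it:
        location / {
            index index.php index.html index.htm;
            try_files $uri /index.php/$request_uri;
        }
        # Alternatively, an exact-match location can pin just the conflicting URL to the
        # front controller while leaving directory handling intact everywhere else:
        location = /hello {
            rewrite ^ /index.php/hello last;
        }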

    Read the article

  • How do I get a network printer installed in ubuntu 9.04?

    - by SoaperGEM
    My girlfriend's work computer now has Linux on one of the partitions, and for the most part it's running fine - except that I can't seem to get the network printers configured right. There are two of them, a Lanier MP 7500/LD275 and a Lanier MP C3000/LD430c, and Linux seems to have found them both automatically.

    I'll go through the steps of what I did, and what exactly went wrong. I went to Administration > Printing and clicked the "New Printer" button. It searched for printers and found them both, listed under "Network Printers". I added them as new printers in succession. However, when I clicked "Print a Test Page", it failed, saying there was a broken pipe. The device URIs were saved as socket://[ip address]:9100. I changed these to lpd://[ip address] per some online tutorial, which at first looked like it might have worked (but didn't). Then, when I tried to print a test page, it first said Processing (and sometimes even "Processing - printing test page, 4%"), but it always subsequently displays "Idle - /usr/lib/cups/backend/lpd failed".

    Help! What do I do? It seems like Linux can find these printers just fine, and the drivers seem to be in place, so what's going wrong?
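
    A sketch of doing the same setup from the CUPS command line, which tends to surface the real error better than the GUI; the IP address and PPD path are placeholders, not values from this network:

        # Sketch only - address and PPD file are illustrative.
        lpinfo -v                                     # list the device URIs CUPS can actually see
        lpadmin -p Lanier7500 -E -v socket://192.168.1.50:9100 -P /path/to/lanier.ppd
        lp -d Lanier7500 /etc/hosts                   # send a small test job
        tail -f /var/log/cups/error_log               # watch for the real backend error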

    Read the article

  • High-performance Academic Server [closed]

    - by PHPsmith
    Suppose I want to build a server for the university's academic interests. The server is dedicated to a single site, where users (students and lecturers) just view and fill in academic data. But at certain times (e.g. once a semester), about 12,000 students will access the site simultaneously. Due to limited resources, I have to build the server using free software (except for the operating system, Windows 7, which the university has already prepared). The hardware is also limited to an ordinary 4-core computer (e.g. an Ivy Bridge Intel Core i7-3770) with approximately 16GB of memory (DDR3 1600 MHz), equipped with an RJ-45 port (Intel 82579 Gigabit Ethernet).

    With all these limitations, I have to choose the software (web server, database, etc.) appropriate for achieving this goal. I decided to create the site in PHP. Please help me by answering the following questions based on your expertise (my prime candidates after Googling are in parentheses):

        1. Which web server is faster, stable and secure when implemented and optimized for PHP, and why? (nginx)
        2. Which PHP accelerator is faster, stable and compatible with the selected web server, and why? (APC with Zend Optimizer+)
        3. Which database is faster, stable and secure when implemented and optimized for the selected web server and PHP accelerator? (MySQL)
        4. Are there any errors in my assumptions so far? If there are, please enlighten me.
        5. Is there anything else I need to know in order to achieve this goal? If there is, please enlighten me.

    I understand that performance also depends on how the source code is written, so assume the site will be built as efficiently as possible (e.g. using AJAX).

    Read the article

  • Serving static content with Apache web server and Tomcat

    - by Hunter
    I've configured Apache web server and Tomcat like this: I created a new file in apache2/sites-available, named it "myDomain", with this content:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName myDomain.com
            ServerAlias www.myDomain.com
            ProxyPass / ajp://localhost:8009
            <Proxy *>
                AllowOverride AuthConfig
                Order allow,deny
                Allow from all
                Options -Indexes
            </Proxy>
        </VirtualHost>

    Then I enabled mod_proxy and the myDomain site:

        a2enmod proxy_ajp
        a2ensite myDomain

    And I edited Tomcat's server.xml (inside the Engine tag):

        <Host name="myDomain.com" appBase="webapps/myApp">
            <Context path="" docBase="."/>
        </Host>
        <Host name="www.myDomain.com" appBase="webapps/myApp">
            <Context path="" docBase="."/>
        </Host>

    This works great. But I'd rather not put static files (HTML, images, videos, etc.) into subfolders of {tomcat home}/webapps/myApp; instead I'd like to put them in subdirectories of the Apache web server's own document root, and I'd like Apache alone to serve these files. How could I do this, so that all incoming requests are forwarded to Tomcat except those that ask for a static file?
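
    A sketch of the usual exclusion pattern, assuming static content lives under a directory such as /var/www/myDomain/static (the path and the /static prefix are illustrative). A "!" ProxyPass for a prefix must come before the catch-all so Apache serves that prefix itself:

        # Sketch - DocumentRoot and the excluded prefix are illustrative.
        <VirtualHost *:80>
            ServerName myDomain.com
            ServerAlias www.myDomain.com

            DocumentRoot /var/www/myDomain
            ProxyPass /static !
            ProxyPass / ajp://localhost:8009

            <Directory /var/www/myDomain/static>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>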

    Read the article

  • Spotlight has stopped indexing/returning anything in /Applications

    - by pra
    After a recent kernel panic & restart, Spotlight no longer seems to know anything about the files under my /Applications folder. I used to launch Safari.app, Opera.app, TextEdit.app, etc. via Spotlight as a matter of routine. Now I get "No results found" for all of them (except TextEdit.app, which launches a demo text editor from a Qt installation). The programs are still there and still launch directly from Finder.

    I've already run Disk Utility and verified the disk; no issues. I repaired disk permissions, which made several changes, but to no effect. Is there anything else I can do, short of re-installing Mac OS?

    Update: I already verified that "Applications" was still checked in my Spotlight preferences. It was still returning applications located elsewhere (the Qt textedit sample app), so that shouldn't have been the issue. A few hours later it resolved itself; I guess there's a background process running on some interval.
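
    For the record, the manual way to kick Spotlight when this happens - a sketch, unnecessary in this case since the index recovered on its own:

        sudo mdutil -s /            # show indexing status for the boot volume
        sudo mdutil -E /            # erase and rebuild the Spotlight index for that volume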

    Read the article

  • Matching up HomeGroup users created in different versions of Windows

    - by pdr
    In our household, we have 2 laptops and a desktop, all currently on Windows 8.1. The desktop has been upgraded in stages from Windows 7. Laptop 1 was bought with Windows 8 and has now been upgraded. Laptop 2 is new and came with Windows 8.1.

    We've been trying to set up a HomeGroup such that we have one user on laptop 1, one on laptop 2 and both can log into the desktop, with each user able to edit their files on their laptop and desktop, while the other user has read-only access. And we now have that situation... except that, because the user on laptop 1 was created in Windows 8 but the same user on the desktop was created in 8.1, they have different names in Windows Explorer (say, firstuser and first_000). Likewise, the other user was created on laptop 2 in Windows 8.1 and on the desktop in Windows 7 (so let's say secon_000 and seconduser). This is ultimately confusing.

    Now if we expand the HomeGroup, we get four users (firstuser, first_000, seconduser and secon_000) and each has a single computer inside it. What I'd like to see is firstuser = laptop1, desktop and seconduser = laptop2, desktop. An acceptable alternative is first_000 = laptop1, desktop and secon_000 = laptop2, desktop. But what I don't want to have to do is delete firstuser and seconduser and rebuild them. Is there a better way?

    Read the article

  • Configuring two nearby WLANs: should I use the same ssid?

    - by Rory
    I'm configuring a home network for basic internet use (i.e. I don't really need connectivity between workstations on the network). My brick walls mean a single wireless router doesn't provide good coverage throughout the house, so I have purchased two powerline adapters. I now have the incoming modem/wireless router at one end of the house plugged into a powerline adapter, and at the other end of the house the other powerline adapter plugged into a second wireless router. Currently the two wireless networks have different SSIDs. (The powerline adapters only do power-to-Ethernet; they're not wireless access points themselves.)

    This works well, except that when I move between rooms I would ideally like my devices (iPad, phones, laptops) to switch from the weak to the strong signal. Sometimes there's enough signal that they hold on to the weak connection instead of switching to the strong one.

    Should I give the two networks the same SSID, and if so, what is the actual effect? Do the signals get confused, is the bandwidth affected, will this help my devices seamlessly move from one to the other, or is the SSID just a cosmetic thing that doesn't have any impact on this situation? Are there any other settings that I should configure to make my setup optimal?

    Read the article
