Search Results

Search found 19191 results on 768 pages for 'core location'.


  • Double try_files to solve nginx's "No input file specified" issue

    - by Howard
    I am following the nginx wiki (http://wiki.nginx.org/WordPress) to set up my WordPress site:

        location / {
            try_files $uri $uri/ /index.php?$args;
        }

    With the above lines, when a static file is not found the request is redirected to WordPress's index.php, which is fine, but...

    Problem: when I request a non-existent PHP script, e.g. http://www.example.com/foo.php, nginx gives me "No input file specified". I want nginx to return a 404 instead, so in the main FastCGI config I added a second try_files:

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            include /etc/nginx/fastcgi_params;
            ...
        }

    This works, but is there a better way to handle it?
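
    One alternative sometimes suggested (a sketch built from standard nginx directives, not taken from this thread; the fastcgi_pass address is an assumption): let PHP-FPM return its 404 for the missing script and have nginx intercept it, so missing scripts and missing static files share one error page:

        location ~ \.php$ {
            # Intercept the 404 that PHP-FPM returns for a missing script
            # and serve nginx's own error page instead of the FPM body.
            fastcgi_intercept_errors on;
            error_page 404 /404.html;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            include /etc/nginx/fastcgi_params;
            fastcgi_pass 127.0.0.1:9000;
        }

    try_files $uri =404; remains the more common fix, since it never hands the request to the FastCGI backend at all.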

    Read the article

  • With more mobile users, my GeoIP database is becoming useless.

    - by Marius
    Hello there, I've been enjoying the benefits of GeoIP lookup from a database for some time. It's great. People are increasingly accessing my site from mobile phones or 3G modems, and their physical location seems to bear little relation to where my IP lookup says they are. A user on the east coast of my country may be looked up as being far inland, or up north. And one user may be reported in one location one moment and, seconds later, be hundreds of kilometers away. This is becoming a problem, and I need to find a solution. I am already updating my database monthly, but it has little effect. What can be done? Thank you for your time. Kind regards, Marius

    Read the article

  • Django fails to find static files served by nginx

    - by Simon
    I know this is a really noobish question, but I can't find any solution despite finding the problem trivial. I have a Django application deployed with gunicorn. The static files are served by nginx at URLs such as myserver.com/static/admin/css/base.css. However, my Django application keeps looking for the static files at myserver.com:8001/static/admin/css/base.css and is obviously failing (404). I don't know how to fix this. Is it a Django or an nginx problem? Here is my nginx configuration file:

        server {
            server_name myserver.com;
            access_log off;
            location /static/ {
                alias /home/myproject/static/;
            }
            location / {
                proxy_pass http://127.0.0.1:8001;
                proxy_set_header X-Forwarded-Host $server_name;
                proxy_set_header X-Real-IP $remote_addr;
                add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
            }
        }

    Thanks for the help!
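
    A likely fix (a sketch, not confirmed by the poster): Django builds absolute URLs from the Host header it receives, which here is the backend's own address and port. Forwarding the original host usually resolves it; alternatively, since X-Forwarded-Host is already being set, USE_X_FORWARDED_HOST = True in Django's settings achieves the same thing:

        location / {
            proxy_pass http://127.0.0.1:8001;
            # Forward the original Host header so Django generates
            # links for myserver.com rather than 127.0.0.1:8001.
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }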

    Read the article

  • Dual Touchscreens in Different Rooms

    - by Ash
    I'm planning on having one dual-core system running two touchscreens, each in its own room in the house. I'd like to be able to use the internet on one, while someone else uses the other to record music - each of us interacting with the screen as we would separate computers. I was also thinking that running each screen, or certain programs, on its own core might make this work more smoothly. Will this setup work in Windows 7 Home on a mini-tower or do I need to invest in a server to get this sort of workstation/terminal setup to work?

    Read the article

  • Highly Available Web Application (LAMP)

    - by Anthony Rizzo
    I work for a small company that provides a web application for thousands of users. Earlier this year they had one server hosted at one company. We recently acquired another server in a different location, with the hope of one day making it a redundant failover machine. I understand what to do with the MySQL replication (I plan on using a master-master replication setup, and rsync to sync the scripts and files), but I am at a standstill on how to configure the failover. Ideally I would like both machines to accept requests, like round-robin DNS; however, if one machine goes down, I do not want requests to go to that machine. All of the solutions I have come across assume high availability of servers in the same location; these servers are in two completely different locations with different public IP addresses. Any help would be great. Thanks

    Read the article

  • I have to manually enter the password when I log in at school and when I come back home with my laptop (Dell E6410)

    - by Bong Paduano
    I didn't have to do this for the first 3 months of going to school. My laptop used to connect to my home Wi-Fi and school Wi-Fi automatically; then after the third month something changed, and ever since I literally have to enter each Wi-Fi password manually at each location every time I try to connect to the respective WLAN. I tried adding each Wi-Fi network in the "Manage wireless networks" section of the Network and Sharing Center, but every time I log in at one of the locations, the created network disappears, and this has happened more than once. I tried restoring the OS, but to no avail. Can someone please assist me with this issue? I'm not sure if it has been addressed on this website, but I haven't seen any similar question here. I appreciate any help anyone may be able to share.

    Read the article

  • How much power supply do I need for my server, and could a shortage be causing my odd crashing?

    - by dolan
    I have 5 servers, all with similar hardware (i7, four 2 TB 7200 rpm drives, two 4 TB 5400 rpm drives, 430-watt power supply), and lately the machines have been freezing up. This has gotten worse in the last day or so, and I can't pinpoint any explanation. One recent change was adding the two 4 TB hard drives. The crashes happen most often while running a large Hadoop job, so I was originally thinking the load was causing some issues, but last night one server froze without any heavy load on the box (or so I think), other than HDFS (Hadoop's distributed file system) probably rebalancing itself, since two of the five nodes were offline. If I plug a monitor and keyboard into one of these frozen machines, I can't get any response or feedback on the screen. Any ideas on possible points of failure and/or different logs I can look at to investigate? Thanks

    Edit: The systems are running Ubuntu 10.04

    Edit 2: More on the hardware:

        Intel Core i7-930 Bloomfield 2.8 GHz processor (quad core)
        12 GB (6 x 2 GB) Kingston DDR3 1333 RAM
        Antec EarthWatts Green 430 power supply
        MSI X58M LGA 1366 motherboard

    Read the article

  • nginx public webdav server

    - by Gert Cuykens
    Can you check the user group from a $remote_user?

        location ~ ^/home/(.*)$ {
            alias /home/$remote_user/$1;
            auth_pam "Restricted";
            auth_pam_service_name "nginx";
            dav_methods PUT DELETE MKCOL COPY MOVE;
            dav_access group:rw all:r;
            create_full_put_path on;
        }

        location ~ ^/get/(.*)$ {
            alias /home/$1;
            # check the group of the $remote_user;
        }

        curl -T test.txt 'http://gert:[email protected]/home/'
        curl 'http://friend:[email protected]/get/gert/test.txt'

    Read the article

  • Server 2008 Active Directory file-sharing restriction

    - by Usman
    My network scenario: two locations connected to each other through wireless towers.

        1st network IP addresses: 10.10.10.1 to 10.10.10.150
        2nd network IP addresses: 10.10.10.200 to 10.10.10.254

    Both locations use their own Active Directory (Windows Server 2008) and share their data with the other network through file sharing. My problem is that every single client can explore the whole network by simply typing \\computername. I want to implement some restrictions on the network regarding file sharing: I want to restrict the 1st network's clients from opening the 2nd network's computers or servers. Is there any Group Policy or folder-permission scenario I can implement on both networks?

    Read the article

  • Rsync-like Windows backup tool

    - by Halfgaar
    I need to back up some Windows machines and have been unable to find the proper tool. What I need is a tool that does efficient copying of changed files to a Windows network location, like rsync does. In turn, the server will then back that up using rdiff-backup, a tool which does very clever incremental backups. Right now I'm using Windows 7's included backup feature, but I really don't get along with it. Explaining why is too far off-topic, but it doesn't suffice (and seems buggy as well). I looked into Amanda, but as soon as it wanted to install MySQL, I aborted. I also tried DeltaCopy, but unfortunately I don't remember what the problem with that was... Any advice for an rsync-like tool that just does daily syncs to a network location?
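
    If a true rsync on Windows is acceptable, one option (a hypothetical sketch assuming a cwRsync install and an rsync daemon or SSH account on the backup server; the paths and hostname are placeholders) is a daily scheduled task along these lines:

        rsync -rtvz --delete "/cygdrive/c/Users/Data/" "backup@nas:/backups/machine01/"

    -rt (recursive, preserve times) rather than -a tends to behave better with Windows permission semantics, and preserved mtimes are exactly what rdiff-backup needs on the server side.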

    Read the article

  • Why and when to use Personal Package Archives (PPA)

    - by reversiblean
    Do you prefer PPAs over the core repositories, and why or why not? Are there any compatibility issues when using a PPA, given that there are different distro releases but just one common repository? Where would you normally search for application repositories that are not in the core repositories? For example, I was about to install GNOME Flashback in Ubuntu 12.04, which is the new classic version of the earlier fallback session, but found that it's only available as a PPA release, and I was wondering which one to choose between the two: fallback or the newer Flashback.
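
    For reference, enabling a PPA is a two-step operation; the archive name below is a placeholder, not the actual Flashback PPA, and the package name may differ by release:

        sudo add-apt-repository ppa:some-team/gnome-flashback   # placeholder PPA name
        sudo apt-get update
        sudo apt-get install gnome-session-flashback            # package name may differ on 12.04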

    Read the article

  • Frequent and weird wifi disconnections

    - by Sidou
    How would you explain, troubleshoot (and solve) the following problem? A D-Link 2640R Wi-Fi ADSL modem router is installed in the living room at about 1.8 m height, working fine, synchronising, and getting/serving a stable internet connection.

    First situation:

        Laptop 01 is at the other end of the house, in room01, about 15 m south of the living room. It gets a stable signal of good to very good quality. No disconnections.
        Laptop 02 is in room02, opposite room01 (5 m west), which puts it at almost the same distance and direction from the router, 15 m north. It gets a stable signal of good to very good quality. No disconnections.

    Second situation:

        Laptop 01 is moved to room03, north of the living room (actually just 3 m behind the wall where the router sits). It gets a stable signal of excellent quality. No disconnections.
        Laptop 02 is still in room02 but now experiences frequent disconnections. In fact it is almost impossible to use the Internet even though the signal level is still very good: either there is no Internet while the Wi-Fi icon shows connected to the access point, or no connection is established at all, and this happens every 2 minutes. That means virtually no Internet, as I get a window of about a minute to load any website or even reach the router's web-based control panel.

    If Laptop 01 is completely shut down, or its wireless adapter is disabled, or it is still running but its Wi-Fi MAC address is forbidden, then Laptop 02 has no problem at all. If Laptop 02 is moved nearer to the router, into the living room for instance, then no connection problem occurs even if Laptop 01 is also connected. And if we move Laptop 01 back to its original location (room01), then there is no problem either.

    I'm completely lost and don't know how to address this issue. I tried changing the Wi-Fi channel and even tried the auto channel scan, but that didn't solve it. I know the problem probably comes from Laptop 01 being in its new location, or from some sort of interference, as the problem occurs only under the described conditions, but I have no idea how to solve it! I also scanned the neighborhood for Wi-Fi congestion using inSSIDer; there are a few other access points, but they don't seem to affect the situation. Any ideas about the steps to follow or tools to use?

    Read the article

  • Trouble with mod_proxy and mongrel_rails

    - by x3ro
    Hey there. I'm trying to set up a mod_proxy + mongrel combination, but somehow Apache/mod_proxy is unable to access mongrel locally. The following is my configuration for mod_proxy:

        ProxyRequests Off
        ProxyPreserveHost On
        <Location />
            ProxyPass http://localhost:3000/
            ProxyPassReverse http://localhost:3000/
            Order deny,allow
            Allow from all
        </Location>

    Mongrel/Rails is running just fine, because I can access it from my browser, and even with lynx on the server. However, I get the following error when trying to use the proxy:

        [error] [client 127.0.0.1] Invalid Content-Length

    I would appreciate any help :D

    PS: Oh, and the server is running Plesk to configure vhosts, if that's important.

    Read the article

  • Installing additional printer drivers (x86) on Windows 7 (x64)

    - by ambrax
    I've successfully installed Windows 7 (x64) and drivers for my Canon MP510, and have no problem printing with this setup. There is another PC on the network running XP SP3, and I want to share the printer so that users of that PC can also print. On Windows 7 I have the option of installing additional printer drivers for other system architectures (Itanium and x86). I've downloaded the most current 32-bit drivers for the printer, but every time I direct the install dialog to the folder containing the drivers, I get the following error message:

        Selected printer driver not found
        The specified location does not contain the driver Canon Inkjet MP510 Printer for the requested processor architecture.
        Retry   Cancel

    I'm stumped. I'm absolutely certain that the specified location actually does contain the correct drivers; I've even installed them on the XP system. I've tried everything I can think of. What am I overlooking?

    Read the article

  • How to rewrite using htaccess if the file exists in another folder?

    - by Jack
    We are trying to rewrite to another folder if the file does not exist in the document root but does exist in the other folder. The other folder is in a completely different location, which is mapped using "Alias" in the vhost config. So, what we have so far (from this post: How to rewrite URI from root if file exists in folder?) is:

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_URI} !^/legacy/
        RewriteRule ^(.*)$ legacy/$1 [QSA,L]

    This works to an extent, but seems to direct everything to the legacy folder, not just requests where the file doesn't exist in the first location and does exist in legacy. Thanks in advance for any help, Jack.
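
    The rules above fall through to legacy/ whenever the requested file is missing from the document root, whether or not it exists in legacy. A sketch that also tests the legacy copy (the filesystem path is an assumption; substitute the real Alias target). Note that $1 from the RewriteRule pattern is available in RewriteCond test strings, because conditions are evaluated after the rule's pattern has matched:

        RewriteCond %{REQUEST_FILENAME} !-f
        # Only rewrite when the file actually exists under the Alias target
        RewriteCond /var/www/legacy/$1 -f
        RewriteRule ^(.*)$ /legacy/$1 [QSA,L]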

    Read the article

  • Restoring file properties (but not the complete files) from backup

    - by Jon
    While copying data from my old storage on a Linux computer to the new (Linux-based) NAS, I accidentally failed to carry the file properties (most importantly: the modification dates) over to the new location. I have also continued to use/modify the files at the new location, and hence cannot just copy everything over again. What I would like to do is a diff between the files in the old vs. the new storage and, for those that are identical, restore the properties from the Linux storage to the NAS storage files. Is there a clever way, such as a script or a tool, to do this? I could run it either on the Linux box or, in the worst case, from a remote Windows computer. Grateful for any suggestions. /Jon
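
    One way to script this on the Linux box (a sketch; it assumes the old storage and the NAS share are both mounted locally at the placeholder paths shown): compare each old file with its counterpart and, where the contents are still identical, copy the timestamp across with touch -r:

        #!/bin/sh
        OLD=/mnt/old-storage    # placeholder paths
        NEW=/mnt/nas
        cd "$OLD" || exit 1
        find . -type f | while IFS= read -r f; do
            # Only restore the mtime if the contents are byte-identical
            if cmp -s "$f" "$NEW/$f"; then
                touch -r "$f" "$NEW/$f"
            fi
        done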

    Read the article

  • Separation of memory-oriented and CPU-oriented processes

    - by Jeevan Dongre
    I am a DevOps guy working for an e-commerce company, running my e-commerce application built with Ruby on Rails (Spree Commerce). I am presently running 2 medium instances in production. One is a high-memory instance with 3.8 GB of RAM and a single-core CPU, and the other is a high-CPU instance with a dual-core CPU. AWS calls them m1.medium and c1.medium instances respectively. My question: is it possible to separate the processes according to whether they are CPU-intensive or memory-intensive, so that all the CPU-intensive processes run on the high-CPU instance and all the memory-intensive processes run on the high-memory instance? Is any tool available to identify those processes? Kindly give me some heads-up. Thank you.
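
    To see which processes lean which way before splitting them across instances, ps can sort by either column (standard procps options, shown as a sketch):

        # Top memory consumers
        ps aux --sort=-%mem | head -n 10
        # Top CPU consumers
        ps aux --sort=-%cpu | head -n 10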

    Read the article

  • Apache mod_setenvif Server_Addr

    - by user18330
    I have an Apache server in a DMZ, reachable on the LAN at 192.168.1.1 and publicly at 123.456.789.123. I'm trying to get it to require authentication if the inbound hits are coming from the public side. This doesn't seem to work:

        SetEnvIf SERVER_ADDR 123.456.789.123 local_nic=1
        <Location /junk>
            Order Deny,Allow
            AuthName "Access required"
            AuthType Basic
            AuthUserFile /etc/httpd/conf/htpasswd
            Require valid-user
        </Location>

    What am I doing wrong? (Sorry, HTML tags were wiping out my Apache directives.)
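
    One thing to note: the environment variable is set but never consulted, so the Location block demands a password from everyone. A sketch of the usual Apache 2.2 pattern (matching on the LAN address is an assumption based on the setup described): let requests that arrived on the internal address through without credentials, and fall back to Basic auth for everything else via Satisfy Any:

        SetEnvIf SERVER_ADDR ^192\.168\.1\.1$ local_nic=1
        <Location /junk>
            AuthName "Access required"
            AuthType Basic
            AuthUserFile /etc/httpd/conf/htpasswd
            Require valid-user
            Order Deny,Allow
            Deny from all
            # No password needed when the request arrived on the LAN address
            Allow from env=local_nic
            Satisfy Any
        </Location>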

    Read the article

  • Vim digraphs are not working

    - by Hauleth
    I've tried to input digraphs in Vim (with vi compatibility off), and unfortunately I can't. Pressing Ctrl-K P * should insert the capital Greek letter pi. I've also tried P Backspace * with :set digraph, but with no success either. My gvim --version output:

        VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Oct 23 2012 18:42:18)
        Included patches: 1-712
        Compiled by ArchLinux
        Big version with GTK2 GUI. Features included (+) or not (-):
        +arabic +autocmd +balloon_eval +browse ++builtin_terms +byte_offset +cindent +clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments +conceal +cryptv +cscope +cursorbind +cursorshape +dialog_con_gui +diff +digraphs +dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path +find_in_path +float +folding -footer +fork() +gettext -hangul_input +iconv +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent +listcmds +localmap +lua +menu +mksession +modify_fname +mouse +mouseshape +mouse_dec +mouse_gpm -mouse_jsbterm +mouse_netterm +mouse_sgr -mouse_sysmouse +mouse_urxvt +mouse_xterm +multi_byte +multi_lang -mzscheme +netbeans_intg +path_extra +perl +persistent_undo +postscript +printer -profile +python -python3 +quickfix +reltime +rightleft +ruby +scrollbind +signs +smartindent -sniff +startuptime +statusline -sun_workshop +syntax +tag_binary +tag_old_static -tag_any_white -tcl +terminfo +termresponse +textobjects +title +toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore +wildmenu +windows +writebackup +X11 -xfontset +xim +xsmp_interact +xterm_clipboard -xterm_save
        system vimrc file: "/etc/vimrc"
        user vimrc file: "$HOME/.vimrc"
        user exrc file: "$HOME/.exrc"
        system gvimrc file: "/etc/gvimrc"
        user gvimrc file: "$HOME/.gvimrc"
        system menu file: "$VIMRUNTIME/menu.vim"
        fall-back for $VIM: "/usr/share/vim"
        Compilation: gcc -c -I. -Iproto -DHAVE_CONFIG_H -DFEAT_GUI_GTK -pthread -I/usr/include/gtk-2.0 -I/usr/lib/gtk-2.0/include -I/usr/include/atk-1.0 -I/usr/include/cairo -I/usr/include/gdk-pixbuf-2.0 -I/usr/include/pango-1.0 -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/freetype2 -I/usr/include/libpng15 -I/usr/local/include -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector --param=ssp-buffer-size=4 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1
        Linking: gcc -L. -Wl,-O1,--sort-common,--as-needed,-z,relro -rdynamic -Wl,-export-dynamic -Wl,-E -Wl,-rpath,/usr/lib/perl5/core_perl/CORE -Wl,-O1,--sort-common,--as-needed,-z,relro -L/usr/local/lib -Wl,--as-needed -o vim -lgtk-x11-2.0 -lgdk-x11-2.0 -latk-1.0 -lgio-2.0 -lpangoft2-1.0 -lpangocairo-1.0 -lgdk_pixbuf-2.0 -lcairo -lpango-1.0 -lfreetype -lfontconfig -lgobject-2.0 -lglib-2.0 -lSM -lICE -lXt -lX11 -lXdmcp -lSM -lICE -lm -lncurses -lnsl -lacl -lattr -lgpm -ldl -L/usr/lib -llua -Wl,-E -Wl,-rpath,/usr/lib/perl5/core_perl/CORE -Wl,-O1,--sort-common,--as-needed,-z,relro -fstack-protector -L/usr/local/lib -L/usr/lib/perl5/core_perl/CORE -lperl -lnsl -ldl -lm -lcrypt -lutil -lpthread -lc -L/usr/lib/python2.7/config -lpython2.7 -lpthread -ldl -lutil -lm -Xlinker -export-dynamic -lruby -lpthread -lrt -ldl -lcrypt -lm -L/usr/lib

    Can you tell me how to get my Ctrl-K digraphs working?

    EDIT: The output of :verbose set enc? fenc? is:

        encoding=utf-8
        Last set from ~/.vimrc
        fileencoding=
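
    Since the build clearly includes +digraphs, a few runtime checks (standard Vim ex-commands) can narrow down whether a mapping or option is getting in the way of Ctrl-K:

        :echo has('digraphs')   " 1 means digraph support is compiled in
        :digraphs               " lists digraphs; P* should map to Π (0x03A0)
        :verbose imap <C-K>     " reveals any insert-mode mapping shadowing Ctrl-K
        :verbose set digraph?   " the option only affects the Backspace method, not Ctrl-K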

    Read the article

  • Simplification of Apache+Subversion multidirectory configuration

    - by Reinderien
    Hello. With your excellent advice, I've finally pieced together this functional Apache configuration for my Subversion service:

        # Macro to make an SVN repo set
        <Macro SVNDir $user>
            <Location /svn/$user>
                # Mandatory HTTPS, log in using Active Directory
                SSLRequireSSL
                AuthPAM_Enabled on
                AuthType Basic
                AuthBasicAuthoritative off
                AuthName "PAM"
                Require user AD\$user
                # Needed to squash spurious error messages
                AuthUserFile /dev/null
                # SVN stuff
                DAV svn
                SVNParentPath /var/www/svn/$user
            </Location>
        </Macro>

        # List of accounts
        Use SVNDir user1
        Use SVNDir user2
        # ...

    It works, but it isn't optimal. I'd like to somehow redo this so that it just scans the list of directories in /var/www/svn and automatically does this for each of them. Is that possible? Thanks.
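
    Apache itself will not glob the directory, but a small shell loop can regenerate the account list from /var/www/svn and write it to a file pulled in with Include (a sketch; the output path is a placeholder):

        #!/bin/sh
        OUT=/etc/apache2/conf.d/svn-users.conf   # placeholder path, Include'd from the main config
        : > "$OUT"
        for d in /var/www/svn/*/; do
            # Emit one "Use SVNDir <name>" line per repository directory
            echo "Use SVNDir $(basename "$d")" >> "$OUT"
        done
        # Then reload Apache, e.g. apache2ctl graceful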

    Read the article

  • What are the default/recommended access rights for %ALLUSERSPROFILE%?

    - by RED SOFT ADAIR-StefanWoe
    We have a Windows application that reads and writes some data for all users. We place it at %ALLUSERSPROFILE%\OurProgram\*.* We now encounter a few cases in larger companies where users do not have write permission to %ALLUSERSPROFILE%. Most of these cases are running Windows 7. The problem does not occur on a normal desktop installation of Windows 7, though. What is the recommended policy for this location? I have not found any "official" information about this. Is there a different location where all users have write permission?
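
    The usual approach is not to rely on %ALLUSERSPROFILE% itself being writable for everyone, but to create the application's own subfolder at install time (with installer/admin rights) and grant Users modify rights on just that folder. A sketch with icacls (S-1-5-32-545 is the well-known SID for BUILTIN\Users; the folder name matches the one above):

        mkdir "%ALLUSERSPROFILE%\OurProgram"
        rem Grant BUILTIN\Users Modify, inherited by new files and subfolders
        icacls "%ALLUSERSPROFILE%\OurProgram" /grant *S-1-5-32-545:(OI)(CI)M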

    Read the article

  • Nginx proxy SOAP request

    - by user2606078
    I am looking for the right way to accomplish the following: there is an app that has URL (1) hardcoded, with no way/time to change it in the source:

        http://dev.server.com/example.com/admin/soap/action/index?pr=1

    and it should use (and get its response from) URL (2):

        http://example.com/admin/soap/action/index?pr=1

    What should I configure in the nginx conf on dev.server.com (Apache is available as a backup) so that when the app asks for URL (1), it gets the answer from URL (2)? On dev.server.com, Apache has the virtual host dev.server.com enabled. I've also tried to proxy in Apache instead of nginx by using ProxyPass:

        <Directory /var/www/dev>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride all
            Order allow,deny
            allow from all
        </Directory>

        <Location /example.com/admin/soap>
            ProxyPass http://example.com/admin/soap
        </Location>
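
    A minimal nginx sketch of that mapping (assuming a server block for dev.server.com, and that example.com resolves from the box): proxy the hardcoded prefix to the real host and hand it the correct Host header:

        server {
            listen 80;
            server_name dev.server.com;

            location /example.com/admin/soap/ {
                # Replace the matched prefix and forward to the real service
                proxy_pass http://example.com/admin/soap/;
                proxy_set_header Host example.com;
            }
        }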

    Read the article

  • Lots of dropped packets when tcpdumping on a busy interface

    - by Frands Hansen
    My challenge: I need to do tcpdumping of a lot of data, from 2 interfaces left in promiscuous mode that are able to see a lot of traffic. To sum it up:

        Log all traffic in promiscuous mode from 2 interfaces
        Those interfaces are not assigned an IP address
        pcap files must be rotated per ~1G
        When 10 TB of files are stored, start truncating the oldest

    What I currently do: right now I use tcpdump like this:

        tcpdump -n -C 1000 -z /data/compress.sh -i any -w /data/livedump/capture.pcap $FILTER

    The $FILTER contains src/dst filters so that I can use -i any. The reason for this is that I have two interfaces and I would like to run the dump in a single thread rather than two. compress.sh takes care of assigning tar to another CPU core, compressing the data, giving it a reasonable filename, and moving it to an archive location. I cannot specify two interfaces, thus I have chosen to use filters and dump from any interface. Right now I do not do any housekeeping, but I plan on monitoring disk usage, and when I have 100G left I will start wiping the oldest files; this should be fine.

    And now, my problem: I see dropped packets. This is from a dump that has been running for a few hours and collected roughly 250 gigs of pcap files:

        430083369 packets captured
        430115470 packets received by filter
        32057 packets dropped by kernel   <-- This is my concern

    How can I avoid so many packets being dropped?

    These things I already tried or looked at: I changed the values of /proc/sys/net/core/rmem_max and /proc/sys/net/core/rmem_default, which did indeed help; it took care of around half of the dropped packets. I have also looked at gulp. The problem with gulp is that it does not support multiple interfaces in one process, and it gets angry if the interface does not have an IP address. Unfortunately, that is a deal breaker in my case. The next problem is that when the traffic flows through a pipe, I cannot get the automatic rotation going. Getting one huge 10 TB file is not very efficient, and I don't have a machine with 10 TB+ of RAM that I can run wireshark on, so that's out. Do you have any suggestions? Maybe even a better way of doing my traffic dump altogether.
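
    Beyond the sysctl changes already made, libpcap's own capture buffer can also be enlarged from the tcpdump command line (-B takes a size in KiB and is available in tcpdump 4.x). The values below are illustrative, not tuned:

        # Raise the kernel receive buffer ceilings (illustrative values)
        sysctl -w net.core.rmem_max=33554432
        sysctl -w net.core.rmem_default=33554432
        # Ask libpcap for a 32 MiB capture buffer for this dump
        tcpdump -n -B 32768 -C 1000 -z /data/compress.sh -i any \
            -w /data/livedump/capture.pcap $FILTER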

    Read the article

  • No input file specified with nginx

    - by user66700
    I'm getting "No input file specified." when I attempt to browse to the phpmyadmin domain, not sure what I'm doing wrong.. using both php-fpm and php-cgi, php-fpm is currently working another directory fine..Had to change the port number to 8888 since -fpm was already using 9000 http://pastebin.com/kdEckiL3 from nginx.conf: server { listen 80; server_name phpmyadmin.domain.com; access_log /home/fanboy/logs/phpmyadmin.access_log; error_log /home/fanboy/logs/phpmyadmin.error_log; location / { root /usr/share/phpmyadmin; index index.php; } location ~ \.php$ { fastcgi_pass 127.0.0.1:8888; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name; include /usr/local/nginx/conf/fastcgi.conf; } }

    Read the article

  • How to use mod_proxy to send Apache's index to Tomcat's ROOT while still being able to browse my other Apache sites

    - by Dagvadorj
    I am trying to make my Tomcat application (deployed at ROOT) viewable from Apache on port 80. To do this I used mod_proxy, since mod_jk made me try harder. I used something like this in httpd.conf:

        <Location http://www.example.com>
            Order deny,allow
            Allow from all
            PassProxy http://localhost:8080/
            PassProxyReverse http://localhost:8080/
        </Location>

        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>

    And now I cannot retrieve my previous sites on Apache, which were running prior to this configuration. How can I have both running?
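
    For reference, the directive names are ProxyPass/ProxyPassReverse (not PassProxy), and Location takes a URL path, not a full URL. A sketch that proxies the root to Tomcat while exempting the existing Apache content (the /static path is a placeholder for whatever should stay on Apache; exclusions must precede the general ProxyPass):

        ProxyRequests Off
        ProxyPreserveHost On

        # Keep existing Apache content out of the proxy (placeholder path)
        ProxyPass /static !

        ProxyPass        / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/

        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>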

    Read the article
