Search Results

Search found 19191 results on 768 pages for 'core location'.


  • apache address based access control

    - by stijn
    I have an Apache instance serving different locations, e.g.

        https://host.com/jira
        https://host.com/svn
        https://host.com/websvn
        https://host.com/phpmyadmin

    Each of these has access control rules based on IP address/hostname. Some of them use the same configuration though, so I have to repeat the same rules each time:

        Order Deny,Allow
        Deny from All
        Allow from 10.35 myhome.com mycollegueshome.com

    Is there a way to make these reusable so that I don't have to change each instance every time something changes? I.e., can I write this once, then use it for a couple of locations? Using SetEnvIf maybe? It would be nice if I could do something like this pseudo-config:

        <myaccessrule>
            Order Deny,Allow
            Deny from All
            Allow from 10.35 myhome.com mycollegueshome.com
        </myaccessrule>

        <Proxy /jira*>
            AccessRule = myaccessrule
        </Proxy>

        <Location /svn>
            AccessRule = myaccessrule
        </Location>

        <Directory /websvn>
            AccessRule = myaccessrule
        </Directory>
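
    One way to get close to that pseudo-config is mod_macro (bundled with recent Apache 2.4 releases, available as a separate module for older versions). A minimal sketch, assuming the module is loaded; the macro name is arbitrary:

        <Macro MyAccessRule>
            Order Deny,Allow
            Deny from All
            Allow from 10.35 myhome.com mycollegueshome.com
        </Macro>

        <Location /svn>
            Use MyAccessRule
        </Location>

        <Directory /websvn>
            Use MyAccessRule
        </Directory>

    A plain Include of a shared snippet file inside each <Location>/<Directory> block is the other common approach.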

    Read the article

  • Nginx won't send POST to fastcgi backend, but GET works fine?

    - by xyld
    Not sure why, but it is happy sending a GET to the fastcgi backend (Mercurial hgwebdir in this case), but simply resorts to the filesystem if the request is a POST. Relevant parts of nginx.conf:

        location / {
            root /var/www/htdocs/;
            index index.html;
            autoindex on;
        }

        location /hg {
            fastcgi_pass unix:/var/run/hg-fastcgi.socket;
            include fastcgi_params;

            if ($request_uri ~ ^/hg([^?#]*)) {
                set $rewritten_uri $1;
            }

            limit_except GET {
                allow all;
                deny all;
                auth_basic "hg secured repos";
                auth_basic_user_file /var/trac.htpasswd;
            }

            fastcgi_param SCRIPT_NAME "/hg";
            fastcgi_param PATH_INFO $rewritten_uri;

            # for authentication
            fastcgi_param AUTH_USER $remote_user;
            fastcgi_param REMOTE_USER $remote_user;

            #fastcgi_pass_header Authorization;
            #fastcgi_intercept_errors on;
        }

    GETs work fine, but a POST delivers this error to the error_log:

        2010/05/17 14:12:27 [error] 18736#0: *1601 open() "/usr/html/hg/test" failed (2: No such file or directory), client: XX.XX.XX.XX, server: domain.com, request: "POST /hg/test HTTP/1.1", host: "domain.com"

    What could possibly be the issue? I'm trying to allow read-only access via GETs to the page, but require authorization when using hg push to the same URL, which sends a POST request.
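
    For comparison, a minimal sketch of how limit_except is usually laid out for the "read-only GET, authenticated POST" split (in the original block, allow all is evaluated before deny all, so the deny never takes effect); socket path and realm are taken from the question:

        location /hg {
            include fastcgi_params;
            fastcgi_pass unix:/var/run/hg-fastcgi.socket;

            # Anything other than GET (and HEAD, which allowing GET implies)
            # must pass basic authentication.
            limit_except GET {
                auth_basic "hg secured repos";
                auth_basic_user_file /var/trac.htpasswd;
            }
        }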

    Read the article

  • XCOPY access denied error on My Documents folder

    - by Ryan M.
    Here's the situation. We have a file server set up at \\fileserver that has a folder for every user at \\fileserver\users\first.last. I'm running an xcopy command to back up the My Documents folder from their computer to their personal folder. The command I'm running is:

        xcopy "C:\Users\%username%\My Documents\*" "\\fileserver\users\%username%\My Documents" /D /E /O /Y /I

    I've been silently running this script at login without the users knowing, just so I can get it to work before telling them what it does. After I discovered it wasn't working, I manually ran the batch script that executes the xcopy command on one of their computers and got an access denied error. I then logged into a test account on my own computer and got the same error.

    I checked all the permissions for the share and security and they're set to how I want them. I can manually browse to that folder and create new files. I can drag and drop items into the \\fileserver\users\first.last location and it works great. So I tried something else to find the source of the access denied problem: I ran an xcopy command to copy the My Documents folder to a different location on the same machine and I still got the access denied error! So xcopy seems to be denied access whenever it tries to copy the My Documents folder.

    Any suggestions on how I can get this working? Anyone know the reason behind the access denied error?
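
    A possible explanation, assuming the clients run Vista or Windows 7: on those systems "C:\Users\<name>\My Documents" is only a hidden compatibility junction carrying a Deny read ACE, so directory enumeration of it fails and xcopy reports access denied, while the real folder is "Documents". A sketch of the same command pointed at the real folder (destination folder name adjusted to match):

        xcopy "C:\Users\%username%\Documents\*" "\\fileserver\users\%username%\Documents" /D /E /O /Y /I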

    Read the article

  • switchover in postgresql

    - by user1010280
    I am using PostgreSQL 9.0 with streaming replication. During switchover I follow these steps:

        1. Get the server timestamp on the primary.
        2. Get the current log position on the primary.
        3. Set/verify the log location.
        4. Verify the transaction received location.
        5. Shut down the DB on production.
        6. Synchronize the transaction logs from PR to DR.
        7. Trigger a failover on the DR database by creating the trigger file specified in recovery.conf.
        8. Verify the DB mode on DR.
        9. Copy the control file from DR to the primary.
        10. Copy the temporary stats file from DR to the primary.
        11. Copy the history file from DR to the primary.
        12. Create the recovery.conf file.
        13. Start the database in standby mode on the primary.
        14. Verify the DB mode on PR.

    At step 6, I have to copy the last WAL generated on the primary to the standby and sync both PR and standby, but this takes time because the copy is remote. Postgres keeps searching for the WAL for a long time and after that it stops the server. So I want to know: is there any way to ask Postgres to stop searching for (locating) the WAL after shutdown? Postgres tries to locate this WAL every 5 seconds. Please reply as soon as possible; it's urgent.

    Read the article

  • nginx reverse proxy to apache mod_wsgi doesn't work

    - by user11243
    I'm trying to run a Django site with Apache mod_wsgi, with nginx as the front-end to reverse proxy into Apache. In my Apache ports.conf file:

        NameVirtualHost 192.168.0.1:7000
        Listen 192.168.0.1:7000

        <VirtualHost 192.168.0.1:7000>
            DocumentRoot /var/apps/example/
            ServerName example.com

            WSGIDaemonProcess example
            WSGIProcessGroup example

            Alias /m/ /var/apps/example/forum/skins/
            Alias /upfiles/ /var/apps/example/forum/upfiles/

            <Directory /var/apps/example/forum/skins>
                Order deny,allow
                Allow from all
            </Directory>

            WSGIScriptAlias / /var/apps/example/django.wsgi
        </VirtualHost>

    In my nginx config:

        server {
            listen 80;
            server_name example.com;

            location / {
                include /usr/local/nginx/conf/proxy.conf;
                proxy_pass http://192.168.0.1:7000;
                proxy_redirect default;
                root /var/apps/example/forum/skins/;
            }

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
        }

    After restarting both Apache and nginx, nothing works: example.com simply hangs or serves the index.html in my /var/www/ folder. I'd appreciate any advice to point me in the right direction. I've tried several tutorials online to no avail.

    Read the article

  • Setting up SSL on JBoss 5

    - by socal_javaguy
    How can I enable SSL on JBoss 5 on a Linux (Red Hat - Fedora 8) box? What I've done so far:

        1. Created a test keystore.
        2. Placed the newly generated server.keystore in $JBOSS_HOME/server/default/conf.
        3. Made the following change in the server.xml in $JBOSS_HOME/server/default/deploy/jbossweb.sar to include this:

        <!-- SSL/TLS Connector configuration using the admin devl guide keystore -->
        <Connector protocol="HTTP/1.1" SSLEnabled="true"
            port="8443" address="${jboss.bind.address}"
            scheme="https" secure="true" clientAuth="false"
            keystoreFile="${jboss.server.home.dir}/conf/server.keystore"
            keystorePass="mypassword" sslProtocol = "TLS" />

    The problem is that when JBoss starts it logs this exception (during start-up), although I am still able to view everything under http://localhost:8080/:

        03:59:54,780 ERROR [Http11Protocol] Error initializing endpoint
        java.io.IOException: Cannot recover key
            at org.apache.tomcat.util.net.jsse.JSSESocketFactory.init(JSSESocketFactory.java:456)
            at org.apache.tomcat.util.net.jsse.JSSESocketFactory.createSocket(JSSESocketFactory.java:139)
            at org.apache.tomcat.util.net.JIoEndpoint.init(JIoEndpoint.java:498)
            at org.apache.coyote.http11.Http11Protocol.init(Http11Protocol.java:175)
            at org.apache.catalina.connector.Connector.initialize(Connector.java:1029)
            at org.apache.catalina.core.StandardService.initialize(StandardService.java:683)
            at org.apache.catalina.core.StandardServer.initialize(StandardServer.java:821)
            at org.jboss.web.tomcat.service.deployers.TomcatService.startService(TomcatService.java:313)

    I do know that there's more to be done to enable full SSL client authentication...
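
    "Cannot recover key" from the JSSE socket factory usually points at the key password not matching the keystore password; the connector above only supplies keystorePass, so the two have to be identical. A sketch of regenerating the test keystore with matching passwords using the JDK keytool (alias and DN values are placeholders):

        keytool -genkey -alias jboss -keyalg RSA \
            -keystore server.keystore \
            -storepass mypassword -keypass mypassword \
            -dname "CN=localhost, OU=dev, O=example, C=US"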

    Read the article

  • Recommend a free temperature-monitoring utility for cores + video card, on Vista?

    - by smci
    Looking for your recommendations for a free temperature-monitoring utility for my PC (Core 2) and graphics card, on Vista. (Question reposted with the hyperlinks now I have 10 reputation.) I don't want all the geeky details, I don't overclock, and I don't see the need to mess with my fan speeds or motherboard settings. I just want something fairly basic to help with basic troubleshooting of intermittent overheats on the video card and/or mobo:

        - must run on Windows Vista (yes, don't laugh)
        - ideally displays temperature when minimized to the toolbar, and/or automatically alerts me when the temperature on either core or the video card exceeds a threshold
        - ideally measures the temperature of the video card and system as well, not just the cores; HDD temperature is not necessary, I think
        - logging is nice, graphs are also nice
        - portability to Linux and Mac is nice

    Apparently Everest is the best paid option, but I'm not prepared to spend $40. I found the following free options, but no head-to-head at-a-glance comparison:

        - CoreTemp (only does cores, not video card?)
        - Open Hardware Monitor (nice graphs, displays when minimized to toolbar, no alerts)
        - RealTemp (has alerts, works minimized, lightweight install)
        - HWMonitor from CPUID (no alerts, CNET: "[free version is] simple but effective")
        - CPUCool (not free: 21-day trialware, then $18)
        - SpeedFan from Almico (too geeky, detail overload; CNET: "most users won't be able to make head or tail of the data this utility provides")
        - Motherboard Monitor (CNET: not recommended, requires expert knowledge of your mobo, dangerous)
        - Intel Thermal Analysis Tool (only does cores, not video card? has logging)

    Useful discussions I found: hardwarecanucks.com, superuser.com 1, 2, forums.techarena.in

    (Update: I downloaded Real Temp 3.60 and it meets all my needs; the customizable alert temperature is great. Open Hardware Monitor seems to be the other one that mostly meets my needs, except no alerts; but it is portable. I tried SpeedFan but the interface is very cluttered, with too much unnecessary detail (it needs a Basic/Advanced mode and a revamp of the interface). The answer to my underlying issue is the nVidia Geforce LE 7500 video card, which runs very hot.)

    Read the article

  • Mechanism behind user forwarding in ScriptAliasMatch

    - by jolivier
    I am following this tutorial to set up gitolite, and at some point the following ScriptAliasMatch is used:

        ScriptAliasMatch \
            "(?x)^/(.*/(HEAD | \
                        info/refs | \
                        objects/(info/[^/]+ | \
                                 [0-9a-f]{2}/[0-9a-f]{38} | \
                                 pack/pack-[0-9a-f]{40}\.(pack|idx)) | \
                        git-(upload|receive)-pack))$" \
            /var/www/bin/gitolite-suexec-wrapper.sh/$1

    And the target script starts with:

        USER=$1

    So I am guessing this is used to forward the user name from Apache to the suexec script (which indeed requires it). But I cannot see how this is done. The ScriptAliasMatch documentation makes me think that the /$1 will be replaced by the first matching group of the regexp before it. For me it captures from (?x)^/(.* to ))$, so there is nothing about a user here.

    My underlying problem is that USER is empty in my script, so I get no authorizations in gitolite. I give my username to Apache via basic authentication:

        <Location />
            # Crowd auth
            AuthType Basic
            AuthName "Git repositories"
            ...
            Require valid-user
        </Location>

    defined just under the previous ScriptAliasMatch. So I am really wondering how this is supposed to work and what part of the mechanism I missed, so that I don't retrieve the user in my script.
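
    For reference, a stripped-down, hypothetical illustration of how capture groups map to $N in a ScriptAliasMatch target: $1 is whatever the first parenthesised group matched, counted by opening parenthesis, so in the gitolite rule above it is the repository path portion of the URL rather than a user name:

        # For a request to /git/repo.git/info/refs, $1 here is "repo.git/info/refs"
        ScriptAliasMatch "^/git/(.*)$" /var/www/bin/wrapper.sh/$1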

    Read the article

  • Windows VPN for remote site connection drawbacks

    - by Damo
    I'm looking for some thoughts on a particular way of setting up an estate of machines. We have a requirement to install machines into unmanned, remote locations. These machines will auto-login and perform tasks controlled from a central server. In order to manage patching, AV, updates etc., I want these machines to be joined to a dedicated domain for this estate. Some of the locations will only have 3G connectivity (via other hardware); others will be located on customer premises in internal networks. The central server (of ours) and the domain controller will be on a public WAN.

    I see two ways of facilitating this:

        1. Install a router at each location and have a site-to-site VPN between the remote device and the data centre where the servers are located.
        2. Have the remote machine dial up and authenticate via a Windows VPN connection to the DC via RAS.

    Option one is more costly to set up and has a higher operational cost. It also offers better diagnostics if the remote PC goes down. Option two works well but is solely dependent on the VPN connection being made before any communication can reach the remote machine. In a simple test, I got a Windows 7 machine to dial a VPN prior to authentication to a domain, then automatically log in to the machine using domain credentials. If the VPN connection drops, it redials. I can also create a timed task to auto-connect every hour in case of other issues.

    I'd like to know: why (if at all) is operating a remote network of devices located in various out-of-band locations in this way a bad idea? Consider 300-400 remote machines, all at different sites. I'd rather have 400 VPN connections to a 2008 server than 400 routers, but I'd like to know other opinions on this.

    Read the article

  • Distributing processing for an application that wasn't designed with that in mind

    - by Tim
    We've got an application at work that just sits and does a whole bunch of iterative processing on some data files to perform some simulations. This is done by an "old" Win32 application that isn't multi-processor aware, so new(ish) computers and workstations are mostly sitting idle running this application. However, since it's installed by a typical Windows InstallShield installer, I can't seem to install and run multiple copies of the application. The work can be split up manually before processing, enabling it to be distributed across multiple machines, but we still can't take advantage of multi-core CPUs. The results can be joined back together after processing to make a complete simulation.

    Is there a product out there that would let me "compartmentalize" an installation (or 4) so I can take advantage of a multi-core CPU? I had thought of using MS SoftGrid, but I believe that still depends on a remote server to do the heavy lifting (though please correct me if I'm wrong).

    Furthermore, is there a way I can distribute the workload off the one machine? So an input could be split into 50 chunks, handed out to 50 machines, and worked on? All without really changing the initial application? In a perfect world, I'd get the application to take advantage of a desktop grid (BOINC), but like most "mission critical corporate applications", the need is there, but the money is not.

    Thank you in advance (and sorry if this isn't appropriate for serverfault).

    Read the article

  • OpenLDAP ACLs are not working

    - by Dr I
    First things first, I'm currently working with OpenLDAP slapd 2.4.36 on Fedora release 19 (Schrödinger's Cat). I've just installed openldap with yum and my configuration is the following:

        ##### OpenLDAP Default configuration #####
        #
        ##### OpenLDAP CORE CONFIGURATION #####
        include     /etc/openldap/schema/core.schema
        include     /etc/openldap/schema/cosine.schema
        include     /etc/openldap/schema/inetorgperson.schema
        include     /etc/openldap/schema/nis.schema

        pidfile     /var/lib/ldap/slapd.pid
        loglevel    trace

        ##### Default Schema #####
        database    mdb
        directory   /var/lib/ldap/
        maxsize     1073741824

        suffix      "dc=domain,dc=tld"
        rootdn      "cn=root,dc=domain,dc=tld"
        rootpw      {SSHA}SECRETP@SSWORD

        ##### Default ACL #####
        access to attrs=userpassword
            by self write
            by group.exact="cn=administrators,ou=builtin,ou=groups,dc=domain,dc=tld" write
            by anonymous auth
            by * none

    I launch my OpenLDAP service using:

        /usr/sbin/slapd -u ldap -h ldapi:/// ldap:/// -f /etc/openldap/slapd.conf

    As you can see, it's a pretty simple ACL which aims to give the owner and a specific administrators group access to the userPassword attribute, allow anonymous access for authentication only, and refuse access to everyone else.

    The problem is: even using a valid user with the correct password, my ldapsearch ends with zero entries retrieved from the directory, plus I get a strange response on the result line:

        # search result
        search: 2
        result: 32 No such object

        # numResponses: 1

    Here is the ldapsearch request:

        ldapsearch -H ldap.domain.tld -W -b dc=domain,dc=tld -s sub -D cn=user,ou=service,ou=employees,ou=users,dc=domain,dc=tld

    I did not specify any filter, as I want to check that ldapsearch correctly prints only the allowed attributes.
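
    One thing worth noting, shown as a sketch rather than a verified fix: an attribute-specific rule on its own leaves every other entry and attribute to slapd's implicit defaults, so a catch-all rule is commonly added after it so that authenticated users can still read the rest of the tree:

        access to attrs=userpassword
            by self write
            by group.exact="cn=administrators,ou=builtin,ou=groups,dc=domain,dc=tld" write
            by anonymous auth
            by * none

        access to *
            by users read
            by * none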

    Read the article

  • nginx with ssl: I get a 403 and a "directory index of '...dir...' is forbidden" log message; works fine with unencrypted connection

    - by user72464
    As mentioned in the title, I had nginx working fine with my Rails app until I tried to add the SSL server. The unencrypted connection still works, but SSL always returns a 403 page with the following line in the error log:

        directory index of "/home/user/rails/" is forbidden, client: [my ip], server: _, request: "GET / HTTP/1.1", host: "[server ip]"

    Below is my nginx.conf server block:

        server {
            listen 80;
            listen 443 ssl;
            ssl_certificate /etc/ssl/server.crt;
            ssl_certificate_key /etc/ssl/server.key;

            client_max_body_size 4G;
            keepalive_timeout 5;

            root /home/user/rails;
            try_files $uri/index.html $uri.html $uri @app;

            location @app {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_pass http://0.0.0.0:8080;
            }

            error_page 500 502 503 504 /500.html;
            location = /500.html {
                root /home/user/rails;
            }
        }

    The /home/user/rails directory and its parent are readable by everyone and belong to the user nginx. The certificate and key file have the following rights:

        -rw-r--r-- 1 nginx root 830 Nov  8 09:09 server.crt
        -rw--w---- 1 nginx root 887 Nov  8 09:09 server.key

    Any clue?

    Read the article

  • Generating alerts from ossec (server-agent) model

    - by batman
    I'm very new to OSSEC. I use a server-agent model. I wish to generate an alert for the following action (on the agent side):

        1) Sample alert for deletion of logs

    I added the rules for this in the agent's ossec.conf using <localfile> tags, like this:

        <localfile>
            <log_format>syslog</log_format>
            <location>/var/log/syslog</location>
        </localfile>

    In my server's ossec.conf I added the following:

        <global>
            <email_notification>yes</email_notification>
            <email_to>xxxx@xxxxxx</email_to>
            <smtp_server>smtp.gmail.com</smtp_server>
            <email_from>xxxx@xxx</email_from>
        </global>

    And I restarted my server. Then I tried to delete the agent's syslog file using rm syslog, but no alert was triggered. Where am I making the mistake?
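
    Two stanzas that are usually involved here, sketched with illustrative values (not taken from the question): <localfile> only tails entries written into the log, while changes to or removal of the file itself are the territory of syscheck, and e-mails are only sent once an alert reaches email_alert_level on the server.

        <!-- agent side: have syscheck watch the log directory for changes/removals -->
        <syscheck>
            <frequency>7200</frequency>
            <directories check_all="yes">/var/log</directories>
        </syscheck>

        <!-- server side: alert thresholds; mail goes out at level 7 and above -->
        <alerts>
            <log_alert_level>1</log_alert_level>
            <email_alert_level>7</email_alert_level>
        </alerts>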

    Read the article

  • Apache Mod SVN Access Forbidden

    - by Cerin
    How do you resolve the error svn: access to '/repos/!svn/vcc/default' forbidden? I recently upgraded a Fedora 13 server to 16, and now I'm trying to debug an access error with a Subversion server running on Apache with mod_dav_svn. Running:

        svn ls http://myserver/repos/myproject/trunk

    lists the correct files. But when I go to commit, I get the error:

        svn: access to '/repos/!svn/vcc/default' forbidden

    My Apache virtual host for svn is:

        <VirtualHost *:80>
            ServerName svn.mydomain.com
            ServerAlias svn
            DocumentRoot "/var/www/html"
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory "/var/www/html">
                Options Indexes FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            <Location /repos>
                Order allow,deny
                Allow from all

                DAV svn
                SVNPath /var/svn/repos
                SVNAutoversioning On

                # Authenticate with Kerberos
                AuthType Kerberos
                AuthName "Subversion Repository"
                KrbAuthRealms mydomain.com
                Krb5KeyTab /etc/httpd/conf/krb5.HTTP.keytab

                # Get people from LDAP
                AuthLDAPUrl ldap://ldap.mydomain.com/ou=people,dc=mydomain,dc=corp?uid

                # For any operations other than these, require an authenticated user.
                <LimitExcept GET PROPFIND OPTIONS REPORT>
                    Require valid-user
                </LimitExcept>
            </Location>
        </VirtualHost>

    What's causing this error?

    EDIT: In my /var/log/httpd/error_log I'm seeing a lot of these:

        [Fri Jun 22 13:22:51 2012] [error] [client 10.157.10.144] ModSecurity: Warning. Operator LT matched 20 at TX:inbound_anomaly_score. [file "/etc/httpd/modsecurity.d/base_rules/modsecurity_crs_60_correlation.conf"] [line "31"] [msg "Inbound Anomaly Score (Total Inbound Score: 15, SQLi=, XSS=): Method is not allowed by policy"] [hostname "svn.mydomain.com"] [uri "/repos/!svn/act/0510a2b7-9bbe-4f8c-b928-406f6ac38ff2"] [unique_id "T@Sp638DCAEBBCyGfioAAABK"]

    I'm not entirely sure how to read this, but I'm interpreting "Method is not allowed by policy" as meaning that there's some security-related Apache module that might be blocking access. How do I change this?
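
    If ModSecurity with the Core Rule Set (which the file path in the log suggests) is indeed what rejects the WebDAV methods an svn commit uses, one common carve-out is to disable rule processing for just the repository location; a sketch, on the assumption that the directive is added to the existing block:

        <Location /repos>
            # Turn off ModSecurity rule processing for the SVN location only;
            # the rest of the server keeps its normal protection.
            SecRuleEngine Off
        </Location>

    The alternative is to extend the CRS allowed-methods list to include the DAV verbs rather than switching the engine off.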

    Read the article

  • Apache inflate application/ with mod_filter

    - by BGT
    I need to prevent PDF objects from being gzipped. Really, this only needs to happen if the request is from the Mozilla browser (but since I can't get something as seemingly simple as no-gzip for application/pdf, I figure it's wiser to start there). From looking at the Apache documentation on mod_filter, I've got the following:

        <Location />
            FilterDeclare gzipDeflate CONTENT_SET
            FilterDeclare gzipInflate CONTENT_SET
            FilterProvider gzipDeflate deflate req=User-Agent $Mozilla/
            FilterProvider gzipInflate inflate resp=Content-Type $application/
            FilterChain +gzipDeflate +gzipInflate
        </Location>

    From my testing, the gzipDeflate filter is doing its job and all the pages whose Content-Type does not start with application are being gzipped. But the gzipInflate doesn't seem to be working at all. I've inspected the response in Firebug and verified that the Content-Type being sent down is application/pdf.

    I'll go ahead and ask a potentially stupid question though: the response's Content-Type header in its entirety reads "application/pdf; charset=Windows-1252". Does that make any sort of difference, or is $application/ presumably enough to catch that?

    Any help is greatly appreciated. One other point: the URL that returns the PDF object does not have the .pdf extension. The PDF itself is stored in an Oracle database as a blob and appended to the page when appropriate (all the URLs in the system use the same baseline). This was part of an original inquiry by a helpful member at stackoverflow who pointed me towards mod_filter and suggested I post the question here.
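
    A different way to get the "never gzip PDFs" behaviour, sketched on the assumption that compressing only whitelisted text types is acceptable: drive mod_deflate by response type instead of trying to inflate afterwards, so application/pdf is simply never compressed regardless of the User-Agent.

        # Compress only these response types; anything else (including
        # application/pdf, with or without a charset suffix) passes through untouched.
        AddOutputFilterByType DEFLATE text/html text/plain text/css text/xml application/javascript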

    Read the article

  • Terminate child processes on ctrl-c

    - by jackweirdy
    In Tiny Core Linux, I have the following script:

        #!/bin/sh
        # ~/.X.d/freerdp.sh

        rdp() {
            while true
            do
                xfreerdp -f [IP Address]
            done
        }

        rdp &

    It's pretty simple; when X starts up and checks the .X.d directory (as is the case in Tiny Core) it finds and executes this script. The script starts up freerdp and keeps a connection open to the server by restarting it whenever it closes. As you can see from the rdp & line, the function is run in the background to allow X to continue its startup routine.

    The problem is that whenever I cancel X with Ctrl-Alt-Backspace, the rdp process doesn't die. I'm looking for a way to kill the process as soon as X finishes, either through:

        A) a script, executed on X closing, which kills the process, or
        B) modifying the script to check the return value of the xfreerdp command.

    NB - if the solution does check the return value, it must only end if the command fails to open the X display. For that reason, if you could point me to a reference for xfreerdp return values I'd be grateful.
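
    One sketch of option B that sidesteps xfreerdp's version-dependent exit codes entirely: after each exit, probe the display itself and only loop again while it can still be opened. Using xdpyinfo as the probe is an assumption; it may need to be installed as an extension on Tiny Core.

        #!/bin/sh
        # ~/.X.d/freerdp.sh

        rdp() {
            while true
            do
                xfreerdp -f [IP Address]
                # Stop looping once the X display is gone
                # (e.g. after Ctrl-Alt-Backspace kills X).
                if ! xdpyinfo >/dev/null 2>&1; then
                    break
                fi
            done
        }

        rdp &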

    Read the article

  • Accessing a shared folder in Windows Server 2008 R2.

    - by Triztian
    Hello all. It seems my involvement with computers has grown and I've found myself needing to access a shared folder on a server. I've read some documentation and managed to set up the folder as a share. For this I created a local group and, for now, just one local user that has access to the share. The folder is in the public user folder and its permissions should be (and I believe they are) read/write.

    The problem is that I can't connect from a remote machine; I mean I don't know how it should be accessed. The server has a public IP and we also use it as the host for our website, I don't know if that affects it though. The folder will be used as the "keeper" for the QuickBooks company files and has the database server manager installed. I've tried setting up a VPN connection to the server but had no success. The server has a domain name, "http://www.example.com", that redirects to our website; I am unsure if the share could be accessed that way. Also, the share has a location displayed when I right-click Properties.

    Here's what I've tried:

        1. Setting up a VPN connection (Windows Vista and 7): got to the point where I was asked for credentials and entered the user I created (which is not an admin), but I got a "Connection fail error 800". I suppose this is because in the domain field I entered the server's workgroup.
        2. Right-click, add network connection (Windows 7): went through the wizard until I reached the point of entering the location. I tried many things: the name in the share's properties (\\SOMETHING\Share), the http://www.example.com address, the IP address.

    I'm quite unfamiliar with this, so I have my guesses:

        - Since the group and user are local, they do not have access to the folder.
        - The firewall on the server is blocking my connection.

    Anyway, any help and guidance is truly appreciated.
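
    For plain file-share access without a VPN, the usual quick test from a remote Windows machine is to map the UNC path directly with the server-local account; a sketch with placeholder names (the trailing * prompts for the password). Note this needs TCP port 445 reachable on the server's public IP, which many networks and firewalls block:

        net use Z: \\SERVERNAME\ShareName /user:SERVERNAME\localusername *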

    Read the article

  • Nginx - Serve blank page on "Bad Gateway" error

    - by TheLittleCheeseburger
    Hello all. I want to use nginx as a simple reverse proxy, but if the server behind nginx is down I just want to display a blank page. For some reason this configuration isn't displaying a blank page on error 502, and I can't figure out why. Thanks for your help!

        user www-data;
        worker_processes 1;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
            use epoll;
            # multi_accept on;
        }

        http {
            keepalive_timeout 65;
            proxy_read_timeout 200;

            upstream tornado {
                server 127.0.0.1:8001;
            }

            server {
                listen 80;
                server_name www.something.com;

                location / {
                    error_page 502 = @blank;
                    proxy_pass http://tornado;
                }

                location @blank {
                    index index.html;
                    root /web/blank;
                }
            }
        }
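
    A variant that often behaves more predictably for this case, sketched with an assumed /web/blank/blank.html file: point error_page at a concrete internal URI rather than a named location, and add proxy_intercept_errors so a 502 returned by the backend itself is also replaced (when nginx cannot connect at all, it generates the 502 and error_page already applies).

        location / {
            proxy_pass http://tornado;
            proxy_intercept_errors on;
            error_page 502 /blank.html;
        }

        location = /blank.html {
            root /web/blank;
            internal;
        }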

    Read the article

  • What are these CPU cache settings? Snoop Filter, ACL prefetch, HW prefetch

    - by eater
    I was in my BIOS setup turning on VT-x support today and saw these other settings. A little googling indicates that they each seem to turn on some sort of optimization to do with the CPU's L2 cache. They were all turned off by default. The processor in question is an Intel Xeon quad-core 3.4GHz (X5492). My OS is Linux 2.6.35.10-74.fc14.x86_64 #1 SMP Thu Dec 23 16:04:50 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux. I have 4GB of RAM, if that matters. Here's what the BIOS manufacturer has to say:

        Snoop Filter - Enabling the snoop filter typically improves performance by reducing snoop traffic on the frontside bus in dual processor configurations.

    Well, I like the sound of improved performance. Why would the BIOS have this off by default? Or by "dual processor" do they not mean multi-core? Regardless, is there a downside if this is on?

        ACL Prefetch - When enabled, the Adjacent Cache Line Prefetcher fetches both cache lines that comprise a cache line pair when it determines required data is not currently in its cache. When disabled, the processor will only fetch the cache line required by the processor.

        HW Prefetch - Fetches an extra line of data into L2 from external memory.

    Both of these sound like optimizations that have some drawbacks. What are the reasons to turn them on? What are the reasons to leave them off? Why is the default off?

    Read the article

  • AJP Connector Apache-Tomcat with php and java application

    - by Safari
    I have a question about the proxy and AJP modules. On my machine I have an Apache web server and a Tomcat servlet container; my Java web application runs on Tomcat. On Apache I have some services and I can call them this way:

        http://myhost/service1
        http://myhost/service2
        http://myhost/service3

    I would like to configure an AJP connector to reach my Tomcat web application from Apache, i.e. use something like http://myhost to call the Tomcat webapp. So I configured Apache this way, and it gives me what I wanted: I can use http://myhost to view the Tomcat webapp through Apache.

        <VirtualHost *:80>
            ProxyRequests off
            ProxyPreserveHost On
            ServerAlias myserveralias

            ErrorLog logs/error.log
            CustomLog logs/access.log common

            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>

            ProxyPass /server-status !
            ProxyPass /balancer-manager !
            ProxyPass / balancer://mycluster/ stickysession=JSESSIONID nofailover=Off maxattempts=1

            <Proxy balancer://mycluster>
                BalancerMember ajp://myIp:8009 min=10 max=100 route=portale loadfactor=1
                ProxySet lbmethod=bytraffic
            </Proxy>

            <Location /balancer-manager>
                SetHandler balancer-manager
                Order deny,allow
                Allow from localhost
            </Location>

            LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            LogFormat "%{Referer}i -> %U" referer
            LogFormat "%{User-agent}i" agent
        </VirtualHost>

    But now I can't use the Apache services: if I use http://myhost/service1 I get an error because Apache tries to look for service1 in my Tomcat webapp. Is there a way to fix it?
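
    A sketch of the usual fix with mod_proxy exclusions, on the assumption that service1-3 are served directly by Apache: add a ProxyPass ... ! line per local path before the catch-all mapping. The existing /server-status and /balancer-manager exclusions work the same way, and ProxyPass directives are matched in order, so the most specific ones must come first.

        ProxyPass /server-status !
        ProxyPass /balancer-manager !
        ProxyPass /service1 !
        ProxyPass /service2 !
        ProxyPass /service3 !
        ProxyPass / balancer://mycluster/ stickysession=JSESSIONID nofailover=Off maxattempts=1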

    Read the article

  • Malicious content on server - next steps advice [closed]

    - by Under435
    Possible Duplicate: My server's been hacked EMERGENCY

    I just got an e-mail from my hosting company saying they received a report of malicious content being hosted on my VPS. I was unaware of this and started looking into it. I discovered a file called /var/www/mysite.com/osc.htm. Soon after, I discovered some weird PHP files, wp-includes.php and ndlist.php, both recognized as the PHP/WebShell.A.1 virus. I removed all these files, but I'm unsure of what to do next. Can anyone help me analyze the output below of sudo netstat -A inet -p -e and give advice on what's best to do next? Thanks very much in advance.

        Proto Recv-Q Send-Q Local Address           Foreign Address         State       User        Inode  PID/Program name
        tcp        0      0 localhost.localdo:mysql localhost.localdo:37495 TIME_WAIT   root        0      -
        tcp        0      1 mysite.com:50524        xnacreators.net:smtp    SYN_SENT    Debian-exim 69746  25848/exim4
        tcp        0      0 mysite.com:www          tha165.thehealtha:37065 TIME_WAIT   root        0      -
        tcp        0      0 localhost.localdo:37494 localhost.localdo:mysql TIME_WAIT   root        0      -
        udp        0      0 mysite.com:59447        merlin.ensma.fr:ntp     ESTABLISHED ntpd        3769   2522/ntpd
        udp        0      0 mysite.com:36432        beast.syus.org:ntp      ESTABLISHED ntpd        4357   2523/ntpd
        udp        0      0 mysite.com:48212        formularfetischiste:ntp ESTABLISHED ntpd        3768   2522/ntpd
        udp        0      0 mysite.com:46690        formularfetischiste:ntp ESTABLISHED ntpd        4354   2523/ntpd
        udp        0      0 mysite.com:35009        stratum-2-core-a.qu:ntp ESTABLISHED ntpd        4356   2523/ntpd
        udp        0      0 mysite.com:58702        stratum-2-core-a.qu:ntp ESTABLISHED ntpd        3770   2522/ntpd
        udp        0      0 mysite.com:49583        merlin.ensma.fr:ntp     ESTABLISHED ntpd        4355   2523/ntpd
        udp        0      0 mysite.com:56290        beast.syus.org:ntp      ESTABLISHED ntpd        3771   2522/ntpd

    Read the article

  • Guide To Setting Up And Accessing PhpMyAdmin On NGINX, Ubuntu 11.04, EC2 Remote MySQL Instance

    - by darkAsPitch
    I have set up a domain name on Amazon EC2 running Ubuntu 11.04, nginx and php5-fpm. The domain name works great; I have set up its own sites-available configuration file and symlinked it to sites-enabled. I installed phpMyAdmin via sudo apt-get install phpmyadmin and followed the instructions. I then added this to my /etc/nginx/nginx.conf file and restarted nginx:

        server {
            listen 80;
            server_name phpmyadmin.domain.com;

            location / {
                root /usr/share/phpmyadmin;
                index index.php;
            }

            # make sure all php files are processed by fast_cgi
            location ~ \.php {
                # try_files $uri =404;
                fastcgi_index index.php;
                fastcgi_pass 127.0.0.1:9000;
                include fastcgi_params;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_param PATH_INFO $fastcgi_path_info;
                fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    I have also added the appropriate DNS A record for phpmyadmin.domain.com. But phpmyadmin.domain.com just shows a 404 error code, and all other subdomains do not respond at all, so at least something is working here. FYI, I have edited the /etc/phpmyadmin/config.inc.php file so that I can connect to a remote MySQL database. What else do I need to do?
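
    One detail worth checking, shown as a sketch rather than a verified fix: in the block above, root is set only inside location /, so $document_root does not resolve to /usr/share/phpmyadmin inside the PHP location and SCRIPT_FILENAME ends up pointing at the wrong path. Declaring root (and index) at server level covers both locations:

        server {
            listen 80;
            server_name phpmyadmin.domain.com;
            root /usr/share/phpmyadmin;
            index index.php;

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }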

    Read the article

  • nginx - proxy_pass is working - Apache isn't doing what it should...

    - by matthewsteiner
    So, I've got this in my nginx.conf:

        location ~* ^.+.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
            root /var/www/vhosts/example.com/public/;
            access_log off;
            expires 30d;
        }

        location / {
            proxy_pass http://127.0.0.1:8080/;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    So anything that is a "static file" that exists will just be served by nginx; otherwise, it should pass the request off to Apache. Right now, static files are working correctly. However, if something is passed to Apache and it's example.com or subdomain.example.com, Apache just spits out the "Apache 2 Test Page" that you get if there's nothing there. Apache worked fine before, so I'm guessing it has to do with the way nginx is "asking". I'm not sure though. Any ideas?
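
    A minimal sketch of the Apache side that the proxied Host header has to land on: the test page typically appears when a request falls through to Apache's default configuration because no name-based virtual host matches, so the vhost on the backend port needs a matching ServerName/ServerAlias. Names and paths here are assumptions:

        NameVirtualHost 127.0.0.1:8080
        <VirtualHost 127.0.0.1:8080>
            ServerName example.com
            ServerAlias www.example.com subdomain.example.com
            DocumentRoot /var/www/vhosts/example.com/public
        </VirtualHost>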

    Read the article

  • New build won't boot: some fans turning, no beep or video

    - by Dave
    When my new system is powered up, the case fan and power supply fans turn fine. The CPU fan twitches but never gets going, although I've heard that with AMDs and Gigabyte motherboards that is not necessarily a problem. The hard drive is spinning. However, there is absolutely no indication that anything else is happening. The motherboard, as far as I can tell, does not have an internal speaker, but I harvested one from another machine and plugged it in and still got no beeps at all. The monitor screen stays black, on both the integrated VGA and DVI. This is a brand new build, and it has never successfully booted. My parts are:

        - AMD Athlon II X2 245 Regor 2.9GHz Socket AM3 65W Dual-Core Processor, Model ADX245OCGQBOX (includes CPU cooler)
        - GIGABYTE GA-MA785GPMT-UD2H AM3 AMD 785G HDMI Micro ATX AMD Motherboard - Retail
        - G.SKILL Ripjaws Series 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10666) Desktop Memory, Model F3-10666CL8D-4GBRM - Retail
        - CORSAIR CMPSU-400CX 400W ATX12V V2.2 80 PLUS Certified Compatible with Core i7 Power Supply - Retail
        - SAMSUNG Spinpoint F3 HD502HJ 500GB 7200 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive - Bare Drive
        - COOLER MASTER Elite 341 RC-341C-KKN1-GP Black Steel MicroATX Mid Tower Computer Case - Retail

    I also have a DVD burner, but it acts the same whether that is plugged in or not. I'm using the onboard video. What I've tried so far:

        - I've switched power supplies, with no difference.
        - I've tried different monitors (all of which work on other machines), with no difference.
        - I have tried putting in one memory module at a time, with no difference.
        - I have tried the absolute minimum I can think of (power supply into motherboard, power button ONLY plugged into front panel, CPU fan plugged in), with no difference.

    I appreciate any ideas anyone might have. Do I need to RMA the motherboard? This is my first build, so there might be something obvious. I was very careful with static during assembly; I'm confident nothing was zapped.

    Read the article

  • Nginx map module 301 redirecting

    - by Reinier Korth
    I've rebuilt my website in Ruby on Rails and now I want to 301-redirect a lot of old URLs using nginx's http://wiki.nginx.org/HttpMapModule. For some reason I can't get it to work. It works fine without the rewrite ^ $new permanent; line. Does anyone see what I'm missing? This is my nginx.conf:

        server {
            server_name example.com;
            return 301 $scheme://www.example.com$request_uri;
        }

        # 301 redirect list
        map $uri $new {
            /test123    http://www.example.com/test123;
            /bla        http://www.example.com/bladiebla;
        }

        server {
            server_name www.example.com;
            rewrite ^ $new permanent;

            root example/public;

            location ^~ /assets/ {
                gzip_static on;
                expires max;
                add_header Cache-Control public;
            }

            try_files $uri/index.html $uri @unicorn;

            location @unicorn {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_pass http://unicorn-<%= application %>;
            }

            error_page 500 502 503 504 /500.html;
            client_max_body_size 4G;
            keepalive_timeout 10;
        }
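
    A sketch of the pattern commonly used with map-driven redirects (assuming the map block sits at http level, as above): give the map an empty default and only redirect when a target was actually produced, so ordinary URLs fall through to the normal locations instead of being rewritten to an empty value.

        map $uri $new {
            default     "";
            /test123    http://www.example.com/test123;
            /bla        http://www.example.com/bladiebla;
        }

        server {
            server_name www.example.com;

            # Redirect only when the current $uri appears in the map.
            if ($new) {
                return 301 $new;
            }

            # (rest of the server block unchanged)
        }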

    Read the article
