Search Results

Search found 34668 results on 1387 pages for 'return'.

  • Will AJAX cause my site to have a high bounce % and hurt my search rankings?

    - by Cryo
    I'm building an art gallery website that updates its images via AJAX, for those who have JavaScript enabled, rather than requesting multiple page loads. I assume this will make it appear as though my site has a high bounce percentage. I understand that search engines will not be able to index dynamic content, but will such a misinterpreted bounce rate hurt my search engine rankings, even if I have many return visitors?
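
    For what it's worth, analytics tools can usually be told about AJAX navigation explicitly, so the bounce rate reflects reality. A minimal sketch, assuming the site uses Google Analytics' asynchronous ga.js snippet (the gallery path passed in is hypothetical):

        // After each successful AJAX image load, record a "virtual
        // pageview" so analytics sees more than one interaction.
        function onImageLoaded(imagePath) {
            // _gaq is the async Google Analytics command queue.
            _gaq.push(['_trackPageview', '/gallery/' + imagePath]);
        }

        onImageLoaded('image-5.jpg'); // counts as a second pageview, not a bounce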

    Read the article

  • ? and function javascript

    - by Rixius
    Is there any way to map the ? key to the function keyword, so that these work?

        var rFalse = ?() { return false; }

        (?(){ var str = "i'm in a closure"; }());

        window.onload = ?() { alert('window loaded'); }

    I know that they are attempting to put a shortened function keyword in ECMAScript 6, but I'm wondering if it is possible to do it now.
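
    For reference, nothing in the language lets you alias a keyword, but the shorthand that eventually shipped as ES2015 arrow functions covers exactly these cases; a sketch:

        // Arrow-function equivalents of the examples above (ES2015+).
        var rFalse = () => false;

        (() => { var str = "i'm in a closure"; })();

        window.onload = () => { alert('window loaded'); };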

    Read the article

  • In Windows Vista and 7, I can't access the %DEFAULTUSERPROFILE% system variable - it shows as not found

    - by shifuimam
    If I try to access this system variable from the Run... dialog, Windows tells me the directory doesn't exist. Some system variables, like %SYSTEMROOT% and %USERPROFILE%, do work. Likewise, if I try to use a supposedly nonexistent variable like %DEFAULTUSERPROFILE% or %PROFILESFOLDER% in C#, I get nothing in return. Is there something special I need to do to get access to these variables?

    Read the article

  • Why does Cacti keep waiting for dead poller processes?

    - by Oliver Salzburg
    Sorry for the length. I am currently setting up a new Debian (6.0.5) server. I put Cacti (0.8.7g) on it yesterday and have been battling with it ever since.

    Initial issue: my graphs weren't updating. So I checked my cacti.log and found this concerning message:

        POLLER: Poller[0] Maximum runtime of 298 seconds exceeded. Exiting.

    That can't be good, right? So I went checking and started poller.php myself (via sudo -u www-data php poller.php --force). It pumps out a lot of messages (which all look like what I would expect) and then hangs for a minute. After that minute, it loops the following message:

        Waiting on 1 of 1 pollers.

    This goes on for 4 more minutes until the process is forcefully ended for running longer than 300s.

    So far so good. I went on for a good hour trying to determine what poller might still be running, until I came to the conclusion that there simply is no running poller.

    Debugging: I checked poller.php to see how that warning is issued and why. On line 368, Cacti will retrieve the number of finished processes from the database and use that value to calculate how many processes are still running. So, let's see that value! I added the following debug code into poller.php:

        print "Finished: " . $finished_processes . " - Started: " . $started_processes . "\n";

    Result: this prints the following within seconds of starting poller.php:

        Finished: 0 - Started: 1
        Waiting on 1 of 1 pollers.
        Finished: 1 - Started: 1

    So the values are being read and are valid. Until we get to the part where it keeps looping:

        Finished:  - Started: 1
        Waiting on 1 of 1 pollers.

    Suddenly, the value is gone. Why? Putting var_dump() in there confirms the issue:

        NULL
        Finished:  - Started: 1
        Waiting on 1 of 1 pollers.

    The return value is NULL. How can that be when querying SELECT COUNT()...? (SELECT COUNT() should always return one result row, shouldn't it?)

    More debugging: I went into lib/database.php and had a look at that db_fetch_cell(). A bit of testing confirmed that the result set is actually empty. So I added my own database query code in there to see what that would do:

        $finished_processes = db_fetch_cell("SELECT count(*) FROM poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'");
        print "Finished: " . $finished_processes . " - Started: " . $started_processes . "\n";

        $mysqli = new mysqli("localhost","cacti","cacti","cacti");
        $result = $mysqli->query("SELECT COUNT(*) FROM poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00';");
        $row = $result->fetch_assoc();
        var_dump( $row );

    This outputs:

        Finished:  - Started: 1
        array(1) {
          ["COUNT(*)"]=>
          string(1) "2"
        }
        Waiting on 1 of 1 pollers.

    So the data is there and can be accessed without any problems, just not with the method Cacti is using? Double-check that! I enabled MySQL logging to make sure I'm not imagining things. Sure enough, when the error message is looped, the cacti.log reads as if it were querying like mad:

        06/29/2012 08:44:00 PM - CMDPHP: Poller[0] DEVEL: SQL Cell: "SELECT count(*) FROM cacti.poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'"
        06/29/2012 08:44:01 PM - CMDPHP: Poller[0] DEVEL: SQL Cell: "SELECT count(*) FROM cacti.poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'"
        06/29/2012 08:44:02 PM - CMDPHP: Poller[0] DEVEL: SQL Cell: "SELECT count(*) FROM cacti.poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'"

    But none of these queries are logged by MySQL. Yet, when I add my own database query code, it shows up just fine. What the heck is going on here?

    Read the article

  • Run CGI in IIS 7 to work with GET without Requiring POST Request

    - by Mohamed Meligy
    I'm trying to migrate an old CGI application from an existing Windows 2003 server (IIS 6.0), where it works just fine, to a new Windows 2008 server with IIS 7.0, where we're getting the following problem: after setting up the module handler and everything, I find that I can only access the CGI application (rdbweb.exe) if I'm calling it via a POST request (a form submit from another page). If I just type in the URL of the file (issuing a GET request), I get the following error:

        HTTP Error 502.2 - Bad Gateway
        The specified CGI application misbehaved by not returning a complete set of HTTP headers.
        The headers it did return are "Exception EInOutError in module rdbweb.exe at 00039B44. I/O error 6. ".

    This is a very old application for one of our clients. When we tried to call the vendor, they said we would need to pay a ~ $3000 annual support fee in order to even start the conversation. Of course I'm trying to avoid that! Note that:

    - If we create a normal HTML form that submits to rdbweb.exe, the CGI works normally. We can't use this as a workaround, though, because some pages in the application link to rdbweb.exe with a normal link, not a form submit.
    - If we run rdbweb.exe from a Console (Command Prompt) window, not IIS, we get the normal HTML we'd typically expect, no problem.

    We have tried the following:

    - Ensuring the CGI module mapped to rdbweb.exe in IIS has all permissions (read, write, execute) enabled and that all verbs are allowed, not just specific ones; we also tried allowing GET and POST explicitly.
    - Ensuring the application pool has "enable 32 bit applications" set to true.
    - Ensuring the website runs with an account that has full permissions on the rdbweb.exe file and the whole website (although we know "read" and "execute" should be enough).
    - Ensuring the machine-wide IIS setting for "ISAPI and CGI Restrictions" has the full path to rdbweb.exe allowed.
    - Making sure we have the latest Windows updates (for IIS 6 we found Knowledge Base articles describing bugs that require hotfixes, but nothing similar was found for IIS 7).
    - Changing the module from CGI to FastCGI, which did not work either.

    Now the only remaining possibility we have investigated is the following Microsoft Knowledge Base article: http://support.microsoft.com/kb/145661 - which is about "CGI Error: The specified CGI application misbehaved by not returning a complete set of HTTP headers. The headers it did return are:". The article suggests the following solution: modify the source code for the CGI application's header output. The following is an example of a correct header:

        print "HTTP/1.0 200 OK\n";
        print "Content-Type: text/html\n\n\n";

    Unfortunately we do not have the source to try this out, and I'm not sure anyway whether this is the issue we're having. Can you help me with this problem? Is there a way to make the application work without requiring a POST request? Note that on the old IIS 6 server the application works just fine, and I could not find any special IIS configuration there whose equivalent I might try on IIS 7.
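
    For background, the CGI contract itself differs between the two verbs, which is a plausible place for the crash: GET input arrives in the QUERY_STRING environment variable, while POST input arrives on stdin. A minimal sketch of that contract, written in JavaScript (Node) purely for illustration - this is not the vendor's code:

        #!/usr/bin/env node
        // Minimal CGI sketch: headers first, then a blank line, then the body.
        const method = process.env.REQUEST_METHOD || "GET";

        function respond(input) {
            process.stdout.write("Status: 200 OK\r\n");
            process.stdout.write("Content-Type: text/html\r\n\r\n");
            process.stdout.write("<html><body>Input: " + input + "</body></html>");
        }

        if (method === "POST") {
            // POST input arrives on stdin, sized by CONTENT_LENGTH.
            let body = "";
            process.stdin.setEncoding("utf8");
            process.stdin.on("data", function (chunk) { body += chunk; });
            process.stdin.on("end", function () { respond(body); });
        } else {
            // GET input arrives via QUERY_STRING; a CGI that unconditionally
            // reads stdin can fail with an I/O error on GET, as seen above.
            respond(process.env.QUERY_STRING || "");
        }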

    Read the article

  • NetBackup with VSS and Instant Recovery - Failing to delete old snapshots

    - by Jonathan Bourke
    We are attempting to implement Microsoft VSS for snapshotting in our NetBackup 6.5.3.1 environment. The clients are both 32- and 64-bit Windows 2003 Server. Snapshot parameters are:

    - Instant recovery is enabled
    - Maximum snapshots = 1
    - Provider type = 1 (System)
    - Snapshot attribute = 1 (Differential)

    All backups complete successfully, and VSS shadows are successfully created both for the snapshot backup and for the open files (shadow copy components).

    The Issue: NetBackup is not clearing or overwriting old snapshots with each successive backup. When we list shadows and shadow storage, usage just keeps increasing. It is not honouring the Maximum Snapshots setting.

    The Logs: The bpfis log doesn't really appear to show any errors other than for methods which we are not employing (VxVM, FlashSnap, etc.). A section is as follows:

        11:54:10.744 [348.4724] <2> logparams: D:\Program Files\Veritas\NetBackup\bin\bpfis.exe delete -nbu -id htpststr001.san.mgmt.det_1248918143 -bpstart_to 300 -bpend_to 300 -clnt htpststr001.san.mgmt.det
        11:54:10.744 [348.4724] <4> bpfis: INF - BACKUP START 348
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS error 10; see following messages:
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: Non-fatal method error was reported
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: vfm_configure_fi_one: method: FlashSnap, type: FIM, function: FlashSnap_init
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS method error 3; see following message:
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: FlashSnap_init: Veritas Volume Manager not installed.
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS error 10; see following messages:
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: Non-fatal method error was reported
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: vfm_configure_fi_one: method: vxvm, type: FIM, function: vxvm_init
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS method error 3; see following message:
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: vxvm_init: Veritas Volume Manager not installed.
        11:54:11.713 [348.4724] <4> onlfi_thaw: Thawing C:\ using snapshot method VSS.
        11:54:11.713 [348.4724] <2> onlfi_vfms_logf: vfm_thaw: delete snapshot ...
        11:54:11.744 [348.4724] <2> onlfi_vfms_logf: snapshot services: emcclariionfi:Thu Jul 30 2009 11:54:11.744000 <Thread id - 4724> Unable to import any login credentials for any appliances.
        11:54:11.760 [348.4724] <2> onlfi_vfms_logf: snapshot services: hpevafi:Thu Jul 30 2009 11:54:11.760000 <Thread id - 4724> CHpEvaPlugin::init: CLI tool is not installed.
        11:54:11.760 [348.4724] <2> onlfi_vfms_logf: snapshot services: hpmsafi:Thu Jul 30 2009 11:54:11.760000 <Thread id - 4724> No array mangement credentials are available in configuration file.
        11:54:13.806 [348.4724] <4> onlfi_thaw: do_thaw return value: 0
        11:54:13.806 [348.4724] <4> onlfi_thaw: Thawing D:\ using snapshot method VSS.
        11:54:15.806 [348.4724] <4> onlfi_thaw: do_thaw return value: 0
        11:54:19.806 [348.4724] <2> fis_delete_id: removing D:\Program Files\Veritas\NetBackup\online_util\fi_cntl\bpfis.fim.htpststr001.san.mgmt.det_1248918143.0
        11:54:19.806 [348.4724] <2> fis_delete_id: removing D:\Program Files\Veritas\NetBackup\online_util\fi_cntl\bpfis.fim.htpststr001.san.mgmt.det_1248918143.0.fiid
        11:54:19.853 [348.4724] <4> bpfis: INF - EXIT STATUS 0: the requested operation was successfully completed

    The Question: Has anyone any experience of NetBackup / VSS not clearing snapshots after backups? We will ultimately be using an HP EVA for the snapshots, but we want to ensure correct functioning at the VSS level before we go further. Regards, Jonathan (PS: Question previously posted by my colleague on entsupport.symantec.com)
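
    For anyone comparing notes: while testing, the shadow inventory can be inspected and pruned by hand with the stock Windows VSS CLI, independently of NetBackup. A sketch using standard Windows Server 2003 commands (the drive letter is an example):

        rem List all existing shadow copies and where they are stored
        vssadmin list shadows
        vssadmin list shadowstorage

        rem Manually delete the oldest shadow for a volume while testing
        vssadmin delete shadows /for=C: /oldest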

    Read the article

  • Hive metadata permission issue

    - by Chandramohan
    We are getting this error on Hive while creating a DB / table:

        hive> CREATE TABLE pokes (foo INT, bar STRING);
        FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
        NestedThrowables:
        org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
        FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask

    Hive log:

        org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
            at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:298)
            at org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:601)
            at org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
            at org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
            at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
            at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
            at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
            at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:234)
            at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:261)
            at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:196)
            at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:171)
            at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
            at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
            at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:354)
            at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:306)
            at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:451)
            at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:232)
            at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:197)
            at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:108)
            at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:1868)
            at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:1878)
            at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:470)
            ... 15 more
        Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
            at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:114)
            at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
            at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
            at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
            at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
            at org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
            at org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
            ... 42 more
        Caused by: java.util.NoSuchElementException: Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
            at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1191)
            at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
            ... 52 more
        2011-08-11 18:02:36,964 ERROR ql.Driver (SessionState.java:printError(343)) - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
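
    In case it helps anyone reproduce this: the message is a Derby error, and with Hive's default embedded Derby metastore it typically comes down to where the metastore database directory lives and whether the Hive user can write to it. A hedged illustration of the relevant hive-site.xml property (the path shown is an example, not taken from the question):

        <!-- Point the metastore at a directory the Hive user can write to. -->
        <property>
          <name>javax.jdo.option.ConnectionURL</name>
          <value>jdbc:derby:;databaseName=/var/lib/hive/metastore/metastore_db;create=true</value>
        </property>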

    Read the article

  • Connection Reset on MySQL query

    - by sunwukung
    OK, I'm flummoxed. (I've asked this question over on Stack Overflow too, but I need to get it fixed, so I'm asking here as well; any help is GREATLY appreciated.) I'm trying to execute a query on a database (locally) and I keep getting a connection reset error. I've been using the method below in a generic DAO class to build a query string and pass it to the Zend_Db API.

        public function insert($params) {
            $loop = false;
            $keys = $values = '';
            foreach ($params as $k => $v) {
                if ($loop == true) {
                    $keys .= ',';
                    $values .= ',';
                }
                $keys .= $this->db->quoteIdentifier($k);
                $values .= $this->db->quote($v);
                $loop = true;
            }
            $sql = "INSERT INTO " . $this->table_name . " ($keys) VALUES ($values)";
            // formatResult returns an array of info regarding the status and any result sets of the query
            // I've commented that method call out anyway, so I don't think it's that
            try {
                $this->db->query($sql);
                return $this->formatResult(array(
                    true,
                    'New record inserted into: ' . $this->table_name
                ));
            } catch (PDOException $e) {
                return $this->formatResult($e);
            }
        }

    So far, this has worked fine; the errors have been occurring since we generated new tables to record user input. The insert string looks like this:

        INSERT INTO tablename (`id`,`title`,`summary`,`description`,`keywords`,`type_id`,`categories`) VALUES ('5539','Sample Title','Sample content',' \'Lorem ipsum dolor sit amet, consectetur adipiscing elit. In et pellentesque mauris. Curabitur hendrerit, leo id ultrices pellentesque, est purus mattis ligula, vitae imperdiet neque ligula bibendum sapien. Curabitur aliquet nisi et odio pharetra tincidunt. Phasellus sed iaculis nisl. Fusce commodo mauris et purus vehicula dictum. Nulla feugiat molestie accumsan. Donec fermentum libero in risus tempus elementum aliquam et magna. Fusce vitae sem metus. Aenean commodo pharetra risus, nec pellentesque augue ullamcorper nec. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Nullam vel elit libero. Vestibulum in turpis nunc.\'','this,is,a,sample,array',1,'category title')

    Here are the parameters it's getting before assembling the query (var_dump):

        array
          'id' => string '1' (length=4)
          'title' => string 'Sample Title' (length=12)
          'summary' => string 'Sample content' (length=14)
          'description' => string '<p>'Lorem ipsum dolor sit amet, consectetur adipiscing elit. In et pellentesque mauris. Curabitur hendrerit, leo id ultrices pellentesque, est purus mattis ligula, vitae imperdiet neque ligula bibendum sapien. Curabitur aliquet nisi et odio pharetra tincidunt. Phasellus sed iaculis nisl. Fusce commodo mauris et purus vehicula dictum. Nulla feugiat molestie accumsan. Donec fermentum libero in risus tempus elementum aliquam et magna. Fusce vitae sem metus. Aenean commodo pharetra risus, nec pellentesque augue'... (length=677)
          'keywords' => string 'this,is,a,sample,array' (length=22)
          'type_id' => int 1
          'categories' => string 'category title' (length=43)

    The next port of call was checking the limits on the table, since it seems to insert fine if the length of "description" is around the 300 mark (it varies between 310 and 330). The field limit is set to VARCHAR(1500), and the validation on this field won't allow anything bigger than 1200 with HTML, 800 without. The real kicker is that if I take this SQL string and execute it via the command line, it works fine - so I can't for the life of me figure out what's wrong. I've tried extending the server parameters, as in http://stackoverflow.com/questions/1964554/unexpected-connection-reset-a-php-or-an-apache-issue

    So, in a nutshell, I'm stumped. Any ideas?

    Read the article

  • Mpd as pppoe server with authorisation by freeradius2

    - by Korjavin Ivan
    I installed freeradius2 and added this to raddb/users:

        test    Cleartext-Password := "test1"
                Service-Type = Framed-User,
                Framed-Protocol = PPP,
                Framed-IP-Address = 10.36.0.2,
                Framed-IP-Netmask = 255.255.255.0

    I started radiusd and checked auth:

        radtest test test1 127.0.0.1 1002 testing123
        Sending Access-Request of id 199 to 127.0.0.1 port 1812
            User-Name = "test"
            User-Password = "test1"
            NAS-IP-Address = 127.0.0.1
            NAS-Port = 1002
            Message-Authenticator = 0x00000000000000000000000000000000
        rad_recv: Access-Accept packet from host 127.0.0.1 port 1812, id=199, length=44
            Service-Type = Framed-User
            Framed-Protocol = PPP
            Framed-IP-Address = 10.36.0.2
            Framed-IP-Netmask = 255.255.255.0

    Works fine. Next step: add to mpd.conf:

        radius:
            set auth disable internal
            set auth max-logins 1 CI
            set auth enable radius-auth
            set radius timeout 90
            set radius retries 2
            set radius server 127.0.0.1 testing123 1812 1813
            set radius me 127.0.0.1
        create link template L pppoe
            set link action bundle B
            set link max-children 1000
            set link no multilink
            set link no shortseq
            set link no pap chap-md5 chap-msv1 chap-msv2
            set link enable chap
            set pppoe acname Internet
            load radius
        create link template em1 L
            set pppoe iface em1
            set link enable incoming

    When I try to connect, auth fails. Here is the mpd log:

        mpd: [em1-2] LCP: auth: peer wants nothing, I want CHAP
        mpd: [em1-2] CHAP: sending CHALLENGE #1 len: 21
        mpd: [em1-2] LCP: LayerUp
        mpd: [em1-2] CHAP: rec'd RESPONSE #1 len: 58
        mpd: [em1-2]   Name: "test"
        mpd: [em1-2] AUTH: Trying RADIUS
        mpd: [em1-2] RADIUS: Authenticating user 'test'
        mpd: [em1-2] RADIUS: Rec'd RAD_ACCESS_REJECT for user 'test'
        mpd: [em1-2] AUTH: RADIUS returned: failed
        mpd: [em1-2] AUTH: ran out of backends
        mpd: [em1-2] CHAP: Auth return status: failed
        mpd: [em1-2] CHAP: Reply message: ^AE=691 R=1
        mpd: [em1-2] CHAP: sending FAILURE #1 len: 14
        mpd: [em1-2] LCP: authorization failed

    Then I started freeradius as radiusd -fX and got this log:

        rad_recv: Access-Request packet from host 127.0.0.1 port 46400, id=223, length=282
            NAS-Identifier = "rubin.svyaz-nt.ru"
            NAS-IP-Address = 127.0.0.1
            Message-Authenticator = 0x14d36639bed8074ec2988118125367ea
            Acct-Session-Id = "815965-em1-2"
            NAS-Port = 2
            NAS-Port-Type = Ethernet
            Service-Type = Framed-User
            Framed-Protocol = PPP
            Calling-Station-Id = "00e05290b3e3 / 00:e0:52:90:b3:e3 / em1"
            NAS-Port-Id = "em1"
            Vendor-12341-Attr-12 = 0x656d312d32
            Tunnel-Medium-Type:0 = IEEE-802
            Tunnel-Client-Endpoint:0 = "00:e0:52:90:b3:e3"
            User-Name = "test"
            MS-CHAP-Challenge = 0xbb1e68d5bbc30f228725a133877de83e
            MS-CHAP2-Response = 0x010088746ae65b68e435e9d045ad6f9569b60000000000000000b56991b4f20704cb6c68e5982eec5e98a7f4b470c109c1b9
        # Executing section authorize from file /usr/local/etc/raddb/sites-enabled/default
        +- entering group authorize {...}
        ++[preprocess] returns ok
        ++[chap] returns noop
        [mschap] Found MS-CHAP attributes. Setting 'Auth-Type = mschap'
        ++[mschap] returns ok
        [eap] No EAP-Message, not doing EAP
        ++[eap] returns noop
        [files] users: Matched entry DEFAULT at line 172
        ++[files] returns ok
        Found Auth-Type = MSCHAP
        # Executing group from file /usr/local/etc/raddb/sites-enabled/default
        +- entering group MS-CHAP {...}
        [mschap] No Cleartext-Password configured. Cannot create LM-Password.
        [mschap] No Cleartext-Password configured. Cannot create NT-Password.
        [mschap] Creating challenge hash with username: test
        [mschap] Client is using MS-CHAPv2 for test, we need NT-Password
        [mschap] FAILED: No NT/LM-Password. Cannot perform authentication.
        [mschap] FAILED: MS-CHAP2-Response is incorrect
        ++[mschap] returns reject
        Failed to authenticate the user.
        Login incorrect: [test] (from client localhost port 2 cli 00e05290b3e3 / 00:e0:52:90:b3:e3 / em1)
        Using Post-Auth-Type REJECT
        # Executing group from file /usr/local/etc/raddb/sites-enabled/default
        +- entering group REJECT {...}
        [attr_filter.access_reject] expand: %{User-Name} -> test
        attr_filter: Matched entry DEFAULT at line 11
        ++[attr_filter.access_reject] returns updated
        Delaying reject of request 2 for 1 seconds
        Going to the next request
        Waking up in 0.9 seconds.
        Sending delayed reject for request 2
        Sending Access-Reject of id 223 to 127.0.0.1 port 46400
            MS-CHAP-Error = "\001E=691 R=1"

    Why do I get the error "[mschap] No Cleartext-Password configured. Cannot create LM-Password."? I did define a Cleartext-Password in users. I checked raddb/sites-enabled/default:

        authorize {
            chap
            mschap
            eap {
                ok = return
            }
            files
        }

    Looks OK to me. What's wrong with mpd/chap/radius?
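
    One detail worth double-checking (an observation, not a confirmed fix): the debug output says the files module matched "entry DEFAULT at line 172" rather than the test entry, so the Cleartext-Password is never loaded for this request. In the FreeRADIUS users file the first matching entry wins unless it sets Fall-Through, so a test entry appended after a matching DEFAULT entry is never consulted. A sketch of the canonical layout (the entry must start in column one, reply items indented, commas on every line but the last):

        # Put specific users BEFORE any DEFAULT entries, or give the
        # DEFAULT entry "Fall-Through = Yes" so matching continues.
        test    Cleartext-Password := "test1"
                Service-Type = Framed-User,
                Framed-Protocol = PPP,
                Framed-IP-Address = 10.36.0.2,
                Framed-IP-Netmask = 255.255.255.0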

    Read the article

  • Google Apps e-mail being rejected from some domains

    - by Paul J. Lucas
    I'm migrating e-mail for my domains to Google Apps' e-mail. Most everything seems to work, except that e-mail sent to any user at (at least) sonic.net is rejected with a message of the form (where any-address has been substituted for my friend's address):

        From: Mail Delivery Subsystem <[email protected]>
        Date: March 11, 2010 10:04:48 AM PST
        To: [email protected]
        Subject: Delivery Status Notification (Failure)
        Delivered-To: [email protected]
        Received: by 10.229.194.26 with SMTP id dw26cs8717qcb; Thu, 11 Mar 2010 10:04:48 -0800 (PST)
        Received: by 10.223.68.143 with SMTP id v15mr3841599fai.62.1268330688325; Thu, 11 Mar 2010 10:04:48 -0800 (PST)
        Received: by 10.223.68.143 with SMTP id v15mr5119424fai.62; Thu, 11 Mar 2010 10:04:48 -0800 (PST)
        Mime-Version: 1.0
        Return-Path: <>
        X-Failed-Recipients: [email protected]
        Message-Id: <[email protected]>
        Content-Type: text/plain; charset=ISO-8859-1
        Content-Transfer-Encoding: quoted-printable

        Delivery to the following recipient failed permanently: [email protected]

        Technical details of permanent failure:
        Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550 5.1.1 <[email protected]>... No such user here (state 13).

    And here are the headers from the message it bounces back:

        Received: by 10.101.90.7 with SMTP id s7mr2515885anl.176.1267979929490; Sun, 07 Mar 2010 08:38:49 -0800 (PST)
        Return-Path: <[email protected]>
        Received: from [10.0.1.203] (adsl-76-201-171-194.dsl.pltn13.sbcglobal.net [76.201.171.194]) by mx.google.com with ESMTPS id 4sm1046550yxd.70.2010.03.07.08.38.48 (version=TLSv1/SSLv3 cipher=RC4-MD5); Sun, 07 Mar 2010 08:38:49 -0800 (PST)
        From: "Paul J. Lucas" <[email protected]>
        Content-Type: text/plain; charset=us-ascii
        Content-Transfer-Encoding: quoted-printable
        Subject: Some fascinating subject
        Date: Sun, 7 Mar 2010 08:38:46 -0800
        References: <[email protected]>
        To: [email protected]
        Message-Id: <[email protected]>
        Mime-Version: 1.0 (Apple Message framework v1077)
        X-Mailer: Apple Mail (2.1077)

    However, I am able to send mail to a user at sonic.net using my old e-mail account. Also, my company uses Google Apps for e-mail, and I can send e-mail to a user at sonic.net from my company. The differences between my personal e-mail and my company's are:

    - My company's domain has no SPF record, whereas mine does.
    - My company's domain has an A record, whereas mine does not.

    My SPF record initially was as prescribed by Google here. However, this guy claims Google is wrong and gives a fix. I've tried it both ways with no difference. My SPF record is currently:

        v=spf1 mx include:aspmx.googlemail.com include:_spf.google.com ~all

    As for the lack of an A record, you wouldn't think that a mail host would care about that so long as MX records are defined. However, the funny thing is that, if you look at the error message, why does Google state that the recipient's domain said "No such user here" for my address? That makes no sense; of course there is no user with my address at sonic.net. Also, I assume I only discovered the inability to send mail to users at sonic.net by accident, and that there are probably other domains I can't send e-mail to. So... anybody have any idea what's going on? And how I can get mail to users at sonic.net?
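
    For completeness, the DNS side of this can be checked from any shell with standard diagnostic queries (the domain names below are placeholders for your own):

        # What does the world see for your SPF (TXT) and MX records?
        dig +short TXT mydomain.com
        dig +short MX mydomain.com

        # Who actually accepts mail for the recipient domain?
        dig +short MX sonic.net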

    Read the article

  • TLS (STARTTLS) Failure After 10.6 Upgrade to Open Directory Master

    - by Thomas Kishel
    Hello. Environment: a Mac OS X 10.6.3 install/import of a Mac OS X 10.5.8 Open Directory Master server. After that upgrade, LDAP+TLS fails on our Mac OS X 10.5, 10.6, CentOS, Debian, and FreeBSD clients (Apache2 and PAM).

    Testing using ldapsearch:

        ldapsearch -ZZ -H ldap://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid

    ... fails with:

        ldap_start_tls: Protocol error (2)

    Adding "-d 9" shows:

        res_errno: 2, res_error: <unsupported extended operation>, res_matched: <>

    Testing without requiring STARTTLS, or with LDAPS:

        ldapsearch -H ldap://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid
        ldapsearch -H ldaps://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid

    ... succeeds with:

        # donaldr, users, darkhorse.com
        dn: uid=donaldr,cn=users,dc=darkhorse,dc=com
        uid: donaldr

        # search result
        search: 2
        result: 0 Success

        # numResponses: 2
        # numEntries: 1
        result: 0 Success

    (We are specifying "TLS_REQCERT never" in /etc/openldap/ldap.conf.)

    Testing with openssl:

        openssl s_client -connect gnome.darkhorse.com:636 -showcerts -state

    ... succeeds:

        CONNECTED(00000003)
        SSL_connect:before/connect initialization
        SSL_connect:SSLv2/v3 write client hello A
        SSL_connect:SSLv3 read server hello A
        depth=1 /C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
        verify error:num=19:self signed certificate in certificate chain
        verify return:0
        SSL_connect:SSLv3 read server certificate A
        SSL_connect:SSLv3 read server done A
        SSL_connect:SSLv3 write client key exchange A
        SSL_connect:SSLv3 write change cipher spec A
        SSL_connect:SSLv3 write finished A
        SSL_connect:SSLv3 flush data
        SSL_connect:SSLv3 read finished A
        ---
        Certificate chain
         0 s:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=MIS/CN=gnome.darkhorse.com
           i:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
         1 s:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
           i:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        <deleted for brevity>
        -----END CERTIFICATE-----
        subject=/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=MIS/CN=gnome.darkhorse.com
        issuer=/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 2640 bytes and written 325 bytes
        ---
        New, TLSv1/SSLv3, Cipher is AES256-SHA
        Server public key is 1024 bit
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : AES256-SHA
            Session-ID: D3F9536D3C64BAAB9424193F81F09D5C53B7D8E7CB5A9000C58E43285D983851
            Session-ID-ctx:
            Master-Key: E224CC065924DDA6FABB89DBCC3E6BF89BEF6C0BD6E5D0B3C79E7DE927D6E97BF12219053BA2BB5B96EA2F6A44E934D3
            Key-Arg   : None
            Start Time: 1271202435
            Timeout   : 300 (sec)
            Verify return code: 0 (ok)

    So we believe that the slapd daemon is reading our certificate and presenting it to LDAP clients. When enabling SSL in the LDAP section of the Open Directory service, Apple Server Admin adds ProgramArguments ("-h ldaps:///") to /System/Library/LaunchDaemons/org.openldap.slapd.plist, and TLSCertificateFile, TLSCertificateKeyFile, TLSCACertificateFile, and TLSCertificatePassphraseTool to /etc/openldap/slapd_macosxserver.conf. While that appears to be enough for LDAPS, it is apparently not enough for TLS.

    Comparing our 10.6 and 10.5 slapd.conf and slapd_macosxserver.conf configuration files yields no clues. Replacing our certificate (generated with a self-signed CA) with a self-signed certificate generated by Apple Server Admin results in no change in the ldapsearch results. Setting -d to 256 in /System/Library/LaunchDaemons/org.openldap.slapd.plist logs:

        4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 EXT oid=1.3.6.1.4.1.1466.20037
        4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 do_extended: unsupported operation "1.3.6.1.4.1.1466.20037"
        4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 RESULT tag=120 err=2 text=unsupported extended operation

    Any debugging advice much appreciated. -- Tom Kishel
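
    One low-risk check (a suggestion, not a known fix): OID 1.3.6.1.4.1.1466.20037 is the StartTLS extended operation, so you can ask the server directly whether it advertises StartTLS at all by reading the rootDSE:

        # Lists the extended operations slapd claims to support;
        # StartTLS should appear as 1.3.6.1.4.1.1466.20037.
        ldapsearch -x -H ldap://gnome.darkhorse.com -s base -b "" supportedExtension

    If the OID is missing, slapd was started without (or failed to load) its TLS configuration for the ldap:// listener, which would match the "unsupported extended operation" in the log.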

    Read the article

  • GNU/Linux swapping blocks system

    - by Ole Tange
    I have used GNU/Linux on systems from 4 MB RAM to 512 GB RAM. When they start swapping, most of the time you can still log in and kill off the offending process - you just have to be 100-1000 times more patient. On my new 32 GB system that has changed: it blocks when it starts swapping, sometimes with full disk activity but other times with no disk activity.

    To examine what might be the issue, I have written this program. The idea is:

    1. Grab 3% of the memory free right now.
    2. If that caused swap to increase: stop.
    3. Keep the chunk used for 30 seconds by forking off.
    4. Goto 1.

        #!/usr/bin/perl

        sub freekb {
            my $free = `free|grep buffers/cache`;
            my @a=split / +/,$free;
            return $a[3];
        }

        sub swapkb {
            my $swap = `free|grep Swap:`;
            my @a=split / +/,$swap;
            return $a[2];
        }

        my $swap = swapkb();
        my $lastswap = $swap;
        my $free;
        while($lastswap >= $swap) {
            print "$swap $free";
            $lastswap = $swap;
            $swap = swapkb();
            $free = freekb();
            my $used_mem = "x"x(1024 * $free * 0.03);
            if(not fork()) {
                sleep 30;
                exit();
            }
        }
        print "Swap increased $swap $lastswap\n";

    Running the program forever ought to keep the system at the limit of swapping, but only grabbing a minimal amount of swap, and doing that very slowly (i.e. a few MB at a time at most). If I run:

        forever free | stdbuf -o0 timestamp > freelog

    I ought to see swap slowly rising every second (forever and timestamp are from https://github.com/ole-tange/tangetools). But that is not the behaviour I see: I see swap increasing in jumps, and the system is completely blocked during these jumps. Here the system is blocked for 30 seconds while swap usage increases by 1 GB:

        secs
        169.527 Swap: 18440184  154184 18286000
        170.531 Swap: 18440184  154184 18286000
        200.630 Swap: 18440184 1134240 17305944
        210.259 Swap: 18440184 1076228 17363956

    Blocked: 21 secs. Swap increase 2400 MB:

        307.773 Swap: 18440184  581324 17858860
        308.799 Swap: 18440184  597676 17842508
        330.103 Swap: 18440184 2503020 15937164
        331.106 Swap: 18440184 2502936 15937248

    Blocked: 20 secs. Swap increase 2200 MB:

        751.283 Swap: 18440184  885288 17554896
        752.286 Swap: 18440184  911676 17528508
        772.331 Swap: 18440184 3193532 15246652
        773.333 Swap: 18440184 1404540 17035644

    Blocked: 37 secs. Swap increase 2400 MB:

        904.068 Swap: 18440184  613108 17827076
        905.072 Swap: 18440184  610368 17829816
        942.424 Swap: 18440184 3014668 15425516
        942.610 Swap: 18440184 2073580 16366604

    This is bad enough, but what is even worse is that the system sometimes stops responding at all - even if I wait for hours. I have the feeling it is related to the swapping issue, but I cannot tell for sure.

    My first idea was to tweak /proc/sys/vm/swappiness from 60 to 0 or 100, just to see if that had any effect at all. 0 did not have an effect, but 100 did cause the problem to arise less often.

    How can I prevent the system from blocking for such a long time? Why does it decide to swap out 1-3 GB when less than 10 MB would suffice?
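
    For readers wanting to reproduce the swappiness tweak mentioned above, the standard knobs are (as a quick sketch):

        # take effect immediately, until reboot
        sysctl vm.swappiness=100

        # persist across reboots
        echo "vm.swappiness = 100" >> /etc/sysctl.conf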

    Read the article

  • Varnish POST problem "9 FetchError c backend write error: 11" for application/x-www-form-urlencoded content

    - by ompap
    Cutting a longish story short, we have managed to get a more precise error out of varnishlog. It tells us that we are sending

        31 TxRequest - POST
        31 TxHeader  - Content-Type: application/x-www-form-urlencoded

    but we are getting

        9 FetchError  c backend write error: 11
        31 BackendClose - [backend name]
        9 VCL_call    c error
        9 VCL_return  c deliver
        9 Length      c 488
        9 VCL_call    c deliver
        9 VCL_return  c deliver
        9 TxProtocol  c HTTP/1.1
        9 TxStatus    c 503

    We still do not know what this is exactly, but apparently Content-Type: application/x-www-form-urlencoded is not getting through as it should. Help still needed, please! Original message below. The title was "Varnish not letting Joomla users log in - 503 guru meditation error", but I changed it to draw more attention to the problem rather than the symptoms.

    Hello. We have a production site for a local newspaper which is currently behind an Apache reverse proxy: basically, the site on one server with the other reserved as a reverse proxy only (well, there is more, but that has no relevance here). Apache as a reverse proxy works, but could be faster. We want to change the reverse proxy to use Varnish instead of Apache on an Ubuntu 10.4 server. Varnish is version 2.10 installed directly from the Ubuntu repos. Ubuntu 10.4 uses PHP 5.3.2.

    For anonymous surfers the site works wonderfully with Varnish. So far we can get very good speed out of Varnish; we just have a few problems with logging in or out. The big one is that users cannot log in: they get a Varnish 503 error page every time. The logs do not reveal the cause. It feels as if the request never leaves Varnish. So we are merely guessing - not a strong starting point.

    We have gone through what has been suggested in various places on the web. We have increased the timeouts:

        backend xxx {
            .host = "xxx.xx";
            .port = "http";
            .connect_timeout = 60s;
            .first_byte_timeout = 60s;
            .between_bytes_timeout = 60s;
        }

    but we seem to get the 503 guru error page much faster than that, in approx. 5 seconds. We have increased the Varnish header size to 128 in the daemon options. In vcl_recv we have:

        if (req.http.Authenticate || req.http.Authorization) {
            return(pass);
        }

    and in vcl_fetch:

        ## authentication handling
        if (req.http.Authenticate || req.http.Authorization) {
            return(pass);
        }

    We do not strip cookies. We have tried to make sure that error pages are not cached. As said above, we cannot see anything in the backend Apache logs; apparently it never gets asked for Joomla user authentication. Varnish does not seem to get much mention in connection with Joomla. (We cannot dump Joomla; that selection has been made and we just have to live with what we have been given.)

    Has anyone a working Varnish - Joomla combination? Thanks for reading. Please help. We need some hints - desperately. Any suggestions? ompap
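
    One generic configuration to compare against (a sketch of common practice, not a confirmed fix for this site): making sure form submissions and cookie-carrying requests never get served from cache, in Varnish 2.x VCL syntax:

        sub vcl_recv {
            # Form submissions (logins included) must always reach the backend.
            if (req.request == "POST") {
                return (pass);
            }
            # A session cookie marks a logged-in (or logging-in) user;
            # pass those requests through untouched as well.
            if (req.http.Cookie) {
                return (pass);
            }
        }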

    Read the article

  • Blocking 'good' bots in nginx with multiple conditions for certain off-limits URLs where humans can go

    - by Glenn Plas
    After 2 days of searching/trying/failing I decided to post this here; I haven't found any example of someone doing the same, and what I tried doesn't seem to be working. I'm trying to send a 403 to bots not respecting the robots.txt file (even after downloading it several times), specifically Googlebot. It will support the following robots.txt definition:

        User-agent: *
        Disallow: /*/*/page/

    The intent is to allow Google to browse whatever they can find on the site, but to return a 403 for the following type of request. Googlebot seems to keep on nesting these links eternally, adding paging block after block:

        my_domain.com:80 - 66.x.67.x - - [25/Apr/2012:11:13:54 +0200] "GET /2011/06/page/3/?/page/2//page/3//page/2//page/3//page/2//page/2//page/4//page/4//page/1/&wpmp_switcher=desktop HTTP/1.1" 403 135 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

    It's a WordPress site, by the way. I don't want those pages to show up; even though after the robots.txt info got through they stopped for a while, they only began crawling again later. It just never stops... I do want real people to see this. As you can see, Google gets a 403, but when I try this myself in a browser I get a 404 back. I want browsers to pass.

        root@my_domain:# nginx -V
        nginx version: nginx/1.2.0

    I tried different approaches, using a map and plain old if's, and they both act the same. Under the http section:

        map $http_user_agent $is_bot {
            default 0;
            ~crawl|Googlebot|Slurp|spider|bingbot|tracker|click|parser|spider 1;
        }

    Under the server section:

        location ~ /(\d+)/(\d+)/page/ {
            if ($is_bot) {
                return 403; # Please respect the robots.txt file !
            }
        }

    I recently had to polish up my Apache skills for a client where I did about the same thing, like this:

        # Block real engines, not respecting robots.txt but allowing correct calls to pass
        # Google
        RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ Googlebot/2\.[01];\ \+http://www\.google\.com/bot\.html\)$ [NC,OR]
        # Bing
        RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ bingbot/2\.[01];\ \+http://www\.bing\.com/bingbot\.htm\)$ [NC,OR]
        # msnbot
        RewriteCond %{HTTP_USER_AGENT} ^msnbot-media/1\.[01]\ \(\+http://search\.msn\.com/msnbot\.htm\)$ [NC,OR]
        # Slurp
        RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ Yahoo!\ Slurp;\ http://help\.yahoo\.com/help/us/ysearch/slurp\)$ [NC]
        # block all page searches, the rest may pass
        RewriteCond %{REQUEST_URI} ^(/[0-9]{4}/[0-9]{2}/page/) [OR]
        # or with the wpmp_switcher=mobile parameter set
        RewriteCond %{QUERY_STRING} wpmp_switcher=mobile
        # ISSUE 403 / SERVE ERRORDOCUMENT
        RewriteRule .* - [F,L]
        # End if match

    This does a bit more than I asked nginx to do, but it's about the same principle, and I'm having a hard time figuring this out for nginx. So my question would be: why would nginx serve my browser a 404? Why isn't it passing? The regex isn't matching for my UA:

        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.30 Safari/536.5"

    There are tons of examples for blocking based on UA alone, and that's easy. It also looks like the matching location is final, i.e. it's not 'falling through' for a regular user; I'm pretty certain this has some correlation with the 404 I get in the browser.

    As a cherry on top of things, I also want Google to disregard the parameter wpmp_switcher=mobile; wpmp_switcher=desktop is fine, but I just don't want the same content being crawled multiple times. Even though I ended up adding wpmp_switcher=mobile via the Google webmaster tools pages (requiring me to sign up...), that also stopped for a while, but today they are back spidering the mobile sections.

    So in short, I need to find a way for nginx to enforce the robots.txt definitions. Can someone shell out a few minutes of their lives and push me in the right direction please? I really appreciate ANY response that makes me think harder ;-)
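
    For what it's worth, the behaviour described is consistent with the regex location matching for everyone while containing no content handler, so non-bot requests fall into an empty location and get a 404. A sketch of one way around that (assuming a standard WordPress front controller; adjust the names to taste):

        map $http_user_agent $is_bot {
            default 0;
            ~*(crawl|googlebot|slurp|spider|bingbot) 1;
        }

        server {
            ...
            location ~ /\d+/\d+/page/ {
                if ($is_bot) {
                    return 403;  # bots ignoring robots.txt
                }
                # Humans still need a handler here, or they get a 404:
                try_files $uri $uri/ /index.php?$args;
            }
        }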

    Read the article

  • Having Hotlink Protection problem in nginx

    - by Ayaz Malik
    Hello, I am having an image hotlink protection problem in my nginx setup and need help. I have a huge issue with my site's images being submitted to social networks like StumbleUpon with a direct link (...xxxxx.jpg), which sometimes gets huge traffic and increases CPU and bandwidth usage. What I am trying to do is block direct access to images from other referrers: hotlink protection. Here is the code from my vhost.conf:

        server {
            access_log off;
            error_log logs/vhost-error_log warn;
            listen 80;
            server_name mydomain.com www.mydomain.com;

            # uncomment location below to make nginx serve static files instead of Apache
            # NOTE this will cause issues with bandwidth accounting as files wont be logged
            location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|htm|html|js|css)$ {
                root /home/username/public_html;
                expires 1d;
            }

            root /home/mydomain/public_html;
        }

        location / {
            client_max_body_size 10m;
            client_body_buffer_size 128k;

            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffer_size 4k;
            # you can increase proxy_buffers here to suppress "an upstream response
            # is buffered to a temporary file" warning
            proxy_buffers 16 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
            proxy_connect_timeout 30s;

            proxy_redirect http://www.mydomain.com:81 http://www.mydomain.com;
            proxy_redirect http://mydomain.com:81 http://mydomain.com;
            proxy_pass http://ip_address/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            expires 24h;
        }
        }

    So for hotlink protection I added this code:

        location ~* (\.jpg|\.png|\.gif|\.jpeg)$ {
            valid_referers blocked www.mydomain.com mydomain.com;
            if ($invalid_referer) {
                return 403;
            }
        }

    This is what the current nginx config for this domain looks like, but it didn't work:

        server {
            access_log off;
            error_log logs/vhost-error_log warn;
            listen 80;
            server_name mydomain.com www.mydomain.com;

            # uncomment location below to make nginx serve static files instead of Apache
            # NOTE this will cause issues with bandwidth accounting as files wont be logged
            location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|htm|html|js|css)$ {
                root /home/username/public_html;
                expires 1d;
            }

            root /home/mydomain/public_html;
        }

        location ~* (\.jpg|\.png|\.gif|\.jpeg)$ {
            valid_referers blocked www.mydomain.com mydomain.com;
            if ($invalid_referer) {
                return 403;
            }

        location / {
            client_max_body_size 10m;
            client_body_buffer_size 128k;

            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffer_size 4k;
            # you can increase proxy_buffers here to suppress "an upstream response
            # is buffered to a temporary file" warning
            proxy_buffers 16 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
            proxy_connect_timeout 30s;

            proxy_redirect http://www.mydomain.com:81 http://www.mydomain.com;
            proxy_redirect http://mydomain.com:81 http://mydomain.com;
            proxy_pass http://ip_address/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            expires 24h;
        }
        }

    Thank you in advance :) cheers
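
    For comparison, a commonly used shape for nginx hotlink protection (a sketch with placeholder paths; note that nginx uses only the first matching regex location, so the referrer check and the static-file serving usually have to live in the same block):

        location ~* \.(jpe?g|png|gif)$ {
            valid_referers none blocked www.mydomain.com mydomain.com;
            if ($invalid_referer) {
                return 403;
            }
            root /home/username/public_html;
            expires 1d;
        }

    Including "none" in valid_referers keeps images working for visitors whose browsers send no Referer header at all.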

    Read the article

  • How do I change the BIOS boot splash screen?

    - by YumYumYum
    I have a Dell PC which shows a very ugly, bad-luck-looking alien face on every boot. I want to change it or disable it forever, but the BIOS has no option for this. How can I change it from Linux (Fedora or Arch Linux, which is what the machine runs now)? I tried the following (from http://www.pixelbeat.org/docs/bios/), which does not work:

        ./flashrom -r firmware.old  #save current flash ROM just in case
        ./flashrom -wv firmware.new #write and verify new flash ROM image

    I also tried:

        $ cat c.c
        #include <stdio.h>
        #include <inttypes.h>
        #include <netinet/in.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/types.h>
        #include <sys/stat.h>
        #include <fcntl.h>
        #include <unistd.h>

        #define lengthof(x) (sizeof(x)/sizeof(x[0]))

        uint16_t checksum(const uint8_t* data, int len)
        {
            uint16_t sum = 0;
            int i;
            for (i=0; i<len; i++)
                sum += *(data+i);
            return htons(sum);
        }

        void usage(void)
        {
            fprintf(stderr,"Usage: therm_limit [0,50,53,56,60,63,66,70]\n");
            fprintf(stderr,"Report therm limit of terminal in BIOS\n");
            fprintf(stderr,"If temp specifed, it is changed if required.\n");
            exit(EXIT_FAILURE);
        }

        #define CHKSUM_START 51
        #define CHKSUM_END 109
        #define THERM_OFFSET 67
        #define THERM_SHIFT 0
        #define THERM_MASK (0x7 << THERM_SHIFT)
        #define THERM_OFF 0

        uint8_t thermal_limits[]={0,50,53,56,60,63,66,70};
        #define THERM_MAX (lengthof(thermal_limits)-1)
        #define DEV_NVRAM "/dev/nvram"
        #define NVRAM_MAX 114
        uint8_t nvram[NVRAM_MAX];

        int main(int argc, char* argv[])
        {
            int therm_request = -1;
            if (argc>2)
                usage();
            if (argc==2) {
                if (*argv[1]=='-')
                    usage();
                therm_request=atoi(argv[1]);
                int i;
                for (i=0; i<lengthof(thermal_limits); i++)
                    if (thermal_limits[i]==therm_request)
                        break;
                if (i==lengthof(thermal_limits))
                    usage();
                else
                    therm_request=i;
            }

            int fd_nvram=open(DEV_NVRAM, O_RDWR);
            if (fd_nvram < 0) {
                fprintf(stderr,"Error opening %s [%m]\n", DEV_NVRAM);
                exit(EXIT_FAILURE);
            }
            if (read(fd_nvram, nvram, sizeof(nvram))==-1) {
                fprintf(stderr,"Error reading %s [%m]\n", DEV_NVRAM);
                close(fd_nvram);
                exit(EXIT_FAILURE);
            }

            uint16_t chksum = *(uint16_t*)(nvram+CHKSUM_END);
            printf("%04X\n",chksum);
            exit(0);

            if (chksum == checksum(nvram+CHKSUM_START, CHKSUM_END-CHKSUM_START)) {
                uint8_t therm_byte = *(uint16_t*)(nvram+THERM_OFFSET);
                uint8_t therm_status=(therm_byte & THERM_MASK) >> THERM_SHIFT;
                printf("Current thermal limit: %d°C\n", thermal_limits[therm_status]);
                if ( (therm_status == therm_request) )
                    therm_request=-1;
                if (therm_request != -1) {
                    if (therm_status != therm_request)
                        printf("Setting thermal limit to %d°C\n", thermal_limits[therm_request]);
                    uint8_t new_therm_byte = (therm_byte & ~THERM_MASK) | (therm_request << THERM_SHIFT);
                    *(uint8_t*)(nvram+THERM_OFFSET) = new_therm_byte;
                    *(uint16_t*)(nvram+CHKSUM_END) = checksum(nvram+CHKSUM_START, CHKSUM_END-CHKSUM_START);
                    (void) lseek(fd_nvram,0,SEEK_SET);
                    if (write(fd_nvram, nvram, sizeof(nvram))!=sizeof(nvram)) {
                        fprintf(stderr,"Error writing %s [%m]\n", DEV_NVRAM);
                        close(fd_nvram);
                        exit(EXIT_FAILURE);
                    }
                }
            } else {
                fprintf(stderr,"checksum failed. Aborting\n");
                close(fd_nvram);
                exit(EXIT_FAILURE);
            }
            return EXIT_SUCCESS;
        }

        $ gcc c.c -o bios
        # ./bios
        16DB

    Read the article

  • How to install pip/easy_install on debian 6 for python3.2

    - by atomAltera
    I'm trying to install pip or setuptools for Python 3.2 on Debian 6.

    First case:

    apt-get install python3-pip   ...OK
    python3 easy_install.py webob
    Searching for webob
    Reading http://pypi.python.org/simple/webob/
    Reading http://webob.org/
    Reading http://pythonpaste.org/webob/
    Best match: WebOb 1.2.2
    Downloading http://pypi.python.org/packages/source/W/WebOb/WebOb-1.2.2.zip#md5=de0f371b46554709ce5b93c088a11cae
    Processing WebOb-1.2.2.zip
    Traceback (most recent call last):
      File "easy_install.py", line 5, in <module>
        main()
      File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1931, in main
        with_ei_usage(lambda:
      File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1912, in with_ei_usage
        return f()
      File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1935, in <lambda>
        distclass=DistributionWithoutHelpCommands, **kw
      File "/usr/local/lib/python3.2/distutils/core.py", line 148, in setup
        dist.run_commands()
      File "/usr/local/lib/python3.2/distutils/dist.py", line 917, in run_commands
        self.run_command(cmd)
      File "/usr/local/lib/python3.2/distutils/dist.py", line 936, in run_command
        cmd_obj.run()
      File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 368, in run
        self.easy_install(spec, not self.no_deps)
      File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 608, in easy_install
        return self.install_item(spec, dist.location, tmpdir, deps)
      File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 638, in install_item
        dists = self.install_eggs(spec, download, tmpdir)
      File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 799, in install_eggs
        unpack_archive(dist_filename, tmpdir, self.unpack_progress)
      File "/usr/lib/python3/dist-packages/setuptools/archive_util.py", line 67, in unpack_archive
        driver(filename, extract_dir, progress_filter)
      File "/usr/lib/python3/dist-packages/setuptools/archive_util.py", line 154, in unpack_zipfile
        data = z.read(info.filename)
      File "/usr/local/lib/python3.2/zipfile.py", line 891, in read
        with self.open(name, "r", pwd) as fp:
      File "/usr/local/lib/python3.2/zipfile.py", line 980, in open
        close_fileobj=not self._filePassed)
      File "/usr/local/lib/python3.2/zipfile.py", line 489, in __init__
        self._decompressor = zlib.decompressobj(-15)
    AttributeError: 'NoneType' object has no attribute 'decompressobj'

    Second case, from http://pypi.python.org/pypi/distribute#installation-instructions:

    python3 distribute_setup.py
    Downloading http://pypi.python.org/packages/source/d/distribute/distribute-0.6.28.tar.gz
    Extracting in /tmp/tmpv6iei2
    Traceback (most recent call last):
      File "distribute_setup.py", line 515, in <module>
        main(sys.argv[1:])
      File "distribute_setup.py", line 511, in main
        _install(tarball, _build_install_args(argv))
      File "distribute_setup.py", line 73, in _install
        tar = tarfile.open(tarball)
      File "/usr/local/lib/python3.2/tarfile.py", line 1746, in open
        raise ReadError("file could not be opened successfully")
    tarfile.ReadError: file could not be opened successfully

    Third case, from the same instructions:

    tar -xzvf distribute-0.6.28.tar.gz
    cd distribute-0.6.28
    python3 setup.py install
    Before install bootstrap.
    Scanning installed packages
    No setuptools distribution found
    running install
    running bdist_egg
    running egg_info
    writing distribute.egg-info/PKG-INFO
    writing top-level names to distribute.egg-info/top_level.txt
    writing dependency_links to distribute.egg-info/dependency_links.txt
    writing entry points to distribute.egg-info/entry_points.txt
    reading manifest file 'distribute.egg-info/SOURCES.txt'
    reading manifest template 'MANIFEST.in'
    writing manifest file 'distribute.egg-info/SOURCES.txt'
    installing library code to build/bdist.linux-x86_64/egg
    running install_lib
    running build_py
    copying distribute.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying distribute.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying distribute.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying distribute.egg-info/entry_points.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying distribute.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    creating 'dist/distribute-0.6.28-py3.2.egg' and adding 'build/bdist.linux-x86_64/egg' to it
    Traceback (most recent call last):
      File "setup.py", line 220, in <module>
        scripts = scripts,
      File "/usr/local/lib/python3.2/distutils/core.py", line 148, in setup
        dist.run_commands()
      File "/usr/local/lib/python3.2/distutils/dist.py", line 917, in run_commands
        self.run_command(cmd)
      File "/usr/local/lib/python3.2/distutils/dist.py", line 936, in run_command
        cmd_obj.run()
      File "build/src/setuptools/command/install.py", line 73, in run
        self.do_egg_install()
      File "build/src/setuptools/command/install.py", line 93, in do_egg_install
        self.run_command('bdist_egg')
      File "/usr/local/lib/python3.2/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/usr/local/lib/python3.2/distutils/dist.py", line 936, in run_command
        cmd_obj.run()
      File "build/src/setuptools/command/bdist_egg.py", line 241, in run
        dry_run=self.dry_run, mode=self.gen_header())
      File "build/src/setuptools/command/bdist_egg.py", line 542, in make_zipfile
        z = zipfile.ZipFile(zip_filename, mode, compression=compression)
      File "/usr/local/lib/python3.2/zipfile.py", line 689, in __init__
        "Compression requires the (missing) zlib module")
    RuntimeError: Compression requires the (missing) zlib module

    zlib1g-dev is installed. Help me, please.
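    A note for readers: all three tracebacks end inside /usr/local/lib/python3.2, i.e. a locally built Python 3.2 whose zlib module never got compiled, so every zip/gzip step dies. Installing zlib1g-dev after the fact is not enough; the headers must be present when the interpreter is configured and built. A minimal sketch of the fix, assuming that interpreter came from a source tree (Python-3.2.3 is an illustrative path):

    # headers must exist *before* Python is configured and compiled
    apt-get install zlib1g-dev

    # rebuild the /usr/local interpreter from its source tree
    cd Python-3.2.3
    ./configure
    make
    make install

    # verify the module is now present
    python3 -c 'import zlib; print(zlib.ZLIB_VERSION)'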

    Read the article

  • Installing .NET Framework 4 Client Profile breaks Windows Update

    - by Richard
    I have a Samsung NC-10 netbook with a fresh install of Windows 7 Home Premium 32-bit (it only had 2 GB). If Microsoft .NET Framework 4 Client Profile is installed on it, Windows Update will always return error code 8024402F ("Windows Update encountered an unknown error"). As soon as I uninstall it, Windows Update works just fine again. Out of the four computers in this house, only this netbook has the problem. My question is: how can I get the .NET Framework 4 Client Profile installed on my netbook and continue to have a functioning Windows Update?

    ----- More information -----

    The hard drive recently died on my netbook, so I replaced it with a nice new SSD and did a fresh installation of Windows 7 Home Premium (SP1), along with all the updates. At some point I found that, when I ran Windows Update, I was greeted with error code 8024402F ("Windows Update encountered an unknown error"). Looking in C:\Windows\WindowsUpdate.log, I saw the following issue:

    WARNING: ECP: Failed to validate cab file digest downloaded from http://download.windowsupdate.com/msdownload/update/software/dflt/2012/02/4913552_4a5c9563d1f58c77f30d0d5c9999e4b8bff3ab21.cab with error 0x80091007
    WARNING: ECP: This roundtrip contained some optimized updates which failed. New Update count = 0, Old Count = 3
    FATAL: ProcessCoreMetadata did not return any update to be committed
    WARNING: Sync of Updates: 0x8024402f
    WARNING: SyncServerUpdatesInternal failed: 0x8024402f

    When I downloaded the CAB from the URL listed and opened it, it contained a file called 4913552.txt. A search on Google suggested that it's related to Microsoft .NET Framework 4 Client Profile. Other people had reported problems with it breaking Windows Update, but they were running Windows XP.

    I tried the steps outlined on the Microsoft site for this error code, but it reported that there was nothing wrong. I also found this Super User question and tried all the answers listed, but none of them made any difference: my router doesn't block ActiveX, changing my internet settings in IE made no difference, assuming it was a corrupted update repository didn't do anything (except wipe my update history), my date and time were correct, switching to Google's DNS didn't work, and neither did disabling IPv6.

    Figuring that this update was corrupted, I repaired it and nothing changed. In desperation I uninstalled it and Windows Update started working again. Brilliant! I then downloaded the full version from the Microsoft website, installed it and, thankfully, Windows Update continued to work just fine.

    A week later I turned on my netbook and Windows Update was broken again, with exactly the same error message and log entries as before. Repairing .NET Framework 4 Client Profile did nothing; removing it entirely solved the problem again. Thinking this might be some odd Windows installation issue, I formatted the hard drive and re-installed Windows. Same problem as before: as soon as .NET Framework 4 Client Profile ended up on the netbook, Windows Update stopped working and reported error 8024402F. As soon as it was uninstalled, everything worked just fine again.

    There are three other machines in this house and all of them have working Windows Update and this Client Profile. Does anyone know why it causes this netbook to break and, more importantly, how I can fix it?
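    A note for readers: since the log shows a digest-validation failure on a downloaded .cab rather than a plain connectivity error, one thing worth ruling out on this particular machine is a stale WinHTTP proxy or a cached DNS answer feeding it a corrupted copy of the file. A hedged sketch of checks from an elevated command prompt (standard Windows tools; the certutil comparison is a suggestion, not something from the original post):

    rem Windows Update talks through WinHTTP, not the IE proxy settings
    netsh winhttp show proxy
    netsh winhttp reset proxy

    rem drop any cached DNS answers for download.windowsupdate.com
    ipconfig /flushdns

    rem compare the file's SHA-1 with the 40-hex-digit string embedded in
    rem its name to check whether the download itself arrives corrupted
    certutil -hashfile 4913552_4a5c9563d1f58c77f30d0d5c9999e4b8bff3ab21.cab SHA1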

    Read the article

  • nginx : backend https, proxy_pass shows ip

    - by Vulpo
    I am using nginx as a reverse proxy listening on port 80 (HTTP). I am using proxy_pass to forward requests to backend HTTP and HTTPS servers. Everything works fine for my HTTP server, but when I try to reach the HTTPS server through the nginx reverse proxy, the IP of the HTTPS server is shown in the client's web browser. I want the URI of the nginx server to be shown instead of the HTTPS backend server's IP (once again, this works fine for the HTTP server but not for the HTTPS server). See this post on the forum. Here is my configuration file:

    server {
        listen 80;
        server_name domain1.com;
        access_log off;
        root /var/www;
        if ($request_method !~ ^(GET|HEAD|POST)$ ) {
            return 444;
        }
        location / {
            proxy_pass http://ipOfHttpServer:port/;
        }
    }

    server {
        listen 80;
        server_name domain2.com;
        access_log off;
        root /var/www;
        if ($request_method !~ ^(GET|HEAD|POST)$ ) {
            return 444;
        }
        location / {
            proxy_pass http://ipOfHttpsServer:port/;
            proxy_set_header X_FORWARDED_PROTO https;
            #proxy_set_header Host $http_host;
        }
    }

    When I try the "proxy_set_header Host $http_host" directive, and "proxy_set_header Host $host", the web page can't be reached (page not found). But when I comment it out, the IP of the HTTPS server is shown in the browser (which is bad). Does anyone have an idea? My other config files are:

    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    #proxy_hide_header X-Powered-By;
    proxy_intercept_errors on;
    proxy_buffering on;
    proxy_cache_key "$scheme://$host$request_uri";
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=7d max_size=700m;

    and:

    user www-data;
    worker_processes 2;
    error_log /var/log/nginx/error.log;
    pid /var/run/nginx.pid;

    events {
        worker_connections 1024;
    }

    http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;
        access_log /var/log/nginx/access.log;
        server_names_hash_bucket_size 64;
        sendfile off;
        tcp_nopush on;
        #keepalive_timeout 0;
        keepalive_timeout 65;
        tcp_nodelay on;
        gzip on;
        gzip_comp_level 5;
        gzip_http_version 1.0;
        gzip_min_length 0;
        gzip_types text/plain text/html text/css image/x-icon application/x-javascript;
        gzip_vary on;
        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
    }

    Thanks for your help!
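    A note for readers: two details stand out in the second server block. The proxy_pass URL uses http:// even though the backend speaks TLS, and the included config sets a global proxy_redirect off, so the Location headers the backend sends back (pointing at its own address) are never rewritten; those redirects are what put the raw IP in the browser. A minimal sketch of what that block could look like, keeping the question's placeholders (ipOfHttpsServer and port are the asker's stand-ins, not real values):

    server {
        listen 80;
        server_name domain2.com;

        location / {
            # talk TLS to the backend, since it only listens on https
            proxy_pass https://ipOfHttpsServer:port/;
            # present the public name so the backend's vhost matches
            proxy_set_header Host $host;
            # rewrite Location/Refresh headers that point at the backend
            # back to the public name the client used (overrides the
            # global "proxy_redirect off")
            proxy_redirect https://ipOfHttpsServer:port/ http://domain2.com/;
        }
    }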

    Read the article

  • NGINX - CORS error affecting only Firefox

    - by wiherek
    This is an issue with nginx that affects only Firefox. I have this config (http://pastebin.com/q6Yeqxv9):

    upstream connect {
        server 127.0.0.1:8080;
    }

    server {
        server_name admin.example.com www.admin.example.com;
        listen 80;
        return 301 https://admin.example.com$request_uri;
    }

    server {
        listen 80;
        server_name ankieta.example.com www.ankieta.example.com;
        add_header Access-Control-Allow-Origin $http_origin;
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, PATCH, DELETE';
        add_header 'Access-Control-Allow-Credentials' 'true';
        add_header 'Access-Control-Allow-Headers' 'Access-Control-Request-Method,Access-Control-Request-Headers,Cache,Pragma,Authorization,Accept,Accept-Encoding,Accept-Language,Host,Referer,Content-Length,Origin,DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
        return 301 https://ankieta.example.com$request_uri;
    }

    server {
        server_name admin.example.com;
        listen 443 ssl;
        ssl_certificate /srv/ssl/14182263.pem;
        ssl_certificate_key /srv/ssl/admin_i_ankieta.example.com.key;
        ssl_protocols SSLv3 TLSv1;
        ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM;
        location / {
            proxy_pass http://connect;
        }
    }

    server {
        server_name ankieta.example.com;
        listen 443 ssl;
        ssl_certificate /srv/ssl/14182263.pem;
        ssl_certificate_key /srv/ssl/admin_i_ankieta.example.com.key;
        ssl_protocols SSLv3 TLSv1;
        ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM;
        root /srv/limesurvey;
        index index.php;
        add_header 'Access-Control-Allow-Origin' $http_origin;
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, PATCH, DELETE';
        add_header 'Access-Control-Allow-Credentials' 'true';
        add_header 'Access-Control-Allow-Headers' 'Access-Control-Request-Method,Access-Control-Request-Headers,Cache,Pragma,Authorization,Accept,Accept-Encoding,Accept-Language,Host,Referer,Content-Length,Origin,DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
        client_max_body_size 4M;

        location / {
            try_files $uri $uri/ /index.php?q=$uri&$args;
        }

        location ~ /*.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME /srv/limesurvey$fastcgi_script_name;
            # fastcgi_param HTTPS $https;
            fastcgi_intercept_errors on;
            fastcgi_pass 127.0.0.1:9000;
        }

        location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
            expires max;
            log_not_found off;
        }
    }

    This is basically an AngularJS app and a PHP app (LimeSurvey), served under two different domains by the same web server (nginx). AngularJS is in fact served by ConnectJS, which nginx proxies to (ConnectJS listens only on localhost). In the Firefox console I get this:

    Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://ankieta.example.com/admin/remotecontrol. This can be fixed by moving the resource to the same domain or enabling CORS.

    which of course is annoying. Other browsers work fine (Chrome, IE). Any suggestions on this?
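    A note for readers: one Firefox-specific trap here is the preflight. Before the cross-origin POST to /admin/remotecontrol, the browser sends an OPTIONS request, and Firefox will neither follow the 301 redirects issued by the port-80 blocks for that preflight nor accept a preflight answer that arrives without the CORS headers (which can happen when PHP answers OPTIONS with an error status, since plain add_header is skipped on error responses in the nginx of this era). A hedged sketch that answers the preflight directly in the ankieta.example.com:443 server block, before the request reaches PHP (header list shortened for readability):

    location / {
        if ($request_method = OPTIONS) {
            add_header 'Access-Control-Allow-Origin' "$http_origin";
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, PATCH, DELETE';
            add_header 'Access-Control-Allow-Credentials' 'true';
            add_header 'Access-Control-Allow-Headers' 'Authorization,Content-Type,X-Requested-With';
            add_header 'Content-Length' 0;
            return 204;
        }
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    It is also worth making sure the AngularJS app calls the https:// URL directly, since any request that first hits the port-80 redirect will fail its preflight in Firefox.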

    Read the article

  • iptables rule(s) to send openvpn traffic from clients over an sshuttle tunnel?

    - by Sam Martin
    I have an Ubuntu 12.04 box with OpenVPN. The VPN is working as expected: clients can connect, browse the Web, etc. The OpenVPN server IP is 10.8.0.1 on tun0. On that same box, I can use sshuttle to tunnel into another network to access a Web server on 10.10.0.9. sshuttle does its magic using the following iptables commands:

    iptables -t nat -N sshuttle-12300
    iptables -t nat -F sshuttle-12300
    iptables -t nat -I OUTPUT 1 -j sshuttle-12300
    iptables -t nat -I PREROUTING 1 -j sshuttle-12300
    iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.10.0.0/24 -p tcp --to-ports 12300 -m ttl ! --ttl 42
    iptables -t nat -A sshuttle-12300 -j RETURN --dest 127.0.0.0/8 -p tcp

    Is it possible to forward traffic from OpenVPN clients over the sshuttle tunnel to the remote Web server? I'd ultimately like to be able to set up any complicated tunneling on the server, and have relatively "dumb" clients (iPad, etc.) access the remote servers via OpenVPN. Below is a basic diagram of the scenario.

    [Edit: added output from the OpenVPN box]

    $ sudo iptables -nL -v -t nat
    Chain PREROUTING (policy ACCEPT 1498 packets, 252K bytes)
     pkts bytes target         prot opt in out source    destination
     1512  253K sshuttle-12300 all  --  *  *   0.0.0.0/0 0.0.0.0/0

    Chain INPUT (policy ACCEPT 322 packets, 58984 bytes)
     pkts bytes target prot opt in out source destination

    Chain OUTPUT (policy ACCEPT 584 packets, 43241 bytes)
     pkts bytes target         prot opt in out source    destination
      587 43421 sshuttle-12300 all  --  *  *   0.0.0.0/0 0.0.0.0/0

    Chain POSTROUTING (policy ACCEPT 589 packets, 43595 bytes)
     pkts bytes target     prot opt in out  source      destination
     1175 76298 MASQUERADE all  --  *  eth0 10.8.0.0/24 0.0.0.0/0

    Chain sshuttle-12300 (2 references)
     pkts bytes target   prot opt in out source    destination
       17  1076 REDIRECT tcp  --  *  *   0.0.0.0/0 10.10.0.0/24  TTL match TTL != 42 redir ports 12300
        0     0 RETURN   tcp  --  *  *   0.0.0.0/0 127.0.0.0/8

    $ sudo iptables -nL -v -t filter
    Chain INPUT (policy ACCEPT 97493 packets, 30M bytes)
     pkts bytes target prot opt in out source destination

    Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target prot opt in out source      destination
     131K  109M ACCEPT all  --  *  *   0.0.0.0/0   0.0.0.0/0    state RELATED,ESTABLISHED
     1370 89160 ACCEPT all  --  *  *   10.8.0.0/24 0.0.0.0/0
        0     0 REJECT all  --  *  *   0.0.0.0/0   0.0.0.0/0    reject-with icmp-port-unreachable

    [Edit 2: more OpenVPN server output]

    $ netstat -r
    Kernel IP routing table
    Destination  Gateway      Genmask         Flags MSS Window irtt Iface
    default      192.168.1.1  0.0.0.0         UG      0 0         0 eth0
    10.8.0.0     10.8.0.2     255.255.255.0   UG      0 0         0 tun0
    10.8.0.2     *            255.255.255.255 UH      0 0         0 tun0
    192.168.1.0  *            255.255.255.0   U       0 0         0 eth0

    [Edit 3: still more debug output]

    IP forwarding appears to be enabled correctly on the OpenVPN server:

    # find /proc/sys/net/ipv4/conf/ -name forwarding -ls -execdir cat {} \;
    18926 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/all/forwarding
    1
    18954 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/default/forwarding
    1
    18978 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/eth0/forwarding
    1
    19003 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/lo/forwarding
    1
    19028 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/tun0/forwarding
    1

    Client routing table:

    $ netstat -r
    Routing tables

    Internet:
    Destination  Gateway      Flags Refs  Use  Netif Expire
    0/1          10.8.0.5     UGSc     8    48 tun0
    default      192.168.1.1  UGSc     2  1652 en1
    10.8.0.1/32  10.8.0.5     UGSc     1     0 tun0
    10.8.0.5     10.8.0.6     UHr     13     0 tun0
    10.10.0/24   10.8.0.5     UGSc     0     0 tun0
    <snip>

    Traceroute from client:

    $ traceroute 10.10.0.9
    traceroute to 10.10.0.9 (10.10.0.9), 64 hops max, 52 byte packets
     1  10.8.0.1 (10.8.0.1)  5.403 ms  1.173 ms  1.086 ms
     2  192.168.1.1 (192.168.1.1)  4.693 ms  2.110 ms  1.990 ms
     3  l100.my-verizon-garbage (client-ext-ip)  7.453 ms  7.089 ms  6.248 ms
     4  * * *
     5  10.10.0.9 (10.10.0.9)  14.915 ms !N * 6.620 ms !N
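    A note for readers: the traceroute shows client traffic for 10.10.0.0/24 leaving via the default gateway instead of being intercepted, so the first thing to establish is whether forwarded tun0 packets ever hit the sshuttle chain at all. A hedged diagnostic sketch (nothing here changes routing; the LOG rule is temporary):

    # watch the per-rule packet counters while a client runs traceroute
    watch -n1 'iptables -t nat -nvL sshuttle-12300'

    # temporarily log anything from tun0 that enters the chain
    iptables -t nat -I sshuttle-12300 1 -i tun0 -j LOG --log-prefix "sshuttle-vpn: "
    tail -f /var/log/syslog | grep sshuttle-vpn

    If the counters stay at zero for client traffic, one hedged thing to try is a PREROUTING rule that is explicit about the VPN interface and destination, mirroring sshuttle's own REDIRECT (port 12300 and the TTL-42 loop guard are copied from the rules above):

    iptables -t nat -I PREROUTING 1 -i tun0 -p tcp --dest 10.10.0.0/24 \
        -m ttl ! --ttl 42 -j REDIRECT --to-ports 12300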

    Read the article

  • Email from my new vps is marked as spam

    - by Chriswede
    I got a new VPS from x10vps (x10hosting) and set up the domain via CloudFlare. This is what the email headers look like:

    Delivered-To: [email protected]
    Received: by 10.64.19.240 with SMTP id i16csp357708iee;
            Tue, 9 Oct 2012 01:29:48 -0700 (PDT)
    Received: by 10.50.57.130 with SMTP id i2mr908846igq.56.1349771387599;
            Tue, 09 Oct 2012 01:29:47 -0700 (PDT)
    Return-Path: <[email protected]>
    Received: from power.SOURCEAPE.COM ([198.91.90.116])
            by mx.google.com with ESMTPS id v8si25630942ica.46.2012.10.09.01.29.46
            (version=TLSv1/SSLv3 cipher=OTHER);
            Tue, 09 Oct 2012 01:29:47 -0700 (PDT)
    Received-SPF: temperror (google.com: error in processing during lookup of [email protected]: DNS timeout) client-ip=198.91.90.116;
    Authentication-Results: mx.google.com;
           spf=temperror (google.com: error in processing during lookup of [email protected]: DNS timeout) [email protected]
    Received: from nk11p03mm-asmtp010.mac.com ([17.158.232.169]:54276)
            by power.SOURCEAPE.COM with esmtp (Exim 4.80)
            (envelope-from <[email protected]>)
            id 1TLVBD-0004Ig-1Y
            for [email protected]; Tue, 09 Oct 2012 12:28:43 +0400

    I then tried to enable SPF and DKIM and got the following message:

    In order to ensure that SPF or DKIM takes effect, you must confirm that this server is an authoritative nameserver for chvw.de. If you need help, contact your hosting provider.
    Status: Enabled
    Warning: cPanel is unable to verify that this server is an authoritative nameserver for chvw.de. [?]

    and the email headers now look like this:

    Delivered-To: [email protected]
    Received: by 10.50.183.227 with SMTP id ep3csp14506igc;
            Tue, 9 Oct 2012 01:55:23 -0700 (PDT)
    Received: by 10.50.40.133 with SMTP id x5mr992934igk.32.1349772923717;
            Tue, 09 Oct 2012 01:55:23 -0700 (PDT)
    Return-Path: <[email protected]>
    Received: from power.SOURCEAPE.COM ([198.91.90.116])
            by mx.google.com with ESMTPS id ng8si25688859icb.42.2012.10.09.01.55.23
            (version=TLSv1/SSLv3 cipher=OTHER);
            Tue, 09 Oct 2012 01:55:23 -0700 (PDT)
    Received-SPF: temperror (google.com: error in processing during lookup of [email protected]: DNS timeout) client-ip=198.91.90.116;
    Authentication-Results: mx.google.com;
           spf=temperror (google.com: error in processing during lookup of [email protected]: DNS timeout) [email protected];
           dkim=neutral (bad format) [email protected]
    DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=chvw.de;
            s=default; h=Message-ID:Subject:To:From:Date:Content-Transfer-Encoding:Content-Type:MIME-Version;
            bh=iugsx3Lx0KnqjR7dj3wyQHnJ9pe/z3ntYEVk80k8rx4=;
            b=IrYsCtHdoPubXVOvLqxd7sLE/TyQTS5P3OrEg5SSUSKnQQcQ/fWWyBrmsrgkFSsw6jCmmRWMDR09vH5bQRpFPMA57B7pf8QRKhwXOWFBV+GnVUqICsfRjnNPvhx/lNp5;
    Received: from localhost ([127.0.0.1]:46539 helo=direct.chvw.de)
            by power.SOURCEAPE.COM with esmtpa (Exim 4.80)
            (envelope-from <[email protected]>)
            id 1TLVb0-0004dZ-Kd
            for [email protected]; Tue, 09 Oct 2012 12:55:22 +0400
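    A note for readers: the "spf=temperror ... DNS timeout" results mean Google never received an SPF answer at all, and the cPanel warning explains why: DNS for the zone is hosted at CloudFlare, so records the VPS writes into its own nameserver are invisible to the outside world. The SPF (and DKIM) records have to be created in the CloudFlare DNS panel instead. A hedged sketch of a minimal SPF record in zone-file notation (198.91.90.116 is the sending address taken from the Received headers above; adjust the mechanisms and qualifier to your setup):

    ; add as a TXT record for the bare domain in CloudFlare
    chvw.de.    IN  TXT  "v=spf1 a mx ip4:198.91.90.116 ~all"

    The DKIM public key that cPanel generated (typically a TXT record named default._domainkey.chvw.de) would need to be copied into CloudFlare the same way, which should give receivers something to verify instead of the current "dkim=neutral (bad format)".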

    Read the article

  • Trouble in Nginx hotlink protection

    - by Ayaz Malik
    I am trying to implement image hotlink protection in nginx and I need help. I have a huge issue with my site's images being submitted to social networks like StumbleUpon with a direct link like http://example.com/xxxxx.jpg, which sometimes gets huge traffic and increases CPU and bandwidth usage. I want to block direct access to my images from other referrers and protect them from being hotlinked. Here is the code from my vhost.conf:

    server {
        access_log off;
        error_log logs/vhost-error_log warn;
        listen 80;
        server_name mydomain.com www.mydomain.com;

        # uncomment location below to make nginx serve static files instead of Apache
        # NOTE this will cause issues with bandwidth accounting as files wont be logged
        location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|htm|html|js|css)$ {
            root /home/username/public_html;
            expires 1d;
        }

        root /home/mydomain/public_html;
    }

    location / {
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        # you can increase proxy_buffers here to suppress "an upstream response
        # is buffered to a temporary file" warning
        proxy_buffers 16 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_connect_timeout 30s;
        proxy_redirect http://www.mydomain.com:81 http://www.mydomain.com;
        proxy_redirect http://mydomain.com:81 http://mydomain.com;
        proxy_pass http://ip_address/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        expires 24h;
    }
    }

    For hotlink protection I added this code:

    location ~* (\.jpg|\.png|\.gif|\.jpeg)$ {
        valid_referers blocked www.mydomain.com mydomain.com;
        if ($invalid_referer) {
            return 403;
        }

    This is the current nginx code for this domain, but it didn't work:

    server {
        access_log off;
        error_log logs/vhost-error_log warn;
        listen 80;
        server_name mydomain.com www.mydomain.com;

        # uncomment location below to make nginx serve static files instead of Apache
        # NOTE this will cause issues with bandwidth accounting as files wont be logged
        location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|htm|html|js|css)$ {
            root /home/username/public_html;
            expires 1d;
        }

        root /home/mydomain/public_html;
    }

    location ~* (\.jpg|\.png|\.gif|\.jpeg)$ {
        valid_referers blocked www.mydomain.com mydomain.com;
        if ($invalid_referer) {
            return 403;
        }

    location / {
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        # you can increase proxy_buffers here to suppress "an upstream response
        # is buffered to a temporary file" warning
        proxy_buffers 16 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_connect_timeout 30s;
        proxy_redirect http://www.mydomain.com:81 http://www.mydomain.com;
        proxy_redirect http://mydomain.com:81 http://mydomain.com;
        proxy_pass http://ip_address/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        expires 24h;
    }
    }

    How can I fix this?
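    A note for readers: as pasted, both the hotlink location and the proxy location sit outside the server { } block, and the hotlink block never closes its brace, so nginx will either refuse to load the file or never apply the rule. A hedged sketch of the intended structure, with everything nested inside one server block (mydomain.com, ip_address and the paths are the question's placeholders):

    server {
        listen 80;
        server_name mydomain.com www.mydomain.com;
        root /home/mydomain/public_html;

        # images: referer check first, then serve/cache as before
        location ~* \.(jpg|jpeg|png|gif)$ {
            # "none" allows requests without a Referer header (direct
            # visits, some proxies); drop it to block those as well
            valid_referers none blocked www.mydomain.com mydomain.com;
            if ($invalid_referer) {
                return 403;
            }
            root /home/username/public_html;
            expires 1d;
        }

        location / {
            proxy_pass http://ip_address/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    Running nginx -t before a reload will confirm whether the braces finally balance.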

    Read the article

< Previous Page | 711 712 713 714 715 716 717 718 719 720 721 722  | Next Page >