Search Results

Search found 17980 results on 720 pages for 'mod auth mysql'.


  • Why is my server performance degrading to the point of stopping, periodically?

    - by Pascal Aschwanden
    So, once in a while, I see in Firebug that a request takes over 15 or even 60 seconds to respond, and sometimes never responds at all. Here is what I've ruled out:
    It's not the CPU, because every time I check the server load it's less than 6 for all three numbers. It's not the memory, because that's fairly low too, less than 50%. It's not the I/O anymore, because the graphs Joyent sent back when I requested them show less than 3 MB of I/O (mostly all reads). It's not the SQL performance - I've profiled every last SQL command that runs, and they're all (99.9% of them anyway) running in less than 30 ms; most run in less than 5 ms. I've also been profiling all the script execution times, and even when the problem occurs, the script always manages to finish in 50 ms or less (that's 1/20th of a second).
    Now, I do run a lot of Ajax calls: one every 2 seconds per user, and I have 300+ DAU. But even if all 300 are playing simultaneously, that's still only 150 calls per second max. The only other thing I can think of is that one of my neighbors is funky. The problem is highly intermittent. 99% of the time it works perfectly and there's excellent performance, but 99%+ is not good enough. Eventually the performance gets so bad I have to restart the server, at which point everything is fine again. I've done this about 4 times now. Any ideas?
    Note: this is on Joyent, VPS, intro package, 256 MB of RAM with bursting. Here is the MySQL status info:
        Traffic                           ø per hour
        Received        18 MiB            29 MiB
        Sent            134 MiB           221 MiB
        Total           151 MiB           251 MiB

        Connections                       ø per hour    %
        max. concurrent connections   5   ---           ---
        Failed attempts               0   0.00          0.00%
        Aborted                       0   0.00          0.00%
        Total                     9,418   15.59 k       100.00%
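
    Since the slowdowns are intermittent and the per-query profile looks clean, it may help to capture what MySQL itself is doing at the moment a stall happens. The statements below are standard MySQL commands offered only as a hedged checklist; the idea that connections or running threads might be piling up is an assumption to verify against your own my.cnf limits.

        -- Run these while a slow request is in flight, to see whether
        -- connections or running threads are stacking up behind something.
        SHOW FULL PROCESSLIST;
        SHOW GLOBAL STATUS LIKE 'Threads_%';
        SHOW GLOBAL STATUS LIKE 'Max_used_connections';
        SHOW VARIABLES LIKE 'max_connections';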

    Read the article

  • mod_perl custom configuration directives don't work when placed in .htaccess and there is <Location>

    - by al_l_ex
    I'm trying to complete Redmine's feature request #2693: Use Redmine.pm to authenticate for any directory (1). I don't have much knowledge of all these things and need help. Redmine uses the mod_perl module Redmine.pm for authentication & authorization. This module defines several custom configuration directives. I've successfully modified the patch from (1) and it works when all the config is in <Location>:
        <Location /digischrank/test>
            AuthType basic
            AuthName "Digischrank Test"
            Require valid-user
            PerlAccessHandler Apache::Authn::Redmine::access_handler
            PerlAuthenHandler Apache::Authn::Redmine::authen_handler
            RedmineDSN "DBI:mysql:database=SomedaTaBAse;host=localhost"
            RedmineDbUser "SoMeuSer"
            RedmineDbPass "SomePaSS"
            RedmineProject "digischrank"
        </Location>
    But when I move one of these directives (RedmineProject, see (1)) into the .htaccess file, Redmine.pm doesn't see it! I've tried changing <Location> to <Directory> and adding AllowOverride All. The directives from .htaccess are visible, but the remaining ones from <Directory> are not. I don't want to move all the directives into each .htaccess. When I add <Location> in addition to <Directory>, again only the directives from <Location> are visible. As far as I know, the directives should be merged. Am I missing something?
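
    For reference, a minimal sketch of the split layout being described, assuming the same Redmine.pm handlers and placeholder paths/credentials. Whether a custom mod_perl directive is even allowed in .htaccess depends on the override type it was registered with inside Redmine.pm (its req_override value), so this is only an illustration of the layout, not a guaranteed fix.

        # Virtual host / server config (sketch):
        <Directory /var/www/digischrank/test>
            AllowOverride All
            AuthType basic
            AuthName "Digischrank Test"
            Require valid-user
            PerlAccessHandler Apache::Authn::Redmine::access_handler
            PerlAuthenHandler Apache::Authn::Redmine::authen_handler
            RedmineDSN "DBI:mysql:database=SomedaTaBAse;host=localhost"
            RedmineDbUser "SoMeuSer"
            RedmineDbPass "SomePaSS"
        </Directory>

        # .htaccess inside the protected directory, carrying only the per-directory value:
        RedmineProject "digischrank"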

    Read the article

  • mysqldump --where with = operator doesn't get all rows - Help!

    - by JonathanLIVE
    I have a situation with a particular table that now thinks it contains 4 petabytes of data. I know that sounds cool, but I assure you, it is only on a 60 GB partition. This table has 9 fields in it. One of them is a domain_id field. It is the best field to identify the rows by, as there are only approximately 6300 of them. The only other field option to match has over 2 million records, and that's just more difficult. I cannot do a straight mysqldump because it will attempt to output all 4 PB of data and fill the drive long before it gets close to that, so I need to surgically remove the good stuff, destroy the db, and recreate it. I believe that if I can do a dump for each domain_id record, I will get most of the usable data out of it. This is what I am trying to use:
        mysqldump -u root --skip-opt -q --no-create-info --skip-add-drop-table --max_allowed_packet=1000000000 database table --where="domain_id=10" > domains10.sql
    Using this I expect every row with domain_id 10 to be exported. However, when I check the export, I am only getting 1 row, whereas when I look at the db there are many, many rows. It is as though the operator just finds one, then gives up. I have tried various operators. Using the < or > operators I am able to get more of the data, but the export stops short at certain rows where the data has been compromised. With over 6000 to go through, I can't narrow down which rows are being affected in the export easily enough. So, what I need is an operator that will basically do what I thought = would do: simply give me an export of all records that match the specific field. Also note, the only way I got this DB even accessible is through an InnoDB force recovery 3. So I need to get this right, because after this is done, I have to drop the db in order to make MySQL functional again. Looking forward to any helpful answers.
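
    If the single = dump keeps stopping short, one hedged workaround is to dump each domain_id into its own file so a corrupted row only breaks one small export instead of the whole run. The loop below is a sketch using the same options as above; domain_ids.txt (one id per line) and the output file names are assumptions.

        # Sketch: one dump per domain_id; failures are logged and isolated.
        while read id; do
          mysqldump -u root --skip-opt -q --no-create-info --skip-add-drop-table \
            --max_allowed_packet=1000000000 \
            --where="domain_id=${id}" database table > "domain_${id}.sql" \
            || echo "domain_id ${id} failed" >> failed_ids.txt
        done < domain_ids.txt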

    Read the article

  • Good Hosting Providers With Zend Framework Support [closed]

    - by manyxcxi
    I currently use ixwebhosting for my hosting services. They're cheap and work (most of the time). The databases are horribly slow, the servers are horribly slow, and their support (though usually prompt) is tough to deal with. That being said, they're cheap, I've got like 20 domains hosted in my account, none of them are high volume, and they work JUST good enough- until today. This isn't meant to be a condemnation of ixwh though. Their prices are very low for what they do offer and most things work just fine, most of the time. I need to be able to host web apps written with Zend Framework in a fairly easy fashion. The server performance can't be worse than what I've already had (a pretty low hurdle to clear), and I don't want to spend $30/mo. These are not money making websites- they're projects. My requirements are PHP 5.3, ZF support, MySQL databases, multiple domains- not much. Who should I look at, and who should I look out for?

    Read the article

  • Mod_pagespeed, Varnish and Apache cache issues after new code pushes

    - by WerkkreW
    I have a rather strange issue. In my environment we are running a load-balanced cluster of 8 Apache servers with a master-master MySQL backend. In front of Apache we have Varnish as the cache layer. We have been running Apache mod_pagespeed for several weeks now, and for the most part it has been working great. The issue arises when we do fresh code updates from Git and any/all of the JS/CSS assets change. Basically the problem appears to be twofold. One, after the code push we generally take the opportunity to flush Varnish, restart Apache, and restart Varnish. In doing this, all of the mod_pagespeed combined/minified files are cleared out, ensuring that all of the new JS/CSS assets are fresh. The problem is, upon doing this the file names that mod_pagespeed creates change, but the old files appear to still be cached for many people client side, leading to very unexpected results. However, if we do not restart Apache, the changes to the files may or may not appear client side due to the cached minified assets. The simple solution is to disable mod_pagespeed, however I would rather not do that, as it has made a fairly large impact on performance. I feel as if there must be a better way to deal with the inconsistencies in cache between the client and server, so that people don't have to go to great lengths or perform a large number of page refreshes to see a working page. I can provide configuration snippets if anyone needs them. If you would like to inspect the site, source, headers, or anything, try the following addresses: http://wellplayed.org and http://wellplayed.org/tv Thanks in advance!
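
    One hedged option, rather than restarting Apache after every push, is to use mod_pagespeed's own cache-flush file so the rewritten .pagespeed. resource names are regenerated without bouncing the server, and then purge Varnish so clients pick up the new URLs. The cache path below is the common default and is an assumption to check against your ModPagespeedFileCachePath setting; the Varnish command assumes Varnish 3.x.

        # Flush mod_pagespeed's server-side cache after a code push
        sudo touch /var/cache/mod_pagespeed/cache.flush
        # Then invalidate Varnish so stale HTML referencing old asset names is dropped
        varnishadm "ban req.url ~ ."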

    Read the article

  • Server specification recommendation

    - by foo
    To cut the story short, I can't buy an item (server/CPU/motherboard/RAM) that costs more than USD 330. However, I can combine them, meaning I can buy a CPU that costs USD 330 and a motherboard that costs USD 330. With this limitation, I can't buy a powerful 1U server, which would definitely cost more than USD 330. With that in mind, I was hoping to build a powerful desktop PC which will be used as a database server. However, in my experience desktop PCs don't last very long; usually the motherboard just dies by itself after 1 or 2 years. So, what would you guys recommend I buy with this kind of budget? Every item must be <= USD 330. It will be used as a MySQL server. RAID would be nice. 1 TB is pretty big for my data. I do not need an external graphics card (onboard would do just fine), mouse, keyboard, or monitor. Linux friendly. One Ethernet port is good enough. It's important that the hardware is made of components that will last (at least 3 years or so). The server will be placed in an air-conditioned room, but good ventilation for the server is always preferred. I won't overclock it. An Intel processor is preferred. Thanks in advance.

    Read the article

  • 5 year old server upgrade

    - by rizzo0917
    I am looking to upgrade a server for a web app. Currently the application is running very sluggishly. We've made some adjustments to MySQL (that's another issue in itself) and made some adjustments so that the heaviest queries get run on a copy of the database on another server we have as a backup; however, this will not last much longer and we are looking to upgrade. Currently the server's CPUs are (4) Intel(R) XEON(TM) CPU 2.00GHz, with 1 GB of RAM. The database is 442.5 MiB, with about 1,743,808 records. There are two parts of the program: one, side A, inserts and updates most of the data; side B reads the data and does some minor updates. Currently our biggest day for side A is 800 users (of 40,000 users all year) inputting data into the system. Our side B load is currently unknown, however we have a total of 1000 clients. The system is most likely going to cap out at 5000 side B clients, with about 300,000 side A users a year. The current database is 5 years old, so we can expect it to grow pretty rapidly, possibly doubling each year (we can most likely archive older records if it comes to that). So with that being said, should we get a server for each side of the app, side A being the master and side B being the slave, with any updates made on side B routed to side A? So the question is: should I get two of these, or one?
        2 x Intel Nehalem Xeon E5520 2.26Ghz (8 Cores)
        12GB DDRIII Memory
        500GB SATAII HDD
        100Mbps Port Speed
    And naturally I would need a redundant backup, so it could potentially be 4 of them.
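
    If you do split side A and side B across two boxes, the usual way to route side B's changes back is plain MySQL replication with side A as the master. A minimal, hedged my.cnf sketch is below; server ids and log names are placeholders, and side B's writes would still need to be sent to the master, since replication itself is one-way.

        # side A (master) - my.cnf sketch
        [mysqld]
        server-id = 1
        log-bin   = mysql-bin

        # side B (slave) - my.cnf sketch
        [mysqld]
        server-id = 2
        relay-log = relay-bin
        read_only = 1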

    Read the article

  • please take a look at my server's ram usage

    - by user66779
    Hi, I am a noob with servers. I have a CentOS 5.5 VPS with 512 MB of RAM. My goal is to have it host just one Magento store. I've installed Magento on the server without any control panel, by just installing LAMP myself plus whatever PHP extensions were necessary to get Magento to install. As soon as I visit my Magento store, suddenly the RAM on the VPS is almost completely used, with only about 100 MB left. Please see this screenshot of htop taken after just myself visiting the website: http://img714.imageshack.us/img714/1944/screenouv.png As you can see, there's only around 100 MB left. Is that normal? I'm wondering if I might have done something stupid with the server that makes it very resource hungry. I installed Apache from the CentOS base repo, PHP 5.3 from the IUS repository, and MySQL 5.1 also from the IUS repo. I haven't changed any of the default config files for any of these, except to make memory_minimum 256 in php.ini. Is there anything I can do to free up more RAM? I'm clueless, but I see each Apache daemon is using 8% of available RAM, and AFAIK each visitor needs one Apache daemon. So I would run out of RAM with just a handful of visitors. Thanks for your advice.
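
    With 512 MB of RAM and each Apache child using roughly 8% of it, the usual lever is to cap how many children prefork may spawn so Apache cannot push the box into swap. The values below are only a hedged starting point for a small Magento VPS, not tested numbers; tune them against what htop shows under real traffic.

        # httpd.conf (Apache 2.2 prefork) - sketch, adjust to taste
        <IfModule prefork.c>
            StartServers         2
            MinSpareServers      2
            MaxSpareServers      4
            MaxClients          10
            MaxRequestsPerChild 500
        </IfModule>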

    Read the article

  • Ruby Gem LoadError mysql2/mysql2 required

    - by Kalli Dalli
    Im trying to setup my rails server on OSX 10.8 but I can't get my rails server to run. - Currently Im using a Zend Server with mysql 5.1. - I also have istalled brew and brew mysql. - And I used: gem install mysql2 -- --srcdir=/usr/local/mysql/include --with-opt-include=/usr/local/mysql/include the server worked already but now, I always get this loadError below. This is what my Gemfile says: ralphs-macbook-pro:admin-mockup zero$ bundle install Using rake (10.0.2) Using i18n (0.6.1) Using multi_json (1.3.7) Using activesupport (3.2.7) Using builder (3.0.4) Using activemodel (3.2.7) Using erubis (2.7.0) Using journey (1.0.4) Using rack (1.4.1) Using rack-cache (1.2) Using rack-test (0.6.2) Using hike (1.2.1) Using tilt (1.3.3) Using sprockets (2.1.3) Using actionpack (3.2.7) Using mime-types (1.19) Using polyglot (0.3.3) Using treetop (1.4.12) Using mail (2.4.4) Using actionmailer (3.2.7) Using arel (3.0.2) Using tzinfo (0.3.35) Using activerecord (3.2.7) Using activeresource (3.2.7) Using annotate (2.5.0) Using coffee-script-source (1.4.0) Using execjs (1.4.0) Using coffee-script (2.2.0) Using rack-ssl (1.3.2) Using json (1.7.5) Using rdoc (3.12) Using thor (0.16.0) Using railties (3.2.7) Using coffee-rails (3.2.2) Using columnize (0.3.6) Using debugger-ruby_core_source (1.1.5) Using debugger-linecache (1.1.2) Using debugger (1.2.2) Using formtastic (2.2.1) Using haml (3.1.7) Using haml-rails (0.3.5) Using hirb (0.7.0) Using hpricot (0.8.6) Using jquery-rails (2.1.4) Using kgio (2.7.4) Using mysql2 (0.3.11) Using php_serialize (1.2) Using polyamorous (0.5.0) Using rabl (0.7.8) Using railroady (1.1.0) Using bundler (1.2.3) Using rails (3.2.7) Using raindrops (0.10.0) Using randumb (0.3.0) Using sass (3.2.3) Using sass-rails (3.2.5) Using squeel (1.0.13) Using uglifier (1.3.0) Using unicorn (4.4.0) Your bundle is complete! Use `bundle show [gemname]` to see where a bundled gem is installed. And after starting rails s /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/mysql2-0.3.11/lib/mysql2.rb:9:in `require': cannot load such file -- mysql2/mysql2 (LoadError) from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/mysql2-0.3.11/lib/mysql2.rb:9:in `<top (required)>' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:68:in `require' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:68:in `block (2 levels) in require' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:66:in `each' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:66:in `block in require' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:55:in `each' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:55:in `require' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler.rb:128:in `require' from /Users/zero/GitHub/admin-mockup/config/application.rb:7:in `<top (required)>' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/railties-3.2.7/lib/rails/commands.rb:53:in `require' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/railties-3.2.7/lib/rails/commands.rb:53:in `block in <top (required)>' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/railties-3.2.7/lib/rails/commands.rb:50:in `tap' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/railties-3.2.7/lib/rails/commands.rb:50:in `<top (required)>' from script/rails:6:in `require' from script/rails:6:in `<main>' Thx for any help!
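
    The "cannot load such file -- mysql2/mysql2" error usually means the gem's compiled extension no longer matches the MySQL client library it was built against. A hedged way to rebuild it against the Homebrew MySQL is below; the mysql_config path is an assumption for a default Homebrew install.

        # Rebuild the mysql2 native extension (sketch)
        gem uninstall mysql2
        bundle config build.mysql2 --with-mysql-config=/usr/local/bin/mysql_config
        bundle install
        # If it still fails, check which libmysqlclient the extension links against:
        # otool -L $(gem contents mysql2 | grep mysql2.bundle)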

    Read the article

  • Unable to edit a database row from JSF

    - by user1924104
    Hi guys i have a data table in JSF which displays all of the contents of my database table, it displays it fine, i also have a delete function that can successfully delete from the database fine and updates the data table fine however when i try to update the database i get the error java.lang.IllegalArgumentException: Cannot convert richard.test.User@129d62a7 of type class richard.test.User to long below is the code that i have been using to delete the rows in the database that is working fine : public void delete(long userID) { PreparedStatement ps = null; Connection con = null; if (userID != 0) { try { Class.forName("com.mysql.jdbc.Driver"); con = DriverManager.getConnection("jdbc:mysql://localhost:3306/test", "root", "root"); String sql = "DELETE FROM user1 WHERE userId=" + userID; ps = con.prepareStatement(sql); int i = ps.executeUpdate(); if (i > 0) { System.out.println("Row deleted successfully"); } } catch (Exception e) { e.printStackTrace(); } finally { try { con.close(); ps.close(); } catch (Exception e) { e.printStackTrace(); } } } } i simply wanted to edit the above code so it would update the records instead of deleting them so i edited it to look like : public void editData(long userID) { PreparedStatement ps = null; Connection con = null; if (userID != 0) { try { Class.forName("com.mysql.jdbc.Driver"); con = DriverManager.getConnection("jdbc:mysql://localhost:3306/test", "root", "root"); String sql = "UPDATE user1 set name = '"+name+"', email = '"+ email +"', address = '"+address+"' WHERE userId=" + userID; ps = con.prepareStatement(sql); int i = ps.executeUpdate(); if (i > 0) { System.out.println("Row updated successfully"); } } catch (Exception e) { e.printStackTrace(); } finally { try { con.close(); ps.close(); } catch (Exception e) { e.printStackTrace(); } } } } and the xhmtl is : <p:dataTable id="dataTable" var="u" value="#{userBean.getUserList()}" paginator="true" rows="10" editable="true" paginatorTemplate="{CurrentPageReport} {FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink} {RowsPerPageDropdown}" rowsPerPageTemplate="5,10,15"> <p:column> <f:facet name="header"> User ID </f:facet> #{u.userID} </p:column> <p:column> <f:facet name="header"> Name </f:facet> #{u.name} </p:column> <p:column> <f:facet name="header"> Email </f:facet> #{u.email} </p:column> <p:column> <f:facet name="header"> Address </f:facet> #{u.address} </p:column> <p:column> <f:facet name="header"> Created Date </f:facet> #{u.created_date} </p:column> <p:column> <f:facet name="header"> Delete </f:facet> <h:commandButton value="Delete" action="#{user.delete(u.userID)}" /> </p:column> <p:column> <f:facet name="header"> Delete </f:facet> <h:commandButton value="Edit" action="#{user.editData(u)}" /> </p:column> currently when you press the edit button it will only update it with the same values as i haven't yet managed to get the datatable to be editable with the database, i have seen a few examples with an array list where the data table gets its values from but never a database so if you have any advice on this too it would be great thanks
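
    The exception itself comes from the Edit button passing the whole row object into a method that expects a long, so action="#{user.editData(u.userID)}" is the first thing to change in the XHTML. Beyond that, a hedged sketch of the update using a parameterised PreparedStatement (same table, columns and connection details as in the question, which are placeholders) avoids building SQL out of raw strings:

        // Requires: import java.sql.Connection; import java.sql.DriverManager;
        //           import java.sql.PreparedStatement; import java.sql.SQLException;
        public void editData(long userID) {
            String sql = "UPDATE user1 SET name = ?, email = ?, address = ? WHERE userId = ?";
            try (Connection con = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/test", "root", "root");
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, name);      // bean fields, as in the original code
                ps.setString(2, email);
                ps.setString(3, address);
                ps.setLong(4, userID);
                if (ps.executeUpdate() > 0) {
                    System.out.println("Row updated successfully");
                }
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }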

    Read the article

  • rewrite 301 replace domain name with a new domain name

    - by user172697
    Hello guys, I need some help with a mod_rewrite 301 redirect from my old website address to the new address. Here is my scenario: I have www.domain1.com/page1/ and I want it redirected to domain2.com/page1/. I need to replace every request that goes to domain1 with domain2 and keep the page path after the .com, so whatever was after .com should stay the same; just replace domain1 with domain2. Can anyone help me with this? Regards
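
    A hedged sketch of the two usual approaches: a plain Redirect in the virtual host that answers for domain1.com, or a mod_rewrite rule in .htaccess when both names point at the same document root. domain1.com and domain2.com are the placeholders from the question.

        # Option 1: inside the VirtualHost for domain1.com (the path is preserved automatically)
        Redirect 301 / http://domain2.com/

        # Option 2: .htaccess on a shared document root
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?domain1\.com$ [NC]
        RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]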

    Read the article

  • Amazon EC2 php/fedora htaccess mod_rewrite not working

    - by wes
    Amazon EC2 with Fedora. There are 2 copies of the httpd conf: the one for the doc root is automatically set as /home/webuser/helloworld/conf/httpd.conf, and the default (which is also there) is /etc/httpd/conf/httpd.conf. mod_rewrite is enabled in both. We are on Drupal, the .htaccess is in the folder it needs to be in (for Drupal), and the site loads the homepage and other static files fine, but it WILL not use .htaccess. Any thoughts?
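
    When Drupal's .htaccess is silently ignored, the usual culprit is AllowOverride None in whichever httpd.conf Apache is actually loading. A hedged sketch of the stanza to check is below; the DocumentRoot path is a guess based on the paths in the question.

        # In the httpd.conf that the running Apache actually reads
        <Directory "/home/webuser/helloworld/html">
            AllowOverride All      # "None" makes Apache skip .htaccess entirely
            Options FollowSymLinks
        </Directory>
        # then: apachectl configtest && service httpd restart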

    Read the article

  • deadlocks in the innodb status

    - by shantanuo
    Mysql sever has suddenly become very slow. There are no queries in the slow query log but the innodb status shows something like the following. Does it mean that it is due to innodb deadlock? if Yes, what is the way out? *************************** 1. row *************************** Status: ===================================== 100315 12:55:29 INNODB MONITOR OUTPUT ===================================== Per second averages calculated from the last 5 seconds ---------- SEMAPHORES ---------- OS WAIT ARRAY INFO: reservation count 187532, signal count 188120 Mutex spin waits 0, rounds 61908654, OS waits 33052 RW-shared spins 89241, OS waits 41948; RW-excl spins 5857, OS waits 1557 ------------------------ LATEST DETECTED DEADLOCK ------------------------ 100315 12:43:02 *** (1) TRANSACTION: TRANSACTION 0 56996536, ACTIVE 0 sec, process no 5000, OS thread id 3031395216 starting index read mysql tables in use 1, locked 1 LOCK WAIT 6 lock struct(s), heap size 1024, undo log entries 6 MySQL thread id 994, query id 7699751 localhost application Searching rows for update UPDATE QUERY *** (1) WAITING FOR THIS LOCK TO BE GRANTED: RECORD LOCKS space id 0 page no 4073 n bits 296 index `PRIMARY` of table `dbII/tbl_ticket_block_master` trx id 0 56996536 lock_mode X locks r ec but not gap waiting Record lock, heap no 141 PHYSICAL RECORD: n_fields 23; compact format; info bits 0 0: len 7; hex 33353837393936; asc 3587996;; 1: len 4; hex 800001f4; asc ;; 2: len 1; hex 47; asc G;; 3: len 2; hex 6f6b; asc ok;; 4: le n 6; hex 0000035957fe; asc YW ;; 5: len 7; hex 000000401737c0; asc @ 7 ;; 6: SQL NULL; 7: SQL NULL; 8: SQL NULL; 9: len 3; hex 8fb46e; asc n;; 10: SQL NULL; 11: len 1; hex 30; asc 0;; 12: len 0; hex ; asc ;; 13: SQL NULL; 14: len 1; hex 33; asc 3;; 15: len 4; hex 4b9ceebe ; asc K ;; 16: len 1; hex 30; asc 0;; 17: len 4; hex 80006ae8; asc j ;; 18: len 0; hex ; asc ;; 19: len 0; hex ; asc ;; 20: len 0; hex ; asc ;; 21: len 0; hex ; asc ;; 22: len 0; hex ; asc ;; *** (2) TRANSACTION: TRANSACTION 0 56996527, ACTIVE 0 sec, process no 5000, OS thread id 2961476496 fetching rows, thread declared inside InnoDB 237 mysql tables in use 3, locked 3 121 lock struct(s), heap size 11584, undo log entries 16 MySQL thread id 995, query id 7699729 localhost application Searching rows for update UPDATE QUERY *** (2) HOLDS THE LOCK(S): RECORD LOCKS space id 0 page no 4073 n bits 296 index `PRIMARY` of table `DBII/tbl_ticket_block_master` trx id 0 56996527 lock_mode X Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0 0: len 8; hex 73757072656d756d; asc supremum;; Record lock, heap no 2 PHYSICAL RECORD: n_fields 23; compact format; info bits 0 0: len 7; hex 33353837343631; asc 3587461;; 1: len 4; hex 800001f4; asc ;; 2: len 1; hex 47; asc G;; 3: len 6; hex 497373756564; asc Is sued;; 4: len 6; hex 000003425295; asc BR ;; 5: len 7; hex 8000000464012c; asc d ,;; 6: SQL NULL; 7: len 4; hex 80000058; asc X;; 8: len 1; hex 43; asc C;; 9: len 3; hex 8fb465; asc e;; 10: len 3; hex 8fb46d; asc m;; 11: len 1; hex 30; asc 0;; 12: len 0; hex ; asc ; ; 13: SQL NULL; 14: len 1; hex 33; asc 3;; 15: len 4; hex 4b9b33a2; asc K 3 ;; 16: len 3; hex 756d67; asc umg;; 17: len 4; hex 80006744; asc gD;; 18: len 0; hex ; asc ;; 19: len 0; hex ; asc ;; 20: len 0; hex ; asc ;; 21: len 0; hex ; asc ;; 22: len 0; hex ; asc ;;

    Read the article

  • mod_rewrite and htaccess

    - by chris
    I have set up a few rules based on other questions, but now my CSS breaks. I used to have the URL /eshop/cart.php?products_id=bla and everything worked fine, but now with my mod_rewrite URL, /product/product-title/, it loses the base directory. Is there an option to fix this, so I don't have to go back and put the full URL in all the img src tags and so on?
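
    The relative asset paths break because the browser now resolves them against /product/product-title/ instead of /eshop/. A hedged fix that avoids editing every img src is to declare a base URL in the page head (or switch the assets to root-relative paths); /eshop/ is assumed to be the real directory from the question.

        <!-- In the <head> of the rewritten pages: relative src/href now resolve
             against the shop directory rather than the virtual /product/... path -->
        <base href="/eshop/" />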

    Read the article

  • Using altered RewriteCond to check if file exists in multiple locations

    - by futuraprime
    I want to use mod-rewrite to check for a file in two separate locations: at the requested URL, and at the requested URL within the "public" directory. Here's what I have so far: RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{DOCUMENT_ROOT}/public/$0 !-f #this line isn't working RewriteRule ^(.*)$ index.php?$1 [L] It can correctly determine if the file is there, but it's not correctly determining if the file is at /public/supplied/file/path/file.extension. How do I test for that?
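
    A hedged variant of that third condition builds the second path from REQUEST_URI instead of the $0 backreference, which keeps the test independent of how the rule's pattern matched; it assumes "public" sits directly under the document root.

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # also fall through to index.php only if the file is missing under /public
        RewriteCond %{DOCUMENT_ROOT}/public%{REQUEST_URI} !-f
        RewriteRule ^(.*)$ index.php?$1 [L]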

    Read the article

  • Python module being reloaded for each request with django and mod_wsgi

    - by Vishal
    I have a variable in the init of a module which gets loaded from the database and takes about 15 seconds. With the Django development server everything works fine, but it looks like with Apache2 and mod_wsgi the module is loaded with every request (taking 15 seconds each time). Any idea about this behavior? Update: I have enabled daemon mode in mod_wsgi and it looks like it's not reloading the modules now! It needs more testing and I will update.
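
    For reference, a minimal daemon-mode sketch: in embedded mode every Apache child imports the module on its own (hence the repeated 15-second load), while daemon mode keeps a small number of long-lived Python processes around. Names and paths below are placeholders.

        # mod_wsgi daemon mode (sketch)
        WSGIDaemonProcess mysite processes=2 threads=15 display-name=%{GROUP}
        WSGIProcessGroup mysite
        WSGIScriptAlias / /srv/mysite/django.wsgi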

    Read the article

  • Make index.cgi redirect to Apache webserver document root

    - by Casey
    I'm trying to serve a CGI file as my document root, and I do not want to expose the fact that the server is running a CGI script. How can I map the URL http://host/index.cgi/ back to http://host/ in Apache2? I'm guessing it involves mod_rewrite, but I haven't finished grokking all the docs yet. The following configuration is working, but I'm guessing there is a more complete solution:
        RewriteEngine ON
        Redirect /index.cgi/ /
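
    A hedged alternative to the Redirect line is to let mod_rewrite hand every request to the script internally, so /index.cgi never shows up in the address bar and explicit requests for it bounce back to the clean URL. Sketch for the document root's .htaccess (adapt the patterns if it lives in the main config):

        RewriteEngine On
        # explicit /index.cgi/... requests get sent back to the clean URL
        RewriteCond %{THE_REQUEST} \s/index\.cgi(/\S*)?\s [NC]
        RewriteRule ^index\.cgi(/(.*))?$ /$2 [R=301,L]
        # everything else is served through the script without exposing it
        RewriteCond %{REQUEST_URI} !^/index\.cgi
        RewriteRule ^(.*)$ /index.cgi/$1 [L,QSA]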

    Read the article

  • sql UPDATE, a calculation is used multiple times, can it just be calculated once?

    - by Zachery Delafosse
    UPDATE `play`
    SET `counter1` = `counter1` + LEAST(`maxchange`, FLOOR(`x` / `y`)),
        `counter2` = `counter2` - LEAST(`maxchange`, FLOOR(`x` / `y`)),
        `x` = MOD(`x`, `y`)
    WHERE `x` > `y` AND `maxchange` > 0
    As you can see, LEAST(`maxchange`, FLOOR(`x` / `y`)) is used multiple times, but it should always have the same value. Is there a way to optimize this so it is only calculated once? I'm coding this in PHP, for the record.
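
    One hedged way to compute the value once in plain MySQL is a user variable assigned in the first SET expression and reused in the second (MySQL evaluates the assignments of a single-table UPDATE left to right, but that is worth verifying on your version); otherwise compute it in PHP and inline the number.

        UPDATE `play`
        SET `counter1` = `counter1` + (@delta := LEAST(`maxchange`, FLOOR(`x` / `y`))),
            `counter2` = `counter2` - @delta,
            `x`        = MOD(`x`, `y`)
        WHERE `x` >= `y` AND `maxchange` > 0;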

    Read the article

  • How do I add a custom mod_rewrite rule to a Drupal site on module install?

    - by DKinzer
    I have a Drupal 6 module that needs the following mod_rewrite rule added to the .htaccess file of the site's root directory in order to work:
        RewriteRule ^blog/([0-9]{4}/[0-9]{2}/[0-9]{2}/.*)$ wordpress/$1
    How can I add this line programmatically when the module is first installed? Can I use custom_url_rewrite_inbound(&$result, $path, $path_language) to do this? If so, could you please show me an example? Thanks, D
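
    custom_url_rewrite_inbound() only remaps paths inside Drupal, so it cannot add an Apache rule by itself. A hedged sketch of appending the rule from the module's hook_install() is below; "mymodule" is a placeholder name, there is no error handling, and many sites prefer to edit .htaccess by hand rather than from code.

        <?php
        // mymodule.install (Drupal 6) - sketch only
        function mymodule_install() {
          $rule = "\nRewriteRule ^blog/([0-9]{4}/[0-9]{2}/[0-9]{2}/.*)$ wordpress/\$1\n";
          $htaccess = './.htaccess';  // site root, relative to Drupal's index.php
          if (is_writable($htaccess) && strpos(file_get_contents($htaccess), trim($rule)) === FALSE) {
            file_put_contents($htaccess, $rule, FILE_APPEND);
          }
        }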

    Read the article

  • .htaccess mod_rewrite URL query

    - by 1001001
    I was hoping someone could help me out. I'm building a CRM application and need help modifying the .htaccess file to clean up the URLs. I've read every post regarding .htaccess and mod_rewrite, and I've even tried using http://www.generateit.net/mod-rewrite/ to obtain the results, with no success. Here is what I am attempting to do. Let's call the base URL www.domain.com. We are using PHP with a MySQL back-end and some jQuery and JavaScript. In that "root" folder is my .htaccess file. I'm not sure if I need a .htaccess file in each subdirectory or if one in the root is enough. We have several actual directories of files, including "crm", "sales", "finance", etc. First off, we want to strip off all the ".php" extensions, which I am able to do myself thanks to these posts. However, the querying of the company and contact IDs is where I am stuck. Right now if I load www.domain.com/crm/companies.php it displays all the companies in a list. If I click on one of the companies, it uses JavaScript to call a "goto_company(x)" jQuery script that writes a form and submits that form based on the ID (x) of the company. This works fine and keeps the links clean, as all the end user sees is www.domain.com/crm/company.php. However, you can't navigate directly to a company. So we added a few lines in PHP to check if the POST is null and try a GET instead, allowing us to do www.domain.com/crm/company.php?companyID=40, which displays company #40 out of the database. I need to rewrite this link, and all other associated links, to www.domain.com/crm/company/40. I've tried everything and nothing seems to work. Keep in mind that I need to do this for "contacts", and the sales portion of the app will need something similar for "deals". To summarize, here's what I am looking to do:
        Change www.domain.com/crm/dash.php to www.domain.com/crm/dash
        Change www.domain.com/crm/company.php?companyID=40 to www.domain.com/crm/company/40
        Change www.domain.com/crm/contact.php?contactID=27 to www.domain.com/crm/contact/27
        Change www.domain.com/sales/dash.php to www.domain.com/sales/dash
        Change www.domain.com/sales/deal.php?dealID=6 to www.domain.com/sales/deal/6
    (40, 27, and 6 are just arbitrary numbers used as examples.) Just for reference, when I used the generateit.net/mod-rewrite site with www.domain.com/crm/company.php?companyID=40 as an example, here is what it told me to put in my .htaccess file:
        Options +FollowSymLinks
        RewriteEngine On
        RewriteRule ^crm/company/([^/]*)$ /crm/company.php?companyID=$1 [L]
    Needless to say, that didn't work.
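
    For what it's worth, a hedged sketch of a single root .htaccess covering all three ID patterns plus the generic .php strip is below. One file at the root is enough as long as the rules use paths relative to it; the query-string keys (companyID, contactID, dealID) are taken from the question and everything else is an assumption to adjust.

        Options +FollowSymLinks
        RewriteEngine On

        # /crm/company/40 -> /crm/company.php?companyID=40 (same idea for the others)
        RewriteRule ^crm/company/([0-9]+)/?$  crm/company.php?companyID=$1  [L,QSA]
        RewriteRule ^crm/contact/([0-9]+)/?$  crm/contact.php?contactID=$1  [L,QSA]
        RewriteRule ^sales/deal/([0-9]+)/?$   sales/deal.php?dealID=$1      [L,QSA]

        # /crm/dash -> /crm/dash.php, only when that .php file actually exists
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}.php -f
        RewriteRule ^(.+)$ $1.php [L,QSA]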

    Read the article

  • how do I use mod_rewrite for just one file?

    - by Angela
    I want to use mod_rewrite for just one file: I want www.domain.com/contact to pull from www.domain.com/contact.php. I initially used a rewrite rule for all files that look like a directory to do this, but it messed up some directory redirects I had created, so in the short term I'd rather just do it for a specific file. Thanks.
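
    A hedged one-liner for exactly that case, assuming the rule lives in the root .htaccess:

        RewriteEngine On
        # only /contact is rewritten; every other URL is left untouched
        RewriteRule ^contact/?$ contact.php [L]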

    Read the article

  • Apache : Illegal override option FileInfo

    - by Kave
    I have installed a new Ubuntu 12.04 server and set up Apache and MySQL. I am just trying to replicate what I have on my current server and came across one single problem: FileInfo. Within these two files below:
        /etc/apache2/sites-available/default-ssl
        /etc/apache2/sites-available/default
    I need to add some overrides for the Apache server. Original:
        <Directory /var/www/MySite>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>
    New:
        <Directory /var/www/MySite>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride FileInfo, Indexes
            Order allow,deny
            allow from all
        </Directory>
    I have installed the following mods for Apache:
        sudo apt-get install lamp-server^ -y
        sudo apt-get install apache2.2-common apache2-utils openssl openssl-blacklist openssl-blacklist-extra -y
        sudo apt-get install curl libcurl3 libcurl3-dev php5-curl -y
        sudo apt-get install php5-tidy -y
        sudo apt-get install php5-gd -y
        sudo apt-get install php-apc -y
        sudo apt-get install memcached -y
        sudo apt-get install php5-memcache -y
        sudo a2enmod ssl
        sudo a2enmod rewrite
        sudo a2enmod headers
        sudo a2enmod expires
        sudo a2enmod php5
    So when I do a restart with AllowOverride None, it's all OK:
        sudo /etc/init.d/apache2 restart
        * Restarting web server apache2 ... waiting [OK]
    But as soon as I change the AllowOverride to FileInfo, Indexes:
        Syntax error on line 11 of /etc/apache2/sites-enabled/000-default:
        Illegal override option FileInfo,
        Action 'configtest' failed.
        The Apache error log may have more information.
        ...fail!
    I can't see anything unusual in the error.log:
        [Wed Jun 06 08:23:51 2012] [notice] caught SIGTERM, shutting down
        [Wed Jun 06 08:23:52 2012] [warn] RSA server certificate CommonName (CN) `mySite.com' does NOT match server name!?
        [Wed Jun 06 08:23:52 2012] [warn] RSA server certificate CommonName (CN) `mySite.com' does NOT match server name!?
        [Wed Jun 06 08:23:52 2012] [notice] Apache/2.2.22 (Ubuntu) PHP/5.3.10-1ubuntu3.1 with Suhosin-Patch mod_ssl/2.2.22 OpenSSL/1.0.1 configured -- resuming normal operations
    I get that warning because it's a test server; nonetheless I get the same warning with AllowOverride None and yet Apache restarts correctly, so the warning should be harmless. Have I missed something? Thanks,
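
    For reference, AllowOverride takes a space-separated list of override types, so the comma is what configtest is rejecting. A hedged version of the stanza:

        <Directory /var/www/MySite>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride FileInfo Indexes
            Order allow,deny
            allow from all
        </Directory>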

    Read the article

  • Mpd as pppoe server with authorisation by freeradius2

    - by Korjavin Ivan
    I install freeradius2, add to raddb/users: test Cleartext-Password := "test1" Service-Type = Framed-User, Framed-Protocol = PPP, Framed-IP-Address = 10.36.0.2, Framed-IP-Netmask = 255.255.255.0, start radiusd, and check auth: radtest test test1 127.0.0.1 1002 testing123 Sending Access-Request of id 199 to 127.0.0.1 port 1812 User-Name = "test" User-Password = "test1" NAS-IP-Address = 127.0.0.1 NAS-Port = 1002 Message-Authenticator = 0x00000000000000000000000000000000 rad_recv: Access-Accept packet from host 127.0.0.1 port 1812, id=199, length=44 Service-Type = Framed-User Framed-Protocol = PPP Framed-IP-Address = 10.36.0.2 Framed-IP-Netmask = 255.255.255.0 Works fine. Next step. Add to mpd.conf: radius: set auth disable internal set auth max-logins 1 CI set auth enable radius-auth set radius timeout 90 set radius retries 2 set radius server 127.0.0.1 testing123 1812 1813 set radius me 127.0.0.1 create link template L pppoe set link action bundle B set link max-children 1000 set link no multilink set link no shortseq set link no pap chap-md5 chap-msv1 chap-msv2 set link enable chap set pppoe acname Internet load radius create link template em1 L set pppoe iface em1 set link enable incoming And trying to connect, auth failed, here is mpd log: mpd: [em1-2] LCP: auth: peer wants nothing, I want CHAP mpd: [em1-2] CHAP: sending CHALLENGE #1 len: 21 mpd: [em1-2] LCP: LayerUp mpd: [em1-2] CHAP: rec'd RESPONSE #1 len: 58 mpd: [em1-2] Name: "test" mpd: [em1-2] AUTH: Trying RADIUS mpd: [em1-2] RADIUS: Authenticating user 'test' mpd: [em1-2] RADIUS: Rec'd RAD_ACCESS_REJECT for user 'test' mpd: [em1-2] AUTH: RADIUS returned: failed mpd: [em1-2] AUTH: ran out of backends mpd: [em1-2] CHAP: Auth return status: failed mpd: [em1-2] CHAP: Reply message: ^AE=691 R=1 mpd: [em1-2] CHAP: sending FAILURE #1 len: 14 mpd: [em1-2] LCP: authorization failed Then i start freeradius as radiusd -fX, and get this log: rad_recv: Access-Request packet from host 127.0.0.1 port 46400, id=223, length=282 NAS-Identifier = "rubin.svyaz-nt.ru" NAS-IP-Address = 127.0.0.1 Message-Authenticator = 0x14d36639bed8074ec2988118125367ea Acct-Session-Id = "815965-em1-2" NAS-Port = 2 NAS-Port-Type = Ethernet Service-Type = Framed-User Framed-Protocol = PPP Calling-Station-Id = "00e05290b3e3 / 00:e0:52:90:b3:e3 / em1" NAS-Port-Id = "em1" Vendor-12341-Attr-12 = 0x656d312d32 Tunnel-Medium-Type:0 = IEEE-802 Tunnel-Client-Endpoint:0 = "00:e0:52:90:b3:e3" User-Name = "test" MS-CHAP-Challenge = 0xbb1e68d5bbc30f228725a133877de83e MS-CHAP2-Response = 0x010088746ae65b68e435e9d045ad6f9569b60000000000000000b56991b4f20704cb6c68e5982eec5e98a7f4b470c109c1b9 # Executing section authorize from file /usr/local/etc/raddb/sites-enabled/default +- entering group authorize {...} ++[preprocess] returns ok ++[chap] returns noop [mschap] Found MS-CHAP attributes. Setting 'Auth-Type = mschap' ++[mschap] returns ok [eap] No EAP-Message, not doing EAP ++[eap] returns noop [files] users: Matched entry DEFAULT at line 172 ++[files] returns ok Found Auth-Type = MSCHAP # Executing group from file /usr/local/etc/raddb/sites-enabled/default +- entering group MS-CHAP {...} [mschap] No Cleartext-Password configured. Cannot create LM-Password. [mschap] No Cleartext-Password configured. Cannot create NT-Password. [mschap] Creating challenge hash with username: test [mschap] Client is using MS-CHAPv2 for test, we need NT-Password [mschap] FAILED: No NT/LM-Password. Cannot perform authentication. 
[mschap] FAILED: MS-CHAP2-Response is incorrect ++[mschap] returns reject Failed to authenticate the user. Login incorrect: [test] (from client localhost port 2 cli 00e05290b3e3 / 00:e0:52:90:b3:e3 / em1) Using Post-Auth-Type REJECT # Executing group from file /usr/local/etc/raddb/sites-enabled/default +- entering group REJECT {...} [attr_filter.access_reject] expand: %{User-Name} -> test attr_filter: Matched entry DEFAULT at line 11 ++[attr_filter.access_reject] returns updated Delaying reject of request 2 for 1 seconds Going to the next request Waking up in 0.9 seconds. Sending delayed reject for request 2 Sending Access-Reject of id 223 to 127.0.0.1 port 46400 MS-CHAP-Error = "\001E=691 R=1" Why i have error "[mschap] No Cleartext-Password configured. Cannot create LM-Password." ? I define cleartext-password in users. I check raddb/sites-enabled/default authorize { chap mschap eap { ok = return } files } looks ok for me. Whats wrong with mpd/chap/radius ?
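
    One detail the debug output hints at: the files module matched the DEFAULT entry at line 172 rather than the "test" entry, so mschap never sees the Cleartext-Password even though it is defined. In the stock users file there is a "DEFAULT Framed-Protocol == PPP" entry with no Fall-Through, so any entry placed after it is never reached for PPP requests (radtest still matches because it sends no Framed-Protocol). A hedged check is to move the test entry above the DEFAULT entries, roughly like this:

        # raddb/users - user entries before the stock DEFAULT entries
        test    Cleartext-Password := "test1"
                Service-Type = Framed-User,
                Framed-Protocol = PPP,
                Framed-IP-Address = 10.36.0.2,
                Framed-IP-Netmask = 255.255.255.0

        DEFAULT Framed-Protocol == PPP
                Framed-Protocol = PPP,
                Framed-Compression = Van-Jacobson-TCP-IP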

    Read the article
