Search Results

Search found 19256 results on 771 pages for 'css tables'.

  • Why does MySQL in XAMPP stop by itself, and only stay running when I start mysqld.exe manually?

    - by Ranjit Kumar
    I am using XAMPP v3.0.2 with its bundled MySQL. When I restart the MySQL server it first shows a running status, and after two or three seconds it stops on its own. My temporary workaround is to go into the XAMPP installation folder (Xampp > mysql > bin) and run mysqld.exe by hand. I don't know whether that is the correct solution or whether there is a better one, so please advise. Error log:

        120629 15:29:59 [Note] Plugin 'FEDERATED' is disabled.
        120629 15:29:59 InnoDB: The InnoDB memory heap is disabled
        120629 15:29:59 InnoDB: Mutexes and rw_locks use Windows interlocked functions
        120629 15:29:59 InnoDB: Compressed tables use zlib 1.2.3
        120629 15:29:59 InnoDB: Initializing buffer pool, size = 16.0M
        120629 15:29:59 InnoDB: Completed initialization of buffer pool
        InnoDB: The first specified data file D:\xampp\xampp\mysql\data\ibdata1 did not exist:
        InnoDB: a new database to be created!
        120629 15:29:59 InnoDB: Setting file D:\xampp\xampp\mysql\data\ibdata1 size to 10 MB
        InnoDB: Database physically writes the file full: wait...
        120629 15:29:59 InnoDB: Log file D:\xampp\xampp\mysql\data\ib_logfile0 did not exist: new to be created
        InnoDB: Setting log file D:\xampp\xampp\mysql\data\ib_logfile0 size to 5 MB
        InnoDB: Database physically writes the file full: wait...
        120629 15:30:00 InnoDB: Log file D:\xampp\xampp\mysql\data\ib_logfile1 did not exist: new to be created
        InnoDB: Setting log file D:\xampp\xampp\mysql\data\ib_logfile1 size to 5 MB
        InnoDB: Database physically writes the file full: wait...
        InnoDB: Doublewrite buffer not found: creating new
        InnoDB: Doublewrite buffer created
        InnoDB: 127 rollback segment(s) active.
        InnoDB: Creating foreign key constraint system tables
        InnoDB: Foreign key constraint system tables created
        120629 15:30:02 InnoDB: Waiting for the background threads to start
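    The log above shows InnoDB creating a brand-new ibdata1 and ib_logfile* under D:\xampp\xampp\mysql\data, and it ends before the actual shutdown reason is reported. A hedged first step, not a confirmed fix: run mysqld from a console so the final error stays visible, and check that the datadir in my.ini points at the directory that really holds your data (the paths below are only examples taken from the log):

        REM run from a Windows command prompt; --console keeps errors on screen
        cd /d D:\xampp\xampp\mysql\bin
        mysqld.exe --defaults-file=D:\xampp\xampp\mysql\bin\my.ini --console

        ; in my.ini, the data directory the server should use
        [mysqld]
        datadir = "D:/xampp/xampp/mysql/data"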

  • Browser sends HTTP request with Range header

    - by nute
    I have a local testing environment in a Fedora virtual machine. Strangely, static resources (CSS and JS files) don't work. Looking at Firebug, I see that the browser sends the HTTP request with "Range: bytes=0-". The server responds with either an empty 200 OK or an empty 206 Partial Content. Here is an example:

        Response Headers
        Date             Mon, 23 Nov 2009 23:33:26 GMT
        Server           Apache/2.2.13 (Fedora)
        Last-Modified    Thu, 19 Nov 2009 22:58:55 GMT
        Etag             "18-3aec-478c14dbee138"
        Accept-Ranges    bytes
        Content-Length   15084
        Content-Range    bytes 0-15083/15084
        Connection       close
        Content-Type     text/css

        Request Headers
        Host             fedora.test
        User-Agent       Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.5) Gecko/20091105 Fedora/3.5.5-1.fc11 Firefox/3.5.5
        Accept           text/css,*/*;q=0.1
        Accept-Language  en-us,en;q=0.5
        Accept-Encoding  gzip,deflate
        Accept-Charset   ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive       300
        Connection       keep-alive
        Referer          http://fedora.test/pictures/
        Cookie           __utma=26341546.1613992749.1258504422.1258569125.1258752550.4; __utmz=26341546.1258504422.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); PHPSESSID=tqf8jfmc77qihe97rl4tmhq685
        Range            bytes=0-
        If-Range         "18-3aec-478c14dbee138"

    I don't know if the browser is sending the wrong request, or if it's the server that is doing this. Requests made to the outside (such as Google Analytics) work fine. This is running in Fedora 11 in VirtualBox, with Apache and PHP. The files are being served through the "shared folders" feature of VirtualBox (could that be related?). The error logs give me nothing to go on.
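    Static files coming back empty or truncated when served out of a VirtualBox shared folder is a known combination with Apache's sendfile/mmap optimizations, so one hedged thing to try (a sketch, not a confirmed diagnosis) is disabling both in the vhost or httpd.conf and restarting Apache:

        # Work around stale/empty reads from VirtualBox shared folders
        EnableSendfile Off
        EnableMMAP Off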

  • Serving images from another hostname vs Apache overload for the rewrites

    - by luison
    We are trying to further improve the speed of some sites with older HTML, and to obtain better SEO results as well. We have already applied some minify measures, combined HTML, CSS, etc. We use a small virtualized infrastructure and we've always wanted to use a light + standard HTTP server configuration, so the first one can serve images and static content while the other handles PHP, rewrites, etc. We can easily do that now with a VM using the same files and vhost configuration (bind mounts) on Apache but with hardly any modules loaded. This means the light httpd will have a smaller footprint, which would allow us to serve more and quicker, have more MinSpareServers running, and so on.

    So, since browsers also benefit from loading static content from different hostnames, we've thought about building a rewrite rule on our main server (main.com) to "redirect" all images and CSS (*.jpg, *.gif, *.css, etc.) to the same files at, say, cdn.main.com, thus letting the browser open more connections. The question is: assuming we already have a very complex rewrite ruleset (we manually manipulate many old URLs for SEO), will it be worth it? I mean, will the additional load on main's Apache of having to redirect main.com/image.jpg (I understand we'll have to do a 301) to cdn.main.com/image.jpg, plus cdn.main.com then having to serve it, be larger than the gain we would be achieving in the browser? Could the excess of 301s for all images on a page be penalized by Google? How do large companies work this out; does the original code already include images linked from the CDN with absolute paths?

    EDIT: Just to clarify, our concern is not so much server performance or bandwidth. We could obviously employ an external CDN server, but we have plenty of CPU and bandwidth. Our concern is how to have "old" sites with plenty of semi-static HTML content benefit from splitting connections for images and static content via Apache, without having to change the HTML to absolute paths (i.e. image.jpg to cdn.main.com/image.jpg happening on the server, not in the code).
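    A hedged sketch of the kind of server-side rule described, assuming the main.com / cdn.main.com names from the question (whether the extra 301 round trip per asset outweighs the extra parallel connections is exactly the trade-off that would need measuring):

        # On main.com: send static assets to the static hostname with a 301
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^main\.com$ [NC]
        RewriteRule ^/?(.+\.(?:jpe?g|gif|png|css|js))$ http://cdn.main.com/$1 [R=301,L]

    Many large sites avoid the redirect entirely by rewriting asset URLs to the CDN hostname at page-generation time (absolute paths in the emitted HTML), which saves the extra round trip.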

  • No external src ip in log files (my router ip appears instead)

    - by bongo_fury
    I recently retired my workhorse WRT54G router/AP in favor of a Linksys EA2700. Since then, all inbound traffic (bound for an Ubuntu 10.02 box running LAMP) logged to syslog, Apache's error and access logs, etc. (all behind said router) is getting logged with a source IP of 192.168.1.1, the router's internal IP. For example, here is an old entry from Apache's access.log:

        74.82.68.20 - - [22/Feb/2011:10:14:34 -0600] "GET /assets/css/style.css HTTP/1.1" 304 154 "http://example.com/view.php?event_id=1" "BlackBerry8520/5.0.0.822 Profile/MIDP-2.1 Configuration/CLDC-1.1 VendorID/100"

    And here is one since switching the router:

        192.168.1.1 - - [05/Oct/2012:21:29:25 -0500] "GET /somedir/print.css HTTP/1.1" 200 650 "http://example.com/somedir/" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1"

    That first field is the problem. Each and every entry in every log shows an "external" IP of 192.168.1.1, which isn't very helpful. Any ideas? Much thanks from a n00b!

  • Slow INFORMATION_SCHEMA query

    - by Thomas
    We have a .NET Windows application that runs the following query on login to get some information about the database:

        SELECT t.TABLE_NAME,
               ISNULL(pk_ccu.COLUMN_NAME, '') PK,
               ISNULL(fk_ccu.COLUMN_NAME, '') FK
        FROM INFORMATION_SCHEMA.TABLES t
        LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk_tc
               ON pk_tc.TABLE_NAME = t.TABLE_NAME AND pk_tc.CONSTRAINT_TYPE = 'PRIMARY KEY'
        LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE pk_ccu
               ON pk_ccu.CONSTRAINT_NAME = pk_tc.CONSTRAINT_NAME
        LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS fk_tc
               ON fk_tc.TABLE_NAME = t.TABLE_NAME AND fk_tc.CONSTRAINT_TYPE = 'FOREIGN KEY'
        LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE fk_ccu
               ON fk_ccu.CONSTRAINT_NAME = fk_tc.CONSTRAINT_NAME

    Usually this runs in a couple of seconds, but on one server running SQL Server 2000 it takes over four minutes. I ran it with the execution plan enabled, and the results are huge, but this part caught my eye (it won't let me post an image): http://img35.imageshack.us/i/plank.png/ I then updated the statistics on all of the tables that were mentioned in the execution plan:

        update statistics sysobjects
        update statistics syscolumns
        update statistics systypes
        update statistics master..spt_values
        update statistics sysreferences

    But that didn't help. The Index Tuning Wizard doesn't help either, because it doesn't let me select system tables. There is nothing else running on this server, so nothing else could be slowing it down. What else can I do to diagnose or fix the problem on that server?
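    A hedged follow-up, assuming the slowness is specific to that one SQL Server 2000 instance rather than to the query itself (these are standard maintenance commands, not a guaranteed fix):

        -- refresh statistics everywhere, correct row/page counts, then force fresh plans
        EXEC sp_updatestats;
        DBCC UPDATEUSAGE (0);
        DBCC FREEPROCCACHE;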

  • mod_deflate doesn't work [closed]

    - by kikio
    I want to gzip my static files, so I put this in .htaccess:

        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript
        </IfModule>

    I looked for mod_deflate in the Loaded Modules section of the phpinfo() output and found it. But when I track server responses with Firebug, nothing comes back gzipped:

        HTTP/1.1 200 OK
        Date: Sat, 08 Sep 2012 21:41:21 GMT
        Last-Modified: Sat, 08 Sep 2012 21:26:04 GMT
        Accept-Ranges: bytes
        Cache-Control: max-age=604800
        Expires: Sat, 15 Sep 2012 21:41:21 GMT
        Vary: Accept-Encoding
        Keep-Alive: timeout=3, max=50
        Connection: Keep-Alive
        Content-Type: text/css
        Content-Length: 18206

    What's the problem? I'm sure I have mod_deflate enabled (according to PHP's apache_get_modules()).

    UPDATE: the request headers:

        GET /d/jquery-ui.css HTTP/1.1
        Host: 127.0.0.1
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        DNT: 1
        Connection: keep-alive
        Pragma: no-cache
        Cache-Control: no-cache
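    One hedged thing to rule out (a sketch, not a confirmed fix): attach the filter by file extension instead of by MIME type, which sidesteps both the non-standard text/text entry in the list above and any mismatch between the configured types and what Apache actually emits:

        <IfModule mod_deflate.c>
            <FilesMatch "\.(css|js|html?|xml|txt)$">
                SetOutputFilter DEFLATE
            </FilesMatch>
        </IfModule>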

  • SQL Server 2005 data lost. How to recover all records? MDF/LDF size is what it should be

    - by Shantanu Gupta
    A few months back I installed SQL Server 2005 on one of my client's machines. I gave him a backup option so he could take backups regularly, but he never took any. Today he called me saying "I am not able to see any of my records." I went to my client's system and saw that none of the records were present in the tables; there was not even a single row in any of them. Then I checked whether he had any backup file, which he did not. I asked him what the possible cause could be, and he said it might be a virus. After this I checked the size of the MDF and LDF files and found them to be what they should be: when I created his database the MDF/LDF files were around 2 MB, and now they are 83 MB and 193 MB respectively. This suggests the data is still present but is not being displayed. What could be the possible cause, and how can I restore all the data back to my tables?
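    Before treating the data as lost, a hedged first check is whether the tables really are empty or whether the application is simply looking at the wrong database or schema; a small sketch for SQL Server 2005:

        -- row counts for every user table in the current database
        SELECT o.name AS table_name, SUM(p.rows) AS row_count
        FROM sys.objects o
        JOIN sys.partitions p
          ON p.object_id = o.object_id AND p.index_id IN (0, 1)
        WHERE o.type = 'U'
        GROUP BY o.name
        ORDER BY o.name;

    If every count really is zero, the rows were deleted or the tables truncated (a large MDF is consistent with that, since deletes don't shrink the file), and recovery would depend on the transaction log or a backup.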

  • Firefox 29 - how do I delete history entries visited fewer than x times

    - by lousyuser
    Context: I've been using my Firefox profile for a couple of years now, and my history file has naturally become huge. I have Firefox Sync set up between my main desktop PC and my laptop.

    HW configs:
    PC: i5-3450, 8 GB DDR3 RAM, Crucial M4 128 GB SSD
    laptop: Pentium SU4100, 4 GB DDR3 RAM, WD 5400 rpm HDD

    Accessing history entries when typing into the Awesome Bar on my desktop takes quite a long time despite the decent config; the laptop is even slower. The experience is quite unresponsive. I figured that if I cleared the history up a little bit, I might avoid having to create a new profile to speed things up.

    The question itself, to illustrate: is there a way to delete all history entries that have been visited fewer than x (let's say 5) times and whose most recent visit is fewer than y (let's say 120) days old? AFAIK the history file is some kind of SQL database, but I'm not really sure how the data is stored, whether there's a "safe way" to edit it, and what the query to do what I need would look like. Thanks in advance for any help.

    I kept browsing through previous SuperUser questions to see if I could find relevant information: "In my Firefox profile directory, there is a file named places.sqlite. Opening it with sqlite reveals (amongst others) the tables moz_places and moz_historyvisits. It seems that moz_historyvisits uses the primary key of moz_places to refer to the URLs." As I'm unfamiliar with databases, I don't really understand how the two tables mentioned in the quote are related. [screenshot of part of the tables] I've noticed the visit_count is in a standard format, making it easy to work with. The last_visit_date looks encrypted to my naked eye, but I can't see in which way. Hope that helps; I'm at my wits' end.
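    A hedged sketch of the kind of query described, to be run on a copy of places.sqlite with Firefox closed. The thresholds are the x=5 / y=120 from the question; last_visit_date is not encrypted, it is microseconds since the Unix epoch, so the comparison direction can be flipped to taste ("<" below matches entries whose last visit is more than 120 days ago):

        -- delete the individual visit rows first, then the matching places
        DELETE FROM moz_historyvisits
        WHERE place_id IN (
            SELECT id FROM moz_places
            WHERE visit_count < 5
              AND last_visit_date < strftime('%s', 'now', '-120 days') * 1000000
        );

        DELETE FROM moz_places
        WHERE visit_count < 5
          AND last_visit_date < strftime('%s', 'now', '-120 days') * 1000000;

        VACUUM;  -- shrink the file afterwards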

  • Performance-wise .htaccess

    - by purpler
    Here's my .htaccess template; I wonder if anything could be added to increase website performance.

        # Defaults
        AddDefaultCharset UTF-8
        DefaultLanguage en-US
        ServerSignature Off
        FileETag None
        Header unset ETag
        Options -MultiViews
        #Options All -Indexes

        # Force the latest IE version or ChromeFrame
        <IfModule mod_setenvif.c>
          <IfModule mod_headers.c>
            BrowserMatch MSIE ie
            Header set X-UA-Compatible "IE=Edge,chrome=1" env=ie
          </IfModule>
        </IfModule>

        # Proxy X-UA Setup
        <IfModule mod_headers.c>
          Header append Vary User-Agent
        </IfModule>

        # Rewrites
        Options +FollowSymlinks
        RewriteEngine On
        RewriteBase /

        # Redirect to non-WWW
        RewriteCond %{HTTPS} !=on
        RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
        RewriteRule ^(.*)$ http://%1/$1 [R=301,L]

        # Redirect to WWW
        RewriteCond %{HTTP_HOST} ^domain.com
        RewriteRule (.*) http://www.domain.com/$1 [R=301,L]

        # Redirect index to root
        RewriteRule ^(.*)index\.(php|html)$ /$1 [R=301,L]

        # Caching
        ExpiresActive On
        ExpiresDefault A0
        Header set Cache-Control "public"

        # 1 Year Long Cache
        <FilesMatch "\.(flv|fla|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|png|jpg|jpeg|gif|swf|js|css|ttf|eot|woff|svg|svgz)$">
          ExpiresDefault A31622400
        </FilesMatch>

        # Proxy Caching
        <FilesMatch "\.(css|js|png)$">
          ExpiresDefault A31622400
          Header set Cache-Control "private"
        </FilesMatch>

        # Protect against DOS attacks by limiting file upload size
        LimitRequestBody 10240000

        # Proper SVG serving
        AddType image/svg+xml svg svgz
        AddEncoding gzip svgz

        # GZip Compression
        <IfModule mod_deflate.c>
          <FilesMatch "\.(php|html|css|js|xml|txt|ttf|otf|eot|svg)$">
            SetOutputFilter DEFLATE
          </FilesMatch>
        </IfModule>

        # Error page
        ErrorDocument 404 /404.html

        # Deny access to sensitive files
        <FilesMatch "\.(htaccess|ini|log|psd)$">
          Order Allow,Deny
          Deny from all
        </FilesMatch>

  • How to recover the Plesk database?

    - by Kau-Boy
    When I try to launch the Plesk administration page of the server I get the following error:

        ERROR: PleskMainDBException
        MySQL query failed: MySQL server has gone away

    The MySQL server itself is working. However, the Plesk database seems to be corrupt somehow, and any action on it results in a restart of the mysqld process, so even queries to other databases on the same MySQL server get dropped. If I try to connect to the Plesk database using phpMyAdmin, I can only see the number of tables the database originally had; I am not able to open the table listing. As soon as I try, the mysqld process crashes again. Connecting to the database over SSH works, and I can even run a SELECT statement against any table and get a result. I don't know whether this is a Plesk error, an error in the psa database, or a problem with the MySQL server itself. Can you give me any tips on how to recover the Plesk system? Should I try to repair the Plesk installation first? And if so, how do I do that, and will all my settings be lost in the process?

    EDIT: Trying to dump the database, I get the following error:

        mysqldump: Got error: 2013: Lost connection to MySQL server during query when using LOCK TABLES
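    A hedged recovery sketch, assuming the crashes come from corrupt InnoDB data in the psa database: start MySQL in forced-recovery mode just long enough to take a dump, then rebuild from that dump. Raise the recovery level one step at a time only if the dump still crashes the server, and treat the exact paths as examples (the .psa.shadow lookup is the usual Plesk admin-password location):

        # /etc/my.cnf, then restart mysqld
        [mysqld]
        innodb_force_recovery = 1

        # dump the psa database while the server stays up
        mysqldump -uadmin -p`cat /etc/psa/.psa.shadow` psa > psa.sql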

  • Help! The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space

    - by michael.lukatchik
    We're running SQL Server 2000. In our database we have an "Orders" table with approximately 750,000 rows. We can perform simple SELECT statements on this table. However, when we run a query like SELECT TOP 100 * FROM Orders ORDER BY Date_Ordered DESC, we receive the following message:

        Error: 9002, Severity: 17, State: 6
        The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space.

    We have other tables in our database of a similar size (around 700,000 records), and on those tables we can run any query we like without ever getting a message about tempdb being full. To resolve this we've backed up our database, shrunk the actual database, and also shrunk the database and files of the tempdb system database, but none of that resolved the issue. The size of our log file is set to autogrow. We're not sure where to go next. Are there any ideas why we still might be receiving this message?
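    A hedged sketch of what to look at next, on the assumption that the ORDER BY over 750,000 rows is spilling into tempdb and the tempdb log simply has no room or cannot grow (sizes below are placeholders):

        -- how full is each database's log right now?
        DBCC SQLPERF(LOGSPACE);

        -- pre-size the tempdb log and allow it to grow
        ALTER DATABASE tempdb
        MODIFY FILE (NAME = templog, SIZE = 500MB, FILEGROWTH = 100MB);

    It is also worth checking the free space on the drive holding templog.ldf, since autogrow cannot help if that disk is full.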

  • Adding a GET parameter to a URL causes a 404 error

    - by Adrian Grigore
    I'm trying to install the SyntaxHighlighter Evolved plugin on my WordPress blog. I've uploaded and activated the plugin, but it did not work. I looked into the page source and found that the plugin style is loaded from the following URL:

        http://devermind.com/wp-content/plugins/syntaxhighlighter/syntaxhighlighter/styles/shCore.css?ver=2.0.320

    This causes a 404 error (page not found). The strange thing, though, is that when I remove the GET parameters, the CSS loads fine:

        http://devermind.com/wp-content/plugins/syntaxhighlighter/syntaxhighlighter/styles/shCore.css

    What could be causing this problem and how can I fix it? Unfortunately I don't know how to make WordPress drop the GET parameters when loading the stylesheet.

    EDIT: As I just found out, this happens only in Firefox (3.0.11). IE loads both URLs above just fine. Not that this is of any help though, so any suggestions would be appreciated.

    SECOND EDIT: I tried this on my laptop and it works fine with Firefox 3.08. So this really seems to be a browser problem after all.

  • MySQL Master-Master Replication Broken

    - by Recc
    I've inherited a MySQL master-master system. I've noticed that the second master (let's call it the slave from now on, as it's running on a 'slave' machine) stopped getting its databases updated. I saw that:

        Master:
        Slave_IO_Running: Yes
        Slave_SQL_Running: Yes

        Slave: (with an error I truncated)
        Slave_IO_Running: Yes
        Slave_SQL_Running: No
        Last_Errno: 1062
        Last_Error: Error 'Duplicate entry '3' for key 'PRIMARY'' on [...]

    I don't know what caused it, considering we shouldn't be able to get a duplicate there. What's important is to resume normal operations. Right now I've run stop slave; on both the master and the slave, because I saw that if I change records on the slave, the changes do get propagated to the master, which is in active use. How do I: force-sync EVERYTHING from master to slave without affecting data on the master, and then hopefully have the slave pick up replication as usual?

    UPDATE: OK, I tried deleting all tables on the slave, and then it complained in that error section that the table doesn't exist. So I made a no-data dump of the master and made sure I have only empty tables on the secondary (slave). I ran start slave; on the slave, BUT now it's complaining about ALTER TABLE statements, for instance:

        Last_Errno: 1060
        Last_Error: Error 'Duplicate column name [...] Query: 'ALTER TABLE [...]

    How do I skip the fracking ALTER statements? I just want to replicate the bloody data and be done with it; my tables already have the latest changes, and now it's complaining about schema changes made after replication ceased weeks ago. How do I reset the log or something?

    OUTSTANDING: Why would this start happening? The "secondary" is propagating to the "primary", but the "primary" is not propagating to the "secondary", and any fixes I tried left it in the same Yes/Yes vs Yes/No state with the same Last_Error. I think around that time the server was taken off the network; could that confuse MySQL in some way?
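    A hedged sketch of a full resync in the master-to-slave direction, assuming the tables are InnoDB and a brief, consistent dump on the master is acceptable; user names and file names are placeholders:

        # on the master: consistent dump that records the binlog position
        mysqldump -uroot -p --all-databases --single-transaction --master-data=1 > full.sql

        # on the slave: stop replication, reload, resume from the recorded position
        mysql -uroot -p -e "STOP SLAVE;"
        mysql -uroot -p < full.sql
        mysql -uroot -p -e "START SLAVE; SHOW SLAVE STATUS\G"

    Because this is master-master, the slave-to-master channel would need the same treatment (or at least a sanity check of its coordinates) before writes are allowed on the slave again.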

  • How to configure a trusted connection between IIS 7 and SQL Server 2005?

    - by user1180652
    How do I configure a trusted connection between IIS 7 and SQL Server 2005? My webapp was working fine with Windows Authentication enabled in IIS. Now, in order to solve a problem, we need to use a trusted connection. Unfortunately, enabling the trusted connection in the web.config broke the webapp. Oddly enough, when I run this application with the trusted connection from my local dev machine (using the Cassini web server), it works fine.

    IIS (Windows Server 2008) is running on one machine. The database (SQL Server 2005, though it could migrate to 2008) is running on another machine. We are on a Windows domain running AD. All traffic is within our own firewall; there is no public access. Beyond that I can't provide much info, but I can find it. We're very "compartmentalized" (we have server people, security people, Oracle people, SQL Server people, etc.). Thanks!

    Update 02/14/2012 09:02: The webapp is now functional (no longer broken), but the main issue is still unresolved. I now have the app's application pool running as a domain account with permissions on the SQL Server box and the IIS box. We were using this account to run the application but, and here's the problem, we need to log the real user name that made a change. When using the service account, the name of that service account appeared in the audit tables, making the auditing quite useless. So now I'm at least running again. The connection string in the web.config uses "Trusted_Connection=True" and the app pool uses a domain account with access to both boxes, BUT when I make a change (logged in as me) the name of the service account (the app pool identity) is still logged in the audit tables. I also manually granted full permissions to the service account on the webapp folder. What do I need to do in order to log my name, not the service account, in the audit tables? Everything I'm reading says I need to establish a trusted connection between the two servers.
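    A hedged sketch of the impersonation route, which is what makes the end user's identity (rather than the app-pool account) reach SQL Server. Because IIS and SQL Server are on different machines, it also requires Kerberos delegation to be configured for the app-pool account in AD, so treat this purely as the web.config side of the picture (the connection string name and server names are placeholders):

        <configuration>
          <system.web>
            <authentication mode="Windows" />
            <identity impersonate="true" />
          </system.web>
          <connectionStrings>
            <add name="AppDb"
                 connectionString="Data Source=SQLBOX;Initial Catalog=AppDb;Integrated Security=SSPI;"
                 providerName="System.Data.SqlClient" />
          </connectionStrings>
        </configuration>

    The simpler alternative many teams choose is to keep the service account for the connection and write the authenticated user name (HttpContext.Current.User.Identity.Name) into the audit columns from the application itself.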

  • MicroSD card getting corrupted for no good reason

    - by ChaosR
    I recently bought a MicroSD card online. It's a SanDisk 16 GB class 2. However, it has a nasty problem: every time I fill it with my data, the FAT tables get corrupted. I've tried reformatting it and blanking it, but that doesn't solve the problem. I have tried Windows and Linux (Ubuntu); both show the problem. I've used my USB MicroSD readers, and even tried putting it in my phone and writing data to it from there. All show this problem.

    Now the really odd thing is that, besides the corrupted file tables, no program can find anything wrong with the hardware. I've tried both chkdsk and "badblocks -w"; neither gives any kind of error. I don't know if the actual data gets corrupted or if it's just the filesystem tables. What happens is that one or more folders start showing a load of garbled folder and file names (random UTF-8 symbols, I suppose), and it is impossible to do anything with those. All the other data (outside the corrupted folders) seems fine. I've tried to test it, and the problem doesn't seem to show up until I fill the disk up to about 3~4 GB. After that I can still access the data, but as soon as I eject/safely remove/unmount it, the bad things happen somehow. The next time I plug it in, the folders I most recently wrote to (and sometimes also the folders I wrote to the time before that) are all gibberish.

    Does anybody have any clue what might be going on here?

    EDIT: It seems I can't even put ext3 or ext4 on it; they both complain about a corrupted journal. Gheh, guess something is really broken here.

  • How To Investigate/Restore MySQL Permissions? MySQL ERROR 1045 (28000): Access denied for user

    - by Recc
        ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

    Debian. mysqld is supposedly listening on 3306; telnet to 3306 works. I also tried binding it specifically to localhost and then to 127.0.0.1, which made no difference. However:

        # netstat -ln | grep mysql
        unix  2  [ ACC ]  STREAM  LISTENING  78993  /var/run/mysqld/mysqld.sock
        # mysql -P3306 -ptest
        ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

    Things I've tried:

    dpkg-reconfigure mysql-server-5.1 - doesn't help.
    http://www.debian-administration.org/articles/442 - doesn't help.
    This command (source): UPDATE mysql.user SET Password=PASSWORD('MyNewPass') WHERE User='root'; FLUSH PRIVILEGES; - doesn't help, in fact:

        Query OK, 0 rows affected (0.00 sec)
        Rows matched: 0  Changed: 0  Warnings: 0

    So might the user have been deleted? Extremely unlikely, as all this started after a package update a colleague did and some separate services started misbehaving, but my colleague said he removed the offenders. There's more: while

        # mysqld_safe --skip-grant-tables

    is running, one can access the data tables, but only with the valid passwords! So there are users and some authentication takes place, hence the 0 rows affected above. Can the privileges tables be damaged somehow, and how can I recreate/restore them when my only way of getting a MySQL console is to skip them? Can I spare myself a reinstall of MySQL? Either way, I did get a dump of the DBs now that I could get in with the above mode.
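    Given that the UPDATE matched zero rows, the 'root'@'localhost' row may simply be missing from mysql.user. A hedged sketch for recreating it while the server is running with --skip-grant-tables (the initial FLUSH PRIVILEGES loads the grant tables so GRANT works again; the password is a placeholder):

        FLUSH PRIVILEGES;
        GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost'
            IDENTIFIED BY 'MyNewPass' WITH GRANT OPTION;
        FLUSH PRIVILEGES;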

  • Apache serving empty gzip with assets produced by Rails Asset Pipeline

    - by PizzaPill
    I followed the steps described in the blog post "The Asset Pipeline, from development to production" and tweaked them for my environment. The two important files are:

    /etc/apache/site-available/example.com

        <VirtualHost *:80>
          ServerName example.com
          ServerAlias www.example.com
          DocumentRoot "/var/www/sites/example.com/current/public"
          ErrorLog "/var/log/apache2/example.com-error_log"
          CustomLog "/var/log/apache2/example.com-access_log" common

          <Directory "/var/www/sites/example.com/current/public">
            Options All
            AllowOverride All
            Order allow,deny
            Allow from all
          </Directory>

          <Directory "/var/www/sites/example.com/current/public/assets">
            AllowOverride All
          </Directory>

          <LocationMatch "^/assets/.*$">
            Header unset Last-Modified
            Header unset ETag
            FileETag none
            ExpiresActive On
            ExpiresDefault "access plus 1 year"
          </LocationMatch>

          RewriteEngine On
          # Remove the www
          RewriteCond %{HTTP_HOST} ^www.example.com$ [NC]
          RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]
        </VirtualHost>

    /var/www/sites/example.com/shared/assets/.htaccess

        RewriteEngine on
        RewriteCond %{HTTP:Accept-Encoding} \b(x-)?gzip\b
        RewriteCond %{REQUEST_FILENAME}.gz -s
        RewriteRule ^(.+) $1.gz [L]

        <FilesMatch \.css\.gz$>
          ForceType text/css
          Header set Content-Encoding gzip
        </FilesMatch>

        <FilesMatch \.js\.gz$>
          ForceType text/javascript
          Header set Content-Encoding gzip
        </FilesMatch>

    But Apache seems to send empty gzip files: the test site loses all its styles and Firebug finds no content for the CSS files. If I request an asset path directly, I get some gibberish that looks like binary data. If I move the .htaccess file out of the way, everything is back to normal. How could I find out where/what went wrong, or do you have any suggestions about what error I made?

        > apache2 -v
        Server version: Apache/2.2.14 (Ubuntu)
        Server built:   Mar 5 2012 16:42:17
        > uname -a
        Linux node0 2.6.18-028stab094.3 #1 SMP Thu Sep 22 12:47:37 MSD 2011 x86_64 GNU/Linux
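    "Gibberish that looks like binary data" is consistent with the pre-gzipped files being compressed a second time by mod_deflate, which browsers then fail to decode; a hedged sketch of excluding them in the same .htaccess (an assumption to verify, not a confirmed diagnosis):

        # keep mod_deflate away from the already-compressed assets
        <FilesMatch "\.gz$">
            SetEnv no-gzip 1
        </FilesMatch>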

  • Connecting to same public IP from different locations yields different results

    - by DHall
    Since yesterday I've been unable to access one of my favorite time-wasting sites, boston.com. It starts to load but then gets redirected to pagesinxt or something like that. After some investigation, I've narrowed it down to an issue with cache.boston.com, but only from my work location. I found the IP (216.38.160.107), but even that doesn't work correctly from here at work. When I do

        telnet 216.38.160.107 80
        GET http://cache.boston.com/universal/css/hp_bgcom.css

    from another location, I get a nice long CSS file, as expected. From here, I get an error (trimmed for size):

        HTTP/1.1 400 Bad Request
        Your request could not be processed. Request could not be handled
        This could be caused by a misconfiguration, or possibly a malformed request.
        For assistance, contact your network support team.

    Is there any way I can troubleshoot this further on my end? Tracert doesn't tell me anything too useful:

        Tracing route to vwrpx1.ttn.xpc-mii.net [216.38.160.107] over a maximum of 30 hops:
        1 * * * Request timed out.

    Since it's not really work-related, I don't want to bring it up with our network team unless I know what's going on, or there's some risk to the network (e.g. malware or something).

  • Why is IIS 7.5 seeing some requests as HTTP/1.0?

    - by Zhaph - Ben Duguid
    While trying to work out why Static File Compression wasn't working on one of our IIS servers, the error was coming back as "NO_COMPRESSION_10", which translates to:

        Server not configured to compress 1.0 requests

    Looking at the requests in Fiddler, I can see that I'm requesting HTTP 1.1, but everything is being sent back as HTTP 1.0.

    Request (from chrome, captured via Fiddler):

        GET /css/reset.css HTTP/1.1
        Host: [-----].com
        Connection: keep-alive
        Cache-Control: max-age=0
        If-Modified-Since: Tue, 16 Oct 2012 15:04:34 GMT
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11
        Accept: text/css,*/*;q=0.1
        Referer: http://[-----].com/
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-GB,en;q=0.8,en-US;q=0.6
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    Response from IIS:

        HTTP/1.0 200 OK
        Cache-Control: no-cache, no-store
        Pragma: no-cache
        Content-Type: text/html; charset=utf-8
        Expires: -1
        Server: Microsoft-IIS/7.5
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Tue, 11 Dec 2012 11:57:03 GMT
        Connection: close
        Content-Length: 108837

    Other servers with the same host that I'm running this site on all respond with HTTP/1.1. How can I persuade IIS to respond with HTTP/1.1 rather than HTTP/1.0?

    Edit to add: Digging deeper, I can see that some responses from the server are indeed being returned compressed, so I guess really I'm trying to work out why talking to this particular server from our office seems to result in it seeing 1.0 requests, while other servers at the same co-loc don't?

  • CloudFront with Custom Origin and ELB

    - by kmfk
    We are using CloudFront for our static assets but also wanted to allow for gzip. We set up a new distribution with a custom origin pointing back to our application servers, which are behind an elastic load balancer. We manually keep the files in sync across the cluster and update them when we publish. However, with this setup we get nothing but Miss and RefreshHits from CloudFront, which so far has defeated the purpose. Are there any additional settings needed in order to use an ELB as your custom origin? In the docs, it is referenced as a viable solution. When we point the distribution to a single server in our production cluster, CloudFront properly caches our assets. Is it possible that the sticky-sessions cookie and the subsequent header it adds could be an issue?

        Cache-Control: no-cache="set-cookie"  //Added by load balancer

    Any ideas? FYI - currently we have our custom origin pointing to a single EC2 instance, so caching is working correctly - in case you try to curl the file below. Example headers:

        curl -I http://static.quick-cdn.com/css/9850999.css
        HTTP/1.0 200 OK
        Accept-Ranges: bytes
        Cache-Control: max-age=3700
        Cache-Control: no-cache="set-cookie"
        Content-Length: 23038
        Content-Type: text/css
        Date: Thu, 12 Apr 2012 23:03:52 GMT
        Last-Modified: Thu, 12 Apr 2012 23:00:14 GMT
        Server: Apache/2.2.17 (Ubuntu)
        Vary: Accept-Encoding
        X-Cache: RefreshHit from cloudfront
        X-Amz-Cf-Id: K_q7Zy3_jdzlEJ85ukELVtdx1GmuXqApAbZZ7G0fPt0mxRMqPKX5pQ==,RzJmPku-rEIO9WlvuSoKa8hiAaR3dLk5KC4cQMWWrf_MDhmjWe8n6A==
        Via: 1.0 28c34f9fbf559a21ee16594849e4fc9c.cloudfront.net (CloudFront)
        Connection: close

  • Strange issue! Local network cache of PHP and Apache2 on Win Server 2008 R2

    - by Ahmed Benlahsen
    Software configuration: I have a new server with Windows Server 2008 R2 installed via VMware. I have installed Apache 2.2, PHP 5.2 and MySQL 5.5 as separate packages.

    Issue: on the first installation of my application, everything worked great. Then I updated some JS and CSS files, and when I access the application again from a PC on the local network I get the old JS and CSS versions! But when I access the same application on the local server itself, I get the latest versions of those files.

    The link to my application on the local server is: http://localhost/BADIL
    The link to my application from the local network is: http://LOCAL_SERVER_IP/BADIL

    I have never had this kind of issue before. I think there is some cache somewhere, but I don't know where - maybe in Windows Server 2008 R2 or in VMware? The question is: why does everything work fine when I access the application on the server, but when I access the same application from the local network I get old versions of the JS and CSS files? Can anyone help me, please? Regards.

  • Configuring nginx to check for files on disk in only a few directories

    - by Evan Carroll
    For a node.js project I'm doing, I have a tree like this:

        +-- public
        |   +-- components
        |   +-- css
        |   +-- img
        +-- routes
        +-- views

    Essentially, I have the root set to public. I want all requests destined for

        /components/
        /css/
        /img/

    to check whether their destinations exist on disk. However, I don't want requests to other directories to even run an I/O operation:

        /foo/asdf
        /bar
        /baz/index.html

    None of those should result in the disk being touched. I have a stanza that does the proxying to node.js:

        location @proxy {
            internal;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-NginX-Proxy true;
            proxy_pass http://localhost:3030;
            proxy_redirect off;
        }

    I just would like to know how to arrange this. My problem would be easily solved if try_files took a single argument, but it always wants a file first:

        location /components/ { try_files $uri @proxy; }
        location /css/ { try_files $uri @proxy; }
        location /img/ { try_files $uri @proxy; }

    However, there is nothing that I can find that will give me

        location / { try_files @proxy; }

    How do I get the effect I want?
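    A hedged sketch of one arrangement that matches the requirement: the three asset prefixes stat the disk via try_files, while every other URI goes straight to the proxy with no filesystem check. Since a named location cannot be the sole argument of try_files, location / simply carries the proxy directives itself (or includes them from a shared file); the root path is a placeholder:

        root /srv/app/public;

        location ^~ /components/ { try_files $uri @proxy; }
        location ^~ /css/        { try_files $uri @proxy; }
        location ^~ /img/        { try_files $uri @proxy; }

        # everything else: no disk access, straight to node.js
        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://localhost:3030;
        }

        location @proxy {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://localhost:3030;
        }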

  • Maintenance window and recovery for a large database

    - by NYSystemsAnalyst
    One of our teams is developing a database that will be somewhat large (~500 GB) and grow from there (I know 500 gigs may seem small to many of you, but it will be one of the larger databases in our shop). One of the issues they are grappling with is backing up and restoring the database. Basically, the database will have several "data" tables and one table used for storing images/documents. We need to accomplish the following:

    Be able to quickly back up and restore only the data tables (sans images) to our test server for debugging and testing purposes.
    In the event of a catastrophic database failure, restore the data tables only, to get most of the application up and running ASAP; then restore the images table when possible.
    Back up the database within the allotted nightly time window (a few hours).

    My questions are: Is it possible to accomplish the first two goals while still storing the images in the same database? If so, would we use filegroups, FILESTREAM, or something else? How do other shops back up their databases in a reasonable time window while maintaining high availability? Do you replicate to a second server and back up from there?
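    A hedged sketch of the filegroup approach implied by the first two goals: keep the image/document table on its own filegroup so the data filegroups can be backed up and restored on their own (names and paths are placeholders; a piecemeal restore still needs the PRIMARY filegroup and the transaction log):

        -- separate filegroup for the blob table
        ALTER DATABASE BigDb ADD FILEGROUP Documents;
        ALTER DATABASE BigDb ADD FILE
            (NAME = BigDb_Documents, FILENAME = 'D:\Data\BigDb_Documents.ndf')
            TO FILEGROUP Documents;

        -- back up the data and blob filegroups independently
        BACKUP DATABASE BigDb FILEGROUP = 'PRIMARY'   TO DISK = 'E:\Backup\BigDb_primary.bak';
        BACKUP DATABASE BigDb FILEGROUP = 'Documents' TO DISK = 'E:\Backup\BigDb_documents.bak';

    The image table itself would be created ON the Documents filegroup (or use FILESTREAM on it, where available) so that only the smaller data filegroups have to be restored first.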
