Search Results

Search found 34526 results on 1382 pages for 'html ordered lists'.

  • IIS ASP Redirect Removal

    - by Kim L
    We have a website set up on IIS 7 and are trying to replace it with a new site, but a redirect that is in place needs to be removed. The old site used a custom file as the homepage (WN-main.asp). We removed all the old site files, including web.config, and placed them in a subdirectory for safekeeping. The new site no longer uses ASP, and we'd like to use a regular index.html as the default.

    However, when we go to the website, it keeps trying to redirect our .com to .com/WN-main.asp -- and that gives us a 404 error in the application for "Default Web Site" because we removed that page. In the IIS "Default Document" settings we have index.html at the top, and WN-main.asp is nowhere to be found in the list (it never was there). We've also removed the web.config file from the root directory, put the entire old website in a subdirectory, and restarted IIS.

    We assume the redirect is set up somewhere in IIS, because if I navigate to .com/index.html, which is our new site, it works. Our problem is that oursite.com redirects to oursite.com/WN-main.asp. Grr. If you go to www.worzalla.com you can see how it redirects to the WN-main.asp page right now as the homepage. Any ideas where this redirect could have been set up, so we can remove it? Thanks!
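
    One place worth checking is IIS 7's optional HTTP Redirect feature, whose settings can live at the site or server level rather than in the site's own web.config. A minimal sketch for inspecting and clearing it with appcmd (default IIS install path assumed; the site name is taken from the question):

        %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" -section:system.webServer/httpRedirect
        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" -section:system.webServer/httpRedirect /enabled:false

    If the first command shows enabled="true" with WN-main.asp as the destination, the second switches it off; the same setting is visible in IIS Manager under "HTTP Redirect" when the module is installed.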

  • ddclient to update namecheap subdomain?

    - by LF4
    I have a subdomain that I want to update with ddclient. I configured ddclient to get the IP from DynDNS, but it's not updating the subdomain on Namecheap. They said to use yourdomain.com as the login instead of my actual domain. Has anyone been able to get Namecheap DNS updated with ddclient? I'm running CentOS 6.2 with ddclient 3.7.3. When I run ddclient I get the following:

        CONNECT:  checkip.dyndns.org
        CONNECTED:  using HTTP
        SENDING:  GET / HTTP/1.0
        SENDING:  Host: checkip.dyndns.org
        SENDING:  User-Agent: ddclient/3.7.3
        SENDING:  Connection: close
        SENDING:
        RECEIVE:  HTTP/1.1 200 OK
        RECEIVE:  Content-Type: text/html
        RECEIVE:  Server: DynDNS-CheckIP/1.0
        RECEIVE:  Connection: close
        RECEIVE:  Cache-Control: no-cache
        RECEIVE:  Pragma: no-cache
        RECEIVE:  Content-Length: 106
        RECEIVE:
        RECEIVE:  <html><head><title>Current IP Check</title></head><body>Current IP Address: IPADD</body></html>
        Use of uninitialized value in string ne at /usr/sbin/ddclient line 1998.
        WARNING:  skipping update of lf4bot from <nothing> to IPADD
        WARNING:  last updated <never> but last attempt on Fri Jun 15 22:46:21 2012 failed.
        WARNING:  Wait at least 5 minutes between update attempts.

    ddclient.conf file:

        daemon=300                      # check every 300 seconds
        syslog=yes                      # log update msgs to syslog
        mail=root                       # mail all msgs to root
        mail-failure=root               # mail failed update msgs to root
        pid=/var/run/ddclient.pid       # record PID in file.
        ssl=yes                         # use ssl-support. Works with

        use=web, web=checkip.dyndns.org/, web-skip='IP Address'   # found after IP Address

        protocol=namecheap \
        server=dynamicdns.park-your-domain.com \
        login=yourdomain.com \
        password=PASSWORD \
        lf4bot
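
    For comparison, a hedged sketch of a Namecheap stanza in the format ddclient's sample config uses -- note the commas between options (the stanza above has backslash continuations but no commas, which ddclient's parser expects), and that the final bare word is the host record to update. The password here is assumed to be the Dynamic DNS password from Namecheap's control panel, not the account password, which is a common mix-up:

        protocol=namecheap,                     \
        server=dynamicdns.park-your-domain.com, \
        login=yourdomain.com,                   \
        password=dynamic-dns-password           \
        lf4bot

    Running ddclient once in the foreground with -daemon=0 -debug -verbose should then show whether the update request itself is being sent and what the server answers.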

  • Forward real IP through Haproxy => Nginx => Unicorn

    - by Hendrik
    How do I forward the real visitor's IP address to Unicorn? The current setup is: HAProxy => Nginx => Unicorn. How can I forward the real IP address from HAProxy, to Nginx, to Unicorn? Currently it is always only 127.0.0.1. I read that the X- headers are going to be deprecated (http://tools.ietf.org/html/rfc6648) -- how will this impact us?

    HAProxy config:

        # haproxy config
        defaults
            log global
            mode http
            option httplog
            option dontlognull
            option httpclose
            retries 3
            option redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        # Rails Backend
        backend deployer-production
            reqrep ^([^\ ]*)\ /api/(.*) \1\ /\2
            balance roundrobin
            server deployer-production localhost:9000 check

    Nginx config:

        upstream unicorn-production {
            server unix:/tmp/unicorn.ordify-backend-production.sock fail_timeout=0;
        }

        server {
            listen 9000 default;
            server_name manager.ordify.localhost;
            root /home/deployer/apps/ordify-backend-production/current/public;
            access_log /var/log/nginx/ordify-backend-production_access.log;
            rewrite_log on;

            try_files $uri/index.html $uri @unicorn;

            location @unicorn {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_redirect off;
                proxy_pass http://unicorn-production;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
            }

            error_page 500 502 503 504 /500.html;
            client_max_body_size 4G;
            keepalive_timeout 10;
        }
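
    The 127.0.0.1 is what HAProxy presents to nginx, so $remote_addr at the nginx hop is already the proxy's address; nginx can only pass on what it receives. A hedged sketch of the usual fix, assuming nothing sits in front of HAProxy and that nginx was built with the realip module (--with-http_realip_module): have HAProxy add the client address itself, then let nginx restore it into $remote_addr so the existing proxy_set_header lines carry the real IP on to Unicorn:

        # HAProxy (defaults section): add the client IP as X-Forwarded-For
        option forwardfor

        # nginx (server block): trust HAProxy on loopback and recover the
        # client address from the header it set
        set_real_ip_from 127.0.0.1;
        real_ip_header X-Forwarded-For;

    On RFC 6648: it deprecates the "X-" naming convention for *new* headers, but X-Forwarded-For is grandfathered in practice and remains what HAProxy, nginx and Rack all understand, so nothing in this chain should break.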

  • MySQL - InnoDB - "is in the future! Current system log sequence number"

    - by Ward Loockx
    I have this error in my syslog. After some reading, I restored an older dump to solve the problem. Now the errors in syslog are far fewer than before, but I still get the following a few times an hour:

        Sep  1 14:23:29 homer mysqld: 120901 14:23:29  InnoDB: Error: page 96637 log sequence number 7 1223357717
        Sep  1 14:23:29 homer mysqld: InnoDB: is in the future! Current system log sequence number 6 647303887.
        Sep  1 14:23:29 homer mysqld: InnoDB: Your database may be corrupt or you may have copied the InnoDB
        Sep  1 14:23:29 homer mysqld: InnoDB: tablespace but not the InnoDB log files. See
        Sep  1 14:23:29 homer mysqld: InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
        Sep  1 14:23:29 homer mysqld: InnoDB: for more information.
        Sep  1 14:23:29 homer mysqld: 120901 14:23:29  InnoDB: Error: page 96638 log sequence number 8 150027924
        Sep  1 14:23:29 homer mysqld: InnoDB: is in the future! Current system log sequence number 6 647303887.
        Sep  1 14:23:29 homer mysqld: InnoDB: Your database may be corrupt or you may have copied the InnoDB
        Sep  1 14:23:29 homer mysqld: InnoDB: tablespace but not the InnoDB log files. See
        Sep  1 14:23:29 homer mysqld: InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
        Sep  1 14:23:29 homer mysqld: InnoDB: for more information.
        Sep  1 14:23:29 homer mysqld: 120901 14:23:29  InnoDB: Error: page 96639 log sequence number 7 4208567151
        Sep  1 14:23:29 homer mysqld: InnoDB: is in the future! Current system log sequence number 6 647303887.
        Sep  1 14:23:29 homer mysqld: InnoDB: Your database may be corrupt or you may have copied the InnoDB
        Sep  1 14:23:29 homer mysqld: InnoDB: tablespace but not the InnoDB log files. See
        Sep  1 14:23:29 homer mysqld: InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
        Sep  1 14:23:29 homer mysqld: InnoDB: for more information.

    Does anybody know how I can track down which database is causing this issue? And how to fix it?
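
    MySQL 5.1 doesn't expose a page-to-table mapping, so tracking pages 96637-96639 to one database isn't really practical; the message means some pages carry a log sequence number ahead of the server's, typically after a partial restore. The usual way out is a full logical rebuild. A hedged sketch of that cycle, assuming a Debian-style layout with the data directory at /var/lib/mysql -- take a copy of everything first:

        # 1) dump everything while the server is still running
        mysqldump --all-databases --single-transaction > /root/full.sql

        # 2) stop MySQL and move the whole data directory aside
        /etc/init.d/mysql stop
        mv /var/lib/mysql /var/lib/mysql.broken

        # 3) recreate a fresh data directory and reload the dump
        mkdir /var/lib/mysql && chown -R mysql:mysql /var/lib/mysql
        mysql_install_db --user=mysql
        /etc/init.d/mysql start
        mysql < /root/full.sql

    Rebuilding both ibdata and the ib_logfiles together is the point: restoring a dump on top of mismatched InnoDB system files is what leaves these "in the future" pages behind.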

  • Relation between server_name in nginx sites-available, /etc/hosts file and A-records

    - by user2818584
    I have the following two server blocks in my config file in sites-available:

        server {
            listen 80;
            server_name www.mydomain.be;
            root /usr/share/nginx/html;
            index index.html index.htm;

            location / {
                try_files $uri $uri/ =404;
            }
        }

        server {
            listen 80;
            server_name sub.mydomain.be;
            root /usr/share/nginx/sub;
            index index.html index.htm;

            location / {
                try_files $uri $uri/ =404;
            }
        }

    I also created an A record for both www.domain.be and sub.domain.be with the IP of my server as the value. Yet when I try to reload my nginx configuration with service nginx reload, it fails. When I remove the second server block, it reloads as expected.

    I know this topic is popular, and that there are loads of such [nginx][subdomain] questions here, but none of them seems to discuss explicitly how the following three things hang together:

    - virtual hosts or server blocks in nginx (especially server_name matching)
    - the effect of A records on how nginx processes requests
    - the need to add hosts to /etc/hosts

    Right now I have the impression that a lack of knowledge of this bigger picture, rather than of specific nginx configuration details, is what prevents me from making this work.
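
    To untangle the three: nginx never consults DNS A records or /etc/hosts when routing a request -- it only compares the request's Host header against each server_name. The A record's job is just to get the browser to the right IP, and /etc/hosts only matters for testing names that aren't in public DNS yet. For the reload failure itself, the config test usually names the offending file and line; a hedged check:

        # prints exactly why the configuration is rejected
        sudo nginx -t

        # for local testing of a name with no public A record, add a line
        # like this to /etc/hosts on the *client* machine (IP assumed):
        # 203.0.113.10   www.mydomain.be sub.mydomain.be

    A frequent culprit for "works with one block, fails with two" is a stray character or a missing root directory, and nginx -t will say which.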

  • Facing application redirection issue on nginx+tomcat

    - by Sunny Thakur
    I am facing a strange issue with an application which is deployed on Tomcat, with nginx in front of Tomcat to access the application from the browser. I deployed the application on Tomcat and set up a virtual host in nginx under the conf.d directory [the file I created is virtual.conf]; below is the content I am using:

        server {
            listen 81;
            server_name domain.com;
            error_log /var/log/nginx/domain-admin-error.log;

            location / {
                proxy_pass http://localhost:100;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
            }

            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
        }

    The issue is this: when I use rewrite ^(.*) http://$server_name$1 permanent; in the server section and access the URL, it redirects to https://domain.com and I am able to log in to the app and access the links [I am not using SSL redirection in this host file, and I don't know why this is happening]. When I remove this from the server section, I can access the application from :81 and log in, but when I click on any link in the app it redirects me back to the login page. I am not getting anything in the application logs or the Tomcat logs. Please help if this is a redirection issue of nginx. Thanks, Sunny
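
    When every link after login bounces back to the login page behind a proxy, the usual suspect is the session: Tomcat issues its redirects and session cookie for the host/port it sees (localhost:100) rather than the one in the browser (domain.com:81), so the browser drops the cookie and each click starts a fresh unauthenticated session. A hedged sketch of the common fix on the Tomcat side -- telling the connector what its public name and port are (values taken from the question; adjust to the real deployment):

        <!-- Tomcat server.xml: the HTTP connector nginx proxies to.
             proxyName/proxyPort make redirects and cookies use the
             public host and port instead of localhost:100 -->
        <Connector port="100" protocol="HTTP/1.1"
                   proxyName="domain.com" proxyPort="81" />

    Keeping proxy_set_header Host $http_host in nginx (as above) is the matching half on the proxy side; watching the Set-Cookie and Location headers with the browser's network tab will confirm which hop is rewriting them.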

  • NGINX rewrite for vanity URLs when file doesn't exist (try_files and rewrite together)

    - by user1721724
    I'm trying to get vanity URLs on my server. If the file path from the URL doesn't exist, I want to rewrite the URL to profile.php, but if my users have periods in their usernames, their vanity URL doesn't work. Here is my conf block:

        server {
            listen 80;
            server_name www.example.com;

            rewrite ^/([a-zA-Z0-9-_]+)$ /profile.php?url=$1 last;

            root /var/www/html/example.com;
            error_page 404 = /404.php;

            location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                expires 1y;
                log_not_found off;
            }

            location ~ \.php$ {
                fastcgi_pass example_fast_cgi;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/html/example.com$fastcgi_script_name;
                include fastcgi_params;
            }

            location / {
                index index.php index.html index.htm;
            }

            location ~ /\.ht {
                deny all;
            }

            location /404.php {
                internal;
                return 404;
            }
        }

    Any help would be appreciated. Thanks!
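
    Two things are working against the dotted usernames: the character class simply doesn't include ".", and a bare server-level rewrite fires before nginx ever checks whether the path exists on disk. A hedged sketch that adds the dot to the class and only rewrites when nothing matches a real file, using try_files with a named location:

        location / {
            index index.php index.html index.htm;
            try_files $uri $uri/ @profile;
        }

        location @profile {
            # "." added to the class; the anchors keep it to one path segment
            rewrite ^/([a-zA-Z0-9._-]+)$ /profile.php?url=$1 last;
            return 404;
        }

    One caveat: a username ending in one of the static extensions (say "john.png") would still be captured by the \.(js|css|png|...)$ regex location first, since regex locations win over the / prefix, so that block may need its own try_files if such names are allowed.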

  • Exclude list of specific files in wget

    - by nanker
    I am trying to download a lot of pages from a website on dial-up, and it can be brutally slow. I have almost got the perfect wget command, but because I'm downloading pages from the same site, wget wastes time downloading the same standard images for each page. If I know the names of the default page images, is there any way to have wget ignore them, and thus avoid downloading them for each and every page?

    Here is an example of one of the wget commands that my shell script generates into another shell script to download all of the pages:

        mkdir candy-canes-on-the-flannel-board-in-preschool
        cd candy-canes-on-the-flannel-board-in-preschool
        wget -p -nd -A jpg,html -k http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/
        wget -c --random-wait --timeout=30 --user-agent="Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.3) Gecko/2008092416 Firefox/3.0.3" http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/ -O "candy-canes-on-the-flannel-board-in-preschool"
        rm Baby-and-Toddler.jpg Childrens-Books.jpg Creative-Art.jpg Felt-Fun.jpg Happy_Rainbow-e1338766526528.jpg index.html Language-and-Literacy.jpg Light-table-Button.jpg Math.jpg Outdoor-Play.jpg outer-jacket1-300x153.jpg preschoolspot-button-small.jpg robots.txt Science-and-Nature.jpg Signature-2.jpg Story-Telling.jpg Tags-on-Preschool.jpg Teaching-Two-and-Three-Year-olds.jpg
        cd ../

    Now I realize the script is not likely as savvy as it could be, but it is doing what I need at the moment -- except that, as you can see from the rm command, I would just like to prevent wget from downloading the files in the first place if possible.

    I almost forgot to mention: there are two wget commands, and that is because the first one downloads the page as index.html and for some reason it does not open in my browser; however, when I open it and look at it in vim, all of the page's content is there, so I am not sure why it does not open. But if I just issue the second wget command as it is, then that page (the same file really, with an alternate name) opens up fine. Something that, if I could fix, would also help streamline the process.
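
    wget has a reject list that mirrors -A: names matching -R are skipped during recursive/page-requisite retrieval, so the standard images are never fetched (HTML files matching -R may still be downloaded for parsing and then deleted, but plain images are not requested at all). A hedged sketch using a few of the names from the rm line:

        wget -p -nd -A jpg,html -k \
             -R "Baby-and-Toddler.jpg,Childrens-Books.jpg,Creative-Art.jpg,Felt-Fun.jpg,Math.jpg,Outdoor-Play.jpg" \
             http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/

    -A and -R can be combined; when both match, the reject list wins, which is exactly what's wanted here.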

  • Nginx configuration question

    - by Pockata
    Hey guys, I'm trying to make the autoindex feature run only for my IP address with this code:

        server {
            ...
            autoindex off;
            ...
            if ($remote_addr ~ ..*.*) {
                autoindex on;
            }
            ...
        }

    But it doesn't work. It gives me a 403 :/ Can someone help me :) Btw, I'm using Debian Lenny and Nginx 0.6 :)

    EDIT: Here's my full configuration:

        server {
            listen 80;
            server_name site.com;
            server_name_in_redirect off;
            client_max_body_size 4M;
            server_tokens off;
            # log_subrequest on;
            autoindex off;
            # expires max;

            error_page 500 502 503 504 /var/www/nginx-default/50x.html;
            # error_page 404 /404.html;

            set $myhome /bla/bla;
            set $myroot $myhome/public;
            set $mysubd $myhome/subdomains;

            log_format new_log '$remote_addr - $remote_user [$time_local] $request '
                               '"$status" "$http_referer" '
                               '"$http_user_agent" "$http_x_forwarded_for"';

            # Star nginx :@
            access_log /bla/bla/logs/access.log new_log;
            error_log /bla/bla/logs/error.log;

            if ($remote_addr ~ 94.156.58.138) {
                autoindex on;
            }

            # Subdomains
            if ($host ~* (.*)\.site\.org$) {
                set $myroot $mysubd/$1;
            }

            # Static files
            # location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {
            #     access_log off;
            #     expires 30d;
            # }

            location / {
                root $myroot;
                index index.php index.html index.htm;
            }

            # PHP
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $myroot$fastcgi_script_name;
                include fastcgi_params;
            }

            # .Htaccess
            location ~ /\.ht {
                deny all;
            }
        }

    I forgot to mention that when I add the code to remove static files from my access log, the static files cannot be accessed. I don't know if it's relevant :)
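
    Two details worth hedging here: unescaped dots in a regex match any character, and nginx only reliably honours rewrite-module directives inside if-blocks, so autoindex may be silently ignored there -- which would explain the 403 (directory requested, listing still off, no index file). A sketch that at least removes the regex problem by comparing the address exactly:

        # exact string comparison instead of a regex with unescaped dots
        if ($remote_addr = 94.156.58.138) {
            autoindex on;
        }

    If the 403 persists, autoindex is likely not being applied inside the if at all, and a separate location (or a second server block reachable only from your address) would be the safer route. On the side note: the commented static-file location has no root of its own, and since root is set inside location / rather than at server level, enabling that block would serve those files from nginx's compiled-in default root -- which matches the files "cannot be accessed" symptom.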

  • Advantages of a 135W vs 90W AC adapter?

    - by David
    I purchased a Lenovo T420S laptop with a 90W/20V AC adapter P/N 42t4426 through my IT department. Subsequently, I ordered a replacement adapter and received a 135W/20V AC adapter P/N 45N0054. The 135W version is about twice as big and heavy as the 90W version. Is there an advantage to the 135W version like faster battery recharging? Are there any negative effects (other than weight), like reduced battery life?

  • How to install Ubuntu, Windows XP and Windows 7 from scratch as triple-boot system

    - by simon
    I'm currently running Windows XP, but have ordered Windows 7. I want to keep Windows XP on a separate partition and install Ubuntu as well. In which order should I install the OSes, and is there anything different from an ordinary single-system install that I should keep in mind? For example, does the order of the partitions make any difference? If I want the system drive to be the "C:" drive in both Windows XP and Windows 7, what should I do?

  • DNS issue on Fedora 12? wget wordpress.org fails where wget www.google.com works

    - by Tom Auger
    I'm administering a Fedora 12 box, but am quite new to networking specifics. Recently one of our WordPress apps hosted on our server has stopped being able to perform its auto-update or auto-download of plugins. Investigating further, I have tried the following:

        $ wget wordpress.org
        --2010-12-17 11:26:50--  http://wordpress.org/
        Resolving wordpress.org... failed: Temporary failure in name resolution.
        wget: unable to resolve host address 'wordpress.org'

    Whereas:

        $ wget www.google.com
        --2010-12-17 11:27:26--  http://www.google.com/
        Resolving www.google.com... 74.125.226.82, 74.125.226.84, 74.125.226.80, ...
        Connecting to www.google.com|74.125.226.82|:80... connected.
        HTTP request sent, awaiting response... 302 Found
        Location: http://www.google.ca/ [following]
        --2010-12-17 11:27:26--  http://www.google.ca/
        Resolving www.google.ca... 173.194.32.104
        Connecting to www.google.ca|173.194.32.104|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]
        Saving to: 'index.html.4'

        [ <=> ] 9,079  --.-K/s  in 0.02s

        2010-12-17 11:27:26 (462 KB/s) - 'index.html.4'

    Interestingly:

        $ ping wordpress.org
        PING wordpress.org (72.233.56.138) 56(84) bytes of data.
        64 bytes from wordpress.org (72.233.56.138): icmp_seq=1 ttl=50 time=81.5 ms
        64 bytes from wordpress.org (72.233.56.138): icmp_seq=2 ttl=50 time=67.3 ms
        ^C
        --- wordpress.org ping statistics ---
        2 packets transmitted, 2 received, 0% packet loss, time 1783ms
        rtt min/avg/max/mdev = 67.361/74.448/81.536/7.092 ms

    and:

        $ nslookup wordpress.org
        Server:     192.168.2.1
        Address:    192.168.2.1#53

        Non-authoritative answer:
        Name:   wordpress.org
        Address: 72.233.56.138
        Name:   wordpress.org
        Address: 72.233.56.139

    nscd has been stopped and flushed. iptables appear to be clean. At this point I have exhausted my limited abilities to diagnose the issue. Can anyone suggest a resolution path?
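
    wget failing while ping and nslookup succeed usually means the failure is intermittent or path-specific: nslookup queries 192.168.2.1 directly, while glibc (which wget uses) walks every server in /etc/resolv.conf in order and honours /etc/nsswitch.conf, so a dead or flaky first nameserver shows up exactly this way. A hedged set of checks:

        # which servers does glibc actually try, and in what order?
        cat /etc/resolv.conf

        # is the hosts line sane? (should read something like: hosts: files dns)
        grep '^hosts:' /etc/nsswitch.conf

        # query each listed server explicitly to spot a dead one
        dig wordpress.org @192.168.2.1 +short

    If one of the resolv.conf servers times out, removing it (or fixing whatever writes resolv.conf, e.g. the DHCP client) should bring the WordPress updater back.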

  • My .htaccess file redirect problems?

    - by Glenn Curtis
    I am hoping you can help me! Below is the .htaccess file for my Apache server running on top of Ubuntu Server. This is my local server, which I installed so I can develop my site on it instead of on my live site. I now have all the files and the database on localhost, but each time I access my server, vaio-server (it's a Sony laptop), it just takes me to my live site! Everything is in the root of Apache, /var/www -- it's the only site I will develop on this system, so I don't need to configure it to look at anything more than this one site. I think that's all; all the Apache files, sites-available/default etc. are as standard. Please help!! Many thanks, Glenn Curtis.

        DirectoryIndex index.php index.html

        # Upload sizes
        php_value upload_max_filesize 25M
        php_value post_max_size 25M

        # Avoid folder listings
        Options -Indexes

        <IfModule mod_rewrite.c>
            Options +FollowSymlinks
            RewriteEngine on
            RewriteBase /

            # Maintenance
            #RewriteCond %{REQUEST_URI} !/maintenance.html$
            #RewriteRule $ /maintenance.html [R=302,L]

            #Redirects to www
            #RewriteCond %{HTTP_HOST} !^vaio-server [NC]
            #RewriteCond %{HTTPS}s ^on(s)|off
            #RewriteRule ^(.*)$ glenns-showcase.net/$1 [R=301,QSA,L]

            #Empty string
            RewriteRule ^$ app/webroot/ [L]
            RewriteRule (.*) app/webroot/$1 [L]
        </IfModule>
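
    Since the www-redirect rules are commented out, the .htaccess shown can't be issuing the redirect. Two likelier causes: a previously active R=301 is cached by the browser (301s are cached aggressively), or vaio-server doesn't resolve to the laptop at all. Two hedged checks from the laptop itself:

        # raw response headers, bypassing any browser cache of an old 301
        curl -I http://vaio-server/

        # where does the name actually resolve?
        getent hosts vaio-server

    If curl shows a 200 from the local Apache while the browser still jumps to the live site, clearing the browser's cached redirects (or testing in a private window) should settle it.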

  • How to move your Windows User Profile to another drive in Windows 8

    - by Mark
    I like to have my user folder on a different drive (D:) than my OS (C:). Reading the following post, I decided to give it a try. All went quite well, until I found out that my Windows 8 apps won't execute anymore (other than that, I haven't noticed any problems). The apps do work while using an account that isn't moved. In the Event Viewer I've found error messages like these:

        App <Microsoft.MicrosoftSkyDrive> crashed with an unhandled Javascript exception.
        App details are as follows:
        Display Name:<SkyDrive>,
        AppUserModelId: <microsoft.microsoftskydrive_8wekyb3d8bbwe!Microsoft.MicrosoftSkyDrive>
        Package Identity:<microsoft.microsoftskydrive_16.4.4204.712_x64__8wekyb3d8bbwe>
        PID:<4452>.
        The details of the JavaScript exception are as follows:
        Exception Name:<WinRT error>, Description:<Loading the state store failed. >,
        HTML Document Path:</modernskydrive/product/skydrive/App.html>,
        Source File Name:<ms-appx://microsoft.microsoftskydrive/jx/jx.js>,
        Source Line Number:<1>, Source Column Number:<27246>, and Stack Trace:
            ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:27246 localSettings()
            ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:51544 _initSettings()
            ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:54710 getApplicationStatus(boolean)
            ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:48180 init(object)
            ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:45583 Application(number, boolean)
            ms-appx://microsoft.microsoftskydrive/modernskydrive/product/skydrive/App.html:216:13 Anonymous function(object)

    Using ProcMon, I see a lot of access denied messages, like these:

        Date & Time:    12-9-2012 9:32:20
        Event Class:    File System
        Operation:      CreateFile
        Result:         ACCESS DENIED
        Path:           D:\Users\John\AppData\Local\Packages\microsoft.microsoftskydrive_8wekyb3d8bbwe\Settings\settings.dat
        TID:            2520
        Duration:       0.0000149
        Desired Access: Read Data/List Directory, Write Data/Add File, Read Control
        Disposition:    OpenIf
        Options:        Sequential Access, Synchronous IO Non-Alert, No Compression
        Attributes:     N
        ShareMode:      None
        AllocationSize: 0

    Any idea how to solve this? I noticed that the app folders, e.g. D:\Users\john\AppData\Local\Packages\microsoft.microsoftskydrive_8wekyb3d8bbwe, had a different owner than the old profile folder had. The old profile folder had john as owner, where my new profile folder had the Administrators group as owner. Changing this didn't help, unfortunately.
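
    Modern (Store) apps run in an AppContainer and need an explicit ACL entry for the "ALL APPLICATION PACKAGES" group on their package folders; that entry is easily lost when a profile is copied to another drive, and ownership alone doesn't restore it -- which fits the ACCESS DENIED on settings.dat. A hedged repair from an elevated command prompt (path and user name taken from the question):

        icacls "D:\Users\John\AppData\Local\Packages" /grant "ALL APPLICATION PACKAGES":(OI)(CI)F /T
        icacls "D:\Users\John" /setowner John /T

    The (OI)(CI) flags make the grant inherit to subfolders and files; after running it, re-launching the app (or signing out and back in) should let it open its state store again.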

  • CSS and JS files not being updated, supposedly because of Nginx Caching

    - by Alberto Elias
    I have my web app working with AppCache, and I would like that when I modify my HTML/CSS/JS files and then update my cache manifest, users will get an updated version of those files when they next access the app. If I change an HTML file it works perfectly, but when I change CSS and JS files, the old version is still being used. I've been checking everything out and I think it's related to my nginx configuration. I have a cache.conf file that contains the following:

        gzip on;
        gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;

        location ~ \.(css|js|htc)$ {
            expires 31536000s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
        }

        location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ {
            expires 3600s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate";
        }

    And in default.conf I have my locations. I would like to have this caching working on all locations except one; how could I configure this? I've tried the following, and it isn't working:

        location /dir1/dir2/ {
            root /var/www/dir1;
            add_header Pragma "no-cache";
            add_header Cache-Control "private";
            expires off;
        }

    Thanks
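
    nginx picks exactly one location per request, and regex locations beat prefix locations unless the prefix is marked ^~ -- so /dir1/dir2/style.css still lands in the \.(css|js|htc)$ block and inherits the year-long max-age, never reaching the /dir1/dir2/ block. A hedged sketch:

        # "^~" stops the regex locations from being considered
        # for anything under /dir1/dir2/
        location ^~ /dir1/dir2/ {
            root /var/www/dir1;
            add_header Pragma "no-cache";
            add_header Cache-Control "private, no-cache";
            expires off;
        }

    One AppCache wrinkle to keep in mind: files a client has already cached under the manifest are only refetched after the manifest itself changes, so the new header policy takes effect on the next manifest update, not immediately.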

  • mysqld crashes on any statement

    - by ??iu
    I restarted my slave to change configuration settings: to skip reverse hostname lookup on connecting and to enable the slow query log. I edited /etc/my.cnf, making only these changes, then restarted mysqld with /etc/init.d/mysql restart.

    All appeared to be well, but when I connect to mysqld, remotely or locally, although it connects okay, mysqld crashes whenever you try to issue any kind of statement. The client session looks like:

        Reading table information for completion of table and column names
        You can turn off this feature to get a quicker startup with -A

        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 3
        Server version: 5.1.31-1ubuntu2-log

        Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

        mysql> show tables;
        ERROR 2006 (HY000): MySQL server has gone away
        No connection. Trying to reconnect...
        Connection id:    1
        Current database: mydb

        ERROR 2006 (HY000): MySQL server has gone away
        No connection. Trying to reconnect...
        ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61)
        ERROR: Can't connect to the server

        ERROR 2006 (HY000): MySQL server has gone away
        No connection. Trying to reconnect...
        ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61)
        ERROR: Can't connect to the server

        ERROR 2006 (HY000): MySQL server has gone away
        Bus error

    The mysqld error log looks like:

        101210 16:35:51  InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY).
        101210 16:35:51  InnoDB: Assertion failure in thread 140245598570832 in file handler/ha_innodb.cc line 2595
        InnoDB: Failing assertion: error == DB_SUCCESS
        InnoDB: We intentionally generate a memory trap.
        InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
        InnoDB: If you get repeated assertion failures or crashes, even
        InnoDB: immediately after the mysqld startup, there may be
        InnoDB: corruption in the InnoDB tablespace. Please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
        InnoDB: about forcing recovery.
        101210 16:35:51 - mysqld got signal 6 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose
        the problem, but since we have already crashed, something is definitely wrong
        and this may fail.

        key_buffer_size=16777216
        read_buffer_size=131072
        max_used_connections=3
        max_threads=600
        threads_connected=3
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K
        bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        thd: 0x18209220
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 0x7f8d791580d0 thread_stack 0x20000
        /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
        /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
        /lib/libpthread.so.0 [0x7f902a76a080]
        /lib/libc.so.6(gsignal+0x35) [0x7f90291f8fb5]
        /lib/libc.so.6(abort+0x183) [0x7f90291fabc3]
        /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b]
        /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
        /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
        /usr/sbin/mysqld [0x63f281]
        /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
        /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
        /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
        /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
        /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
        /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
        /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
        /lib/libpthread.so.0 [0x7f902a7623ba]
        /lib/libc.so.6(clone+0x6d) [0x7f90292abfcd]
        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort...
        thd->query at 0x18213c70 = thd->thread_id=3
        thd->killed=NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.
        101210 16:35:51 mysqld_safe Number of processes running now: 0
        101210 16:35:51 mysqld_safe mysqld restarted
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        101210 16:35:54  InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        101210 16:35:56  InnoDB: Started; log sequence number 456 143528628
        101210 16:35:56 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode.
        101210 16:35:56 [Warning] Neither --relay-log nor --relay-log-index were used; so
        replication may break when this MySQL server acts as a slave and has his hostname
        changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem.
        101210 16:35:56 [Note] Event Scheduler: Loaded 0 events
        101210 16:35:56 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.1.31-1ubuntu2-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)

        [at 16:36:11 the identical autoinc error, assertion, signal 6 report and
        backtrace repeat for the next connection (max_used_connections=1,
        thd->thread_id=1), and mysqld_safe restarts mysqld again]

    The config is:

        [mysqld_safe]
        socket          = /var/run/mysqld/mysqld.sock
        nice            = 0

        [mysqld]
        innodb_file_per_table
        innodb_buffer_pool_size=10G
        innodb_log_buffer_size=4M
        innodb_flush_log_at_trx_commit=2
        innodb_thread_concurrency=8
        skip-slave-start
        server-id=3

        #
        # * IMPORTANT
        #   If you make changes to these settings and your system uses apparmor, you may
        #   also need to also adjust /etc/apparmor.d/usr.sbin.mysqld.
        #
        user            = mysql
        pid-file        = /var/run/mysqld/mysqld.pid
        socket          = /var/run/mysqld/mysqld.sock
        port            = 3306
        basedir         = /usr
        datadir         = /DB2/mysql
        tmpdir          = /tmp
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #bind-address   = 127.0.0.1
        #
        # * Fine Tuning
        #
        key_buffer              = 16M
        max_allowed_packet      = 16M
        thread_stack            = 128K
        thread_cache_size       = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover          = BACKUP
        max_connections         = 600
        #table_cache            = 64
        #thread_concurrency     = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit       = 1M
        query_cache_size        = 32M
        # skip-federated
        slow-query-log
        skip-name-resolve

    Update: I followed the instructions at http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html and set innodb_force_recovery = 4. The logs are showing a different error, but the behavior is still the same:

        101210 19:14:15 mysqld_safe mysqld restarted
        101210 19:14:19  InnoDB: Started; log sequence number 456 143528628
        InnoDB: !!! innodb_force_recovery is set to 4 !!!
        101210 19:14:19 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode.
        101210 19:14:19 [Warning] Neither --relay-log nor --relay-log-index were used; so
        replication may break when this MySQL server acts as a slave and has his hostname
        changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem.
        101210 19:14:19 [Note] Event Scheduler: Loaded 0 events
        101210 19:14:19 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.1.31-1ubuntu2-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)
        101210 19:14:32  InnoDB: error: space object of table mydb/__twitter_friend,
        InnoDB: space id 1602 did not exist in memory. Retrying an open.
        101210 19:14:32  InnoDB: error: space object of table mydb/access_request,
        InnoDB: space id 1318 did not exist in memory. Retrying an open.
        101210 19:14:32  InnoDB: error: space object of table mydb/activity,
        InnoDB: space id 1595 did not exist in memory. Retrying an open.
        101210 19:14:32 - mysqld got signal 11 ;

        [the usual "This could be because you hit a bug" preamble and memory
        figures follow, as above, with max_used_connections=1 and
        threads_connected=1]

        thd: 0x1753c070
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 0x7f7a0b5800d0 thread_stack 0x20000
        /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
        /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
        /lib/libpthread.so.0 [0x7f7cbc350080]
        /usr/sbin/mysqld(ha_innobase::innobase_get_index(unsigned int)+0x46) [0x77c516]
        /usr/sbin/mysqld(ha_innobase::innobase_initialize_autoinc()+0x40) [0x77c640]
        /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x3f3) [0x781f23]
        /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
        /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
        /usr/sbin/mysqld [0x63f281]
        /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
        /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
        /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
        /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
        /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
        /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
        /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
        /lib/libpthread.so.0 [0x7f7cbc3483ba]
        /lib/libc.so.6(clone+0x6d) [0x7f7cbae91fcd]
        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort...
        thd->query at 0x1754d690 = thd->thread_id=1
        thd->killed=NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.
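
    The assertion in ha_innobase::open / innobase_initialize_autoinc on every table open, together with the "log sequence number in ibdata does not match the ib_logfiles" line at startup, fits InnoDB system files that no longer agree with the .ibd files. Since this is a slave, the least risky route may be a rebuild from the master rather than an in-place repair. A hedged sketch, using the datadir from the config above:

        # on the master: a consistent dump with binlog coordinates recorded
        mysqldump --all-databases --single-transaction --master-data=2 > /root/master.sql

        # on the slave: stop mysqld, set the damaged datadir aside,
        # re-initialise, reload, then re-point replication using the
        # coordinates embedded in the dump
        /etc/init.d/mysql stop
        mv /DB2/mysql /DB2/mysql.broken
        mkdir /DB2/mysql && chown -R mysql:mysql /DB2/mysql
        mysql_install_db --user=mysql --datadir=/DB2/mysql
        /etc/init.d/mysql start
        mysql < /root/master.sql

    Remember to remove innodb_force_recovery from my.cnf before the reload; with it set to 4 or higher, InnoDB is read-only in ways that will break the import.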

  • Raid0 setup - What should 'my computer' say?

    - by superexsl
    Hey, I'm not a hardware person, so maybe someone here could help me. I ordered a PC from Dell that has "Serial ATA Raid 0 "Stripe"(7200RPM)Dual HDD" (2x500gb). However, I've just noticed that there's only one HD of 1TB (which is the default option when ordering). Should I be seeing two HDDs in "My Computer" or does the Raid0 setup simply improve performance rather than have (and display) two individual HDDs? How can I check if my computer does have a 'raid0' setup? Thanks
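
    With firmware/hardware RAID 0, the operating system sees the stripe set as a single logical disk, so one ~1 TB volume in My Computer is exactly what a working setup looks like -- RAID 0 buys throughput, not extra drive letters. A hedged way to confirm it from Windows (output illustrative; two 500 GB members show up as one disk of roughly 931 binary gigabytes):

        C:\> diskpart
        DISKPART> list disk

          Disk ###  Status      Size     Free     Dyn  Gpt
          --------  ----------  -------  -------  ---  ---
          Disk 0    Online       931 GB      0 B

    The Intel RAID option ROM (usually Ctrl+I during boot on Dell desktops with Intel chipset RAID) or the Intel storage console in Windows will additionally list both member drives and the stripe configuration.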

  • Controlling clone access to multiple mercurial repos served via hgwebdir.cgi

    - by chrislawlor
    I'm trying to host multiple hg repositories to use for my clients. I need to control access to each repository individually -- not just push access, but clone as well. I've got an .htaccess set which requires authentication globally:

        AuthUserFile /path/to/hgweb.passwd
        AuthGroupFile /dev/null
        AuthName "Chris Lawlor Client Mercurial Repositories"
        AuthType Basic

        <Limit GET POST PUT>
            Require valid-user
        </Limit>

        <FilesMatch "\.(htaccess|passwd|config|bak)$">
            Order Allow,Deny
            Deny from all
        </FilesMatch>

    Then in each repository, I've got a .hg/hgrc file requiring a valid user:

        [web]
        allow_push = <comma separated user list>

    This almost does what I need. The problem is that I need to add ALL my clients to hgweb.passwd, which gives them clone access to ALL of the repositories. The only solution I can think of is to have another .htaccess and .passwd file in EACH repository. I don't really want to do that, though; it seems a little convoluted. I can already specify a list of authorized users for each repository in that repo's hgrc file with the allow_push setting. If only there were an allow_clone setting as well...

    All the documentation I've found for hgwebdir.cgi is incomplete. I've read:

        http://mercurial.selenic.com/wiki/HgWebDirStepByStep
        http://hgbook.red-bean.com/read/collaborating-with-other-people.html#sec:collab:cgi
        http://hgbook.red-bean.com/read/collaborating-with-other-people.html

    and others. I've yet to find a comprehensive list of hgrc settings. I guess this is as much an Apache question as a Mercurial question. Unless I can find a better approach, I'll be going with a separate .htaccess and .passwd file for each repo. This is a virtual host on WebFaction if it matters -- set up roughly like this: http://docs.webfaction.com/software/mercurial.html
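
    Because hgwebdir exposes each repository under its own URL path, one password file can still carry everyone while Apache grants each path to its own users -- no per-repo .htaccess needed, provided you can edit the vhost/server config rather than only .htaccess (repo names here are hypothetical):

        # single AuthUserFile containing all clients
        <Location /hg/client-a-repo>
            AuthType Basic
            AuthName "Client A repository"
            AuthUserFile /path/to/hgweb.passwd
            Require user chris client-a
        </Location>

        <Location /hg/client-b-repo>
            AuthType Basic
            AuthName "Client B repository"
            AuthUserFile /path/to/hgweb.passwd
            Require user chris client-b
        </Location>

    <Location> matches the request URL, so it works even though the repos are virtual paths served by one CGI; allow_push in each hgrc then continues to gate pushes among the users who can already reach that path.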

  • Permissions changed after Mail.app and associated data crash

    - by Olivier
    Hi there, I experienced a major Mail.app crash on Snow Leopard a couple of days ago. It took me hours to make the folder structure usable again by Mail. I changed permissions back to 755 for all subfolders starting from, and including, ~/Library/Mail. Mail now works again, but settings such as the folder order in the sidebar, and mail ordered by date in some folders, don't persist anymore. Any ideas? Thanks for the help.
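
    Sidebar order and sort settings don't live in ~/Library/Mail; on Snow Leopard they are typically kept in Mail's preferences file, so if the crash (or a permissions sweep run with sudo) left that file owned by root, changes would silently fail to save. A hedged check:

        ls -l ~/Library/Preferences/com.apple.mail.plist
        # if it's owned by root, take it back:
        sudo chown "$USER" ~/Library/Preferences/com.apple.mail.plist
        chmod 644 ~/Library/Preferences/com.apple.mail.plist

    Quitting Mail before touching the file, then relaunching, should show whether the settings stick.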

  • password protect a VPS account?

    - by Camran
    I ordered a VPS today and am about to upload my website. It uses Java, MySQL, PHP, etc. However, I need to password protect the site at first... I use Ubuntu 9.10 and have just installed LAMP. What is the easiest way to do this? Thanks. PS: I'm having a problem with the Server Fault website; that's why I am posting this here. Sorry.
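
    For a whole-site password on a stock LAMP install, Apache's HTTP basic auth is the quick route. A hedged sketch, with file locations assumed for Ubuntu 9.10's Apache 2.2 (the user name is a placeholder):

        # create the password file with one user
        sudo htpasswd -c /etc/apache2/.htpasswd myuser

        # in the vhost (e.g. /etc/apache2/sites-available/default),
        # or in a .htaccess if AllowOverride AuthConfig is set:
        <Directory /var/www>
            AuthType Basic
            AuthName "Private while under construction"
            AuthUserFile /etc/apache2/.htpasswd
            Require valid-user
        </Directory>

    Reload with sudo /etc/init.d/apache2 reload. Note that basic auth sends credentials effectively in the clear, so it's a construction fence rather than real security unless the site is served over HTTPS.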

  • Apache caching with mod_headers mod_expires

    - by Aaron Moodie
    Hi, I'm working on homework for uni and was hoping someone could clarify something for me. I need to set up the following:

    - Configure the response header "Cache-Control" to have a "max-age" value of 7 days since access for all image files.
    - Configure the response header "Cache-Control" to have a "max-age" value of 5 days since modification for all static HTML files.
    - Configure the response header "Cache-Control" to have a value of "public" for all static HTML and image files.
    - Configure the response header "Cache-Control" to have a value of "private" for all PHP files.

    My question is whether it is better to use a FilesMatch, or the mod_expires ExpiresByType, to best achieve this. I've so far used the following:

        <FilesMatch "\.(gif|jpe?g|png)$">
            ExpiresDefault "access plus 7 days"
            Header set Cache-Control "public"
        </FilesMatch>

        <FilesMatch "\.(html)$">
            ExpiresDefault "modification plus 5 days"
            Header set Cache-Control "public"
        </FilesMatch>

        <FilesMatch "\.(php)$">
            Header set Cache-Control "private"
        </FilesMatch>

    Thanks.
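
    One gotcha either way: mod_expires itself emits Cache-Control: max-age=N, and a later Header set Cache-Control replaces that whole header, wiping the max-age -- Header merge (or append) keeps both values. A hedged sketch of the ExpiresByType variant for comparison (ExpiresActive must be on for any of it to fire):

        ExpiresActive On

        # 7 days from access for images
        ExpiresByType image/gif  "access plus 7 days"
        ExpiresByType image/jpeg "access plus 7 days"
        ExpiresByType image/png  "access plus 7 days"

        # 5 days from last modification for static HTML
        ExpiresByType text/html "modification plus 5 days"

        <FilesMatch "\.(gif|jpe?g|png|html)$">
            Header merge Cache-Control "public"
        </FilesMatch>

    One reason the FilesMatch approach may actually be the better answer for this assignment: PHP responses are also text/html, so ExpiresByType text/html would hit them too, clashing with the "private, no caching policy" requirement for PHP; matching on file extension keeps the four rules cleanly separated.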

  • How do I transfer a Windows 7 installation to a new HDD?

    - by Nidonocu
    I've just got all the parts I need to build a new PC, but the SATA II hard drive I ordered did not come, and I received an IDE drive instead. While I wait to get the correct drive type and send the IDE drive back, is it possible to install Windows 7 onto an existing IDE drive that I have, and then transfer the contents of that drive over to the new SATA disk when it arrives, without Windows having any issues?
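
    Generally yes: Windows 7's built-in Backup and Restore can capture a system image of the IDE install and restore it onto the SATA disk from the repair disc. The one common trap is the storage controller mode -- if the SATA disk will run in AHCI, Windows disables the AHCI driver when it wasn't booted with it, and the restored system can blue-screen (STOP 0x7B). A hedged sketch of the registry switch to flip before imaging (run from an elevated command prompt):

        REM enable the inbox AHCI driver so the moved install can boot in AHCI mode
        reg add HKLM\SYSTEM\CurrentControlSet\services\msahci /v Start /t REG_DWORD /d 0 /f

    If the SATA port is left in IDE/compatibility mode in the BIOS instead, the image restores without any driver changes at all.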

  • FastCGI on lighttpd no data received

    - by Michael Sh
    I have a simple FastCGI script:

        public static void main(String args[]) {
            int count = 0;
            while (new FCGIInterface().FCGIaccept() >= 0) {
                count++;
                System.out.println("Content-type: text/html\n\n");
                System.out.println("<html>");
                System.out.println("<head><TITLE>FastCGI-Hello Java stdio</TITLE></head>");
                System.out.println("<body>");
                System.out.println("<H3>FastCGI Hello Java stdio</H3>");
                System.out.println("request number " + count + " running on host "
                        + System.getProperty("SERVER_NAME"));
                System.out.println("</body>");
                System.out.println("</html>");
            }
        }

    Set up with lighttpd as:

        server.modules += ( "mod_fastcgi" )
        fastcgi.debug = 1
        fastcgi.server = ( "/cgi" =>
            ( "fastcgi" =>
                ( "port" => 8888,
                  "host" => "127.0.0.1",
                  "bin-path" => "/var/www/tiny.fcgi",
                  "min-procs" => 1,
                  "max-procs" => 1,
                  "check-local" => "disable"
                )
            )
        )

    In the log:

        2012-11-24 04:35:04: (mod_fastcgi.c.1367) --- fastcgi spawning local
            proc: /var/www/tiny.fcgi
            port: 54321
            socket
            max-procs: 1
        2012-11-24 04:35:04: (mod_fastcgi.c.1391) --- fastcgi spawning
            port: 54321
            socket
            current: 0 / 1
        2012-11-24 04:35:39: (mod_fastcgi.c.3061) got proc:
            pid: 0
            socket: tcp:127.0.0.1:54321
            load: 1

    The problem is that there is no data being sent from the server to the browser. Am I missing something here?
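
    Two things in the log are suggestive: lighttpd is spawning on port 54321 while the config says 8888, and tiny.fcgi is being exec'd directly even though a Java class needs a JVM to run it. The old FastCGI devkit's FCGIInterface also listens on whatever the FCGI_PORT property says rather than inheriting a spawned socket, so spawning and the configured port never meet. A hedged sketch that sidesteps bin-path spawning and runs the backend yourself (class name, classpath and jar location are assumptions):

        # lighttpd: connect to an externally managed backend only
        fastcgi.server = ( "/cgi" =>
            (( "host" => "127.0.0.1",
               "port" => 8888,
               "check-local" => "disable"
            ))
        )

        # start the backend by hand, binding it to the same port:
        # java -cp /var/www/classes:/var/www/fcgi.jar -DFCGI_PORT=8888 TinyFCGI

    With the backend started first and fastcgi.debug = 1 left on, the log should then show connections to tcp:127.0.0.1:8888 instead of the phantom 54321 socket.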

  • Problem installing build-essential and upgrading g++ on Ubuntu 8.04

    - by ehsanul
    I'm having some trouble with dependencies, it seems, but I don't really know how to resolve the issue myself. Here's the output:

        ~:) sudo apt-get install build-essential
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.

        Since you only requested a single operation it is extremely likely that
        the package is simply not installable and a bug report against that
        package should be filed.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          build-essential: Depends: g++ (>= 4:4.3.1) but 4:4.2.3-1ubuntu6 is to be installed
        E: Broken packages

        ~:) sudo apt-get install g++
        [same preamble as above]

        The following packages have unmet dependencies:
          g++: Depends: cpp (>= 4:4.3.1-1ubuntu2) but 4:4.2.3-1ubuntu6 is to be installed
               Depends: gcc (>= 4:4.3.1-1ubuntu2) but 4:4.2.3-1ubuntu6 is to be installed
               Depends: g++-4.3 (>= 4.3.1-1) but it is not going to be installed
               Depends: gcc-4.3 (>= 4.3.1-1) but it is not installable
        E: Broken packages
        ~:)

    Edit: I just tried aptitude instead of apt-get, as suggested. It doesn't work; there were other problems:

        ~:) sudo aptitude install build-essential
        [sudo] password for ehsanul:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Reading extended state information
        Initializing package states... Done
        Building tag database... Done
        The following packages are BROKEN:
          g++ g++-4.3 libstdc++6-4.3-dev
        The following packages have been automatically kept back:
          dpkg-dev fakeroot libdns35 libisc35 linux-libc-dev patch
        The following NEW packages will be automatically installed:
          libgmp3c2 libmpfr1ldbl
        The following packages have been kept back:
          adobe-flashplugin bind9-host dnsutils gvfs gvfs-backends gvfs-fuse libatm1
          libbind9-30 libgvfscommon0 libisccc30 libisccfg30 liblwres30
          libnautilus-extension1 linux-headers-2.6.24-24 linux-headers-2.6.24-24-generic
          linux-image-2.6.24-24-generic nautilus nautilus-data
        The following NEW packages will be installed:
          libgmp3c2 libmpfr1ldbl
        The following packages will be upgraded:
          build-essential
        The following partially installed packages will be configured:
          timidity
        2 packages upgraded, 4 newly installed, 0 to remove and 24 not upgraded.
        Need to get 775kB/6265kB of archives. After unpacking 20.3MB will be used.
        The following packages have unmet dependencies:
          libstdc++6-4.3-dev: Depends: gcc-4.3-base (= 4.3.2-1ubuntu11) which is a virtual package.
                              Depends: libstdc++6 (>= 4.3.2-1ubuntu11) but 4.2.4-1ubuntu4 is installed.
          g++-4.3: Depends: gcc-4.3-base (= 4.3.2-1ubuntu11) which is a virtual package.
                   Depends: gcc-4.3 (= 4.3.2-1ubuntu11) which is a virtual package.
                   Depends: libc6 (>= 2.8~20080505) but 2.7-10ubuntu4 is installed.
          g++: Depends: cpp (>= 4:4.3.1-1ubuntu2) but 4:4.2.3-1ubuntu6 is installed.
               Depends: gcc (>= 4:4.3.1-1ubuntu2) but 4:4.2.3-1ubuntu6 is installed.
               Depends: gcc-4.3 (>= 4.3.1-1) which is a virtual package.
        Resolving dependencies...
        The following actions will resolve these dependencies:

        Keep the following packages at their current version:
          build-essential [11.3ubuntu1 (hardy, now)]
          g++ [4:4.2.3-1ubuntu6 (hardy-updates, now)]
          g++-4.3 [Not Installed]
          libstdc++6-4.3-dev [Not Installed]

        Score is -9852

        Accept this solution? [Y/n/q/?]
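
    The version strings tell the story: the installed system is Hardy (8.04) -- libc6 2.7, g++ 4:4.2.3 -- but the candidate packages want gcc-4.3 4.3.2-1ubuntu11 and libc6 2.8, which belong to a newer Ubuntu release. That pattern usually means a mixed or partly edited sources.list pulling package lists from two releases. A hedged way to confirm and clean up:

        # look for any line naming a release other than hardy
        grep -nE '^deb' /etc/apt/sources.list /etc/apt/sources.list.d/*.list 2>/dev/null

        # after commenting out / removing the foreign-release lines:
        sudo apt-get update
        sudo apt-get install build-essential

    With every source pointing at the same release again, build-essential's dependency chain becomes satisfiable and the resolver stops proposing "keep everything back" as its best answer.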
