Search Results

Search found 24568 results on 983 pages for 'high load'.


  • Using a GoDaddy SSL certificate with Virtualmin (Webmin)

    - by Kevin
    A client of mine decided to go ahead and move from a self-signed certificate to a commercial one ("GoDaddy Standard SSL"). The first service I wanted to move to the commercial SSL cert was Webmin/Usermin... However, upon migrating to the new SSL cert and restarting Webmin, I got the following error:

      [21/Oct/2012:13:12:47 -0400] Restarting
      Failed to open SSL cert /etc/webmin/miniserv.cert at /usr/share/webmin/miniserv.pl line 4229.
      Error: Webmin server did not write new PID file

    And that's all it says. Here's Webmin's config file (/etc/webmin/miniserv.conf):

      port=10000
      root=/usr/share/webmin
      mimetypes=/usr/share/webmin/mime.types
      addtype_cgi=internal/cgi
      realm=Webmin Server
      logfile=/var/webmin/miniserv.log
      errorlog=/var/webmin/miniserv.error
      pidfile=/var/webmin/miniserv.pid
      logtime=168
      ppath=
      ssl=0
      env_WEBMIN_CONFIG=/etc/webmin
      env_WEBMIN_VAR=/var/webmin
      atboot=1
      logout=/etc/webmin/logout-flag
      listen=10000
      denyfile=\.pl$
      log=1
      blockhost_failures=5
      blockhost_time=60
      syslog=1
      session=1
      server=MiniServ/1.600
      userfile=/etc/webmin/miniserv.users
      keyfile=/etc/webmin/miniserv.pem
      passwd_file=/etc/shadow
      passwd_uindex=0
      passwd_pindex=1
      passwd_cindex=2
      passwd_mindex=4
      passwd_mode=0
      preroot=virtual-server-theme
      passdelay=1
      sudo=1
      sessiononly=/virtual-server/remote.cgi
      preload=virtual-server=virtual-server/virtual-server-lib-funcs.pl virtual-server=virtual-server/feature-unix.pl virtual-server=virtual-server/feature-dir.pl virtual-server=virtual-server/feature-dns.pl virtual-server=virtual-server/feature-mail.pl virtual-server=virtual-server/feature-web.pl virtual-server=virtual-server/feature-webalizer.pl virtual-server=virtual-server/feature-ssl.pl virtual-server=virtual-server/feature-logrotate.pl virtual-server=virtual-server/feature-mysql.pl virtual-server=virtual-server/feature-postgres.pl virtual-server=virtual-server/feature-ftp.pl virtual-server=virtual-server/feature-spam.pl virtual-server=virtual-server/feature-virus.pl virtual-server=virtual-server/feature-webmin.pl virtual-server=virtual-server/feature-virt.pl virtual-server=virtual-server/feature-virt6.pl
      anonymous=/virtualmin-mailman/unauthenticated=anonymous
      premodules=WebminCore
      logouttimes=
      extracas=/etc/webmin/miniserv.chain
      certfile=/etc/webmin/miniserv.cert
      ssl_redirect=0

    Here is a screen shot of the Webmin SSL config screen as well, for what it's worth: http://postimage.org/image/r472go7tf/

    Edited Mon Oct 22 10:45:24 CDT 2012: When running the command openssl x509 -noout -text -in /etc/webmin/miniserv.cert as Falcon Momot suggested, I get the following error:

      unable to load certificate
      139760808240800:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:696:Expecting: TRUSTED CERTIFICATE
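
    The "no start line" message from openssl means the file contains no "-----BEGIN CERTIFICATE-----" block at all, i.e. it is not PEM-encoded (it may be DER, or the wrong file entirely). A hedged check-and-convert sketch, using the paths from the config above and assuming the DER case:

      # Does the file actually start with a PEM header?
      head -n 1 /etc/webmin/miniserv.cert

      # If not, try reading it as DER and writing it back out as PEM
      openssl x509 -inform der -in /etc/webmin/miniserv.cert -out /tmp/miniserv-pem.cert
      openssl x509 -noout -text -in /tmp/miniserv-pem.cert    # should now print the certificate

      # Confirm the cert matches the private key Webmin points at (keyfile=/etc/webmin/miniserv.pem)
      openssl x509 -noout -modulus -in /tmp/miniserv-pem.cert | openssl md5
      openssl rsa -noout -modulus -in /etc/webmin/miniserv.pem | openssl md5

    If the two MD5 sums match, copying the PEM version over /etc/webmin/miniserv.cert and restarting Webmin should get past the "Failed to open SSL cert" error; the chain file referenced by extracas= needs to be PEM as well.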

    Read the article

  • Apache is responding with a blank white page

    - by Bruno Araujo
    I have the following situation: a site hosted on Apache 2.4, with SSL, that has worked like a charm for a while now, but out of nowhere, without modifications to the site, Apache started serving random blank pages. The workaround is to delete the browser's cookies or restart the browser. I've switched the virtualhost to debug-level logging, but it didn't get me anywhere. Here is the debug log of a failed page load:

      [Wed Oct 24 10:57:35.762547 2012] [ssl:info] [pid 27854:tid 140617706374912] [client 192.168.10.150:58917] AH01964: Connection to child 147 established (server xxx.com.br:443)
      [Wed Oct 24 10:57:35.762739 2012] [ssl:debug] [pid 27854:tid 140617706374912] ssl_engine_kernel.c(1966): [client 192.168.10.150:58917] AH02043: SSL virtual host for servername xxx.com.br found
      [Wed Oct 24 10:57:35.777479 2012] [ssl:debug] [pid 27854:tid 140617706374912] ssl_engine_kernel.c(1899): [client 192.168.10.150:58917] AH02041: Protocol: TLSv1, Cipher: DHE-RSA-AES256-SHA (256/256 bits)
      [Wed Oct 24 10:57:35.779912 2012] [ssl:debug] [pid 27854:tid 140617706374912] ssl_engine_kernel.c(243): [client 192.168.10.150:58917] AH02034: Initial (No.1) HTTPS request received for child 147 (server xxx.com.br:443)
      [Wed Oct 24 10:57:35.780044 2012] [authz_core:debug] [pid 27854:tid 140617706374912] mod_authz_core.c(809): [client 192.168.10.150:58917] AH01628: authorization result: granted (no directives)
      [Wed Oct 24 10:57:40.783950 2012] [ssl:info] [pid 27854:tid 140617706374912] (70007)The timeout specified has expired: [client 192.168.10.150:58917] AH01991: SSL input filter read failed.
      [Wed Oct 24 10:57:40.784077 2012] [ssl:debug] [pid 27854:tid 140617706374912] ssl_engine_io.c(988): [remote 192.168.10.150:58917] AH02001: Connection closed to child 147 with standard shutdown (server xxx.com.br:443)
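
    Reading the log, the handshake and authorization succeed and Apache then waits five seconds for request data that never arrives before the SSL input read times out, which points at the client side (a stale cookie or a speculative connection the browser opened and abandoned) rather than the vhost itself. One way to take the browser out of the equation and see whether a cookie alone triggers the blank page (hostname and cookie value are placeholders):

      # Replay the request outside the browser; -v shows the handshake, -k skips cert validation
      curl -vk https://xxx.com.br/

      # Replay it again with the cookie copied from a browser that is currently getting blank pages
      curl -vk --cookie 'SESSIONID=value-copied-from-browser' https://xxx.com.br/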

    Read the article

  • FFMpeg-PHP Installation Error

    - by tundoopani
    While installing FFmpeg-PHP, I got this interesting error:

      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c: In function 'zim_ffmpeg_movie_getAudioStreamId':
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1051: error: 'CODEC_TYPE_AUDIO' undeclared (first use in this function)
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c: In function 'zim_ffmpeg_movie_getAudioChannels':
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1089: error: 'CODEC_TYPE_AUDIO' undeclared (first use in this function)
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c: In function 'zim_ffmpeg_movie_getAudioSampleRate':
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1125: error: 'CODEC_TYPE_AUDIO' undeclared (first use in this function)
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c: In function 'zim_ffmpeg_movie_getAudioBitRate':
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1161: error: 'CODEC_TYPE_AUDIO' undeclared (first use in this function)
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c: In function 'zim_ffmpeg_movie_getVideoBitRate':
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1181: error: 'CODEC_TYPE_VIDEO' undeclared (first use in this function)
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c: In function '_php_read_av_frame':
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1204: error: 'CODEC_TYPE_VIDEO' undeclared (first use in this function)
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1215: warning: implicit declaration of function 'avcodec_decode_video'
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1219: error: 'PKT_FLAG_KEY' undeclared (first use in this function)
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c: In function '_php_get_av_frame':
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1246: error: 'CODEC_TYPE_VIDEO' undeclared (first use in this function)
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1282: error: 'AVCodecContext' has no member named 'hurry_up'
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1284: error: 'AVCodecContext' has no member named 'hurry_up'
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c: In function '_php_get_sample_aspect_ratio':
      /usr/downloads/ffmpeg-php-0.6.0/ffmpeg_movie.c:1443: error: 'CODEC_TYPE_VIDEO' undeclared (first use in this function)
      make: *** [ffmpeg_movie.lo] Error 1

    When I ran php -r 'phpinfo();' | grep ffmpeg, I got this:

      PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/ffmpeg.so' - libavformat.so.52: cannot open shared object file: No such file or directory in Unknown on line 0

    Any idea how I can fix this? I am running on CentOS. Thanks in advance :)
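
    Both messages are symptoms of building ffmpeg-php 0.6.0 against a newer FFmpeg than it was written for: the CODEC_TYPE_* and PKT_FLAG_KEY names were renamed in later FFmpeg releases, and the ffmpeg.so that did get installed was linked against a libavformat that is not on the runtime linker path. A commonly cited workaround - treat the exact constant names and library path as assumptions for your particular FFmpeg build - is to patch the renamed identifiers and refresh the linker cache:

      cd /usr/downloads/ffmpeg-php-0.6.0
      # Newer FFmpeg renamed these constants
      sed -i 's/CODEC_TYPE_AUDIO/AVMEDIA_TYPE_AUDIO/g; s/CODEC_TYPE_VIDEO/AVMEDIA_TYPE_VIDEO/g; s/PKT_FLAG_KEY/AV_PKT_FLAG_KEY/g' ffmpeg_movie.c
      ./configure && make && make install

      # For the libavformat.so.52 warning: tell the runtime linker where FFmpeg's libraries live
      echo '/usr/local/lib' > /etc/ld.so.conf.d/ffmpeg.conf   # adjust to wherever libavformat was installed
      ldconfig

    The 'hurry_up' and avcodec_decode_video errors, however, come from APIs that were removed outright, so if they remain after the rename the practical options are building against an older FFmpeg release or using one of the patched ffmpeg-php forks.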

    Read the article

  • Apachebench on node.js server returning "apr_poll: The timeout specified has expired (70007)" after ~30 requests

    - by Scott
    I just started working with node.js, and some experimental load testing with ab is returning an error at around 30 requests or so. I've found other pages showing much better concurrency numbers than I am getting, such as: http://zgadzaj.com/benchmarking-nodejs-basic-performance-tests-against-apache-php Are there some critical server configuration settings that need to be done to achieve those numbers? I've watched memory in top and I still see a decent amount of free memory while running ab, and I've watched mongostat as well and am not seeing anything that looks suspicious. The command I'm running, and the error, is:

      ab -k -n 100 -c 10 postrockandbeyond.com/
      This is ApacheBench, Version 2.0.41-dev <$Revision: 1.121.2.12 $> apache-2.0
      Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
      Copyright (c) 2006 The Apache Software Foundation, http://www.apache.org/
      Benchmarking postrockandbeyond.com (be patient)...apr_poll: The timeout specified has expired (70007)
      Total of 32 requests completed

    Does anyone have any suggestions on things I should look into that may be causing this? I'm running it on OS X Lion, but have also run the same command on the server with the same results.

    EDIT: I eventually solved this issue. I was using TTAPI, which was connecting to turntable.fm through websockets. On the homepage, I was connecting on every request, so after a certain number of connections everything would fall apart. If you're running into the same issue, check whether you are hitting external services on each request.

    Read the article

  • Nginx case-insensitive reverse proxy rewrites

    - by BrianM
    I'm looking to set up an nginx reverse proxy to make some upcoming server moves and load-balanced implementations much easier within our apps. Since our servers are all IIS, case sensitivity hasn't been an issue, but now with nginx it's becoming one for me. I am simply looking to do a rewrite regardless of case.

    Infrastructure notes:
    - All backend servers are IIS
    - Most services are WCF services
    - I am trying to simplify the URLs so I can move services around as we continue to build out

    I can't set my location to case-insensitive due to the following error:

      nginx: [emerg] "proxy_pass" cannot have URI part in location given by regular expression, or inside named location, or inside "if" statement, or inside "limit_except" block in /etc/nginx/sites-enabled/test.conf:101

    The main part of my conf file where I am trying to handle the rewrite is as follows:

      location /svc_test {
        proxy_set_header x-real-ip $remote_addr;
        proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
        proxy_set_header host $http_host;
        proxy_pass http://backend/serviceSite/WFCService.svc;
      }

      location ~* /test {
        rewrite ^/(.*)/$ /svc_test/$1 last;
      }

    It's the /test location that I can't get figured out. If I call http://nginxserver/svc_test/help I get the WCF help page to display correctly and I can make all available REST calls. This HAS to be a boneheaded regex issue on my part, but I have tried several variations and all I can get are 404 or 500 errors from nginx. This is NOT rocket science, so can someone point me in the right direction so I can look like an idiot and just move on?
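
    Two things stand out in the regex block: ^/(.*)/$ only matches URIs that end in a slash, and its capture includes the leading "test", so /test/help/ would become /svc_test/test/help while /test/help doesn't match the rewrite at all and falls through to static file handling, which would explain the 404s. A hedged sketch of a case-insensitive location that only strips the /test prefix and re-enters the working /svc_test location (the capture is an assumption about how the URLs should map):

      location ~* ^/test(/.*)?$ {
        rewrite ^/test(/.*)?$ /svc_test$1 last;
      }

    With "last", the rewritten URI is matched against the locations again and lands in the existing prefix location, so proxy_pass keeps its URI part there and the regex location never needs one.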

    Read the article

  • Strange ZFS hidden filesystem problem

    - by RandomInsano
    Half of my ZFS filesystems are hidden in ZFS-fuse. Here's my story: I love ZFS. I used it for about six months on FreeBSD, but due to it crashing the kernel during heavy inter-filesystem IO load, I tried switching to Solaris 5.10. That was good, but when I attempted to import my version 13 pool into its version 4 ZFS, there were some hefty problems. It may have tried to correct the filesystem definitions, I don't know. Since that version wasn't compatible with my pool, I've now switched to Ubuntu Server 10.04. That version more than supports my pool's version, but I can only see half of my filesystems. The filesystems I can see are the same ones Solaris could see.

    Now, despite those filesystems not being present in 'zfs list' output, I can still set properties on them and I can even still mount them and read and write files, but they just plain don't show up in 'zfs list'. I've mounted the major ones, but I'm not sure what other filesystems there are anymore (I have about eight that I can't see). Anyone have any idea what the heck is going on? I think I might try booting back into FreeBSD 8 (I still have the main boot drive laying around for that) and see if at least it is able to view the filesystems. I've also done a scrub while in Linux, and it found no errors with any of the data. Oddly, the DMA read errors which caused problems on FreeBSD ZFS are reported by Linux, but ZFS-fuse doesn't find an error. That's a topic for another post, however.

    Read the article

  • Configuring ASP.NET MVC2 on Apache 2.2 using mod_aspdotnet

    - by user40684
    Trying to get an MVC2 website to run on an Apache 2.2 web server (running on Windows) that utilizes the mod_aspdotnet module. Have several ASP.NET virtual hosts running, trying to add another. MVC2 has NO default page (like the first version of MVC had, e.g. default.aspx). I have tried various changes to the config: commented out 'DirectoryIndex', changed it to '/', set 'AspNet' to 'Virtual'. It will not load the first page, and I always get: '403 Forbidden, You don't have permission to access / on this server.' Below is from my http.conf:

      LoadModule aspdotnet_module "modules/mod_aspdotnet.so"
      AddHandler asp.net asax ascx ashx asmx aspx axd config cs csproj licx rem resources resx soap vb vbproj vsdisco webinfo

      <IfModule aspdotnet_module>
        # Mount the ASP.NET /asp application
        #AspNetMount /MyWebSiteName "D:/ApacheNET/MyWebSiteName.com"
        Alias /MyWebSiteName" D:/ApacheNET/MyWebSiteName.com"

        <VirtualHost *:80>
          DocumentRoot "D:/ApacheNET/MyWebSiteName.com"
          ServerName www.MyWebSiteName.com
          ServerAlias MyWebSiteName.com
          AspNetMount / "D:/ApacheNET/MyWebSiteName.com"
          # Other directives here
          <Directory "D:/ApacheNET/MyWebSiteName.com">
            Options FollowSymlinks ExecCGI
            AspNet All
            #AspNet Virtual Files Directory
            Order allow,deny
            Allow from all
            DirectoryIndex default.aspx index.aspx index.html   #default the index page to .htm and .aspx
          </Directory>
        </VirtualHost>

        # For all virtual ASP.NET webs, we need the aspnet_client files
        # to serve the client-side helper scripts.
        AliasMatch /aspnet_client/system_web/(\d+)_(\d+)_(\d+)_(\d+)/(.*) "C:/Windows/Microsoft.NET/Framework/v$1.$2.$3/ASP.NETClientFiles/$4"

        <Directory "C:/Windows/Microsoft.NET/Framework/v*/ASP.NETClientFiles">
          Options FollowSymlinks
          Order allow,deny
          Allow from all
        </Directory>
      </IfModule>

    Has anyone successfully run MVC2 (or the first version of MVC) on Apache with the mod_aspdotnet module? Thanks!

    Read the article

  • Query Execution Failed in Reporting Services reports

    - by Chris Herring
    I have some Reporting Services reports that talk to Analysis Services, and at times they fail with the following error:

      An error occurred during client rendering.
      An error has occurred during report processing.
      Query execution failed for dataset 'AccountManagerAccountManager'.
      The connection cannot be used while an XmlReader object is open.

    This occurs sometimes when I change selections in the filter. It also occurs when the machine has been under heavy load, and then it will consistently error until SSAS is restarted. The log file contains the following error:

      processing!ReportServer_0-18!738!04/06/2010-11:01:14:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: Query execution failed for dataset 'AccountManagerAccountManager'., ;
      Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: Query execution failed for dataset 'AccountManagerAccountManager'. ---> System.InvalidOperationException: The connection cannot be used while an XmlReader object is open.
        at Microsoft.AnalysisServices.AdomdClient.XmlaClient.CheckConnection()
        at Microsoft.AnalysisServices.AdomdClient.XmlaClient.ExecuteStatement(String statement, IDictionary connectionProperties, IDictionary commandProperties, IDataParameterCollection parameters, Boolean isMdx)
        at Microsoft.AnalysisServices.AdomdClient.AdomdConnection.XmlaClientProvider.Microsoft.AnalysisServices.AdomdClient.IExecuteProvider.ExecuteTabular(CommandBehavior behavior, ICommandContentProvider contentProvider, AdomdPropertyCollection commandProperties, IDataParameterCollection parameters)
        at Microsoft.AnalysisServices.AdomdClient.AdomdCommand.ExecuteReader(CommandBehavior behavior)
        at Microsoft.AnalysisServices.AdomdClient.AdomdCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior)
        at Microsoft.ReportingServices.DataExtensions.AdoMdCommand.ExecuteReader(CommandBehavior behavior)
        at Microsoft.ReportingServices.OnDemandProcessing.RuntimeDataSet.RunDataSetQuery()

    Can anyone shed light on this issue?

    Read the article

  • Adding a user to Samba

    - by JustMaximumPower
    I'm trying to set up some Samba shares in my home network on an Ubuntu 12.04 machine. Everything works fine for my user account (max), but I cannot add any new user. Every time I try to add a new user, they cannot use the shares. It's likely that the error is very basic to the concept of Samba, but please don't just tell me to read the docs - I've been trying that for about 2 weeks now.

    I've set up the server with my user max, who can mount transfer and the share max. Then I added the user simon with

      sudo adduser --no-create-home --disabled-login --shell /bin/false simon

    because the user should not be able to ssh into the machine. I did a

      sudo smbpasswd -a simon

    and set a (Samba) password for simon, and added a share for simon. I also added simon to transferusers to give him access to the share transfer. But simon can't connect to transfer or simons.

    ---- output of testparm: -------

      Load smb config files from /etc/samba/smb.conf
      rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
      Processing section "[printers]"
      Processing section "[print$]"
      Processing section "[max]"
      Processing section "[simons]"
      Processing section "[transfer]"
      Loaded services file OK.
      Server role: ROLE_STANDALONE
      Press enter to see a dump of your service definitions

      [global]
        server string = %h server (Samba, Ubuntu)
        map to guest = Bad User
        obey pam restrictions = Yes
        pam password change = Yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        unix password sync = Yes
        syslog = 0
        log file = /var/log/samba/log.%m
        max log size = 1000
        dns proxy = No
        usershare allow guests = Yes
        panic action = /usr/share/samba/panic-action %d
        idmap config * : backend = tdb

      [printers]
        comment = All Printers
        path = /var/spool/samba
        create mask = 0700
        printable = Yes
        print ok = Yes
        browseable = No

      [print$]
        comment = Printer Drivers
        path = /var/lib/samba/printers

      [max]
        comment = Privater share von Max
        path = /media/Main/max
        read only = No
        create mask = 0700

      [simons]
        comment = Privater share von Simon
        path = /media/Main/simon
        read only = No
        create mask = 0700

      [transfer]
        comment = Transferlaufwerk
        path = /media/Main/transfer
        read only = No
        create mask = 0755

    ---- The files in /media/Main: ------

      drwxrwxr-x 17 max   max             4096 Oct  4 19:13 max/
      drwx------  5 simon max             4096 Aug  4 15:18 simon/
      drwxrwxr-x  7 max   transferusers 258048 Oct  1 22:55 transfer/
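
    A few checks worth running on the server itself, since the shares and permissions above only line up if simon is really in the transferusers group and his Samba account is enabled (share and group names are taken from the config above):

      # Is simon actually a member of the group that owns /media/Main/transfer?
      id simon
      sudo usermod -aG transferusers simon    # add him if not; group changes apply on the next connect

      # Does his Samba account exist, and is it enabled?
      sudo pdbedit -L -v simon
      sudo smbpasswd -e simon

      # Test both shares locally, bypassing any client-side problems
      smbclient //localhost/transfer -U simon
      smbclient //localhost/simons -U simon

    If smbclient works locally but remote clients still fail, the problem is on the client or network side rather than in smb.conf.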

    Read the article

  • Problem: Munin Graph

    - by Pablo
    I've been trying to install Munin for 15 days. I've looked for information, analyzed logs, and even deleted and reinstalled Munin using yum. I'm hosted at Media Temple on a VPS with CentOS. The problem is still there and it's driving me nuts. Graphs are shown as follows: http://imageshack.us/photo/my-images/833/capturadepantalla201106u.png/

    This is the configuration of my munin.conf file:

      dbdir /var/lib/munin
      htmldir /var/www/munin
      logdir /var/log/munin
      rundir /var/run/munin

      [localhost]
      address **.**.***.***   #IP VPS

    This is the configuration of my munin-node.conf file:

      log_level 4
      log_file /var/log/munin/munin-node.log
      port 4949
      pid_file /var/run/munin/munin-node.pid
      background 1
      setseid 1
      # Which port to bind to;
      host *
      user root
      group root
      setsid yes
      # Regexps for files to ignore
      ignore_file ~$
      ignore_file \.bak$
      ignore_file %$
      ignore_file \.dpkg-(tmp|new|old|dist)$
      ignore_file \.rpm(save|new)$
      allow ^127\.0\.0\.1$

    Thanks so much, I appreciate all the answers.

    UPDATE - munin-graph.log:

      Jun 22 16:30:02 - Starting munin-graph
      Jun 22 16:30:02 - Processing domain: localhost
      Jun 22 16:30:02 - Graphed service : open_inodes (0.14 sec * 4)
      Jun 22 16:30:02 - Graphed service : sendmail_mailtraffic (0.10 sec * 4)
      Jun 22 16:30:02 - Graphed service : apache_processes (0.12 sec * 4)
      Jun 22 16:30:02 - Graphed service : entropy (0.10 sec * 4)
      Jun 22 16:30:02 - Graphed service : sendmail_mailstats (0.14 sec * 4)
      Jun 22 16:30:02 - Graphed service : processes (0.14 sec * 4)
      Jun 22 16:30:03 - Graphed service : apache_accesses (0.27 sec * 4)
      Jun 22 16:30:03 - Graphed service : apache_volume (0.15 sec * 4)
      Jun 22 16:30:03 - Graphed service : df (0.21 sec * 4)
      Jun 22 16:30:03 - Graphed service : netstat (0.19 sec * 4)
      Jun 22 16:30:03 - Graphed service : interrupts (0.14 sec * 4)
      Jun 22 16:30:03 - Graphed service : swap (0.14 sec * 4)
      Jun 22 16:30:04 - Graphed service : load (0.11 sec * 4)
      Jun 22 16:30:04 - Graphed service : sendmail_mailqueue (0.13 sec * 4)
      Jun 22 16:30:04 - Graphed service : cpu (0.21 sec * 4)
      Jun 22 16:30:04 - Graphed service : df_inode (0.16 sec * 4)
      Jun 22 16:30:04 - Graphed service : open_files (0.16 sec * 4)
      Jun 22 16:30:04 - Graphed service : forks (0.13 sec * 4)
      Jun 22 16:30:05 - Graphed service : memory (0.26 sec * 4)
      Jun 22 16:30:05 - Graphed service : nfs_client (0.36 sec * 4)
      Jun 22 16:30:05 - Graphed service : vmstat (0.10 sec * 4)
      Jun 22 16:30:05 - Processed node: localhost (3.45 sec)
      Jun 22 16:30:05 - Processed domain: localhost (3.45 sec)
      Jun 22 16:30:05 - Munin-graph finished (3.46 sec)
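
    The munin-graph log shows every service being graphed and the run finishing cleanly, so the PNGs are almost certainly being written; broken images in the browser then usually come down to the web server not being able to read (or find) what ends up under htmldir. A couple of hedged checks - the sub-directory name and the idea of running the cron job by hand are assumptions based on the [localhost] section above:

      # Are the PNGs there, and are they readable by the web server user?
      ls -l /var/www/munin/
      ls -l /var/www/munin/localhost/

      # Re-run the whole Munin cron pass by hand as the munin user and watch for permission errors
      sudo -u munin /usr/bin/munin-cron

    If the files exist but the pages still show broken graphs, compare the ownership of /var/www/munin with the user Apache runs as on the VPS.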

    Read the article

  • Random Citrix servers suddenly bluescreen (mostly 0x0000008e and 0x0000007e)

    - by Rasmus Rask
    I'm responsible for a Citrix Presentation Server 4.5 farm. Starting Friday 30 November, my servers started to crash randomly. So far we've experienced 80 crashes, so it's obviously becoming an increasingly big problem for us. I have 12+ years of experience with IT, so I know the difference between 0 and 1, but I'm having a hard time cracking this one. We've rolled back any recent changes I can think of for different groups of servers, but all groups still seem to crash. I don't have the skills to interpret the memory dumps to find the culprit.

    - Has anyone encountered the same or a similar problem? (It might be a generic Windows issue.)
    - Other than executing "analyze -v" in WinDbg, how do I work my way through the memory dumps to see what actually triggered the BSOD?
    - Any suggested steps in getting to the bottom of this?

    Any help is greatly appreciated. I can also provide links to kernel memory dumps or WinDbg output if necessary. Thanks!

    Problem description

    The majority of the STOP errors we encounter are:
    - 0x0000008e KERNEL_MODE_EXCEPTION_NOT_HANDLED (50%)
    - 0x0000007e SYSTEM_THREAD_EXCEPTION_NOT_HANDLED (26%)
    - 0x00000050 PAGE_FAULT_IN_NONPAGED_AREA (21%)

    We also see a few 0x0000000a IRQL_NOT_LESS_OR_EQUAL (3%). For both the 0x0000008e and 0x0000007e bug checks, the exception code is 0xc0000005 (Access Violation). When opening dump files in WinDbg, most details are exactly the same, for all the 0x0000008e and 0x0000007e bug checks respectively:

    0x0000008e:
    - Exception address: 0x808bc9e3
    - Trap frame: [varies]
    - FAILURE_BUCKET_ID: 0x8E_nt!HvpGetCellMapped+97
    - Probably Caused by (IMAGE_NAME): ntkrpamp.exe

    0x0000007e:
    - Exception address: 0x808369b6
    - Exception record address: 0xf70d3be0
    - Context record address: 0xf70d38dc
    - FAILURE_BUCKET_ID: 0x7E_nt!MmPurgeSection+14
    - Probably Caused by: memory_corruption

    About 30% of the crashes happen between 17:00 and 19:00, which leads me to believe they tend to happen more often during logoffs. But then again, only ~15% occur between 15:00 and 17:00.

    Summary of farm

    - Citrix Presentation Server 4.5 R06 on Windows Server 2003 R2 SP2
    - All high-priority patches installed, at least as of October
    - Virtualized using VMware ESX/vSphere 4.1 on HP ProLiant BL460c G6 blade servers
    - About 53 Presentation Servers in production, divided into three silos - only one of which, the largest, is affected
    - 2 vCPUs (5 GHz reserved) and 8 GB RAM (all reserved) for each Presentation Server
    - Plenty of free disk space
    - Very few printer drivers - automated deletion of non-approved drivers every night
    - ~1,000 peak concurrent users, reached at around 10:30 (on weekdays)
    - Number of sessions steadily declines between 15:00 and 19:00 to ~230
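
    To dig past the automated triage, a handful of stock WinDbg commands is usually enough to see which thread and module were actually on the stack when the access violation hit; the addresses come from the bugcheck output of each dump, and the symbol-server line assumes internet access from the analysis box:

      .symfix ; .reload            (use the public Microsoft symbol server and reload symbols)
      !analyze -v                  (the automated triage already run)
      .trap <trap-frame-addr>      (0x8E dumps: switch to the trap frame listed in the bugcheck data)
      .cxr <context-record-addr>   (0x7E dumps: switch to the saved context record, e.g. 0xf70d38dc)
      kb                           (stack trace for that context - third-party drivers here are the usual suspects)
      lm kv                        (loaded modules with versions and timestamps)
      !thread                      (which process/thread was current at the time of the crash)

    If the stacks keep ending in ntkrpamp.exe with "memory_corruption", enabling Driver Verifier (verifier.exe) against the Citrix, printer and antivirus drivers on one affected silo member is a common next step, since it turns silent corruption into an immediate, attributable bug check.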

    Read the article

  • mod_perl loses STDOUT in middle of request

    - by puzzled72
    Hi, I have been having this weird issue where mod_perl seems to lose STDOUT in the middle of the request. So far I have eliminated everything I could think of. You might have seen this bug related to the following errors in error_log:

      Apache2 IO flush: (103) Apache2::RequestIO::read: (104) Software caused connection abort

    They are all the same error. It happens when the perl script running under mod_perl loses STDOUT while trying to print the result back to Apache. I only notice this error on my servers running the following (CentOS 5.4):

    - Perl 5.8.8-27
    - mod_perl 2.0.4-6
    - httpd 2.2.3-31
    - kernel 2.6.18-164.15.1

    What I have ruled out so far:

    - It's not the code: this code has been working for months.
    - It's not network related: the browser gets the error response from Apache.
    - It's not time related: I get the error 15 or so seconds after I restart httpd.
    - It's not idle-httpd related: I have tried reducing the min/max SpareServers to 1.
    - It's not load related: I get the error even if there are only 10 sessions on httpd.
    - It's not related to the "fd < PERLIO_MAX_REFCOUNTABLE_FD" perl 5.8.8 bug: I recompiled perl-5.8.8 with the patch mentioned here: https://bugzilla.redhat.com/show_bug.cgi?id=559832 - same error.

    It appeared sometime between December 2009 and February 2010 - sorry I cannot be more specific. Anyone have any idea? Anything that I have not tested? Really puzzled!

    Read the article

  • How to run multiple instances of Tor?

    - by Ed
    I'm trying to set up a special proxy server (running Windows). It will have several instances of Privoxy and Tor running, and my app will choose which Privoxy instance to send HTTP requests to depending on the load. Privoxy will then forward them to Tor. I'm using srvany.exe to create the services. At the moment I'm running 3 Privoxy and 3 Tor services (I copied the binaries to different folders). Each Privoxy service is listening on its own port (8118, 8119, 8120); I can see them listening in a port scanner. This is the application path (for srvany in the registry) for the 1st service:

      C:\Anonymiser\Privoxy 01\privoxy.exe --service

    I've also configured the Tor services to listen on different ports (9050, 9052, 9054). This is the application path for the 1st service:

      C:\Anonymiser\Tor 01\tor.exe -f "C:\Anonymiser\Tor 01\torrc"

    The problem is, when I start the Tor services, only the first service I start is listening on its port. The others aren't listening, although they do listen if I run them separately. Any ideas what could be wrong? How can I make all 3 services listen on their assigned ports?

    This is one of my Privoxy configs:

      confdir .
      logdir .
      logfile privoxy.log
      debug 1      # show each GET/POST/CONNECT request
      debug 4096   # Startup banner and warnings
      debug 8192   # Errors - we highly recommended enabling this
      listen-address localhost:8118
      toggle 0
      enable-remote-toggle 0
      enable-remote-http-toggle 0
      enable-edit-actions 1
      buffer-limit 4096
      forwarded-connect-retries 0
      forward-socks4a / localhost:9050 .

    This is one of my Tor configs:

      ControlPort 9051
      Log notice stdout
      SocksListenAddress localhost
      SocksPort 9050

    EDIT: Found a workaround. The Tor binary wants a lock on a file in the AppData folder. Because all of them want a lock on the same file, only the first one I start will be working. The workaround is to run each Tor instance under a different account. Not the best solution, but it works.
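
    The lock file in question lives in Tor's DataDirectory, which defaults to a per-user location under AppData - so three services started under the same account all fight over the same directory. Giving each instance its own DataDirectory (and its own ControlPort) in its torrc should let them coexist under one account; a hedged sketch for the first instance, with the path being an assumption:

      # added to C:\Anonymiser\Tor 01\torrc (repeat with the 02/03 paths and ports for the other instances)
      DataDirectory C:\Anonymiser\Tor 01\Data
      ControlPort 9051    # each instance also needs a distinct ControlPort (e.g. 9053, 9055)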

    Read the article

  • Use Windows/Mac MySQL GUI over SSH Tunnel

    - by Marcin
    I am working on a client's website and he has hosting through 1and1. They don't allow connecting directly to their MySQL server from anywhere: I can't, for instance, load up a MySQL GUI on Windows and just connect and work on the databases - it says host not found. His hosting account, on the other hand, is given access to the MySQL server even though it is in a different location. Let's say these are the servers I'm working with:

    His main hosting:
    - Address: thehost.com
    - Username: joe

    His MySQL server:
    - Address: mysqlserver.com
    - Port: 3306
    - Database: thedata
    - User: dbouser

    The main hosting account he has comes with SSH. So if I SSH into thehost.com on port 22 and then use the mysql command to connect to mysqlserver.com, it works. I have tried to set up SSH tunneling, but the problem is that it's not the MySQL server that has SSH allowed, it's the main hosting. How do I set up SSH tunneling on both a Mac and a Windows machine so that I can run any GUI I want and be able to connect to the mysqlserver.com server - all based on the information above, where SSH access is to thehost.com only, and thehost.com itself can connect to mysqlserver.com?
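
    That is exactly the case SSH local port forwarding covers: the tunnel terminates on thehost.com, and thehost.com makes the final hop to mysqlserver.com, so the MySQL server itself never needs SSH. A sketch using the names above (the local port 3307 is arbitrary, and this assumes 1and1 permits TCP forwarding on the shell account):

      # On the Mac (or any machine with OpenSSH): forward local port 3307 to mysqlserver.com:3306 via thehost.com
      ssh -N -L 3307:mysqlserver.com:3306 joe@thehost.com

      # Then point the GUI or the mysql client at the local end of the tunnel
      mysql -h 127.0.0.1 -P 3307 -u dbouser -p thedata

    On Windows the equivalent in PuTTY is Connection > SSH > Tunnels: source port 3307, destination mysqlserver.com:3306, then open the session and connect the GUI to 127.0.0.1:3307.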

    Read the article

  • Unexplained CPU and Disk activity spikes in SQL Server 2005

    - by Philip Goh
    Before I pose my question, please allow me to describe the situation. I have a database server with a number of tables. Two of the biggest tables contain over 800k rows each. The majority of rows are less than 10k in size, though roughly 1 in 100 rows will be 1 MB to <4 MB. So out of the 1.6 million rows, about 16,000 of them will be these large rows. The reason they are this big is because we're storing zip files as binary blobs in the database, but I'm digressing.

    We have a service that runs constantly in the background, trimming 10 rows from each of these 2 tables. In the performance monitor graph above, these are the little bumps (red for CPU, green for disk queue). Once every minute we get a large spike of CPU activity together with a jump in disk activity, indicated by the red arrow in the screenshot. I've run the SQL Server Profiler, and there is nothing that jumps out as a candidate that would explain this spike. My suspicion is that this spike occurs when one of the large rows gets deleted. I've fed the results of the profiler into the tuning wizard, and I get no optimisation recommendations (i.e. I assume this means my database is indexed correctly for my current workload). I'm not overly worried, as the server is coping fine in all circumstances, even under peak load. However, I would like to know if there is anything else I can do to find out what is causing this spike.

    Update: After investigating this some more, the CPU and disk usage spike was down to SQL Server's automatic checkpoint. The database uses the simple recovery model, and this truncates the log file at each checkpoint. We can see this demonstrated in the following graph. As described on MSDN, the checkpoints will occur when the transaction log becomes 70% full and we are using the simple recovery model. This has been enlightening and I've definitely learned something!

    Read the article

  • CloneZilla PXE Boot Without NFS

    - by John
    I am trying to set up Clonezilla to be bootable via PXE without using NFS. I do not have NFS running on our PXE server and would like to keep it that way. However, most of the information that I have found online indicates that you need to set up NFS in order to PXE boot Clonezilla. I believe that I am pretty close to getting it to work, but am not sure where to go next. Listed below are the different PXE menu option configurations that I have used so far.

      LABEL Clonezilla Live
        MENU LABEL Clonezilla Live
        KERNEL utilities/clonezilla/vmlinuz
        APPEND initrd=utilities/clonezilla/initrd.img boot=live live-config noswap nolocales edd=on nomodeset ocs_live_run="ocs-live-general" ocs_live_extra_param="" ocs_live_keymap="" ocs_live_batch="no" o$

    I have also tried the following append lines, without success:

      APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=788 fetch=tftp://10.130.155.23/filesystem.squashfs
      APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=normal nomodeset nosplash fetch=tftp://10.130.155.23/filesystem.squashfs

    Each of them has resulted in a no-go with the following error: "Unable to find a live file system on the network". It looks like it gets to the point of trying to load the filesystem.squashfs file, hangs, and then throws the error. Any help would be greatly appreciated.
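
    TFTP is a common stumbling block here: filesystem.squashfs for Clonezilla Live is a file in the hundreds of megabytes, and many TFTP server/client combinations stall on files that size, which fits the hang-then-error behaviour. The usual NFS-free alternative is to serve the squashfs over HTTP and point fetch= at that instead - a hedged sketch, assuming a web server on the PXE host exposes the Clonezilla files under /clonezilla/:

      APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=788 fetch=http://10.130.155.23/clonezilla/filesystem.squashfs

    A quick way to confirm the file is actually reachable from a client on that network is curl -I http://10.130.155.23/clonezilla/filesystem.squashfs before rebooting anything.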

    Read the article

  • Dual NVidia graphics cards in Ubuntu / xorg.conf mania

    - by John Zwinck
    I have two NVidia graphics cards:

    - Quadro NVS 295 (PCI Express, dual DisplayPort outputs)
    - GeForce FX 5200 (PCI, DVI and VGA outputs)

    I have three identical monitors, two on DisplayPort and one on DVI. I'm on Ubuntu Hardy (and cannot currently dist-upgrade for separate reasons). I use the "nvidia" driver. What's new is the GeForce card and the third monitor. I currently have the dual DisplayPort monitors working fine. Here are the display-related parts of my xorg.conf:

      Section "ServerLayout"
        Identifier "Default Layout"
        Screen "PCI-Express Screen" 0 0
        # adding this makes X fail to start: Screen "PCI Screen" 0
        Inputdevice "Generic Keyboard"
        Inputdevice "Configured Mouse"
      EndSection

      Section "Module"
        Load "glx"    # not sure why/if this is needed
      EndSection

      Section "Monitor"
        Identifier "DELL 2408WFP"
        Option "DPMS"
      EndSection

      Section "Device"
        Identifier "NVIDIA Quadro NVS 295"
        Driver "nvidia"
        Option "RenderAccel" "true"
        Screen 0
        BusID "PCI:2:0:0"
      EndSection

      Section "Device"
        Identifier "NVIDIA GeForce FX 5200"
        Driver "nvidia"
        Option "RenderAccel" "true"
        Screen 1
        BusID "PCI:6:4:0"
      EndSection

      Section "Screen"
        Identifier "PCI-Express Screen"
        Device "NVIDIA Quadro NVS 295"
        Monitor "DELL 2408WFP"
        Defaultdepth 24
        Option "TwinView" "True"
        Option "UseEdidFreqs" "True"
        Option "MetaModes" "1920x1200 +0+1200, 1920x1200 +0+0"
      EndSection

      Section "Screen"
        Identifier "PCI Screen"
        Device "NVIDIA GeForce FX 5200"
        Monitor "DELL 2408WFP"
        Defaultdepth 24
        Option "TwinView" "True"
        Option "UseEdidFreqs" "True"
        Option "MetaModes" "1920x1200 +0+0"
      EndSection

    I use nvidia-settings to configure my monitors, and it does not show the second GPU. lspci, though, shows:

      02:00.0 VGA compatible controller: nVidia Corporation Unknown device 06fd
      06:04.0 VGA compatible controller: nVidia Corporation NV34 [GeForce FX 5200]

    which is where I got the BusID settings for the two devices (when I just had one device, I didn't have any BusID listed... and adding the BusID hasn't broken anything). What am I missing? How can I make nvidia-settings show my second GPU so I can then configure its monitor?

    Read the article

  • MySQL master-master setup as a way to simplify master-slave promotion

    - by Chris Go
    I'm trying to see if the following plan is viable. The goal here is HA (uptime), not load - writes are fine on one MySQL 5.5 server (with InnoDB), but not really possible when the database is down. Currently I have a master-slave replication setup, which works fine except that it doesn't have automatic promotion (obviously). What I am planning to do is set up master-master replication and get this "automatic promotion" using Amazon Route 53 DNS failover (health checks). What I am trying to avoid is having to do the auto-increment trick, because the "business folks" got used to the auto-incrementing PK as consecutive numbers (yeah, I know this is bad, but the data is from 2004). So: set up the master-master replication WITHOUT the auto-increment collision prevention bit. The primary master is db1.domain.com and the secondary master is db2.domain.com.

    In Amazon Route 53, set up a DNS failover record for db.domain.com:
    - primary failover is db1.domain.com, with a TCP health check on its IP address, port 3306
    - secondary failover is db2.domain.com, with a TCP health check on its IP address, port 3306

    Most of the time (99%), unless tcp://db1.domain.com:3306 is dead, db1.domain.com will be served up on DNS hits to db.domain.com. In fact, hopefully this is 100%. The possible downside is the loss of a primary key (collision), and I think I am OK with losing one order. We are a low-data-volume B2B business and can just call our client up if this occurs (like an order disappearing). Does this sound like a good plan? Then I will also run another slave replication off db1.domain.com as "master" to a slave-db1.domain.com - not sure why, maybe for heavy SELECTs?
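
    For reference, the only new wiring this plan needs is the reverse replication channel (db2 -> db1) on top of the existing db1 -> db2 one. A minimal sketch - the replication user, password and log coordinates are placeholders, and keeping the standby in read_only is my addition to narrow the PK-collision window during a failover (it does not affect the replication threads or SUPER users):

      # on db1: also replicate from db2 (get the coordinates from SHOW MASTER STATUS on db2 first)
      mysql -e "CHANGE MASTER TO MASTER_HOST='db2.domain.com', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4; START SLAVE;"

      # on db2: keep the standby read-only until Route 53 actually fails traffic over to it
      mysql -e "SET GLOBAL read_only = 1;"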

    Read the article

  • Office Communicator and cannot sync Address book error

    - by Noah
    We are trying to get OCS 2007 R2 up and running. The clients log in fine, but when I let it sit for a while, we still get the address book sync error message:

      "Cannot synchronize with the corporate address book. This may be because the proxy server setting in your web browser does not allow access to the address book. If the problem persists, contact your system administrator."

    When I try to download the file locally, this error comes up:

      Could not load file or assembly 'ABServerHttpHandler, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. Failed to grant minimum permission requests. (Exception from HRESULT: 0x80131417)

    I googled and came across this post (http://social.technet.microsoft.com/Forums/en/ocsaddressbook/thread/c28ff2d8-66a4-456c-a5ad-e445a667e8ed), which suggests removing and reinstalling .NET 2.0, but that didn't seem to resolve the issue either. When we run abserver.exe -validateDB it works properly. We even tried the suggestion from Greg's blog (http://blogs.technet.com/greganth/archive/2009/03/11/office-communicator-notifications-cannot-synchronize-address-book.aspx) about restarting the web component services, but that didn't work either. Still seeing the same issue. So does anyone have an idea of where we go from here?

    Read the article

  • A network-related or instance-specific error occurred while establishing a connection to SQL Server

    - by sf
    I'm getting the following error when trying to load an ASP.NET MVC app on IIS 7 with SQL Server 2008 Express. The app uses LINQ to SQL.

      A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)

    I've done some searching and all answers point to enabling TCP connections in the SQL Server configuration, which I have done to no avail. The connection string I am using is:

      Server=SERVERNAME\SQLEXPRESS;Database=DBName;Integrated Security=true

    The catch: I have another app that could already talk to the SQL Server just fine, even before playing around with the SQL Server configuration settings. The other app uses the following connection string:

      Data Source=SERVERNAME\SQLEXPRESS;Initial Catalog=OtherDbName;Integrated Security=True;Persist Security Info=False;Connect Timeout=120

    I've tried this connection string on the app that isn't working and it still doesn't work. Please help - I think I'm about to go crazy.
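
    Since another application on the same box reaches SERVERNAME\SQLEXPRESS fine, the instance itself is probably reachable, and the difference is more likely the identity the failing app runs under (with Integrated Security=true, it is the IIS application pool identity that actually logs in, and it needs its own SQL login). A couple of hedged checks to run on the web server - the Browser service name is the standard one, treat it as an assumption:

      rem Can this machine reach the instance at all, outside of IIS?
      sqlcmd -S SERVERNAME\SQLEXPRESS -E -Q "SELECT @@SERVERNAME"

      rem The \SQLEXPRESS named instance is resolved via the SQL Server Browser service - is it running?
      sc query SQLBrowser

    If sqlcmd connects, the next things to compare between the working and failing apps are the application pool identity and whether that identity has a login and a database user on DBName.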

    Read the article

  • URL Rewriting on GoDaddy Virtual Server

    - by Aristotle
    I migrated a Kohana 2 application from a shared-hosting environment over to a virtual dedicated server. After this migration, I can't seem to get my .htaccess file working again. I apologize up front, but over the years I have never experienced so much frustration with anything else as I do with the dreaded .htaccess file. Presently I have my project installed immediately within a directory in my public folder:

      /var/html/www/info.php (general information about server)
      /var/html/www/logo.jpg (some flat file)
      /var/html/www/somesite.com/[kohana site exists here]

    So my .htaccess file is within that directory, and has the following contents:

      # Turn on URL rewriting
      RewriteEngine On

      # Installation directory
      RewriteBase /somesite.com/

      # Protect application and system files from being viewed
      # This is only necessary when these files are inside the webserver document root
      RewriteRule ^(application|modules|system) - [R=404,L]

      # Allow any files or directories that exist to be displayed directly
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d

      # Rewrite all other URLs to index.php/URL
      RewriteRule .* index.php?kohana_uri=$0 [PT,QSA,L]

      # Alternativly, if the rewrite rule above does not work try this instead:
      #RewriteRule .* index.php?kohana_uri=$0 [PT,QSA,L]

    This doesn't work. The initial controller is loaded, since index.php is called up implicitly when nothing else is in the URL. But if I try to load some other non-default controller, the site fails. If I place index.php back in the URL, the call to other controllers works just fine. I'm really at my wits' end and would appreciate some direction here.

    Read the article

  • Why does my dd backup of MacBook OS X fail to boot upon restore?

    - by James
    I created a backup of a MacBook hard drive (WD2500BEVS-88US) by hooking it up as a secondary drive on my Linux system (Ubuntu 10.10). I used the following command:

      sudo dd if=/dev/sdc of=/home/backup.img bs=2M

    This appears to have completed with no errors. I noticed that the file is only 68 GB in size even though the drive is 250 GB in capacity. I restored the image to a spare drive (WD2500BEVS) with the following command:

      sudo dd if=/home/backup.img of=/dev/sdb bs=2M

    When I boot the spare drive in the Mac, it appears to start up for a few seconds and then shuts down (it does not appear to load into the OS at all). When I open up the drive that won't boot in GParted, it looks like this: [screenshot]. When looking at the information for the middle partition with the little red exclamation mark, it shows this: [screenshot]. The original hard drive that boots OK shows up like this: [screenshot]. Further info on both drives:

      sudo fdisk -l

      Disk /dev/sdb: 250.1 GB, 250059350016 bytes
      255 heads, 63 sectors/track, 30401 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdb1               1       30402   244198580   ee  GPT

      WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.

      Disk /dev/sdc: 250.1 GB, 250059350016 bytes
      255 heads, 63 sectors/track, 30401 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdc1               1       30402   244198580   ee  GPT

    So why is my backup or restore failing? Why is dd not creating a byte-for-byte duplicate?
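
    A complete image of that drive should be exactly 250059350016 bytes, so a 68 GB backup.img means dd stopped early - typically because the filesystem holding /home ran out of space or the source drive returned a read error - and the restored disk is therefore missing most of the original, which would explain both the damaged-looking partition in GParted and the failure to boot. A few hedged checks before re-imaging:

      # Compare what dd should have produced with what it did produce, and the free space it had to work with
      sudo blockdev --getsize64 /dev/sdc
      ls -l /home/backup.img
      df -h /home

      # Re-run the copy; conv=noerror,sync keeps going past bad sectors, and the final record counts show
      # whether the whole 250 GB were read (send SIGUSR1 to dd from another terminal for a progress report)
      sudo dd if=/dev/sdc of=/home/backup.img bs=2M conv=noerror,sync

      # Watch the kernel log for read errors on the source drive while it runs
      dmesg | tail

    If the source drive does turn out to have bad sectors, GNU ddrescue is usually a better tool than plain dd for getting a complete image off it.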

    Read the article

  • 3D Studio Max and 2+ CPUs - core limit?

    - by FreekOne
    Hi guys, I am scouting for parts to put in a new machine, and in the process, while looking at different benchmarks, I stumbled upon this benchmark, and it got me a bit worried. Quote from it:

      Noticably absent from this review is an old-time favorite, 3ds Max. I did attempt to run our custom 3ds Max benchmark on both the 2009 and 2010 versions of the software, but the application would simply not load on the Westmere box with hyper-threading enabled. Evidently Autodesk didn't plan far enough ahead to write their software for more than 16 threads. Once there is an update that addresses this issue, I will happily add 3ds Max back into the benchmarking mix.

    Since I was looking at dual hexa-core Xeons (X5650), that would put my future machine at 24 logical cores, which (duh) is well over 16 cores, and since I'm mostly building this for 3ds Max work, you can see how this would seriously spoil my plans. I tried looking for additional information on this potential issue, but the above article seems to be the only one that mentions it. Could anyone who has access to a 16+ core machine or in-depth knowledge about 3ds Max please confirm this? Any help would be much appreciated!
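
    If the report turns out to be accurate, one way to test it - or to work around it without touching the BIOS - is to cap the number of logical processors Windows exposes at boot, since 12 physical cores with Hyper-Threading disabled (or a numproc cap) stays at or under the alleged 16-thread limit. A hedged sketch using the standard boot-option tool (run from an elevated command prompt, then reboot):

      rem Boot Windows with only 16 logical processors visible
      bcdedit /set numproc 16

      rem Remove the cap again later
      bcdedit /deletevalue numproc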

    Read the article

  • Outgrew MongoDB … now what?

    - by samsmith
    We dump debug and transaction logs into MongoDB. We really like MongoDB because of its:

    - blazing insert performance
    - document-oriented model
    - ability to let the engine drop inserts when needed for performance

    But there is this big problem with MongoDB: the index must fit in physical RAM. In practice, this limits us to 80-150 GB of raw data (we currently run on a system with 16 GB RAM). So, for us to have 500 GB or a TB of data, we would need 50 GB or 80 GB of RAM. Yes, I know this is possible. We can add servers and use Mongo sharding. We can buy a special server box that can take 100 or 200 GB of RAM, but this is the tail wagging the dog! We could spend beaucoup $$$ on hardware to run FOSS, when SQL Server Express can handle WAY more data on WAY less hardware than Mongo (SQL Server does not meet our architectural desires, or we would use it!). We are not going to spend huge $ on hardware here, because it is necessary only because of the Mongo architecture, not because of the inherent processing/storage needs. (And sharding? Please! Cost aside, who needs the ongoing complexity of three, five, or more servers to manage a relatively small load?)

    Bottom line: MongoDB is FOSS, but we've got to spend $$$$$$$ on hardware to run it? We would rather buy commercial software! I am sure we are not the first to hit this issue, so we ask the community: where do we go next? (We already run Mongo v2.) Thanks!!

    Read the article

  • Why can't “knife data bag from file” find existing json file on chef server?

    - by ellisera
    Summary: I'm running into a problem with "knife data bag from file", where knife doesn't recognize the .json data bag file pulled down from a remote git repo.

    Background: I'm currently trying to transition from chef-solo use to Chef Server while using the cookbooks, data bags and other Chef info from our remote git repo. I've pulled down a copy of our git repo and set the cookbook path and data bag path in knife.rb. I also loaded the cookbooks, made adjustments, etc.

    Details: When trying to load our .json data bags by doing "knife data bag add from file FOLDER FILE", it looks like it worked, until I do "knife data bag list" and it comes up blank. So I decided to try adding the edit option at the end to see what's being loaded, if anything. This is the error I get:

      knife data bag from file local_settings test.json -e nano
      ERROR: Could not find or open file 'test.json' in current directory or in 'data_bags/local_settings/test.json'

    The data bag file does exist, in the proper location, in a tested, working JSON file. I've also sometimes gotten an error saying could not open data bag "local_settings". I would obviously like to keep the data bag path within the appropriate git repo folder, to be able to keep track of changes in a more centralized location (our git repo, as opposed to the Chef server). Any solutions, advice or pointers in the right direction are appreciated.
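
    Two things tend to trip this up: knife resolves the relative data_bags/ path against the directory it is run from (or against data_bag_path in knife.rb), and the bag itself has to exist on the Chef server before items can be loaded into it. A hedged sketch, assuming the repo layout is data_bags/local_settings/test.json and the item's "id" is "test":

      # run from the top of the repo checkout so data_bags/ resolves
      cd ~/chef-repo

      # create the (empty) bag on the server, then load the item from the file
      knife data bag create local_settings
      knife data bag from file local_settings data_bags/local_settings/test.json

      # confirm it arrived
      knife data bag list
      knife data bag show local_settings test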

    Read the article
