Search Results

Search found 24177 results on 968 pages for 'true'.

Page 670 of 968

  • jdbc4 CommunicationsException

    - by letronje
    I have a machine running a Java app talking to a MySQL instance running on the same machine. The app uses the JDBC4 drivers from MySQL. I keep getting com.mysql.jdbc.exceptions.jdbc4.CommunicationsException at random times. Here is the whole message: Could not open JDBC Connection for transaction; nested exception is com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 25899 milliseconds ago. The last packet sent successfully to the server was 25899 milliseconds ago, which is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem. On the MySQL side, the global 'wait_timeout' and 'interactive_timeout' are set to 3600 seconds and 'connect_timeout' is set to 60 seconds, so wait_timeout is much higher than the roughly 26 seconds (25899 ms) mentioned in the exception trace. I use DBCP for connection pooling and here is the Spring bean config for the datasource: <bean id="dataSource" destroy-method="close" class="org.apache.commons.dbcp.BasicDataSource"> <property name="driverClassName" value="com.mysql.jdbc.Driver"/> <property name="url" value="jdbc:mysql://localhost:3306/db"/> <property name="username" value="xxx"/> <property name="password" value="xxx" /> <property name="poolPreparedStatements" value="false" /> <property name="maxActive" value="3" /> <property name="maxIdle" value="3" /> </bean> Any idea why this could be happening? Will using c3p0 solve the problem?
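
    A common fix here is to make the pool validate connections before handing them out, so that connections MySQL has already closed for exceeding wait_timeout are discarded instead of being given to the application. A minimal sketch for the BasicDataSource above; the validation query and eviction intervals are illustrative values, not taken from the original setup:

      <bean id="dataSource" destroy-method="close" class="org.apache.commons.dbcp.BasicDataSource">
          <!-- driver, url and credentials as in the original config -->
          <!-- validate each connection on checkout; stale ones are dropped and replaced -->
          <property name="testOnBorrow" value="true"/>
          <property name="validationQuery" value="SELECT 1"/>
          <!-- optionally evict idle connections before MySQL's wait_timeout can hit them -->
          <property name="testWhileIdle" value="true"/>
          <property name="timeBetweenEvictionRunsMillis" value="300000"/>
          <property name="minEvictableIdleTimeMillis" value="600000"/>
      </bean>

    Switching to c3p0 on its own would not change anything; it only helps if its equivalent checks (for example testConnectionOnCheckout) are turned on.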

    Read the article

  • Check for unique rows, but ignore one particular column

    - by user269148
    I have an XML document that looks like this: columns A to S with headers, and there are 1922 rows. This is a backup of some SMS, and I want to get rid of duplicates. The problem is that the time in the readable_date header has been messed up. There is nothing wrong with the date, but the clock time is wrong, so I have split that column in three, with year, day and clock. I know I can use a standard filter, but it only looks for unique rows in a single column. What I want to perform is a row check similar to this: F(x) = check whether row 2, columns A onward, is equal to row 3, columns A onward, but ignore column R. If true, then delete row 3; otherwise check whether row 2 is equal to row 4, and so on. I need to ignore one particular column every time, and need to do this for the complete sheet. And the check should then apply to every row in turn, once the first one is done checking for duplicates... If anyone has a better solution, please say so. Anyway, anyone who can help?
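
    One hedged way to do this in a spreadsheet (assuming the data is in Excel or LibreOffice Calc and the column to ignore is R): build a helper column that joins every column except R into a single key, then flag any row whose key has already appeared above it. Columns T and U and the separator are arbitrary choices for the sketch:

      In T2:  =A2&"|"&B2&"|"&C2&"|"& ... &"|"&Q2&"|"&S2     (every column except R)
      In U2:  =IF(COUNTIF($T$2:T2,T2)>1,"DUPLICATE","")

    Fill both formulas down to row 1923, then sort or filter on column U and delete the rows marked DUPLICATE.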

    Read the article

  • Getting Started in SuSE as an Ubuntu User

    - by Subhamoy Sengupta
    I am not a Linux newbie, but haven't touched SuSE in a very, very long time (last time I tried it, it was SuSE 7!). Finally now I felt like giving it a try, and many things seem strange or unnecessarily complex. I have a series of questions. How do I ensure that my packages are up to date? It sounds silly, but I tried the obvious methods already. I have disabled the default repositories that show up when you do zypper lr, and added the Tumbleweed and Packman repositories (Essentials, Multimedia, Extra). Then I did a sudo zypper ref --force and then sudo zypper dup, and it tells me many dependencies are not met. I have already added solver.allowVendorChange=true to /etc/zypp/zypp.conf, so it should not care which repository the latest versions are in, and just upgrade to them. Even when I chose to skip the packages with unmet dependencies, and it seemed like quite a bit happened in the background, I opened Firefox afterwards and the version was 7! I am guessing things did not go as expected. But of course this is not a problem with SuSE; I am just not understanding the system right. How do I do it right? When I start typing the arguments of a command, for example sudo zypper install, and I type sudo zypper ins and keep hitting TAB, nothing happens! It always worked in Ubuntu and I feel very uneasy with this. Is this how SuSE is supposed to be? When I try to install something and I start writing its name, even though the package exists and I am sure of it, hitting TAB does not autocomplete it. This is also quite inconvenient. Why is it not happening? There are many things in SuSE that are really great, and I think I will stay with it and not go back to Ubuntu once I settle these very rudimentary issues. But right now they are giving me a lot of grief! Please help!
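
    A hedged sketch of the usual Tumbleweed upgrade sequence and of getting tab completion back; the repository part assumes solver.allowVendorChange=true is already in /etc/zypp/zypp.conf (newer zypper versions also accept --allow-vendor-change directly), and the completion part assumes the bash-completion package is what is missing:

      # refresh everything and do a full distribution upgrade across repositories
      sudo zypper ref
      sudo zypper dup

      # tab completion for commands, sub-commands and package names
      sudo zypper install bash-completion
      # then start a new login shell (or source the profile script it installs)

    If zypper dup still reports unmet dependencies, running it once per repository with zypper dup --from <repo-alias> can make it clearer which repository is causing the conflict.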

    Read the article

  • realtek lan driver problem - win 7

    - by tango _123
    Hi, my new notebook (ACER TM 8371) with Win 7 (x32) Professional as the OS has strange behaviour with respect to the LAN. When I plug or unplug my charger cable, or my notebook goes into stand-by, or I just do nothing, the network connection is lost. All the energy-saving functionality has been disabled. If I check the network adapter information, it says "a network cable is unplugged or corrupted", but that is not true... the cable is in and it is not damaged (I tried several which work on other machines). If I try to disable and then re-enable the adapter, it allows the disable but not the re-enable (it says no driver for the network card is available). In some cases the driver also disappears (I cannot see it any longer). If I restart the notebook, everything comes back to normal, but... today in 3 hours I had to restart 6 times!!! Can anyone help me? Win 7 says the driver is the latest one, so no need to update it. Here is some technical information about the network card and driver: Name [00000015] Realtek PCIe GBE Family Controller Adapter type Ethernet 802.3 Product type Realtek PCIe GBE Family Controller Installed YES ID device PNP PCI\VEN_10EC&DEV_8168&SUBSYS_02831025&REV_02\4&266E4C4F&0&00E2 last reset 20/11/2009 11:27 Index 15 Service name RTL8167 I/O Port 0x0000FF00-0x0000FFFF Memory address 0xE2500000-0xE2500FFF Memory address 0xE1400000-0xE140FFFF IRQ Channel IRQ 18 Driver c:\windows\system32\drivers\rt86win7.sys (7.3.522.2009, 164,00 KB (167.936 Byte), 28/08/2009 22:12)

    Read the article

  • Disk Activity Alert Windows SBS 2003 on Dell PowerEdge 830 with Raid

    - by Ron Whites
    Background: I have a Dell PowerEdge 830 server running Windows SBS 2003. It has 4 GB of RAM and an ATA CERC SATA 6CH controller with three 160 GB drives in a RAID 5 configuration. The problem: I am seeing Admin --- "Disk Activity Alert on Server" emails. These often occur when disk backups, defrag or other high disk usage is going on. Generally the server isn't overstressed. The Disk Alert emails say in part... The following disk has low idle time, which may cause slow response time when reading or writing files to the disk. Disk: 0 C: F: D: Review the Disk Transfers/sec and % Idle Time counters for the PhysicalDisk performance object. If the Disk Transfers/sec counter is consistently below 150 while the % Idle Time counter remains very low (close to 0), there may be a problem with the disk driver or hardware. The questions I have: With what utility can I review the Disk Transfers/sec and % Idle Time? It appears there is no utility for that on the server! I think I may need to download a very large (two-DVD) Dell "OpenManage" utility to be able to monitor the RAID system and see what the problem is. Is that true?
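
    On the first question, those counters can be read with tools that ship with Windows Server 2003 itself: Performance Monitor (Start > Run > perfmon, add the PhysicalDisk object), or from the command line with typeperf. A minimal sketch, sampling every 5 seconds until Ctrl+C:

      typeperf "\PhysicalDisk(_Total)\Disk Transfers/sec" "\PhysicalDisk(_Total)\% Idle Time" -si 5

    Monitoring the RAID controller itself is a separate matter; Dell's OpenManage Server Administrator is usually available as a standalone download rather than only as the full two-DVD media kit, though that depends on the server generation.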

    Read the article

  • Unable to Align Layers in Photoshop Properly with CS2

    - by Jonathan Sampson
    Cannot align semi-transparent items? Windows Vista, Photoshop CS2. Steps to repeat: Create a new document. Fill a circle on a new layer. Drop the opacity of the filled circle to 10%. Create a new empty layer below the circle layer. Merge the empty layer with the filled circle layer. Select the entire canvas. Attempt to align the layers to the selection (Layer > Align Layers To Selection > Vertical Centers). I get the following error: Could not complete the Vertical Centers command because there are no layers to be moved. Clearly this is not true, as I'm selecting the layer with the semi-translucent ball on it. Now, if you had tried this same command prior to step 5 (when the layer was at 10% opacity) it would have worked. Is there some way around this problem? I need to move layers around whose contents are largely transparent, even though the layer opacity itself is 100%. I've confirmed on another machine that this problem doesn't exist in CS3. It may exist in earlier copies of Photoshop, but I only have access to CS2 (has the problem) and CS3 (does not have the problem).

    Read the article

  • SQL Server 2008 Replication: Snap Shot Agent timing out on a specific table

    - by Nai
    Hi all, I am going through the process of setting up transactional replication on my setup. However, I am unable to generate a full snapshot of my database. I keep getting stuck at the same particular point of the process. From the snapshot agent: [46%] The process is running and is waiting for a response from the server. Other sources have mentioned increasing the timeout period, but I do not think this is the problem, as this occurs at around the 10 min mark, well within the default value of 1800s. So, I removed that article/table from the snapshot, and lo and behold, it works. So I was looking at the schema of the table, and on my identity column, NOT FOR REPLICATION was set to TRUE. All my other tables have it set to FALSE. Could this be the source of the error? Also, I tried changing this to false via the GUI but received a timeout error. Is there a script that allows me to change this? Thanks all.
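
    On the last point, a hedged T-SQL sketch for flipping the NOT FOR REPLICATION flag on an identity column without going through the table designer; the table name is a placeholder, and sys.sp_identitycolumnforreplication is an undocumented system procedure, so try it on a test copy of the database first:

      -- second argument: 1 = mark the identity column NOT FOR REPLICATION, 0 = clear it
      DECLARE @objid int = OBJECT_ID('dbo.MyProblemTable');
      EXEC sys.sp_identitycolumnforreplication @objid, 0;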

    Read the article

  • SQL Server 2000, large transaction log, almost empty, performance issue?

    - by Mafu Josh
    This is for a company whose database I have been helping troubleshoot. In SQL Server 2000, the database is about 120 gig. Something caused the transaction log to grow MUCH larger than normal, to over 100 gig: some hung transaction that didn't commit or roll back for a few days. That has been resolved and the log now stays around 1% full or less, due to its hourly transaction log backups. It IS my understanding that a GROWING transaction log file can cause performance issues. But what I am a little paranoid about is the size. Although mainly empty, MIGHT it be having a negative effect on performance? I haven't found any documentation that suggests this is true. I did find this link: http://www.bigresource.com/MS_SQL-Large-Transaction-Log-dramatically-Slows-down-processing-any-idea-why--2ahzP5wK.html but in that post I can't tell if their log was full or empty, and there are no replies to it. So I am guessing it is not a problem; anyone know for sure?
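
    If the decision is to reclaim the space, a hedged SQL Server 2000 sketch; the database name, logical log file name and target size are placeholders, and the shrink is best run right after a log backup so the active portion of the log sits at the start of the file:

      -- find the logical name of the log file
      USE CompanyDB;
      EXEC sp_helpfile;

      -- back up the log, then shrink the file to roughly 4 GB
      BACKUP LOG CompanyDB TO DISK = 'E:\Backups\CompanyDB_log.trn';
      DBCC SHRINKFILE (CompanyDB_log, 4000);   -- target size in MB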

    Read the article

  • Redirection of outbound UDP port.

    - by pboin
    For my residential service, I changed ISPs to Zoom/Armstrong. Just after that, my NTP daemons stopped working. I dug deep and diagnosed the problem: unprivileged ports are getting out. When I run 'ntpdate', for example, I go out on a high, unprivileged port, and get a response on UDP 123. That's fine. The 'ntpd' daemon, though, expects to go out on 123 and get its reply there as well. This must be a common problem, because it's directly addressed in the NTP troubleshooting guide. Just to see what would happen, I wrote a detailed email to the general support address at Armstrong. They replied almost immediately with a complete technical answer! They have everything <1024 blocked, except for a few ports to support outbound VPN. So, the question: Can I use iptables to essentially rewrite my outbound UDP 123 up to 2123 or something like that? If I do, does there need to be a corresponding 2123-to-123 rule to translate the reply? This seems like NAT, but with ports, not addresses. I tried, but can't seem to get iptables to do what I want. I'm not sure if it's my lack of skill, or if I'm trying the wrong solution. True, I could run ntpdate from cron, but that loses all of the adjustment smarts of NTP.
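
    A hedged iptables sketch of exactly that rewrite; the interface name is an example, and because the rule lives in the nat table, connection tracking translates the replies back automatically, so no separate 2123-to-123 rule is needed:

      # rewrite the source port of outgoing NTP packets from 123 to 2123
      iptables -t nat -A POSTROUTING -o eth0 -p udp --sport 123 --dport 123 \
          -j MASQUERADE --to-ports 2123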

    Read the article

  • Securing NTP: which method to use?

    - by Harry
    Can someone good at NTP configuration please share which method is the best/easiest to implement a secure, tamper-proof version of NTP? Here are some difficulties... I don't have the luxury of having my own stratum 0 time source, so must rely on external time servers. Should I read up on the AutoKey method or should I try to go the MD5 route? Based on what I know about symmetric cryptography, it seems that the MD5 method relies on a pre-agreed set of keys (symmetric cryptography) between the client and the server, and so is prone to a man-in-the-middle attack. AutoKey, on the other hand, does not appear to work behind a NAT or a masquerading host. Is this still true, by the way? (The reference link I found is dated 2004, so I'm not sure what the state of the art is today.) Also: are public AutoKey-talking time servers available? I browsed through the NTP book by David Mills. The book looks excellent in a way (coming from the NTP creator, after all), but the information therein is also overwhelming. I just need to first configure a secure version of NTP, and may worry later about its architectural and engineering underpinnings. Can someone please wade me through these drowning NTP waters? I don't necessarily need a working config from you, just info on which NTP mode/config to try and maybe also a public time server that supports that mode/config. Many thanks, /HS
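
    For the symmetric-key (MD5) route, a hedged sketch of the client side; the key ID, password and server name are placeholders, and the same key file entry has to exist on the server being queried, which is why this works with servers you have an arrangement with rather than with arbitrary public pool servers:

      # /etc/ntp.keys  (readable by root only)
      # id  type  password
      1     M     ExampleSharedSecret

      # /etc/ntp.conf
      keys /etc/ntp.keys
      trustedkey 1
      server ntp1.example.org key 1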

    Read the article

  • download and process a file by ftp at set intervals, with error handling, rescheduling and status messages

    - by compound eye
    I want to download a data file from a remote FTP server to my machine at regular intervals. Once the file is downloaded I want to call another script which will process the file. My development machine is Mac OS X; the eventual deployment environment is Linux. What would be the stock-standard way to automate this? I know I can use cron to schedule curl to download, and to run a script that will process the downloaded file, at regular intervals, and I know I could write a slightly more complex script or an application that would do this and add error handling, rescheduling and status emails. But one of my requirements for this project is to write as little custom code as possible; instead I should try to use standard, tried-and-true existing tools, and if I do have to write code, to try and write the most straightforward code possible. The reason for this is that the code will potentially be installed on a large number of machines, all of which will need to be tweaked, customised and maintained by different people, long after I am gone from the project, so the intention is to use well-documented, well-supported tools as much as possible. This seems such a common task, there must be tools and scripts all over the internet, written by people who have carefully considered everything that could possibly go wrong when you need to download and process a file from a remote server at regular intervals, with error handling, rescheduling and sending status messages. Is that what Expect is for? What would you recommend? (The system will be downloading weather prediction data every six hours, so that the system can prepare in the event of bad weather warnings.)
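
    For scale, a hedged sketch of what the cron-plus-curl version amounts to; the host, paths and mail recipient are placeholders, and curl's exit code is what drives both the retries and the status message:

      # /etc/cron.d/fetch-weather  - run every six hours as user "data"
      0 */6 * * *  data  /usr/local/bin/fetch_and_process.sh

      # /usr/local/bin/fetch_and_process.sh
      #!/bin/sh
      URL="ftp://ftp.example.org/forecasts/latest.grib"
      OUT="/var/data/latest.grib"
      # --retry covers transient failures; a non-zero exit means the download failed
      if curl --silent --show-error --retry 3 --retry-delay 300 -o "$OUT" "$URL"; then
          /usr/local/bin/process_forecast.sh "$OUT"
      else
          echo "Weather download failed on $(hostname) at $(date)" \
              | mail -s "weather fetch FAILED" ops@example.org
      fi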

    Read the article

  • Safe use of Update-FormatData?

    - by Steve B
    In a custom PowerShell module, I have at the top of my module definition this code: Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml") This is working fine, as all .ps1xml files are loaded. However, the module is sometimes loaded using Import-Module MyModule -Force (actually, this is in the install script of the module). In this case, the call to Update-FormatData fails with this error: Update-FormatData : There were errors in loading the format data file: Microsoft.PowerShell, c:\pathto\myfile.Types.ext.ps1xml : File skipped because it was already present from "Microsoft.PowerShell". At line:1 char:18 + Update-FormatData <<<< -AppendPath "c:\pathto\myfile.Types.ext.ps1xml" + CategoryInfo : InvalidOperation: (:) [Update-FormatData], RuntimeException + FullyQualifiedErrorId : FormatXmlUpateException,Microsoft.PowerShell.Commands.UpdateFormatDataCommand Is there a way to safely call this command? I know I can call Update-FormatData with no parameters, and it will update any known .ps1xml file, but this would only work if the file has already been loaded. Can I list somewhere the loaded format data files? Here is a bit of background: I'm building a custom module that is installed using a script. The install script looks like: [CmdletBinding(SupportsShouldProcess=$true,ConfirmImpact="High")] param() process { $target = Join-Path $PSHOME "Modules\MyModule" if ($pscmdlet.ShouldProcess("$target","Deploying MyModule module")) { if(!(Test-Path $target)) { new-Item -ItemType Directory -Path $target | Out-Null } get-ChildItem -Path (Split-Path ((Get-Variable MyInvocation -Scope 0).Value).MyCommand.Path) | copy-Item -Destination $target -Force Write-Host -ForegroundColor White @" The module has been installed. You can import it using : Import-Module MyModule Or you can add it in your profile ($profile) "@ Write-Warning "To refresh any open PowerShell session, you should run ""Import-Module MyModule -Force"" to reload the module" Import-Module MyModule -Force Write-Warning "This session has been refreshed." } } MyModule defines, as its first statement, this line: Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml") As I updated my $profile to always load this module, Update-FormatData has already been called by the time I run the install script. In the install script, I force-import the module, which fires the Update-FormatData line again, and that second call fails.
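
    A hedged sketch of one way to make the module tolerate a forced re-import: turn the error into a terminating one and swallow it, on the assumption that "already present" simply means the format data is registered and nothing further is needed. This is a workaround sketch, not the documented way to re-register format files:

      # at the top of MyModule.psm1
      $formatFiles = Join-Path $PSScriptRoot '*.ps1xml'
      try {
          Update-FormatData -AppendPath $formatFiles -ErrorAction Stop
      }
      catch {
          # On Import-Module -Force the files are usually already registered;
          # log and carry on instead of failing the whole import.
          Write-Verbose "Update-FormatData skipped: $_"
      }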

    Read the article

  • Is it possible to use rsync over sftp (without an ssh shell) ?

    - by Tom Feiner
    Rsync over ssh works great every time. However, trying to rsync to a host which allows only sftp logins, but not ssh logins, produces the following error: rsync -av /source ssh user@remotehost:/target/ protocol version mismatch -- is your shell clean? (see the rsync man page for an explanation) rsync error: protocol incompatibility (code 2) at compat.c(171) [sender=3.0.6] Here's the relevant section from the rsync man page: This message is usually caused by your startup scripts or remote shell facility producing unwanted garbage on the stream that rsync is using for its transport. The way to diagnose this problem is to run your remote shell like this: ssh remotehost /bin/true > out.dat then look at out.dat. If everything is working correctly then out.dat should be a zero length file. If you are getting the above error from rsync then you will probably find that out.dat contains some text or data. Look at the contents and try to work out what is producing it. The most common cause is incorrectly configured shell startup scripts (such as .cshrc or .profile) that contain output statements for non-interactive logins. Trying this on my system produced the following in out.dat: ssh-dummy-shell: Command not allowed. As I thought, the host is not allowing ssh logins. The following link shows that it is possible to accomplish this task using fuse with sshfs - however it is extremely slow, and not fit for production use. Is there any chance of getting rsync over sftp to work?

    Read the article

  • iCloud backup merges or overwrites?

    - by Joe McMahon
    The following happened today (it was six AM my time, so yeah, I was dumb and dropped stitches in this process): A friend had a problem with her iPhone and needed to reset it. Unfortunately she did the reset while connected to iTunes and the restore process kicked in. In my sleepy state, I told her to go ahead. She did, and restored the most recent local (iTunes) backup (from July last year - she doesn't back up often, as she has an Air which is pretty full). During setup on the phone, she was prompted to merge data with the iCloud copy, and did so. There was no "restore from iCloud" prompt. Obviously I should have made sure she was disconnected from iTunes before she did the reset, or had her set it up as a new device and then restored from iCloud, but water under the bridge now. (Side question: could I have had her disconnect and then restart the phone again and avoid this whole process?) The question is: was the "merge" that happened in this process a true merge, or a replace? Her passwords for Mail were wrong, since they were the old ones from the old backup. If she does the wipe data and restore from iCloud, will she get her old SMSes and calendar entries back? Or did the merge decide that the phone, despite it being "old" was right and therefore the SMSes, calendar entries, etc. were discarded? As a recovery option, I have a 4-day-old iTunes backup here from ~/Library/Application Support/MobileSync/Backup, but she and the phone are 3000 miles away, and it's 8GB, so I can't easily restore it for her. I do have the option of encrypting it and mailing it on a data stick if the iCloud backup is now toast. Should she try the wipe and restore from the cloud (after backing up locally), or should I just get the more-recent backup in the mail? My goal is to get everything (especially the SMSes) back to the most recent version possible.

    Read the article

  • What should the memory configuration be?

    - by AngryHacker
    We have a server (ProLiant DL585 G1 by HP) which hosts Windows 2003 x64 R2 with SQL Server 2005 x64 and a host of other apps. It currently has 6GB of RAM. We are currently very memory constrained and it's clear that we need more memory. 8GB will probably do the trick; however, we are not sure what memory configuration will give us the biggest performance bang for the buck. Currently all 8 memory slots are filled (4 slots have 1GB chips, while the other 4 slots have 512MB chips). Should we throw the 512MB sticks away and just replace them all with 1GB sticks? If we decided to go with a higher memory configuration (e.g. 10GB or 12GB or 16GB), is it advisable to keep all the sticks the same size, or does it not matter? I was once told that interleaved memory requires (for better performance) that memory be installed in matched multiples (e.g. 2, 4, 8 or 16 sticks, etc.). I am not even sure that the server has an interleaved configuration (and don't know how to find out), but is this true? Thanks.

    Read the article

  • Create custom launchers in GNOME 3

    - by hochl
    I'm using Debian testing, and I have been switched to GNOME 3 by the Debian update yesterday. I'm not very comfortable with the UI. I wanted to customize everything like I had it with GNOME 2, but I simply couldn't find any way to change preferences like I'm used to. I've dug around a bit, but none of the answers I could find helped me achieve my goals. So please, if anyone knows the solution to this I'd be thankful: 1) I want several launchers that launch terminals, with different arguments and different coloring/title. I have searched everywhere and there seems to be no menu, no right-click, nothing which is standard in any UI I know. How can I create several launchers in this bar on the left side that launch the same application, just with different parameters? With GNOME 2 this was a piece of cake. 2) I want to switch between different terminals using ALT-TAB. Right now, I'm always just getting to the same, already-opened terminal. When I open two terminals, by simply creating the second one by issuing xterm &, I still get one Terminal entry with ALT-TAB, and I have to navigate with the cursor keys or mouse wheel to select one of the two xterminals. Instead, I want to open a new terminal when I click the quick-launch terminal icon from the bar on the left side of the screen, and navigate through them like on KDE/GNOME 2/Windows/any reasonable UI. Can this be done? 3) Is there a trick to make bluetooth devices work like on GNOME 2? Right now, my BT keyboard won't pair anymore, which, as you can imagine, makes me pretty angry. And, if all else fails: 4) How can I switch back to GNOME 2 again? :-) Honestly, who designed this? What were they smoking? I feel like I'm not allowed to do anything except start one of the applications that has an icon, and just with the default parameters. That can't be true, right? I feel massively restrained by this stuff :(
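
    On question 1, a hedged sketch of the GNOME 3 way to get extra launchers: each one is a .desktop file under ~/.local/share/applications, which then appears in the Activities overview and can be pinned to the dash. The command, title and icon are examples:

      # ~/.local/share/applications/logs-terminal.desktop
      [Desktop Entry]
      Type=Application
      Name=Terminal (logs)
      Exec=gnome-terminal --title="logs" -e "tail -f /var/log/syslog"
      Terminal=false
      Icon=utilities-terminal

    One file per variant gives one launcher per variant. On question 2, GNOME Shell groups windows of the same application under a single ALT-TAB entry by default; ALT-` (the key above TAB) cycles through the windows inside that group.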

    Read the article

  • ability to see free/busy detail information for conference rooms in Outlook 2007 and Microsoft hosted Exchange solution

    - by Malav
    Recently my company migrated from an in-house Exchange server to the Microsoft hosted Exchange Online solution. My client is Outlook 2007. Before the migration, I could see the details of the meetings when I hovered over the busy blue bar for a resource such as a conference room. I could click on the meetings and see the invite list and the contents of the meeting. Of course, if the meeting was marked as private I could not. However, after the migration to the online solution, I cannot see the detailed information. I can still see whether the room is busy or not, but I can no longer see the details of that meeting. The IT folks can see the information, and they claim that they can see it because they have full admin rights. Their claim is that in the hosted Exchange solution you can either have full (admin) access and see the details, or see nothing beyond the fact that the room is busy; there is no middle ground such as being able to see the details of the meeting without having any admin rights. For some reason I believe this to be untrue. Can someone please verify my doubts and tell me what needs to be done to see that information, if my IT folks are wrong? Thanks.
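
    For what it's worth, that middle ground normally exists as a calendar folder permission on the room mailbox rather than as an admin right: an administrator (or the room's owner) raises the Default user's access above the usual AvailabilityOnly level. A hedged remote PowerShell sketch, with the room address as a placeholder:

      # in a remote PowerShell session connected to Exchange Online
      Set-MailboxFolderPermission -Identity "confroom1@example.com:\Calendar" `
          -User Default -AccessRights LimitedDetails   # or Reviewer for full details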

    Read the article

  • Firefox won't start

    - by Daniel R Hicks
    OK, I've got this problem again, only this time the problem only seems to affect Firefox and Thunderbird. Rebooted several times. Tried resetting to the last restore point, but that didn't work. Tried setting a new Firefox profile, and that didn't work either. The symptom is that you click on the Firefox or Thunderbird icon, the process appears in the Process Explorer list, but the window never opens. Curiously, if Firefox has been "started" this way, Internet Explorer hangs starting until I kill the Firefox process. Any ideas? I suppose the next thing to try is uninstalling and reinstalling Firefox/Thunderbird, but this whole thing is getting old. The box is a Sony Vaio running Windows Vista. It was completely restored from scratch less than two weeks ago, after the last fiasco. (I'm suspecting that my aborted install of Acronis True Image may have mucked things up this time.) Sigh! Another symptom: It occurred to me to try printing something, but if I open "Printers" it just sits there "searching". So something is rotten in the bowels of Windows. Minor update: It occurred to me to kill Internet Explorer (where I'd attempted printing). Then Printers comes up fairly quickly -- with no printers defined. Clicking "Add a printer" does nothing. Update: Well, following this suggestion to stop and restart the print spooler brought the printers back. And, wonder of wonders, Firefox now starts OK. Stopping and restarting the print spooler!!
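
    For the record, a hedged sketch of the spooler restart that cleared it, from an elevated command prompt (Run as administrator):

      net stop spooler && net start spooler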

    Read the article

  • Use preforker(ruby gem) with supervisor

    - by user1548832
    I also asked the same question on stackoverflow.com: http://stackoverflow.com/questions/13871169/use-preforkerruby-gem-with-supervisor But superuser.com might be a better fit, so can anyone answer this? I want to run a server program using the preforker ruby gem under supervisor, but an error occurs. I wrote the following test program using preforker. #!/usr/bin/env ruby require 'rubygems' require 'preforker' Preforker.new(:app_name => 'test-preforker', :timeout => 60, :workers => 1) do |master| while master.wants_me_alive? do puts "hello" sleep 10 end end.run And the following supervisor config. [program:test-preforker] command=/home/tkono/tmp/test-preforker.rb stdout_logfile_maxbytes=1MB stderr_logfile_maxbytes=1MB stdout_logfile=/var/log/%(program_name)s.log stderr_logfile=/var/log/%(program_name)s.log autorestart=true Then, reload supervisor. # supervisorctl reload Restarted supervisord Here is the supervisor log file. 2012-12-13 17:50:47,161 CRIT Supervisor running as root (no user in config file) 2012-12-13 17:50:47,163 WARN Included extra file "/etc/supervisor.d/test-preforker.ini" during parsing 2012-12-13 17:50:47,209 INFO RPC interface 'supervisor' initialized 2012-12-13 17:50:47,213 CRIT Server 'unix_http_server' running without any HTTP authentication checking 2012-12-13 17:50:47,215 INFO supervisord started with pid 12437 2012-12-13 17:50:48,231 INFO spawned: 'test-preforker' with pid 12440 2012-12-13 17:50:48,233 INFO exited: test-preforker (exit status 1; not expected) 2012-12-13 17:50:49,248 INFO spawned: 'test-preforker' with pid 12441 2012-12-13 17:50:49,261 INFO exited: test-preforker (exit status 1; not expected) 2012-12-13 17:50:51,267 INFO spawned: 'test-preforker' with pid 12442 2012-12-13 17:50:51,284 INFO exited: test-preforker (exit status 1; not expected) 2012-12-13 17:50:54,305 INFO spawned: 'test-preforker' with pid 12443 2012-12-13 17:50:54,308 INFO exited: test-preforker (exit status 1; not expected) 2012-12-13 17:50:55,311 INFO gave up: test-preforker entered FATAL state, too many start retries too quickly Please tell me what is wrong. Can a program using preforker not run under supervisor? preforker https://github.com/dcadenas/preforker supervisor http://supervisord.org/index.html
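
    A hedged first debugging step, given that supervisor reports exit status 1 immediately: invoke the script through the interpreter explicitly and make sure its stderr ends up in the log, so the underlying Ruby error (script not executable, wrong ruby, missing gem) becomes visible. Paths are examples matching the config above:

      [program:test-preforker]
      command=/usr/bin/ruby /home/tkono/tmp/test-preforker.rb
      directory=/home/tkono/tmp
      redirect_stderr=true
      stdout_logfile=/var/log/test-preforker.log
      autorestart=true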

    Read the article

  • What hardware would I need (approx) to run ESXi server?

    - by mr.b
    Hi, I am considering purchasing off-the-shelf commodity hardware in order to build a server that will host virtual machines using ESXi. The intended purpose for this server is NOT mission-critical tasks. It will have to run perhaps 20-50 Windows XP/Vista/7 virtual machines (in total, but closer to the 20 figure). Each guest would have to have 1-2 GB of RAM, and probably two to three times more disk space than the guest OS needs with a clean install and all updates applied (that would be around 6-8 GB for XP, and I believe closer to 10-15 GB for Win7). Those guests will act as a test ground for a new product that is network management software, thus the guests will idle most of their time once initially loaded, but if I give them some task to complete, they should be able to perform reasonably well. Now, from what I have learned... CPU is usually not much of an issue (6 cores would do it), memory should not be lacking, but doesn't have to be the sum of all guests, because of overcommitment... That leads me to IO, which is, as it seems, the bottleneck. Since I have very little experience with ESXi (and ESX, too), I'd like to ask: How much memory could I save by overcommitment, and how does it affect performance? Is a 6-core CPU enough to run the above-described system? Would it be possible to run the entire server off two (or even one) SSD drives hosting the guest system disks, with a few additional HDDs (2-3) in RAID 0 to be used as secondary storage? I read somewhere that ESXi allows having something like a "master image", essentially a virtual machine that is "deployed" many times, so that disk space can be saved by having only the differences stored by specific guests, instead of copying around whole virtual disks. Is this true, and how can this help me? Are there any other things I need to take into consideration when building this off-the-shelf solution? I should probably mention here that I'm fully aware of issues like SPOF regarding the power supply, RAID 0, etc., but since it's only a testing ground and not a production system, it's not so important to me. Thanks, B.

    Read the article

  • MacBook Pro (2007 - ATI-Card) Screen goes half black then shuts down the entire computer

    - by BluePerry
    Hey, when I'm starting my MBP I get about 10-60 seconds into the booting process before the screen goes half black (it's like applying a black-to-transparent gradient from the left side of the screen to the middle of the screen). After about 5 seconds the entire screen blacks out and the MBP shuts down. There have been issues with heat and some minor artifacts on the screen in the past, but nothing serious as far as I know. Everything could be resolved by rebooting or letting the MBP cool down a bit. I would like to identify the problem now. There are 3 possibilities I can think of right now: My graphics card is defective and shuts down after a few seconds. (But if that's true, I find it kind of hard to explain the HALF black screen.) My LCD panel is defective and sends back wrong signals so the system shuts down??? (I'm not sure about that.) My fan and/or thermal sensors are defective and are forcing the graphics card to shut down. Can anyone point out whether one of these is right, or whether there's yet another reason for this? I'd be thankful for any hints or tips. Cheers, Per

    Read the article

  • Why does 3.5 mm audio out work through headphones but not through external speakers?

    - by Rickster
    I have a computer that has a 3.5 mm audio jack on the front of it. The computer itself has no speakers, so this is the only way to hear sound. If I plug headphones into it, the audio properly plays through the headphones, and if I plug in external speakers it used to play through them as well. Just today I turned on my computer and the audio no longer plays through the speakers, but if I plug in the headphones instead it works. The speakers aren't broken, as both the speakers and headphones work in my iPod and play music. I thought that 3.5 mm jacks could not send data back to the computer, and the computer had no way of differentiating between different devices plugged into the jack. If this is true, how is it that the computer plays audio through the headphones but not through speakers plugged into the same 3.5 mm jack, and both devices are functional? Or is my knowledge on 3.5 mm jacks incorrect? I don't believe drivers are important, as the same driver runs the 3.5 mm jack for all devices, but if necessary I can provide additional information. Any ideas would be appreciated. Thanks!

    Read the article

  • Manual Http error response code in non-existent folder via routing

    - by Slytherin
    Apache server running on an Ubuntu-like Linux. I am getting unexpected behaviour when I try to manually send an error response. If my .htaccess is responsible for the error response, then the appropriate error document is loaded and displayed, with the corresponding response code in the browser console. However, if my router is the origin of the response code, then I get a blank screen, but the correct response code. .htaccess looks like this: RewriteEngine On # RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule !\.(css|js|icon|zip|rar|png|jpg|gif|pdf)$ index.php [L] ErrorDocument 404 /err/404.html ErrorDocument 403 /err/403.html ErrorDocument 500 /err/500.html The part of my router that sends the response is the following: header("HTTP/1.1 403 Forbidden"); Trying this format didn't help either: header("HTTP/1.1 403 Forbidden", TRUE, 403); I also tried HTTP/1.0. Furthermore, I was thinking that maybe the relative path to the error page might be an issue, but discarded this idea after attempting to access a document that is forbidden via .htaccess. EDIT: I should also point out that this scenario happens when a URL for a non-existent article is requested. Is it possible that the server is looking for a .htaccess file in a folder based on the URL? E.g. for domain/blog/non-existent, is the server looking for a blog folder? I am specifically asking this because there is no blog folder.
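
    This matches a hedged but common explanation: ErrorDocument only kicks in for errors Apache generates itself; once the request has been rewritten to index.php, a status code set by PHP is passed straight through, so the router has to emit the error page body as well as the status. A minimal sketch, reusing the error pages from the .htaccess above:

      <?php
      // inside the router, when access is denied
      header('HTTP/1.1 403 Forbidden');
      readfile($_SERVER['DOCUMENT_ROOT'] . '/err/403.html');
      exit;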

    Read the article

  • Configuring nginx to check for hard files in only a few directories,

    - by Evan Carroll
    For a node.js project I'm doing, I have a tree like this: +-- public │   +-- components │   +-- css │   +-- img +-- routes +-- views Essentially, I have the root set to public. I want all requests destined for /components/, /css/ and /img/ to check whether their targets actually exist on disk. However, I don't want requests to other directories to even run an IO operation: /foo/asdf /bar /baz/index.html None of those should result in the disk being touched. I have a stanza that does the proxy to node.js: location @proxy { internal; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-NginX-Proxy true; proxy_pass http://localhost:3030; proxy_redirect off; } I would just like to know how to arrange this. My problem would be easily solved if try_files took a single argument, but it always wants a file first. location /components/ { try_files $uri @proxy } location /css/ { try_files $uri @proxy } location /img/ { try_files $uri @proxy } However, there is nothing that I can find that will give me: location / { try_files @proxy } How do I get the effect I want?
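
    A hedged sketch of one arrangement that gives that effect with stock directives: keep try_files only for the three static prefixes, and give the catch-all location the proxy directives directly so it never consults the filesystem. The root path is an example; the proxy settings are the ones from the @proxy block above:

      root /srv/app/public;

      location ~ ^/(components|css|img)/ {
          try_files $uri @proxy;    # stat the file, fall back to node.js if absent
      }

      location / {
          # no try_files here, so nginx does no filesystem lookup for these requests
          proxy_set_header Host $http_host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-NginX-Proxy true;
          proxy_pass http://localhost:3030;
          proxy_redirect off;
      }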

    Read the article

  • How to reduce celeryd memory consumption?

    - by Gringo Suave
    I'm using celery 2.5.1 with django on a micro ec2 instance with 613mb memory and as such have to keep memory consumption down. Currently I'm using it only for the scheduler "celery beat" as a web interface to cron, though I hope to use it for more in the future. I've noticed it is the biggest consumer of memory on my micro machine even though I have configured the number of workers to one. I don't have many other options set in settings.py: import djcelery djcelery.setup_loader() BROKER_BACKEND = 'djkombu.transport.DatabaseTransport' CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler' CELERY_RESULT_BACKEND = 'database' BROKER_POOL_LIMIT = 2 CELERYD_CONCURRENCY = 1 CELERY_DISABLE_RATE_LIMITS = True CELERYD_MAX_TASKS_PER_CHILD = 20 CELERYD_SOFT_TASK_TIME_LIMIT = 5 * 60 CELERYD_TASK_TIME_LIMIT = 6 * 60 Here's the details via top: PID USER NI CPU% VIRT SHR RES MEM% Command 1065 wuser 10 0.0 283M 4548 85m 14.3 python manage_prod.py celeryd --beat 1025 wuser 10 1.0 577M 6368 67m 11.2 python manage_prod.py celeryd --beat 1071 wuser 10 0.0 578M 2384 62m 10.6 python manage_prod.py celeryd --beat That's about 214mb of memory (and not much shared) to run a cron job occasionally. Have I done anything wrong, or can this be reduced about ten-fold somehow? ;) Update: here's my upstart config: description "Celery Daemon" start on (net-device-up and local-filesystems) stop on runlevel [016] nice 10 respawn respawn limit 5 10 chdir /home/wuser/wuser/ env CELERYD_OPTS=--concurrency=1 exec sudo -u wuser -H /usr/bin/python manage_prod.py celeryd --beat --concurrency=1 --loglevel info --logfile /var/tmp/celeryd.log Update 2: I notice there is one root process, one user child process, and two grandchildren from that. So I think it isn't a matter of duplicate startup. root 34580 1556 sudo -u wuser -H /usr/bin/python manage_prod.py celeryd wuser 577M 67548 +- python manage_prod.py celeryd --beat --concurrency=1 wuser 578M 63784 +- python manage_prod.py celeryd --beat --concurrency=1 wuser 271M 76260 +- python manage_prod.py celeryd --beat --concurrency=1
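
    A hedged sketch of the usual memory-saving split for this situation: since only the scheduler is needed for now, run celerybeat on its own instead of a worker started with --beat, so there is a single small process rather than a worker pool. The stanza mirrors the upstart job above, and assumes django-celery's celerybeat management command:

      description "Celery beat scheduler only"
      start on (net-device-up and local-filesystems)
      stop on runlevel [016]
      nice 10
      respawn
      chdir /home/wuser/wuser/
      exec sudo -u wuser -H /usr/bin/python manage_prod.py celerybeat --loglevel info --logfile /var/tmp/celerybeat.log

    A worker process can be added back as a separate job later, once there are real tasks to execute.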

    Read the article
