Search Results

Search found 25180 results on 1008 pages for 'post processing'.

Page 541/1008 | < Previous Page | 537 538 539 540 541 542 543 544 545 546 547 548  | Next Page >

  • Apache MaxClients reaching max and locking the server

    - by Rodrigo Sieiro
    Hi. I currently have an Apache2 server running with mpm-prefork and mod_php on an OpenVZ VPS with 512M real / 1024M burstable RAM (no swap). After running some tests, I found that the maximum process size Apache reaches is 23M, so I've set MaxClients to 25 (23M x 25 = 575 MB, OK for me). I decided to run some load tests on my server, and the results left me puzzled. I'm using ab on my desktop machine, requesting the main page of a WordPress blog. When I run ab with 24 concurrent connections, everything seems fine. Sure, CPU goes up and free RAM goes down, and the result is about 2-3s response time per request. But if I run ab with 25 concurrent connections (my server limit), Apache just hangs after a couple of seconds. It starts processing the requests, then it stops responding, CPU goes back to 100% idle and ab times out. The Apache log says it reached MaxClients. When this happens, Apache keeps itself locked up with 25 running processes (they're all in "W" if I check server-status) and only after the TimeOut setting (in my case 45) do the processes start to die and the server starts responding again.

    My question: is that expected behaviour? Why does Apache just die when it reaches MaxClients? If it works with 24 connections, shouldn't it work with 25, just taking a bit more time to respond to each request and queueing up the rest? It seems strange to me that anyone running ab can single-handedly kill a web server just by setting the concurrent connections to the server's MaxClients.
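
    For reference, a minimal prefork tuning sketch along the lines described above (the figures come from the poster's own 23 MB x 25 arithmetic, not a recommendation; ListenBacklog is an assumption about what might matter for the queueing question, not a confirmed fix):

        <IfModule mpm_prefork_module>
            StartServers           5
            MinSpareServers        5
            MaxSpareServers       10
            ServerLimit           25
            MaxClients            25    # 25 x ~23 MB keeps prefork under ~575 MB
            MaxRequestsPerChild  500    # recycle children to bound per-process growth
            ListenBacklog        511    # excess connections wait in the kernel backlog
        </IfModule>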

    Read the article

  • Windows 7 .NET 3.5.1 - 2.0 Slightly Corrupted, How to Repair?

    - by Quinxy von Besiex
    My Windows 7 included .NET installation (3.5 down to 2.0) appears very slightly and very specifically corrupted, and I am trying to fix it without reinstalling Windows or reverting to backups. Everything was working, and then my hard drive started corrupting a few files and chkdsk found bad clusters, so I imaged the drive to a new one. As soon as I booted on the new drive, everything worked except programs which call the System.Net.NetworkInformation methods within .NET 3.5 to 2.0 (like Ping() and IsNetworkAvailable()), which immediately crash the app making the calls (the same calls in .NET 4.0 work fine). Those methods are found inside System.dll, and I assume they call native methods which I believe are inside winnsi.dll or iphlpapi.dll or something else (I've not found this yet); I assume it calls native methods because the exception which causes the crash is a Fatal Execution Engine Error, which people mention is usually related to calling native methods and marshaling data between them.

    A huge clue about the culprit is likely found in the fact that when I launch the exact same crashing application through a code profiler (which executes the exe and captures stats on which methods took the longest), the app works fine, no crash at all! How could running it within the profiler work and running it outside not work? That seems to be the key to the mystery. I've used procmon to catch all the registry, filesystem, and network events from the crashing execution and the profiler-run successful execution and compared the two outputs, but didn't learn much (I can see the moment at which the non-profiled app crashes, but up until then they behave the same and load the same modules). The only big difference seems to be that at the moment before the app crashes, the profiler-executed code creates 4-6 new threads and the directly executed code only creates 1-2.

    I have diffed the files/directories which seemed most relevant (the .NET stuff under Windows and Program Files) pre- and post-disk-trouble and seen no changes where I didn't expect any (no obvious file corruption). I have diffed the software and system registry hives pre- and post-disk-trouble and seen no changes which seemed relevant. I have created a new user account and cleaned up any environment variables in case the environment was related. No change. I ran "sfc /scannow" and it found no integrity problems. I tried "ngen update" to regenerate pre-compiled code in case I had missed something that might be damaged, and nothing changed.

    I assume I need to repair my .NET installation, but because Windows 7 includes .NET 3.5 - 2.0 you can't just re-run a .NET installer to redo it. I do not have access to the Windows disks to try to re-install Windows over itself (the computer has a recovery partition, but it is unusable); also, the drive uses a whole-disk encryption solution and re-installing would be difficult. I absolutely do not want to start from scratch here and install a fresh Windows, reinstall dozens of software packages, and try to remember dozens of development-related customizations, etc. Given all that... does anyone have any helpful advice? I need .NET 3.5 - 2.0 working, as I am a developer and need to build and test against it. Thanks! Quinxy

    Read the article

  • Kernel Mode Rootkit

    - by Pajarito
    On the other 3 computers in my family, I believe that we have a kernel-mode rootkit for Windows. It appears that the same rootkit is on all of them. We think. We changed all the important passwords from my computer, which is running Linux right now. On all of the infected computers is Symantec Endpoint Protection, because it's free from the university where my mom and dad work. In my opinion Symantec is a piece of crap, seeing as it didn't even manage to delete the tracking cookies it found when I tried it on my own computer.

    The computers and their set-ups:
    - Computer A: Vista Business; Symantec antivirus; runs as admin, no password; IE8 with default security settings; no other security software other than what comes with Windows.
    - Computer B: XP Home Premium; Symantec antivirus; runs as normal user, no password; admin account with weak password; Spybot; uses IE8 with default settings, sometimes Firefox.
    - Computer C: XP Home Premium; Symantec antivirus; runs as normal user, no password; admin account with weak password; uses IE8 with default settings; no other security programs except what came with Windows.

    This is what's happening, cut and pasted from my dad's forum post:

    -- When I scanned my laptop (Dell XPS M1330 with Windows Vista Small Business), Symantec Endpoint Protection hangs for a while, perhaps 10 seconds or so, on some of the following files: 9129837.exe, hide_evr2.sys, VirusRemoval.vbs, NewVirusRemoval.vbs, dll.dll, alsmt.ext, and _epnt.sys. It does this if I run a scan that I set up to run on a new thumb drive, and it does this even if the thumb drive is not plugged in. It doesn't seem to do this if I scan only the C: drive. I've checked for problems with Symantec Endpoint Protection and also with Microsoft Security Essentials and Malwarebytes Anti-Malware. They found nothing, and I can't find anything by searching for hidden files. Next I tried Microsoft's RootkitRevealer. It finds 279660 (or so) discrepancies and the interface is so glitchy after that I can't really figure out what is going on. The screen is squirrely. RootkitRevealer pulls up many files in the folder \programdata\applicationdata, and there are numerous appended \applicationdata on the end of that as well. --

    As you can see, what we did was install MSE and MBAM and scan with both of them. Nothing but a tracking cookie. Then I took over and ran rootkitrevealer.exe from Microsoft from a flash drive. It found a bunch of discrepancies, but only about 20 or so were security related, the rest being files that you just couldn't see from Windows Explorer. I couldn't see whether or not the files listed above, the ones that the scan was hanging on, were in the list. The other thing is, I have no idea what to do about the things the scan comes up with. Then we checked the other computers, and they do the same thing when you scan with Symantec. The people at the university seem to think that dad might not have a virus, but 2 of the computers slowed down noticeably AND IE8 started acting all funny.

    None of my family is very computer oriented, and 2 of the possible causes for the rootkit are:
    - My dad bought a new flash drive, which shipped with a data security executable on it
    - My dad has to download lots of articles for his work
    Those are the only things that stand out, but it could have been anything. We are currently backing up our data, and I'll post again after trying IceSword 1.22. I just looked at my dad's forum topic, and someone recommended GMER. I'll try that too.

    Read the article

  • CENTOS 6 - How to install php-mysql when php-common @remi is present?

    - by Multitut
    I am having trouble adding MySQL support to my PHP installation; the installation was made using a ready-to-use package that came with our VPS. This is my php info page: http://snake.quetzalcoatech.com/info.php

    I am trying to install php-mysql using:

        yum install php-mysql

    and get this output:

        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * base: mirrors.serveraxis.net
         * extras: mirror.fdcservers.net
         * updates: bay.uchicago.edu
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package php-mysql.x86_64 0:5.3.3-14.el6_3 will be installed
        --> Processing Dependency: php-common = 5.3.3-14.el6_3 for package: php-mysql-5.3.3-14.el6_3.x86_64
        --> Finished Dependency Resolution
        Error: Package: php-mysql-5.3.3-14.el6_3.x86_64 (updates)
               Requires: php-common = 5.3.3-14.el6_3
               Installed: php-common-5.3.17-2.el6.remi.x86_64 (@remi)
                   php-common = 5.3.17-2.el6.remi
               Available: php-common-5.3.3-3.el6_2.8.x86_64 (base)
                   php-common = 5.3.3-3.el6_2.8
               Available: php-common-5.3.3-14.el6_3.x86_64 (updates)
                   php-common = 5.3.3-14.el6_3
         You could try using --skip-broken to work around the problem
         You could try running: rpm -Va --nofiles --nodigest

    I am a noob using Linux, so could you tell me which command I should use to install a compatible php-mysql module? Thank you so much!
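
    One way to reconcile the versions (a hedged sketch - it assumes the Remi repository that installed php-common is still configured on the box, and that staying on Remi's 5.3.17 PHP build is acceptable) would be something like:

        # Install the MySQL extension from the same repo that provided php-common
        yum --enablerepo=remi install php-mysql

        # Or, alternatively, fall back to the stock CentOS PHP packages
        # yum downgrade php-common && yum install php-mysql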

    Read the article

  • Apache2 refuses to process php files - "Snow Leopard" OSX 10.6.4

    - by w-01
    I have a MacBook Pro i5. My understanding is that by default it should be able to serve PHP 5. I have uncommented the relevant line in /etc/apache2/httpd.conf:

        LoadModule php5_module libexec/apache2/libphp5.so

    I have restarted Apache with sudo apachectl -k restart, and when I try to access a file with a .php extension, Apache prompts me to download the file. I.e., instead of processing the PHP and sending me HTML, it thinks I want to download the file. When I look in the Apache error log I see this:

        [Fri Nov 12 10:16:14 2010] [notice] Apache/2.2.14 (Unix) PHP/5.3.2 mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_wsgi/3.2 Python/2.6.1 configured -- resuming normal operations

    so it looks like php5 is loading properly. I'd like to know either: how do I fix this? Or: how do I reinstall Apache2 so that it's like I just installed the OS? Thanks in advance.

    Update: @Zayne - the end of my httpd.conf has Include /private/etc/apache2/other/*.conf, and I have a file /etc/apache2/other/php.conf with the contents

        <IfModule php5_module>
            AddType application/x-httpd-php .php
            AddType application/x-httpd-php-source .phps
            <IfModule dir_module>
                DirectoryIndex index.html index.php
            </IfModule>
        </IfModule>

    @Zayne I've already copied php.ini.default to php.ini in the same folder. When I run sudo apachectl configtest I get

        /usr/sbin/apachectl: line 82: ulimit: open files: cannot modify limit: Invalid argument
        httpd: Could not reliably determine the server's fully qualified domain name, using ::1 for ServerName
        Syntax OK

    Furthermore, I decided to try apachectl -M, which shows all loaded modules. Most importantly, in the list of loaded modules I got

        Loaded Modules:
            php5_module (shared)

    Since the module is being loaded, it seems like the issue has more to do with making Apache use the PHP engine to process the .php files... so something wrong with the IfModule directive?
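
    One hedged thing to try (this is an assumption about the cause, not a confirmed fix) is an explicit handler mapping in /etc/apache2/other/php.conf instead of relying on AddType alone:

        <IfModule php5_module>
            # Map .php explicitly to the PHP handler
            <FilesMatch "\.php$">
                SetHandler application/x-httpd-php
            </FilesMatch>
        </IfModule>

    Then restart with sudo apachectl -k restart and re-test with something like curl -I http://localhost/info.php to see whether the Content-Type comes back as text/html rather than the raw file.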

    Read the article

  • Server high memory usage at same time every day

    - by Sam Parmenter
    Right, we moved one of our main sites onto a new AWS box with plenty of grunt, as it would allow us more control than we had before and future-proof ourselves. About a month ago we started running into issues with high memory usage at the same time every day. In the morning an export is run to dump data to a file, which is then FTPed to a local machine for processing. The issues were coinciding with the rough time of the export, but when we didn't run the export one day, the server still ran into the same issues. The export has since been run at other times in the day while monitoring memory usage to see if it spikes. The conclusion is that the export is fine and barely touches the sides memory-wise; no noticeable change in memory usage.

    When the issue happens, its effect is to kill MySQL and require us to restart the process. We think it might be a MySQL memory issue, but it might just be that MySQL is the first to feel it. Looking at the logs, there is no particular query run before the memory usage hits 90%. When it strikes at about 9:20am, the memory usage spikes from a near-constant 25% to 98% and very quickly kills MySQL to save itself. It usually takes about 3-4 minutes to die. There are no cron jobs running at that time of day, and we haven't noticed a spike in traffic over the period of the issues. Any help would be massively appreciated! Thanks.
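
    To narrow down which process is actually eating the memory when the spike hits, a simple sampling sketch (assumes a cron-capable Linux box; the path, interval and window are illustrative):

        # /etc/cron.d/memwatch -- snapshot the top memory consumers every minute during the 09:00 hour
        * 9 * * * root (date; free -m; ps aux --sort=-rss | head -15) >> /var/log/memwatch.log 2>&1

    Comparing the snapshots from just before 9:20 against the 25%-baseline ones should show whether the growth is in mysqld itself or in something else that then starves it.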

    Read the article

  • Google Play Music Not Adding MP3s On-Demand

    - by J0e3gan
    My recent attempts to add music on-demand to Google Play Music have yielded nothing - no "Processing music..." or "Added __ of __" messages, just nothing. Previously I could add music on-demand, and nothing has changed on the machine from which I successfully added music before and from which I have tried to add music on-demand recently. What could be hampering my ability to add music on-demand?

    WHAT I'VE TRIED: Right after I started using GPM, I briefly found that I could not add music (on-demand), but the problem went away after a logout/login. This time a logout/login has not helped. Dragging and dropping or browsing to folders or files to add has made no difference either. Nor has waiting ridiculously long for GPM to show signs of life after adding music on-demand. Digging deeper, I read a related Google Play Help article and followed its suggestions:
    - ran the Google Play Music Manager troubleshooter = no errors or warnings
    - double-checked my available storage = 8 GB free
    - double-checked supported file types = MP3 is still supported (of course)
    ...but the problem remains.

    UPDATE: I found that if I configure GPM to automatically upload music added to specific folders, it strangely does add automatically what it will not add on-demand.

    Read the article

  • Mac Port error installing gsoap

    - by Kevin
    Hi all, I have installed MacPorts v1.8.1 with no worries. I ran sudo port -v selfupdate, no worries. I ran sudo port install gsoap and get the following error message:

        --->  Computing dependencies for gsoap
        --->  Fetching gsoap
        --->  Attempting to fetch gsoap_2.7.13.tar.gz from http://optusnet.dl.sourceforge.net/gsoap2
        --->  Verifying checksum(s) for gsoap
        --->  Extracting gsoap
        --->  Applying patches to gsoap
        --->  Configuring gsoap
        Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_gsoap/work/gsoap-2.7" && ./configure --prefix=/opt/local --enable-samples " returned error 77
        Command output:
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... yes
        checking for gawk... no
        checking for mawk... no
        checking for nawk... no
        checking for awk... awk
        checking whether make sets $(MAKE)... no
        checking build system type... i386-apple-darwin10.2.0
        checking host system type... i386-apple-darwin10.2.0
        checking whether make sets $(MAKE)... (cached) no
        checking for C++ compiler default output file name...
        configure: error: C++ compiler cannot create executables
        See `config.log' for more details.
        Error: Status 1 encountered during processing.

    Any ideas as to why it is failing? Regards, Kevin
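
    The "C++ compiler cannot create executables" line usually points at the toolchain rather than at gsoap itself. A hedged first check (this assumes Xcode is meant to supply the compiler on this Snow Leopard box):

        # Verify a working C++ compiler is on the PATH; if this fails,
        # (re)install Xcode with its command-line tools before retrying the port.
        g++ --version
        echo 'int main(){return 0;}' > /tmp/t.cpp && g++ /tmp/t.cpp -o /tmp/t && /tmp/t && echo "compiler OK"

        # Then clean the failed build and try again
        sudo port clean gsoap
        sudo port install gsoap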

    Read the article

  • Group Policy GPO not 'seen' at client

    - by fukawi2
    I have a new OU (natorg.local\NATO\Users) that I am trying to apply GP to. I have created a new user in this OU, and linked these 3 GPOs to this OU:

        DESKTOP - Folder Redirection (AppData)
        DESKTOP - Folder Redirection (Desktop)
        DESKTOP - Folder Redirection (Documents)

    Hopefully the names are sufficient to suggest what they do exactly. The settings are under User Settings, so there is no Loopback processing required (if my understanding is correct). GP Modelling for the user and specific computer says that the GPOs will/should be applied; however, on the client, gpresult doesn't even appear to see the GPOs under either "Applied" or "Not Applied":

        USER SETTINGS
        --------------
            CN=Amir,OU=Users,OU=NATO,DC=natorg,DC=local
            Last time Group Policy was applied: 25/06/2012 at 11:07:13 AM
            Group Policy was applied from:      svr-addc-01.natorg.local
            Group Policy slow link threshold:   500 kbps

            Applied Group Policy Objects
            -----------------------------
                LAPTOPS - Power Settings
                WSUS - Set Server Address
                OUTLOOK - Auto Archive
                SECURITY - Lock Screen After Idle
                Default Domain Policy
                DESKTOP - Regional Settings
                NETWORK - Proxy Configuration
                NETWORK - IE General Config
                OFFICE - Trusted Locations
                OFFICE - Increase Privacy
                OUTLOOK - Disable Junk Filter
                DESKTOP - Disable Windows Error Reporting
                DESKTOP - Hide Language Bar
                NETWORK - Disable Skype
                DESKTOP - Disable Thumbs.db Creation
                WSUS - Set Server Address

            The following GPOs were not applied because they were filtered out
            -------------------------------------------------------------------
                Local Group Policy
                    Filtering:  Not Applied (Empty)
                NETWORK - Google Chrome Configuration
                    Filtering:  Not Applied (Empty)
                SYSTEM - Event Log Configuration
                    Filtering:  Not Applied (Empty)
                SECURITY - Local Administrator Password
                    Filtering:  Not Applied (Empty)
                NETWORK - Disable Windows Messenger
                    Filtering:  Not Applied (Empty)
                SECURITY - Audit Policy
                    Filtering:  Not Applied (Empty)
                WSUS - Automatic Install
                    Filtering:  Not Applied (Empty)
                NETWORK - Firewall Configuration
                    Filtering:  Not Applied (Empty)
                DESKTOP - Enable Offline Files
                    Filtering:  Not Applied (Empty)

    I haven't altered permissions on the GPOs at all, and there is no WMI filtering... As I said, GP Modelling says that they should be applied. gpresult on the client correctly identifies itself as being in the correct OU (CN=Amir,OU=Users,OU=NATO,DC=natorg,DC=local). There are 2 x 2008R2 DCs and a 2003 DC, the domain is at the 2003 functional level, and the client is Windows XP SP3. Can anyone suggest why these GP objects would be "invisible" to the client?
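
    For completeness, the kind of client- and DC-side diagnostic pass that usually accompanies this check (standard Windows tools; the verbose gpresult output is what would show a GPO being skipped for security-filtering, slow-link or replication reasons):

        rem On the XP client
        gpupdate /force
        gpresult /scope user /v > "%TEMP%\gpresult-user.txt"

        rem On a domain controller, confirm the new GPO links have replicated everywhere
        repadmin /replsummary
        dcdiag /test:dns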

    Read the article

  • Using multiple computers effectively

    - by Benjamin Oakes
    I have some extra (old) Macs and PCs around the house and a MacBook that's sometimes overworked. I'm looking for tips on using multiple computers effectively. Basically, I'd like to add to the following list. Here's what I'm using so far:

    - Teleport: lets you use a single mouse and keyboard to control several Macs, like Synergy
    - Built-in file sharing: lets me run programs on another Mac, but only maintain one copy of the data
    - Bazaar: distributed version control
    - Mail.app, Thunderbird, etc.: IMAP for my mail accounts
    - TuneConnect: control iTunes on another Mac with a nice interface, using the library on my MacBook (if I choose it by pressing option at startup) over file sharing
    - OmniFocus: syncs across computers pretty seamlessly
    - Web browsing across computers
    - VNC/Remote Desktop
    - Running X-windows programs using ssh -Y hostname for headless operation (but they die when I sleep the connecting computer -- something like GNU screen would be ideal; see the sketch below)
    - Plain-old ssh with GNU screen

    Really, a better idea of what I do might be necessary. Generally, though, I'd like to distribute tasks across more than one computer when possible, but not have much overhead in doing so. The perfect solution? An Xgrid-like program that pushes processing across multiple computers automatically and seamlessly (although that seems unlikely).

    Here's what I have, in case it makes a difference:

    - MacBook (Dual 2.16 GHz, OS X 10.6.3)
    - eMac (1.25 GHz, OS X 10.4.11, soon to be 10.5)
    - Dell Dimension (800 MHz, some version of Ubuntu) -- no dedicated monitor
    - PowerMac G3 (400 MHz, OS X 10.4.11) -- no dedicated monitor
    - iMac G3 DV (400 MHz, OS X 10.4.11) -- currently in the kitchen for recipes, email, web browsing, music, movies (DVDs), etc.

    (Total, they cost me around $650, mostly for the MacBook. Freecycle is wonderful, just in case you haven't heard of it.) I'm really only using the MacBook and eMac at this point, but I'd like to push more onto it and possibly the PowerMac and Dell.
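
    A minimal sketch of the ssh -Y plus GNU screen combination mentioned in the list above (hostnames and session names are illustrative; note that a detached screen keeps terminal programs alive, but X11 windows forwarded with -Y still close when the forwarding connection drops):

        # Start (or reattach to) a named screen session on the remote machine
        ssh -Y emac.local
        screen -dR work        # -dR: reattach if it exists, create it otherwise

        # ...run long jobs inside the session, detach with Ctrl-a d, then later,
        # from any machine:
        ssh -t emac.local screen -r work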

    Read the article

  • Getting error ArrayIndexOutOfBoundsException: 2048 at everything I'm doing in Eclipse

    - by Bernhard V
    Hi, whatever I'm doing in Eclipse, I get an error. At start-up I get an error at "Java tooling initializing". I get an error when I want to open a type. And it's always the same error. For example, when opening a type I get:

        An internal error occurred during: "Cache refresh".
        2048

    The error at start-up also reports the error code as 2048. I'm using the most up-to-date version of Eclipse. Do you know a way to fix this issue?

    Edit: here is the stack trace of the error at Java tooling initializing:

        java.lang.ArrayIndexOutOfBoundsException: 2048
            at org.eclipse.jdt.internal.core.index.DiskIndex.readStreamChars(DiskIndex.java:870)
            at org.eclipse.jdt.internal.core.index.DiskIndex.initialize(DiskIndex.java:370)
            at org.eclipse.jdt.internal.core.index.Index.<init>(Index.java:96)
            at org.eclipse.jdt.internal.core.search.indexing.IndexManager.getIndex(IndexManager.java:248)
            at org.eclipse.jdt.internal.core.search.indexing.IndexManager.getIndexes(IndexManager.java:309)
            at org.eclipse.jdt.internal.core.search.PatternSearchJob.getIndexes(PatternSearchJob.java:81)
            at org.eclipse.jdt.internal.core.search.PatternSearchJob.ensureReadyToRun(PatternSearchJob.java:50)
            at org.eclipse.jdt.internal.core.search.processing.JobManager.performConcurrentJob(JobManager.java:174)
            at org.eclipse.jdt.internal.core.search.BasicSearchEngine.searchAllTypeNames(BasicSearchEngine.java:1122)
            at org.eclipse.jdt.core.search.SearchEngine.searchAllTypeNames(SearchEngine.java:713)
            at org.eclipse.jdt.internal.ui.dialogs.FilteredTypesSelectionDialog$ConsistencyRunnable.refreshSearchIndices(FilteredTypesSelectionDialog.java:653)
            at org.eclipse.jdt.internal.ui.dialogs.FilteredTypesSelectionDialog$ConsistencyRunnable.run(FilteredTypesSelectionDialog.java:636)
            at org.eclipse.jdt.internal.ui.dialogs.FilteredTypesSelectionDialog.reloadCache(FilteredTypesSelectionDialog.java:679)
            at org.eclipse.ui.dialogs.FilteredItemsSelectionDialog$RefreshCacheJob.run(FilteredItemsSelectionDialog.java:1502)
            at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
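
    The trace points at the on-disk JDT search index (DiskIndex), so one commonly suggested, but here unconfirmed, recovery step is to throw away the workspace's search indexes and let Eclipse rebuild them. A sketch, assuming a default workspace layout (the workspace path is a placeholder):

        # Close Eclipse first, then remove the JDT search indexes; they are rebuilt on next start
        cd /path/to/workspace/.metadata/.plugins/org.eclipse.jdt.core
        rm -f *.index savedIndexNames.txt

        # A heavier fallback is starting Eclipse once with the -clean flag:
        # eclipse -clean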

    Read the article

  • F5 Networks iRule/Tcl - Escaping UNICODE 6-character escape sequences so they are processed as and r

    - by openid.malcolmgin.com
    We are trying to get an F5 BIG-IP LTM iRule working properly with SharePoint 2007 in an SSL termination role. This architecture offloads all of the SSL processing to the F5 and the F5 forwards interactive requests/responses to the SharePoint front end servers via HTTP only (over a secure network). For the purposes of this discussion, iRules are parsed by a Tcl interpretation engine on the F5 Networks BIG-IP device. As such, the F5 does two things to traffic passing through it: Redirects any request to port 80 (HTTP) to port 443 (HTTPS) through HTTP 302 redirects and URL rewriting. Rewrites any response to the browser to selectively rewrite URLs embedded within the HTML so that they go to port 443 (HTTPS). This prevents the 302 redirects from breaking DHTML generated by SharePoint. We've got part 1 working fine. The main problem with part 2 is that in the response rewrite because of XML namespaces and other similar issues, not ALL matches for "http:" can be changed to "https:". Some have to remain "http:". Additionally, some of the "http:" URLs are difficult in that they live in SharePoint-generated JavaScript and their slashes (i.e. "/") are actually represented in the HTML by the UNICODE 6-character string, "\u002f". For example, in the case of these tricky ones, the literal string in the outgoing HTML is: http:\u002f\u002fservername.company.com\u002f And should be changed to: https:\u002f\u002fservername.company.com\u002f Currently we can't even figure out how to get a match in a search/replace expression on these UNICODE sequence string literals. It seems that no matter how we slice it, the Tcl interpreter is interpreting the "\u002f" string into the "/" translation before it does anything else. We've tried various combinations of Tcl escaping methods we know about (mainly double-quotes and using an extra "\" to escape the "\" in the UNICODE string) but are looking for more methods, preferably ones that work. Does anyone have any ideas or any pointers to where we can effectively self-educate about this? Thanks very much in advance.
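
    For the escaping question specifically, a hedged Tcl sketch (it assumes the response body has already been collected with HTTP::collect in HTTP_RESPONSE, and uses the standard iRule payload commands): in Tcl, braces suppress backslash substitution, so a word written as {\u002f} stays the literal six characters instead of being collapsed to "/" before the search runs.

        when HTTP_RESPONSE_DATA {
            # Braced words keep \u002f literal; double-quoted strings would turn it into "/"
            set body [string map [list {http:\u002f\u002f} {https:\u002f\u002f}] [HTTP::payload]]
            HTTP::payload replace 0 [HTTP::payload length] $body
            HTTP::release
        }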

    Read the article

  • Does Windows XP automatically reassemble UDP fragments?

    - by Matt Davis
    I've got a Windows application that receives and processes XML messages transmitted via UDP. The application collects the data using Windows "raw" sockets, so the entire layer 3 packet is visible. We've recently run across a problem that has me stumped. If the XML messages (i.e., UDP packets) are large (i.e., 1500 bytes), they get fragmented as expected. Ordinarily, this will cause the XML processor to fail because it attempts to process each UDP packet as if it is a complete XML message. This is a known short-coming in the system at this stage of its development. On Windows 7, this is exactly what happens. The fragments are received and logged, but no processing occurs. On Windows XP, however, the same fragments are seen, and the XML processor seems to handle everything just fine. Does Windows XP automatically reassemble UDP fragments? I guess I could expect this for a normal UDP socket, but it's not expected behavior for a "raw" socket, IMO. Further, if this is the case on Windows XP, why isn't the behavior the same on Windows 7? Is there a way to enable this?

    Read the article

  • Flash alternative for iBook Mac?

    - by Hunter Dolan
    I have an old Apple iBook G4 that I decided to hook up to my main TV. I like the setup because I can surf the internet on my TV now. The only thing that I can't seem to do is watch Flash videos. Apparently Flash Player 10 doesn't play nice with the iBook's graphics card's GPU, leaving all the graphics processing to the CPU, which is a disaster. Others suggested downgrading to Flash Player 9; I did that, and YouTube worked fine, but Hulu (the main reason I wanted to hook it up to the TV in the first place) did not. Anyone know of a Flash alternative or a Flash 10 fix for the iBook? Or even a Hulu client that doesn't require Flash. Here are my iBook's specs:

        Model Name:          iBook G4
        Model Identifier:    PowerBook6,5
        Processor Name:      PowerPC G4 (1.2)
        Processor Speed:     1.2 GHz
        Number Of CPUs:      1
        L2 Cache (per CPU):  512 KB
        Memory:              512 MB
        Bus Speed:           133 MHz
        Boot ROM Version:    4.8.7f1
        Mac OS X Version:    10.5.8

    PS: Don't tell me that I need to buy a new computer. I know that I would have better results with a new computer, but I don't want to buy a new computer just for Hulu.

    Read the article

  • Is 30 calls / second a lot for one IIS server?

    - by Lieven Cardoen
    We have a RIA application that 300 clients use concurrently in an intranet environment. Together they make 30 calls/second to IIS (ASP.NET) (actually it's 60, but the calls are load-balanced over two IIS servers). Half of the calls retrieve an asset (a caching profile is used, so most of the time the cache is hit); the other half save data to a SQL Server. Retrieving an asset is done with an .aspx page. Saving the data happens via WebORB, ASP.NET and SQL Server, so some processing is needed by WebORB (AMF decoding, GZIP, ...). We also use Spring.NET, and some of the container objects have a request scope (not a lot).

    IIS servers: virtual machines, 4 CPUs, 2 GB RAM, based on Windows 2008 x64 SP2 Enterprise Edition. SQL Server 2008 is used. Apparently the CPU of both IIS servers is constantly around 60-70%. Now, my question: is a load of 60-70% acceptable, and how could we possibly bring it down (maybe to the point of using only one IIS server)? Also, is 2 GB RAM enough? Assets can be up to 20 MB, but on average they are about 30 KB (the 60-70% load is reached with assets around 30 KB). The data that gets saved via WebORB is very small (2 KB) and is just one object.

    Read the article

  • virtualbox snapshot size

    - by intuited
    I've started using Windows 7 under VirtualBox on an Ubuntu 10.10 host. I took about 6 snapshots over the course of setting up the VM from the Windows restore image that came with the computer. My installations were more or less limited to Windows updates, antivirus, and the VirtualBox Guest Additions; I uninstalled much more than I installed. The VM was running for about 24 hours total. The snapshots increased in size at a worrisome rate, even when the machine was idle: the snapshot .vdi file for the period between 11:22 PM and 9:02 AM is 6 GB in size, and during that time very little happened. The other .vdi files are between 0.5 and 3 GB, most between 1 and 2 GB. The corresponding .sav files are between 0.5 and 1 GB. The Internet connection where I was doing this is limited to 30 KB/s download, which, constantly saturated, works out to less than 3 GB per 24-hour period. Is this normal? Is there something that can be done to make snapshots more practical?

    Update: on starting up the VM again, I've noticed that mscorsvw is using significant processing time. Apparently this process precompiles .NET assemblies. This may have been going on during the period when I was taking snapshots, which might explain some of the snapshot size increase. I would be somewhat surprised to learn that this could be responsible for over 10 GB of additional disk usage, or that it would run for roughly 24 hours. Is this possible?
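
    For inspecting where the space is going and reclaiming it afterwards, a hedged sketch with VBoxManage (the VM and disk names are placeholders; --compact only helps once the guest's free space has been zeroed, e.g. with sdelete inside Windows):

        # See every snapshot and its backing differencing image
        VBoxManage snapshot "Win7" list --details
        VBoxManage list hdds

        # After merging/deleting snapshots you no longer need...
        VBoxManage snapshot "Win7" delete "pre-antivirus"
        # ...compact the remaining disk image
        VBoxManage modifyhd "Win7.vdi" --compact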

    Read the article

  • iRedMail home setup - use different SMTP relay for different destination domains

    - by John
    Hello helpful server folks, I'm messing with iRedMail. I've mostly been successful, I think I have an SMTP problem. I have changed RoundCube (webmail) to use BrightHouse's, my ISP's, SMTP server for outgoing. It works fine, I click send and poof, I have gmail. I can reply from gmail to my email server, and it works. It took 10 hours for the email to show up, which is a different problem, I think, but it does work. But when I send from my server TO my own server, my ISP's Postmaster account sends me a cryptic blurb. I just got off the phone with them, and they say it "should work", and that they can't reach my pop3 server. (pop3, pop3s, imap, and imaps are all open on my router and forwarded to the server, I'm not sure what I need, I'm just covering my bases...) pop3 and/or imap as external interfaces are just formalities, I really just want webmail to work. Roundcube only takes one SMTP server in its configs. How can I configure Postfix to relay / forward emails to my ISP's SMTP, while taking messages bound for my own domain and processing them? Since my ISP won't let me "bounce" my emails off of it. Maybe I'm vastly misunderstanding how e-mail works in general: To receive mail, I should only need port 25, SMTP, open to the internet, correct? Should I be concerned about some authentication failure from the outside to my relay? (My relay requires user/pass to use, my ISP's requires none.)
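
    A hedged sketch of the usual Postfix split between "mail for my domain" and "everything else" (domain and relay names are placeholders; iRedMail generates its own main.cf and uses virtual domains, so these lines would be adapted to that layout rather than pasted wholesale):

        # /etc/postfix/main.cf (excerpt)
        myhostname     = mail.example.com
        mydestination  = $myhostname, example.com, localhost
        # Anything NOT handled locally is handed to the ISP's relay
        relayhost      = [smtp-server.brighthouse.example]:25

    With something like that in place, Roundcube can point at the local Postfix as its only SMTP server: mail addressed to the local domain is delivered locally, and mail for anything else is forwarded to the ISP's relay.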

    Read the article

  • Debian Squeeze vzquota

    - by benjamin
    Hello, apparently I got Debian Squeeze (Debian 6) to work on a VPS using debootstrap and chroot as described here. Subsequent installation of the harden, exim4, and mysql-server packages failed partially. Relevant information:

        insserv: warning: script 'S10vzquota' missing LSB tags and overrides
        insserv: warning: script is corrupt or invalid: /etc/init.d/../rc6.d/S00vzreboot
        insserv: warning: script 'vzquota' missing LSB tags and overrides
        insserv: There is a loop between service vzquota and stop-bootlogd if started
        insserv:  loop involving service stop-bootlogd at depth 2
        insserv:  loop involving service vzquota at depth 1
        insserv:  loop involving service rsyslog at depth 1
        insserv: Starting vzquota depends on stop-bootlogd and therefore on system facility `$all' which can not be true!
        insserv: Starting vzquota depends on stop-bootlogd and therefore on system facility `$all' which can not be true!
        insserv: There is a loop between service vzquota and stop-bootlogd if started
        insserv: Starting vzquota depends on stop-bootlogd and therefore on system facility `$all' which can not be true!
        insserv: Starting vzquota depends on stop-bootlogd and therefore on system facility `$all' which can not be true!
        insserv: exiting now without changing boot order!
        update-rc.d: error: insserv rejected the script header
        dpkg: error processing exim4-base (--configure):
         subprocess installed post-installation script returned error exit status 1

    Any suggestions? Keywords: vzquota, Debian Squeeze, installation, VPS, virtual private server.
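
    insserv is rejecting /etc/init.d/vzquota because it has no LSB header. A hedged sketch of the kind of header typically added at the top of such a script (the dependency lines are an assumption; inside a container, the OpenVZ host scripts are often simply removed instead):

        ### BEGIN INIT INFO
        # Provides:          vzquota
        # Required-Start:    $local_fs
        # Required-Stop:     $local_fs
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: OpenVZ per-container disk quota accounting
        ### END INIT INFO

    After adding it (or removing the script's links with update-rc.d -f vzquota remove), re-running dpkg --configure -a should let exim4-base finish configuring.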

    Read the article

  • Pentium 4 Willamette vs. Faster Celeron Northwood [closed]

    - by Synetech inc.
    Which is the preferable of the following two processors? Intel® Pentium® 4 Processor 1.70 GHz, 256K Cache, 400 MHz FSB Willamette Intel® Celeron® Processor 2.40 GHz, 128K Cache, 400 MHz FSB Northwood Details: A few months ago my motherboard died, so I bought a used computer that had a 2.4GHz Celeron. My old system had a 1.7GHz Pentium 4, so now I’m trying to decide which CPU to use. Obviously a P4 is preferable over a Celeron, but the Celeron is (significantly?) faster than the P4. I’m wondering if the faster Celeron might be better for certain tasks (ie, stronger but dumber is better at some things than smarter but weaker). I tried Googling for some reviews and comparisons for graphs to get a clear depiction of which is better overall, but found nothing that helped. (I did manage to find one page that indicates (apparently by poll, not benchmark) that the Celeron is better.) So which CPU should I use? Does anyone know of some graphs that I can use to compare the two? The system is a general-purpose machine for word-processing, Internet, and casual games (not Crysis, but not Solitaire either). It will be running Windows XP. The board is a 478 with 400MHz FSB.

    Read the article

  • Problem using a public key when connecting to a SSH server running on Cygwin

    - by Deleted
    We have installed Cygwin on a Windows Server 2008 Standard server and it working pretty well. Unfortunately we still have a big problem. We want to connect using a public key through SSH which doesn't work. It always falls back to using password login. We have appended our public key to ~/.ssh/authorized_keys on the server and we have our private and public key in ~/.ssh/id_dsa respective ~/.ssh/id_dsa.pub on the client. When debugging the SSH login session we see that the key is offered by the server apparently rejects it by some unknown reason. The SSH output when connecting from an Ubuntu 9.10 desktop with debug information enabled: $ ssh -v 192.168.10.11 OpenSSH_5.1p1 Debian-6ubuntu2, OpenSSL 0.9.8g 19 Oct 2007 debug1: Reading configuration data /home/myuseraccount/.ssh/config debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for debug1: Connecting to 192.168.10.11 [192.168.10.11] port 22. debug1: Connection established. debug1: identity file /home/myuseraccount/.ssh/identity type -1 debug1: identity file /home/myuseraccount/.ssh/id_rsa type -1 debug1: identity file /home/myuseraccount/.ssh/id_dsa type 2 debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024 debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3 debug1: match: OpenSSH_5.3 pat OpenSSH debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.1p1 Debian-6ubuntu2 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-cbc hmac-md5 none debug1: kex: client->server aes128-cbc hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '192.168.10.11' is known and matches the RSA host key. 
debug1: Found key in /home/myuseraccount/.ssh/known_hosts:12 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Next authentication method: publickey debug1: Offering public key: /home/myuseraccount/.ssh/id_dsa debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Trying private key: /home/myuseraccount/.ssh/identity debug1: Trying private key: /home/myuseraccount/.ssh/id_rsa debug1: Next authentication method: keyboard-interactive debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Next authentication method: password [email protected]'s password: The version of Cygwin: $ uname -a CYGWIN_NT-6.0 servername 1.7.1(0.218/5/3) 2009-12-07 11:48 i686 Cygwin The installed packages: $ cygcheck -c Cygwin Package Information Package Version Status _update-info-dir 00871-1 OK alternatives 1.3.30c-10 OK arj 3.10.22-1 OK aspell 0.60.5-1 OK aspell-en 6.0.0-1 OK aspell-sv 0.50.2-2 OK autossh 1.4b-1 OK base-cygwin 2.1-1 OK base-files 3.9-3 OK base-passwd 3.1-1 OK bash 3.2.49-23 OK bash-completion 1.1-2 OK bc 1.06-2 OK bzip2 1.0.5-10 OK cabextract 1.1-1 OK compface 1.5.2-1 OK coreutils 7.0-2 OK cron 4.1-59 OK crypt 1.1-1 OK csih 0.9.1-1 OK curl 7.19.6-1 OK cvs 1.12.13-10 OK cvsutils 0.2.5-1 OK cygrunsrv 1.34-1 OK cygutils 1.4.2-1 OK cygwin 1.7.1-1 OK cygwin-doc 1.5-1 OK cygwin-x-doc 1.1.0-1 OK dash 0.5.5.1-2 OK diffutils 2.8.7-2 OK doxygen 1.6.1-2 OK e2fsprogs 1.35-3 OK editrights 1.01-2 OK emacs 23.1-10 OK emacs-X11 23.1-10 OK file 5.04-1 OK findutils 4.5.5-1 OK flip 1.19-1 OK font-adobe-dpi75 1.0.1-1 OK font-alias 1.0.2-1 OK font-encodings 1.0.3-1 OK font-misc-misc 1.1.0-1 OK fontconfig 2.8.0-1 OK gamin 0.1.10-10 OK gawk 3.1.7-1 OK gettext 0.17-11 OK gnome-icon-theme 2.28.0-1 OK grep 2.5.4-2 OK groff 1.19.2-2 OK gvim 7.2.264-1 OK gzip 1.3.12-2 OK hicolor-icon-theme 0.11-1 OK inetutils 1.5-6 OK ipc-utils 1.0-1 OK keychain 2.6.8-1 OK less 429-1 OK libaspell15 0.60.5-1 OK libatk1.0_0 1.28.0-1 OK libaudio2 1.9.2-1 OK libbz2_1 1.0.5-10 OK libcairo2 1.8.8-1 OK libcurl4 7.19.6-1 OK libdb4.2 4.2.52.5-2 OK libdb4.5 4.5.20.2-2 OK libexpat1 2.0.1-1 OK libfam0 0.1.10-10 OK libfontconfig1 2.8.0-1 OK libfontenc1 1.0.5-1 OK libfreetype6 2.3.12-1 OK libgcc1 4.3.4-3 OK libgdbm4 1.8.3-20 OK libgdk_pixbuf2.0_0 2.18.6-1 OK libgif4 4.1.6-10 OK libGL1 7.6.1-1 OK libglib2.0_0 2.22.4-2 OK libglitz1 0.5.6-10 OK libgmp3 4.3.1-3 OK libgtk2.0_0 2.18.6-1 OK libICE6 1.0.6-1 OK libiconv2 1.13.1-1 OK libidn11 1.16-1 OK libintl3 0.14.5-1 OK libintl8 0.17-11 OK libjasper1 1.900.1-1 OK libjbig2 2.0-11 OK libjpeg62 6b-21 OK libjpeg7 7-10 OK liblzma1 4.999.9beta-10 OK libncurses10 5.7-18 OK libncurses8 5.5-10 OK libncurses9 5.7-16 OK libopenldap2_3_0 2.3.43-1 OK libpango1.0_0 1.26.2-1 OK libpcre0 8.00-1 OK libpixman1_0 0.16.6-1 OK libpng12 1.2.35-10 OK libpopt0 1.6.4-4 OK libpq5 8.2.11-1 OK libreadline6 5.2.14-12 OK libreadline7 6.0.3-2 OK libsasl2 2.1.19-3 OK libSM6 1.1.1-1 OK libssh2_1 1.2.2-1 OK libssp0 4.3.4-3 OK libstdc++6 4.3.4-3 OK libtiff5 3.9.2-1 OK libwrap0 7.6-20 OK libX11_6 1.3.3-1 OK libXau6 1.0.5-1 OK libXaw3d7 1.5D-8 OK libXaw7 1.0.7-1 OK libxcb-render-util0 0.3.6-1 OK libxcb-render0 1.5-1 OK libxcb1 1.5-1 OK libXcomposite1 0.4.1-1 OK libXcursor1 1.1.10-1 OK libXdamage1 1.1.2-1 OK libXdmcp6 
1.0.3-1 OK libXext6 1.1.1-1 OK libXfixes3 4.0.4-1 OK libXft2 2.1.14-1 OK libXi6 1.3-1 OK libXinerama1 1.1-1 OK libxkbfile1 1.0.6-1 OK libxml2 2.7.6-1 OK libXmu6 1.0.5-1 OK libXmuu1 1.0.5-1 OK libXpm4 3.5.8-1 OK libXrandr2 1.3.0-10 OK libXrender1 0.9.5-1 OK libXt6 1.0.7-1 OK links 1.00pre20-1 OK login 1.10-10 OK luit 1.0.5-1 OK lynx 2.8.5-4 OK man 1.6e-1 OK minires 1.02-1 OK mkfontdir 1.0.5-1 OK mkfontscale 1.0.7-1 OK openssh 5.4p1-1 OK openssl 0.9.8m-1 OK patch 2.5.8-9 OK patchutils 0.3.1-1 OK perl 5.10.1-3 OK rebase 3.0.1-1 OK run 1.1.12-11 OK screen 4.0.3-5 OK sed 4.1.5-2 OK shared-mime-info 0.70-1 OK tar 1.22.90-1 OK terminfo 5.7_20091114-13 OK terminfo0 5.5_20061104-11 OK texinfo 4.13-3 OK tidy 041206-1 OK time 1.7-2 OK tzcode 2009k-1 OK unzip 6.0-10 OK util-linux 2.14.1-1 OK vim 7.2.264-2 OK wget 1.11.4-4 OK which 2.20-2 OK wput 0.6.1-2 OK xauth 1.0.4-1 OK xclipboard 1.1.0-1 OK xcursor-themes 1.0.2-1 OK xemacs 21.4.22-1 OK xemacs-emacs-common 21.4.22-1 OK xemacs-sumo 2007-04-27-1 OK xemacs-tags 21.4.22-1 OK xeyes 1.1.0-1 OK xinit 1.2.1-1 OK xinput 1.5.0-1 OK xkbcomp 1.1.1-1 OK xkeyboard-config 1.8-1 OK xkill 1.0.2-1 OK xmodmap 1.0.4-1 OK xorg-docs 1.5-1 OK xorg-server 1.7.6-2 OK xrdb 1.0.6-1 OK xset 1.1.0-1 OK xterm 255-1 OK xz 4.999.9beta-10 OK zip 3.0-11 OK zlib 1.2.3-10 OK zlib-devel 1.2.3-10 OK zlib0 1.2.3-10 OK The ssh deamon configuration file: $ cat /etc/sshd_config # $OpenBSD: sshd_config,v 1.80 2008/07/02 02:24:18 djm Exp $ # This is the sshd server system-wide configuration file. See # sshd_config(5) for more information. # This sshd was compiled with PATH=/bin:/usr/sbin:/sbin:/usr/bin # The strategy used for options in the default sshd_config shipped with # OpenSSH is to specify options with their default value where # possible, but leave them commented. Uncommented options change a # default value. Port 22 #AddressFamily any #ListenAddress 0.0.0.0 #ListenAddress :: # Disable legacy (protocol version 1) support in the server for new # installations. In future the default will change to require explicit # activation of protocol 1 Protocol 2 # HostKey for protocol version 1 #HostKey /etc/ssh_host_key # HostKeys for protocol version 2 #HostKey /etc/ssh_host_rsa_key #HostKey /etc/ssh_host_dsa_key # Lifetime and size of ephemeral version 1 server key #KeyRegenerationInterval 1h #ServerKeyBits 1024 # Logging # obsoletes QuietMode and FascistLogging #SyslogFacility AUTH #LogLevel INFO # Authentication: #LoginGraceTime 2m #PermitRootLogin yes StrictModes no #MaxAuthTries 6 #MaxSessions 10 RSAAuthentication yes PubkeyAuthentication yes AuthorizedKeysFile .ssh/authorized_keys # For this to work you will also need host keys in /etc/ssh_known_hosts #RhostsRSAAuthentication no # similar for protocol version 2 #HostbasedAuthentication no # Change to yes if you don't trust ~/.ssh/known_hosts for # RhostsRSAAuthentication and HostbasedAuthentication #IgnoreUserKnownHosts no # Don't read the user's ~/.rhosts and ~/.shosts files #IgnoreRhosts yes # To disable tunneled clear text passwords, change to no here! #PasswordAuthentication yes #PermitEmptyPasswords no # Change to no to disable s/key passwords #ChallengeResponseAuthentication yes # Kerberos options #KerberosAuthentication no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes #KerberosGetAFSToken no # GSSAPI options #GSSAPIAuthentication no #GSSAPICleanupCredentials yes # Set this to 'yes' to enable PAM authentication, account processing, # and session processing. 
If this is enabled, PAM authentication will # be allowed through the ChallengeResponseAuthentication and # PasswordAuthentication. Depending on your PAM configuration, # PAM authentication via ChallengeResponseAuthentication may bypass # the setting of "PermitRootLogin without-password". # If you just want the PAM account and session checks to run without # PAM authentication, then enable this but set PasswordAuthentication # and ChallengeResponseAuthentication to 'no'. #UsePAM no AllowAgentForwarding yes AllowTcpForwarding yes GatewayPorts yes X11Forwarding yes X11DisplayOffset 10 X11UseLocalhost no #PrintMotd yes #PrintLastLog yes TCPKeepAlive yes #UseLogin no UsePrivilegeSeparation yes #PermitUserEnvironment no #Compression delayed #ClientAliveInterval 0 #ClientAliveCountMax 3 #UseDNS yes #PidFile /var/run/sshd.pid #MaxStartups 10 #PermitTunnel no #ChrootDirectory none # no default banner path #Banner none # override default of no subsystems Subsystem sftp /usr/sbin/sftp-server # Example of overriding settings on a per-user basis #Match User anoncvs #X11Forwarding yes #AllowTcpForwarding yes #ForceCommand cvs server I hope this information is enough to solve the problem. In case any more is needed please comment and I'll add it. Thank you for reading!
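
    A hedged first pass on the server side, in a Cygwin shell as the account being logged into (this assumes the usual culprits - ownership/permissions on ~/.ssh and Windows-style CRLF line endings in authorized_keys - neither of which is confirmed by the debug output above):

        chmod 700 ~ ~/.ssh
        chmod 600 ~/.ssh/authorized_keys
        chown -R "$USER" ~/.ssh

        # Strip any carriage returns a Windows editor may have added
        sed -i 's/\r$//' ~/.ssh/authorized_keys

        # Then watch a one-off debug instance of sshd while retrying from the client
        /usr/sbin/sshd -d -p 2222

    The -d output on port 2222 should say explicitly why the offered key is being refused, which the client-side "Authentications that can continue" line does not.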

    Read the article

  • PostgreSQL lots of large Arrays and Writes

    - by strife911
    Hi, I am running a Python program that spawns 8 threads, and each thread launches its own postmaster process via psycopg2. This is to maximize the use of my CPU cores (8). Each thread calls a series of SQL functions. Most of these functions go through many thousands of rows, each associated with a large FLOAT8[] array (250-300 values), by using unnest() and multiplying each FLOAT8 by another FLOAT8 associated with each row. This array approach minimized the size of the indexes and the tables. The function ends with an insert into another table of a row of the same form (pk INT4, array FLOAT8[]). Some SQL functions called by Python will update a row of these kinds of tables (with large arrays).

    Now, I currently have PostgreSQL configured to use most of the memory for cache (effective_cache_size of 57 GB, I think) and only a small amount of it for shared memory (1 GB, I think). First, I was wondering what the difference between cache and shared memory is in regards to PostgreSQL (and my application). What I have noticed is that only about 20-40% of my total CPU processing power is used during the most read-intensive parts of the application (SELECT unnest(array) etc.). So secondly, I was wondering what I could do to improve this so that 100% of the CPU is used. Based on my observations, it does not seem to have anything to do with Python or its GIL. Thanks
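
    For reference, a minimal sketch of the unnest-and-multiply pattern being described (table and column names are invented for illustration; WITH ORDINALITY needs PostgreSQL 9.4+, older versions would use generate_subscripts instead):

        -- Multiply every element of each row's array by that row's scalar weight
        INSERT INTO results (pk, arr)
        SELECT s.pk,
               array_agg(u.val * s.weight ORDER BY u.ord) AS arr
        FROM   samples s
        CROSS  JOIN LATERAL unnest(s.arr) WITH ORDINALITY AS u(val, ord)
        GROUP  BY s.pk;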

    Read the article

  • vagrant and puppet security for ssl certificates

    - by Sirex
    I'm pretty new to Vagrant; would someone who knows more about it (and Puppet) be able to explain how Vagrant deals with the SSL certs needed when making Vagrant testing machines that process the same node definition as the real production machines?

    I run Puppet in master/client mode, and I wish to spin up a Vagrant version of my Puppet production nodes, primarily to test new Puppet code against. If my production machine is, say, sql.domain.com, I spin up a Vagrant machine of, say, sql.vagrant.domain.com. In the Vagrantfile I then use the puppet_server provisioner, and give a puppet.puppet_node entry of "sql.domain.com" so it gets the same Puppet node definition. On the Puppet server I use a regex of something like /*.sql.domain.com/ on that node entry so that both the Vagrant machine and the real one get that node entry on the Puppet server. Finally, I enable auto-signing for *.vagrant.domain.com in Puppet's autosign.conf, so the Vagrant machine gets signed. So far, so good...

    However: if one machine on my network gets rooted, say, unimportant.domain.com, what's to stop the attacker changing the hostname on that machine to sql.vagrant.domain.com, deleting the old Puppet SSL cert off of it and then re-running Puppet with a given node name of sql.domain.com? The new SSL cert would be autosigned by Puppet, match the node name regex, and then this hacked node would get all the juicy information intended for the sql machine!

    One solution I can think of is to avoid autosigning, put the known Puppet SSL cert for the real production machine into the Vagrant shared directory, and then have a Vagrant ssh job move it into place. The downside of this is that I end up with all my SSL certs for each production machine sitting in one git repo (my Vagrant repo) and thereby on each developer's machine - which may or may not be an issue, but it doesn't sound like the right way of doing this.

    tl;dr: How do other people deal with Vagrant & Puppet SSL certificates for development or testing clones of production machines?
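
    For context, a sketch of the Vagrant side of the setup being described (modern Vagrantfile syntax; the box and master names are placeholders, the hostnames are the poster's own examples):

        Vagrant.configure("2") do |config|
          config.vm.box      = "debian-squeeze"            # placeholder box name
          config.vm.hostname = "sql.vagrant.domain.com"

          # Ask the puppet master for the *production* node definition
          config.vm.provision :puppet_server do |puppet|
            puppet.puppet_server = "puppet.domain.com"     # placeholder master
            puppet.puppet_node   = "sql.domain.com"
          end
        end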

    Read the article

  • Give back full control to a user on a disk from another computer

    - by Foghorn
    I have my friend's hard drive mounted externally. After messing with the permissions with TAKEOWN so I could fix some viruses, I have full control over their drive. The problem is, now it's stuck in an "autochk not found" reboot sequence. I think the problem is that the boot sector is invisible to the drive now. So my question is: how can I use icacls to give back full ownership, when the user I am giving it to is not on my machine? I ran the TAKEOWN command from my Windows 7 laptop; their machine is a Windows XP Professional box with three partitions, and I only altered the one that has the boot sector.

    Here are the permissions that icacls shows (where my computer is %System%, my username is ME, and the drive is E:\):

        C:\Users\ME> icacls E:\*
        E:\$RECYCLE.BIN %System%\ME:(OI)(CI)(F)
                        Mandatory Label\Low Mandatory Level:(OI)(CI)(IO)(NW)
        E:\ALLDATAW %System%\ME:(I)(OI)(CI)(F)
        E:\alrt_200.data %System%\ME:(OI)(CI)(F)
        E:\AUTOEXEC.BAT %System%\ME:(OI)(CI)(F)
        E:\AZ Commercial %System%\ME:(I)(OI)(CI)(F)
        E:\boot.ini %System%\ME:(OI)(CI)(F)
        E:\Config.Msi %System%\ME:(I)(OI)(CI)(F)
        E:\CONFIG.SYS %System%\ME:(OI)(CI)(F)
        E:\Documents and Settings %System%\ME:(I)(OI)(CI)(F)
        E:\IO.SYS %System%\ME:(OI)(CI)(F)
        E:\Mitchell1 %System%\ME:(I)(OI)(CI)(F)
        E:\MSDOS.SYS %System%\ME:(OI)(CI)(F)
        E:\MSOCache %System%\ME:(I)(OI)(CI)(F)
        E:\NTDClient.log %System%\ME:(OI)(CI)(F)
        E:\NTDETECT.COM %System%\ME:(OI)(CI)(F)
        E:\ntldr %System%\ME:(OI)(CI)(F)
        E:\pagefile.sys %System%\ME:(OI)(CI)(F)
        E:\Program Files %System%\ME:(I)(OI)(CI)(F)
        E:\RECYCLER %System%\ME:(I)(OI)(CI)(F)
        E:\RHDSetup.log %System%\ME:(OI)(CI)(F)
        E:\System Volume Information %System%\ME:(I)(OI)(CI)(F)
        E:\WINDOWS %System%\ME:(I)(OI)(CI)(F)
        Successfully processed 22 files; Failed processing 0 files
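
    A hedged sketch of handing ownership back with icacls (the SID is a placeholder - since the XP user does not exist on the Windows 7 laptop, the owner generally has to be given as a SID taken from the XP install, or the change has to be made from the XP machine itself):

        rem Set the owner recursively; *S-1-5-21-... is the XP account's SID (placeholder)
        icacls E:\* /setowner *S-1-5-21-1111111111-222222222-333333333-1004 /T /C

        rem Optionally push the ACLs back to inherited defaults afterwards
        icacls E:\* /reset /T /C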

    Read the article

  • Issues with ASP.NET via Apache/mod_mono on Ubuntu.

    - by Matthew Scharley
    I run an Ubuntu test server, and my deployment system is also Ubuntu. I've recently been trying to get ASP.NET to work on my test server so that we can take it live. I managed to get it installed and configured properly, and my application is installed and running, but I can't get anything to work. The error I keep receiving is below; if anyone has any clue what might be going on, it would be greatly appreciated.

        Server Error in '/' Application
        Standard output has not been redirected or process has not been started.
        Description: HTTP 500. Error processing request.

        Stack Trace:
        System.InvalidOperationException: Standard output has not been redirected or process has not been started.
          at System.Diagnostics.Process.CancelErrorRead () [0x00000]
          at (wrapper remoting-invoke-with-check) System.Diagnostics.Process:CancelErrorRead ()
          at Mono.CSharp.CSharpCodeCompiler.CompileFromFileBatch (System.CodeDom.Compiler.CompilerParameters options, System.String[] fileNames) [0x00000]
          at Mono.CSharp.CSharpCodeCompiler.CompileAssemblyFromFileBatch (System.CodeDom.Compiler.CompilerParameters options, System.String[] fileNames) [0x00000]
          at System.CodeDom.Compiler.CodeDomProvider.CompileAssemblyFromFile (System.CodeDom.Compiler.CompilerParameters options, System.String[] fileNames) [0x00000]
          at System.Web.Compilation.AssemblyBuilder.BuildAssembly (System.Web.VirtualPath virtualPath, System.CodeDom.Compiler.CompilerParameters options) [0x00000]
          at System.Web.Compilation.AssemblyBuilder.BuildAssembly (System.Web.VirtualPath virtualPath) [0x00000]
          at System.Web.Compilation.BuildManager.BuildAssembly (System.Web.VirtualPath virtualPath) [0x00000]
          at System.Web.Compilation.BuildManager.GetCompiledType (System.String virtualPath) [0x00000]
          at System.Web.HttpApplicationFactory.InitType (System.Web.HttpContext context) [0x00000]

        Version information: Mono Version: 2.0.50727.42; ASP.NET Version: 2.0.50727.42
        Apache version string: Apache/2.2.11 (Ubuntu) mod_mono/2.0 PHP/5.2.6-3ubuntu4.2 with Suhosin-Patch Server at dev Port 80

    PS: I had to add three DLLs to the /bin directory in my application, copying them from Windows because I couldn't find them in any of Mono's packages. This might or might not be causing problems, I don't know. The DLLs I had to add are:

        System.Web.Abstractions
        System.Web.Routing
        System.Web.Mvc

    Read the article

  • How can I fix Office Sharepoint Search service?

    - by unknown (yahoo)
    For some reason, under Operations > Services on Server, the Office SharePoint Search Service is MISSING! This means I can't get Shared Services working, which in turn means I cannot get usage and reporting working to see who is visiting my website (visitor counts, OS/browser, etc.). I don't think I have ever seen this work in the 2 years of trying with SharePoint. So in essence I have two problems: most important, SEARCH; second, usage reports.

    When trying to search, all I get is an unknown error. Nothing is in my Event Viewer at all. I have tried http://www.cjvandyk.com/blog/Lists/Posts/Post.aspx?ID=96; in the comments on that site, another person reports that search is missing entirely and that they are going to uninstall. I'm tired of uninstalling SharePoint and reinstalling to fix an odd issue. I have things set up with Team Foundation Server that took forever to get working, and reinstallation is not my solution.

    As for usage reporting, this is what Microsoft responds to the "Both Windows SharePoint Services Usage logging and Office SharePoint Usage Processing must be enabled to view usage reports. Please contact your administrator to ensure that these services are enabled." error. I cannot do all of the steps, since SSP needs to be set up first, which I can't do per the above.

    Read the article
