Search Results

Search found 57672 results on 2307 pages for 'caching application block'.


  • PostgreSQL failover cluster on Windows Server

    - by user36997
    We are looking for advice on how to set up a basic failover cluster for our application:

    - We will be using 4 machines running Microsoft Windows Server (most probably 2003).
    - All four will always run our application, which is essentially a web service.
    - Load balancing is "outsourced" - somebody else handles the distribution of the web requests among the servers.
    - Only one of the servers will be running the PostgreSQL server actively at any given time. Another server (of the four) also has the DB installed, but is on standby/passive.
    - The DB data is stored on shared storage. No copying data between servers.
    - Reads are done very frequently by many end-users, and in rather small chunks of data. Writes are done much less frequently, by fewer users, and in very large bulks of data.

    Now, how can one configure Microsoft Cluster Service to keep only one instance of the DB server and 4 instances (1 per server) of our application at all times? And does PostgreSQL integrate neatly with MSCS at all?

    Update: Instead of keeping the data on shared storage, I am also considering using log shipping to replicate data on a couple of DB servers. There are two issues with this option:

    - Log shipping only makes sure that I have a second server that gets all of the data and is ready to take over. How do I implement the actual failure detection and failover switch?
    - Switching back: suppose the master fails and the system automatically fails over to the slave, and later the master comes back online. I understand that with WAL shipping this will require reconfiguring the log shipping once again, and that switching back is far from seamless. Is that so?
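
    For the log-shipping option, the primary side is just WAL archiving - a minimal sketch, assuming PostgreSQL 8.3 or later on Windows (the standby share name is hypothetical):

        # postgresql.conf on the active server
        archive_mode = on
        archive_command = 'copy "%p" "\\\\standby\\wal_archive\\%f"'

    Failure detection and the actual switch are not provided at this level; they have to come from the cluster manager (e.g. an MSCS generic-service resource) or a heartbeat script.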


  • Hard drive had reallocated sectors...but now it magically doesn't! Can I trust it?

    - by rob
    Last week my SMART diagnostics utility, CrystalDiskInfo, reported that the external hard drive I was saving my backups to had suddenly reported 900+ reallocated sectors. I double-checked to confirm, then ordered a replacement drive. I spent all of this week copying data from that drive to the new drive. But toward the end of the copy, something peculiar happened: CrystalDiskInfo popped up an alert that the reallocated sector count had gone back down to 0.

    I know that when SMART detects a read error on a block, it adds that block to the current pending reallocation list. If the block is later read or written successfully, it is removed from the list and assumed to be fine; but if a subsequent write fails, it is marked bad and added to the reallocated sector count. What concerns me most is that I've never read anywhere that a sector can be recovered as "good" after it has been marked as a bad sector and remapped.

    I've just finished running an extended SMART diagnostic, and it found no surface errors. Now I'm doubtful that the manufacturer will honor a warranty claim if the SMART info does not report any problems. Has anyone had this happen? If so, is the drive indeed okay, or should I be concerned about an imminent failure?
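
    To watch the raw counters directly, smartmontools can print the attribute table - a sketch (replace /dev/sdb with the actual device; USB enclosures may need an extra -d option):

        # reallocated, pending, and offline-uncorrectable counts
        sudo smartctl -A /dev/sdb | egrep -i 'realloc|pending|uncorrect'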


  • How to use DNS/hostnames or other ways to resolve to a specific IP:Port

    - by tomaszs
    This is a Canonical Question about DNS/Hostnames resolution to IPs/Ports.

    Example 1: I'm running a web server on port 80 and another on port 87. I would like to use DNS so that www.example.com goes to port 87. How can I accomplish this using DNS only?

    Example 2: I'm running a service on my server on a non-standard port. How can I get clients to connect to this non-standard port automatically? Can I use DNS? Is there some application-specific support where DNS could indicate the IP and port?

    Example 3: Do some application protocols specifically support hostname awareness, and allow special actions to be taken based on this information? Are there other questions on Server Fault that cover some of these?

    Commandeering: This question was originally asking about running IIS and Apache on the same server, but the same concepts can be applied to any server software receiving connections from clients. The answers below describe the technical problems and solutions of using DNS and application protocol support to assign a port number for a client to connect.
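
    One concrete mechanism for Examples 2 and 3: protocols such as SIP, XMPP, and LDAP look up DNS SRV records, which carry a port as well as a target host. A sketch of a zone-file entry (names and values are illustrative):

        ; _service._proto.name   TTL   class  SRV  priority weight port  target
        _sip._tcp.example.com.  86400  IN     SRV  10       5      5060  sipserver.example.com.

    Plain web browsing does not consult SRV records, which is why Example 1 cannot be solved with DNS alone.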


  • SQLRelay MySQL compatibility layer in php-cgi.

    - by sybreon
    I am investigating the use of sqlrelay as a middle layer between an application that uses MySQL and a PostgreSQL backend. I assume that this is something it can do, to ease backend migration. But for the moment, I am just experimenting with a MySQL application accessing a MySQL backend through the sqlrelay layer:

        app => sqlrelay lib => mysql client lib => tcp => mysql server

    I followed the instructions for the MySQL drop-in replacement and it works. I can connect to the backend MySQL server using both sqlrsh and the mysql client application. It will work for most MySQL applications by using LD_PRELOAD with the compatibility layer library. The instructions recommend re-compiling php to support it; I would prefer not to do something so drastic. They also recommend setting LD_PRELOAD for apachectl as a method for the apache/php stack. However, this does not work with lighttpd/php-cgi. I have wrapped php-cgi with a shell script that sets LD_PRELOAD before running the cgi script:

        LD_PRELOAD=/usr/lib/libmysql50sqlrelay-0.39.4.so.1 /usr/bin/php5-cgi $@

    I can see LD_PRELOAD correctly set in phpinfo(), but the cgi scripts all fail and are unable to connect to the database. According to the mysql client, the compatibility library should report itself as a 5.0.0 client, but the phpinfo module reports the actual 5.0.51a client library being used. This means that the compatibility library was not used. Has anyone had some success doing something similar?
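
    One thing I may try is hardening the wrapper itself - a sketch (same paths as above; whether this changes the CGI behaviour is an assumption):

        #!/bin/sh
        # export so the variable survives into any children php-cgi forks
        export LD_PRELOAD=/usr/lib/libmysql50sqlrelay-0.39.4.so.1
        # exec replaces the wrapper process; "$@" keeps argument quoting intact
        exec /usr/bin/php5-cgi "$@"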


  • ESXi - changed to thin - virtual disk file size is the same

    - by sven
    running ESXi 5.5 here with a datastore on a single SSD. Now, I thought about changing from thick to thin disks and found that I could use a tool on the ESXi host to do that. However, the file size of the newly created virtual disk is not changing. I run:

        vmkfstools -i loader.vmdk -d 'thin' thinloader.vmdk
        Destination disk format: VMFS thin-provisioned
        Cloning disk 'loader.vmdk'...
        Clone: 100% done.

    After that I compared the virtual disk sizes:

        ls -la *.vmdk
        -rw------- 1 root root 32212254720 Jun 10 08:25 loader-flat.vmdk
        -rw------- 1 root root         467 May 21 17:04 loader.vmdk
        -rw------- 1 root root 32212254720 Jun 10 08:27 thinloader-flat.vmdk
        -rw------- 1 root root         520 Jun 10 08:33 thinloader.vmdk

    Stats on the original file:

        stat loader.vmdk
          File: loader.vmdk
          Size: 467  Blocks: 0  IO Block: 131072  regular file
        Device: 8bf64d175e27544ch/10085333178302026828d  Inode: 419443780  Links: 1
        Access: (0600/-rw-------)  Uid: (0/root)  Gid: (0/root)
        Access: 2014-01-25 10:17:34.000000000
        Modify: 2014-05-21 17:04:06.000000000
        Change: 2014-05-21 17:04:06.000000000

    and on the thin file:

        stat thinloader.vmdk
          File: thinloader.vmdk
          Size: 520  Blocks: 0  IO Block: 131072  regular file
        Device: 8bf64d175e27544ch/10085333178302026828d  Inode: 432026692  Links: 1
        Access: (0600/-rw-------)  Uid: (0/root)  Gid: (0/root)
        Access: 2014-06-10 08:27:45.000000000
        Modify: 2014-06-10 08:33:30.000000000
        Change: 2014-06-10 08:33:30.000000000

    Anyone have an idea why the disk is not freeing up any space (tried with multiple VMs already - all the same)? Also, I have noticed that the newly created disk automatically appends "-flat" to its name.

    Thanks, Sven

    Update - diff of the vmdk descriptors:

        --- loader.vmdk
        +++ thinloader.vmdk
        @@ -7,15 +7,17 @@
         createType="vmfs"
        -RW 62914560 VMFS "loader-flat.vmdk"
        +RW 62914560 VMFS "thinloader-flat.vmdk"
         ddb.adapterType = "lsilogic"
        +ddb.deletable = "true"
         ddb.geometry.cylinders = "3916"
         ddb.geometry.heads = "255"
         ddb.geometry.sectors = "63"
         ddb.longContentID = "6d95855805dfa0079327dfee29b48dca"
        -ddb.uuid = "60 00 C2 98 d5 7d 17 bf-ac 54 70 b1 2d 39 43 d5"
        +ddb.thinProvisioned = "1"
        +ddb.uuid = "60 00 C2 93 c4 13 6c cf-bb 7b 34 c9 2c b4 dc 1e"
         ddb.virtualHWVersion = "8"
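
    Worth noting when interpreting the listing above: on VMFS, ls reports the provisioned size of a -flat file whether it is thick or thin; it is the allocated block count that differs. A sketch of a quick check (assuming du is available in the ESXi shell, which it is in the busybox builds I have seen):

        # apparent (provisioned) size vs. space actually allocated on the datastore
        du -h loader-flat.vmdk thinloader-flat.vmdk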


  • Ubuntu root privs installation issue

    - by Pam
    I am a fairly new Ubuntu user (and Linux user, for that matter) and I just downloaded a program whose installer was a .sh file. Not thinking, I copied the installer to an /opt subdirectory, thinking that I was going to install the application there:

        sudo cp ~/Downloads/fooInstaller.sh /opt/someDir

    I can't remember, but either I had to use sudo because /opt required it, or I just used it without thinking; in any case, I prefixed with sudo. Once in /opt/someDir, I executed the installer, again using sudo:

        sudo sh fooInstaller.sh

    The terminal went crazy, and a few seconds later a graphical install wizard popped up that guided me through the rest of the process. At the end of the wizard I was prompted to launch the program, and I did, and everything was great. Until... I closed the program and attempted to add it to my Ubuntu "panel" (the icon panel at the top of the screen). The program was installed to /usr/local/foo/theProgram, so I specified that path as the command in the custom app launcher. When I open the program through the panel/launcher, it doesn't load or operate correctly, and I get a lot of error messages complaining about being denied permissions. I'm assuming that this is a superuser/installation/privileges issue, and not a problem with the application (hence this post at superuser.com instead of the application's forums), because when I launch the program from the terminal with sudo, it opens and executes perfectly fine, just like it did the first time around after the install wizard finished.

    I realize I'm probably going to have to uninstall the program completely and re-install it differently. Finally, my question: after uninstalling, can I avoid all these issues by just running the installer (sh fooInstaller.sh) right out of my Downloads directory, sans the sudo prefix? If not, how do I get the program to install without root privileges so that I can add it to my panel/launcher and get it executing correctly? Sorry for the long post, but I didn't want to omit any details because, as I'm sure you can tell, I'm not really sure I know what I'm doing. Thanks for any help here!
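
    One thing that may be enough, short of a full reinstall: if the first (sudo) run created root-owned settings files under your home directory, reclaiming them could let the program run unprivileged. A sketch - the dot-directory name is hypothetical; check which paths the permission errors actually mention:

        # hand any root-owned per-user config back to your account
        sudo chown -R $USER:$USER ~/.theProgram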


  • Where in the stack is Software Restriction Policies implemented?

    - by Knox
    I am a big fan of Software Restriction Policies for Microsoft Windows and was recently updating our settings for these. I became curious as to where Microsoft implemented this technology in the stack.

    I can imagine a very naive implementation in Windows Explorer: when you double-click an exe or other blocked file type, Explorer checks it against the policy. I call this naive because it obviously wouldn't protect against someone typing something in a CMD window - or worse, Adobe Reader running an external application. On the other hand, I can imagine that Software Restriction Policies could be implemented deep in the stack, almost at the metal: the low-level loader would load the questionable file into memory but mark that memory as non-executable data in the memory manager.

    I'm pretty sure that Microsoft did not do the most naive implementation, because if I block Java using a path rule, Internet Explorer will crash if it attempts to load Java - which is what I want. But I'm not sure how deep in the stack it's implemented, and any insight would be appreciated.


  • Adobe Premiere CS5 problem with the display driver

    - by user30179
    This error is really hindering our project. It started showing up on June 16th 2010; there are no Windows updates from the same date, other than Windows Defender. It seems to happen when working with image overlays.

    ERROR: "The NVIDIA OpenGL driver detected a problem with the display driver and is unable to continue. The application must close."

    We opened the side of the case in case there is an overheating problem. NVIDIA driver version 8.16.11.9175 (NVIDIA Quadro FX 1700).

    I am running:

    - Windows 7 x64
    - Adobe Premiere CS5 Production
    - NVIDIA Quadro FX 1700 (MRGA14L)
    - 4 GB RAM
    - RAID 10, 2x 750 GB drives
    - Core 2 Duo 3.0 GHz, 6 MB L2 cache

    At least three other people have come across this error: NVidia Forum, EVGA Forum, NVidia Forum.

    UPDATE: Having the case open did not help. I also installed new NVIDIA drivers, and now I get a different error:

    ERROR: "Your hardware configuration does not meet minimum specifications needed to run the application. The application must close."

    I ran Windows Update and installed all four updates, so now I am waiting to see if the error occurs again. Beyond this I am out of options.


  • Geographically distributed file system with preferred locality

    - by dpb
    Hi all -- I'm building an application that needs to distribute a standard file server across a few sites over a WAN. Basically, each site needs to write a lot of miscellaneous files of varying size (some in the 100s of MB range, but most small), and the application is written such that collisions aren't a problem. I'd like to have a system set up that meets the following qualifications:

    - Each site can store files in a shared "namespace". That is, all the files would show up in the same filesystem.
    - Each site would not send data over the WAN unless necessary. I.e., there would be local storage on each side of the WAN that would be "merged" into the same logical filesystem.
    - Linux & free ($$$) is a must.

    Basically, something like a central NFS share would meet most of the requirements; however, it would not allow the locally written data to stay local. All data from remote sides of the WAN would be copied locally all the time.

    I have looked into Lustre, and have run some successful tests with it; however, it appears to distribute files fairly uniformly across the distributed storage. I have dug through the documentation and have not found anything that will automatically "prefer" local storage over remote storage. Even something that went with the lowest-latency storage would be fine; it would work most of the time, which would meet this application's requirements. Any ideas?


  • How do I boot [embedded] Linux from an SD card?

    - by Brandon Yates
    I am hacking together a quick embedded Linux system on a DM816x EVM board. Previously I have been using TFTP and NFS to load my kernel and root filesystem to the board. I am now trying to switch over to loading everything from an SD card. I have my card partitioned such that U-Boot and my kernel image are in one partition, and my rootfs in another partition. At power-on, U-Boot starts correctly and successfully launches the kernel. However, the kernel is unable to mount the root file system. It appears that it doesn't recognize any SD (mmc) cards. It gives this error message:

        VFS: Cannot open root device "mmcblk0p2" or unknown-block(2,0)
        Please append a correct "root=" boot option; here are the available partitions:
        1f00    256     mtdblock0 (driver?)
        1f01    8       mtdblock1 (driver?)
        1f02    2560    mtdblock2 (driver?)
        1f03    1272    mtdblock3 (driver?)
        1f04    2432    mtdblock4 (driver?)
        1f05    128     mtdblock5 (driver?)
        1f06    4352    mtdblock6 (driver?)
        1f07    204928  mtdblock7 (driver?)
        1f08    50304   mtdblock8 (driver?)
        Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(2,0)

    I feel like I'm missing something fundamental here. Why does it not recognize the root device I am trying to load from? Here is my U-Boot boot script that is running:

        setenv bootargs console=ttyO2,115200n8 root=/dev/mmcblk0p2 rw mem=124M earlyprink vram=50M ti816xfb.vram=0:16M,1:16M,2:6M ip=off noinitrd
        mmc init
        fatload mmc 1 0x80009000 uImage
        bootm 0x80009000
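
    A hint from the panic message itself: the kernel only lists mtdblock devices, so it never registered an MMC block device at all. A sketch of the kernel options to verify (built in, not modules, since the rootfs needs them before any filesystem is mounted; the exact host-controller option for the DM816x is my assumption - it is a TI OMAP-family part):

        # in the kernel .config
        CONFIG_MMC=y
        CONFIG_MMC_BLOCK=y
        CONFIG_MMC_OMAP_HS=y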


  • TFS 2012: Backup Plan Fails with empty log file

    - by Vitor
    I have a Team Foundation Server 2012 installation with Power Tools, and I defined a backup plan using the wizard found in the "Database Backup Tools" in the Team Foundation Server Administration Console. I set the backup plan to do a full database backup on Sunday mornings, to another server in the network. I followed the wizard with no problems and the backup plan was set successfully. However, when the backup runs it returns Error as the result, and when I go to the log file I only get the header and no further info:

        [Info   @01:00:01.078] ====================================================================
        [Info   @01:00:01.078] Team Foundation Server Administration Log
        [Info   @01:00:01.078] Version  : 11.0.50727.1
        [Info   @01:00:01.078] DateTime : 11/25/2012 02:00:01
        [Info   @01:00:01.078] Type     : Full Backup Activity
        [Info   @01:00:01.078] User     : <backup user>
        [Info   @01:00:01.078] Machine  : <TFS Server>
        [Info   @01:00:01.078] System   : Microsoft Windows NT 6.2.9200.0 (AMD64)
        [Info   @01:00:01.078] ====================================================================

    I can imagine it's a permission problem, but I have no idea where to start... Can anyone help? Thank you for your time!

    EDIT: I'm not sure if it is related, but I logged in with "backup user" on "TFS Server" and there was a crash window open: "TFS Power Tool Shell Extension (TfsComProviderSvr) has stopped working". The full crash log is here:

        Problem signature:
          Problem Event Name:       APPCRASH
          Application Name:         TfsComProviderSvr.exe
          Application Version:      11.0.50727.0
          Application Timestamp:    5050cd2a
          Fault Module Name:        StackHash_e8da
          Fault Module Version:     6.2.9200.16420
          Fault Module Timestamp:   505aaa82
          Exception Code:           c0000374
          Exception Offset:         PCH_72_FROM_ntdll+0x00040DA8
          OS Version:               6.2.9200.2.0.0.272.7
          Locale ID:                1043
          Additional Information 1: e8da
          Additional Information 2: e8dac447e1089515a72386afa6746972
          Additional Information 3: d903
          Additional Information 4: d9036f986c69f4492a70e4cf004fb44d

    Does it help? Thanks everyone!


  • Excessive PHP errors in Joomla

    - by Rodnower
    I have Joomla 2.5 installed on Windows 7 with Apache 2 and PHP 5. I have countless PHP errors in the log, like the following:

        [01-Sep-2012 19:33:55 UTC] PHP Strict standards: Only variables should be assigned by reference in C:\ammon_dev\ammon\plugins\system\jquery\jquery.php on line 24
        [01-Sep-2012 19:33:55 UTC] PHP Stack trace:
        [01-Sep-2012 19:33:55 UTC] PHP   1. {main}() C:\ammon_dev\ammon\administrator\index.php:0
        [01-Sep-2012 19:33:55 UTC] PHP   2. JAdministrator->route() C:\ammon_dev\ammon\administrator\index.php:40
        [01-Sep-2012 19:33:55 UTC] PHP   3. JApplication->triggerEvent() C:\ammon_dev\ammon\administrator\includes\application.php:106
        [01-Sep-2012 19:33:55 UTC] PHP   4. JDispatcher->trigger() C:\ammon_dev\ammon\libraries\joomla\application\application.php:670
        [01-Sep-2012 19:33:55 UTC] PHP   5. JEvent->update() C:\ammon_dev\ammon\libraries\joomla\event\dispatcher.php:161
        [01-Sep-2012 19:33:55 UTC] PHP   6. call_user_func_array() C:\ammon_dev\ammon\libraries\joomla\event\event.php:71
        [01-Sep-2012 19:33:55 UTC] PHP   7. plgSystemJquery->onAfterRoute() C:\ammon_dev\ammon\libraries\joomla\event\event.php:71

    I tried disabling error logging in php.ini:

        error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT

    Unfortunately that does not make a difference. Joomla isn't in debug mode, and I am sure that I'm editing the correct copy of php.ini because other changes I make to it take effect. Any ideas why I am getting so many errors, or how to stop them from exploding the log?
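
    Joomla can also override error_reporting at runtime, regardless of php.ini, based on the Error Reporting option in Global Configuration. A sketch of the equivalent line in configuration.php (the property exists in Joomla 2.5; 'none' should map to error_reporting(0)):

        public $error_reporting = 'none';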


  • Wget save cookies not working

    - by TrymBeast
    I've been trying to log in to pyLoad through the web API, but wget is not saving the cookies and I don't understand why. I'm using the following command:

        wget --delete-after --keep-session-cookies --save-cookies=my_cookies.txt --post-data="username=USERNAME&password=PASSWORD" http://localhost:8000/api/login

    But the content of my_cookies.txt is:

        # HTTP cookie file.
        # Generated by Wget on 2012-06-23 22:31:33.
        # Edit at your own risk.

    When I run the same command in debug mode, I get the following output, which includes the set cookie in the header response:

        DEBUG output created by Wget 1.10.2 (Red Hat modified) on linux-gnueabi.
        --22:31:11--  http://localhost:8000/api/login
        Resolving localhost... 127.0.0.1
        Caching localhost => 127.0.0.1
        Connecting to localhost|127.0.0.1|:8000... connected.
        Created socket 3.
        Releasing 0x000504d0 (new refcount 1).
        ---request begin---
        POST /api/login HTTP/1.0
        User-Agent: Wget/1.10.2 (Red Hat modified)
        Accept: */*
        Host: localhost:8000
        Connection: Keep-Alive
        Content-Type: application/x-www-form-urlencoded
        Content-Length: 32
        ---request end---
        [POST data: username=USERNAME&password=PASSWORD]
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.1 200 OK
        Content-Length: 34
        Content-Type: application/json
        Cache-Control: no-cache, must-revalidate
        Set-cookie: beaker.session.id=405390ddc809efed54820638c95d7997; expires=Tue, 19-Jan-2038 04:14:07 GMT; Path=/
        Connection: Keep-Alive
        Date: Sat, 23 Jun 2012 21:31:11 GMT
        Server: CherryPy/3.1.2 WSGI Server
        ---response end---
        200 OK
        hs->local_file is: login (not existing)
        Registered socket 3 for persistent reuse.
        TEXTHTML is on.
        Length: 34 [application/json]
        Saving to: `login'
        100%[=======================================>] 34  --.-K/s  in 0s
        22:31:11 (1.28 MB/s) - `login' saved [34/34]
        Removing file due to --delete-after in main():
        Removing login.
        Saving cookies to my_cookies.txt.
        Done saving cookies.

    Can anyone tell me what I am doing wrong? Thanks in advance!
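
    To rule out the server side, the same login can be done with curl, which keeps its own cookie jar - a sketch with the same URL and credentials as above:

        # -c writes received cookies to the jar after the request
        curl -c my_cookies.txt -d "username=USERNAME&password=PASSWORD" http://localhost:8000/api/login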


  • IIS doesn't serve certain file extensions

    - by Alekc
    Hi, I have this weird issue on a Win 2k3 server with IIS. IIS has several sites; in one of them I need to create a subdirectory and set it up as a web application. I've noticed that if I create a new directory and put some .js/.txt files into it, they will not be served by IIS (IE gives the error "Internet Explorer cannot display the webpage"). If I put the same files in another, older site's subdirectory, they show correctly. By sniffing traffic I've seen that IIS replies with status 200 and then completely drops the connection:

        http://domain.com/test2/prova.txt

        GET /test2/prova.txt HTTP/1.1
        Host: domain.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 (.NET CLR 3.5.30729)
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive

        HTTP/1.x 200 OK

    If I rename the file prova.txt to prova.asp, for example, it shows without problems, so it shouldn't be a permissions issue. After doing some research I've found that this can be caused by missing MIME types, so I checked: .txt and .js are present and served by aspnet_isapi.dll. And here comes another weird thing: if I remove the MIME mapping from the directory's properties, .txt is served correctly, but the same thing doesn't work with .js. I'm really beginning to run out of ideas. Does anyone have a hint? Thanks in advance.


  • Unable to send mail to Hotmail from Rackspace Cloud

    - by Jo Erlang
    I'm having an issue sending mail from postfix on a Rackspace Cloud instance for my domain. Hotmail says:

        550 SC-001 (SNT0-MC4-F35) Unfortunately, messages from 198.101.x.x weren't sent. Please contact your Internet service provider since part of their network is on our block list.

    Here is the mail log:

        Sep 20 08:02:59 mydomain postfix/smtpd[1810]: disconnect from localhost[127.0.0.1]
        Sep 20 08:02:59 mydomain postfix/smtp[1814]: 59CFF4B191: to=<[email protected]>, relay=mx3.hotmail.com[65.55.92.184]:25, delay=0.19, delays=0.1/0.01/0.06/0.01, dsn=5.0.0, status=bounced (host mx3.hotmail.com[65.55.92.184] said: 550 SC-001 (SNT0-MC4-F35) Unfortunately, messages from 198.101.x.x weren't sent. Please contact your Internet service provider since part of their network is on our block list. You can also refer your provider to http://mail.live.com/mail/troubleshooting.aspx#errors. (in reply to MAIL FROM command))
        Sep 20 08:02:59 mydomain postfix/smtp[1814]: 59CFF4B191: lost connection with mx3.hotmail.com[65.55.92.184] while sending RCPT TO

    I have implemented rDNS, SPF and DKIM, and they all look fine. I have checked my IP and domain on most of the spam blacklists, and they are listed as OK (not listed as a spamming IP). What should I try next?


  • Win 7: apps crash, then explorer crashes, then services fail, then boom

    - by snorfys
    Periodically, every 2-3 days, one of my systems will go haywire: every app will crash, search will fail via the start menu, and then explorer will fail. Restarting explorer via Task Manager will cause it to fail again; then it'll BSOD and restart. The event log for when this happens goes something like this every time:

        ERROR:   Session "ReadyBoot" stopped due to the following error: 0xC0000188 (supposedly not a problem)
        WARNING: The maximum file size for session "ReadyBoot" has been reached... (forget where I found out, but also 'not a problem')
        ERROR:   Session "Circular Kernel Context Logger" stopped due to the following error: 0xC0000188 (again, supposedly not a problem)
        WARNING: The maximum file size for session "Circular Kernel Context Logger" has been reached...
        ERROR:   Faulting application name: Explorer.EXE, version: 6.1.7600.16450, time stamp:...
        ERROR:   Faulting application name: explorer.exe, version: 6.1.7600.16450, time stamp:...
        ERROR:   Faulting application name: svchost.exe_iphlpsvc, version: 6.1.7600.16385, time stamp:...
        ERROR:   The <Service Name> service terminated unexpectedly. It has done this 1 time(s).

    That last one happens a number of times, but with a different service name each time. Then finally we have:

        ERROR: The Service Control Manager tried to take a corrective action (Restart the service) after the unexpected termination of the Server service, but this action failed with the following error: An instance of the service is already running.

    After that, I have my BSOD and logs complaining that Windows started up without shutting down. It's a new machine:

    - Intel i3 530
    - 4 GB RAM (ran memtest for 4 hrs, no problems)
    - 320 GB WD / 250 GB Seagate HDDs (happened on fresh installs on 2 separate HDDs)
    - Win7 Pro/Ultimate x64 (wife's copy of Pro, my copy of Ult, no change)
    - Fresh install + driver and Windows updates (happened without updates as well)

    I'm at a bit of a loss as to what I can look at next, especially since it'll work like a charm for 2-3 days and then it's hooped for a night (I'm on it now, in fact - no problems).


  • Windows Error Reporting and IIS7 on Windows Server 2008

    - by graffic
    On a Windows web server I'm trying to get a memory dump of a failing IIS 7 worker process (w3wp.exe), with no avail. In the Event Viewer I get the following:

        Faulting application name: w3wp.exe, version: 7.5.7600.16385, time stamp: 0x4a5bd0eb
        Faulting module name: clr.dll, version: 4.0.30319.1, time stamp: 0x4ba21eeb
        Exception code: 0xc00000fd
        Fault offset: 0x0000000000005c22
        Faulting process id: 0x1cac
        Faulting application start time: 0x01cc23419da54772
        Faulting application path: c:\windows\system32\inetsrv\w3wp.exe
        Faulting module path: C:\Windows\Microsoft.NET\Framework64\v4.0.30319\clr.dll
        Report Id: b54ec4f8-8fa4-11e0-ab62-005056810035

    This happens even though I've configured LocalDumps for WER, and specifically for w3wp.exe, in the registry. I get another event telling me that there is a report here:

        C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_w3wp.exe_cdb8af6deb381574fe9fb0dc9aa3edaad59acd5f_cab_4fbf9b53

    It contains the following files:

        WERD931.tmp.appcompat.txt
        WERDFE9.tmp.WERInternalMetadata.xml
        WER99EF.tmp.WERDataCollectionFailure.txt

    The "depressing one" is the WERDataCollectionFailure, which says:

        Heap dump generation failed: 0x8007012b
        Mini dump generation failed: 0x8007001f

    After many tries, lots of MSDN documentation and many failed Google searches, I'm out of ideas on how to get a dump here. Does anyone have any suggestion on how to make WER work? Thank you in advance for your time reading this :)
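
    For comparison, the documented per-application LocalDumps values look like this - a sketch as reg.exe commands (the dump folder is arbitrary; DumpType 2 requests full dumps):

        reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\w3wp.exe" /v DumpFolder /t REG_EXPAND_SZ /d "C:\dumps" /f
        reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\w3wp.exe" /v DumpType /t REG_DWORD /d 2 /f
        reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\w3wp.exe" /v DumpCount /t REG_DWORD /d 10 /f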


  • Finding ALL currently used IP addresses of a website

    - by Patrick R
    What steps would you take to discover all (or close to all) IP addresses that are currently used by a website? How would you be as exhaustive as possible without calling a website admin and asking for the list of IP addresses? ;)

    nslookup works, but results will vary based on the DNS server queried. whois is another good tool. dig, not bad.

    Let's use Facebook for example. I'm blocking that site for the majority of our company's users, but some are approved for "research". I can not easily use OpenDNS because we all appear to come from the same request IP address; I could change that, but I don't want to add more VLANs than I already have. I could also block something like regex facebook1 "facebook\.com" (I'm running a Cisco firewall), but that's pretty easy to sidestep. All that being said, I'm asking specifically about finding IP addresses for a domain, not about other methods by which I can block a domain name.
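
    Beyond per-name lookups, route registries can enumerate everything a network announces, by AS number - a sketch (AS32934 is Facebook's registered ASN; the RADB query syntax is from memory, so treat it as an assumption):

        # one name, one answer - varies by resolver
        dig +short www.facebook.com a
        # all route objects registered for the AS
        whois -h whois.radb.net -- '-i origin AS32934'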


  • RPM issues after signing JDK 1.6 64-bit

    - by organicveggie
    I'm trying to sign the Java JDK 1.6u21 64-bit RPM on CentOS 5.5 for use with Spacewalk, and I'm running into problems. It seems to sign okay, but when I check the signature it seems to be missing the key I just used to sign it - yet RPM shows the key in its list.

        # rpm --addsign jdk-6u21-linux-amd64.rpm
        Enter pass phrase:
        Pass phrase is good.
        jdk-6u21-linux-amd64.rpm:
        gpg: WARNING: standard input reopened
        gpg: WARNING: standard input reopened

        # rpm --checksig -v jdk-6u21-linux-amd64.rpm
        jdk-6u21-linux-amd64.rpm:
            Header V3 DSA signature: NOKEY, key ID ecfd98a5
            MD5 digest: OK (650e0961e20d4a44169b68e8f4a1691b)
            V3 DSA signature: OK, key ID ecfd98a5

    Yet I have the key imported (edited for privacy):

        # rpm -qa gpg-pubkey* | grep ecfd98a5
        gpg-pubkey-ecfd98a5-4caa4a4c

        # rpm -qi gpg-pubkey-ecfd98a5-4caa4a4c
        Name        : gpg-pubkey                  Relocations: (not relocatable)
        Version     : ecfd98a5                    Vendor: (none)
        Release     : 4caa4a4c                    Build Date: Mon 04 Oct 2010 10:20:49 PM CDT
        Install Date: Mon 04 Oct 2010 10:20:49 PM CDT   Build Host: localhost
        Group       : Public Keys                 Source RPM: (none)
        Size        : 0                           License: pubkey
        Signature   : (none)
        Summary     : gpg(FirstName LastName <[email protected]>)
        Description :
        -----BEGIN PGP PUBLIC KEY BLOCK-----
        Version: rpm-4.4.2.3 (NSS-3)
        ...key goes here...
        =gKjN
        -----END PGP PUBLIC KEY BLOCK-----

    And I'm definitely running a 64-bit version of CentOS:

        # uname -a
        Linux spacewalk.mycompany.corp 2.6.18-194.11.4.el5 #1 SMP Tue Sep 21 05:04:09 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

    Without a valid signature, Spacewalk refuses to install the RPM unless I completely disable signature checking. I have tried this with two different keys and two different users on the same machine without any success. Any bright ideas?


  • iptables, blocking large numbers of IP Addresses

    - by Twirrim
    I'm looking to block IP addresses in a relatively automated fashion if they look to be 'screen scraping' content from websites that we host. In the past this was achieved by some ingenious Perl scripts and OpenBSD's pf. pf is great in that you can provide it nice tables of IP addresses and it will efficiently handle blocking based on them. However, for various reasons (before my time) the decision was made to switch to CentOS.

    iptables doesn't natively provide the ability to block large numbers of addresses (I'm told it wasn't unusual to be blocking 5000+), and I'm a bit cautious about adding that many rules to an iptables chain. ipt_recent would be awesome for doing this - plus it provides a lot of flexibility for just severely slowing down access - but there is a bug in the CentOS kernel that is stopping me from using it (reported, but awaiting a fix). Using ipset would entail compiling a more up-to-date version of iptables than comes with CentOS, which, whilst I'm perfectly capable of doing it, I'd rather not do from a patching, security and consistency perspective. Other than those two, nfblock looks like a reasonable alternative.

    Is anyone aware of other ways of achieving this? Are my concerns about several thousand IP addresses in iptables as individual rules unfounded?
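
    For scale reference, this is the shape of the ipset approach if the tooling hurdle can be cleared - a sketch (flag spelling varies by version: newer releases use `create` and `--match-set`, older ones `-N` and `--set`; the addresses are documentation examples). One hash lookup replaces thousands of linear rules:

        # build the set, then hang a single iptables rule off it
        ipset create scrapers hash:ip
        ipset add scrapers 192.0.2.10
        ipset add scrapers 198.51.100.23
        iptables -I INPUT -m set --match-set scrapers src -j DROP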


  • All websites migrated from server running IIS6 to IIS7

    - by Leah
    Hi, I hope someone will be able to help me with this. We have recently migrated all of our clients' sites to a new server running IIS7; all the sites were originally running on a server with IIS6. Ever since the migration, lots of our clients are reporting error messages. There seem to be quite a number of issues related to sending emails, and we have also had the following error message reported by several different clients:

        Server Error in '/' Application.
        --------------------------------------------------------------------------------
        Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.

        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

        Exception Details: System.Web.HttpException: Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.

    I have read elsewhere that this error can appear if a button is clicked before the whole page has finished loading. But as this error has now appeared on multiple sites, and only since the server migration, it seems to me that it must be something else. I was wondering if someone could tell me whether there is something specific that needs to be changed for .NET sites when they are moved from a server running IIS6 to a server running IIS7? I don't deal with the actual servers very much, so I'm afraid this is very much a grey area for me. Any help would be very much appreciated.
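
    The error text itself points at the usual fix: pin an explicit machineKey in web.config so every site and server validates viewstate with the same key instead of an auto-generated one. A sketch - the key values are placeholders to be generated, not real keys:

        <system.web>
          <machineKey
            validationKey="[generate 128 hex characters]"
            decryptionKey="[generate 48 hex characters]"
            validation="SHA1" decryption="AES" />
        </system.web>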


  • Problem with Windows Service and network printers.

    - by Mohammadreza
    I have a Windows Service application that every now and then should print some documents. As far as I know, to print those documents my service must run under a user account other than Local Service or Network Service, so I have created a user account, added it to the Administrators group, and run the service with it. With locally installed printers I don't have any problems, because those printers are automatically installed for all accounts.

    To be able to print to the network printers, I have created another application that syncs the installed printers of the currently logged-in user to the user account that my service uses, with the rundll32.exe printui.dll,PrintUIEntry command. In Vista and Windows 7 I don't have any problems with the syncing of the printers, because every time a printer should be installed an authentication window opens and asks for the appropriate user account to install that printer (the service user account does not exist on the network printers' computers). But in XP, a find dialog with the caption "Connecting to {printername}" appears and stops responding; or sometimes it installs the printer, but every time the service tries to print, a Win32Exception with the message "A StartDocPrinter call was not issued" is thrown, and in the user account that runs the sync application a duplicate printer is shown which I couldn't delete except by force (using the registry).

    Am I doing the right thing for printing documents from Windows Services at all? If yes, how can I solve the above-mentioned problem? And if not, what the heck should I do? Thanks.
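
    For reference, the per-user connection install that the sync application drives looks like this - a sketch with hypothetical server and queue names:

        rem add a connection to a shared network printer for the current user
        rundll32 printui.dll,PrintUIEntry /in /n "\\printserver\accounting-laser"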


  • How to configure IIS 7.5 to allow special chars in Url for ASP.NET 3.5?

    - by Sebastian P.R. Gingter
    I'm trying to configure my IIS 7.5 to allow special chars in the URL for ASP.NET. This is important to support widespread legacy URLs on a new system. Sample URL:

        http://mydomain.com/FileWith%inTheName.html

    This would be encoded in the URL and requested as:

        http://mydomain.com/FileWith25%inTheName.html

    This simply works when creating a new web in IIS 7.5, placing a file with the percentage sign in the file name in the web root, and pointing the browser to it. It does not work, however, when the web site is an ASP.NET application. ASP.NET always returns a 400.0 - Bad Request error in the WindowsAuthentication module from the StaticFile handler when pointing to that URL. It does, however, display the requested URL correctly and also resolves it to the correct physical file (the information in the 'Physical Path' field on the server error page points to the physically available file).

    There are hints on how to enable this, so I followed the instructions on these websites step by step:

    - http://dirk.net/2008/06/09/ampersand-the-request-url-in-iis7/
    - http://adorr.net/2010/01/configure-iis-to-accept-url-with-special-characters.html

    The second one actually sums up the information from the first post and adds some more information about x64 systems (we're running x64) and about an additional web.config change for this. I tried all that, and still can't get this running from an ASP.NET web application. And yes: I rebooted after applying the registry changes.

    So, what do I have to do, in addition to the settings described in the above posts, to support legacy URLs which contain percentage characters? Additional info: application pool mode is integrated.

    Bump after some days. Any ideas, anyone?
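
    For completeness, the request-filtering half of those instructions in web.config form (the attribute exists in the IIS7 system.webServer schema; whether it is sufficient on its own here is exactly the question):

        <system.webServer>
          <security>
            <requestFiltering allowDoubleEscaping="true" />
          </security>
        </system.webServer>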


  • Create "raw disk file" from WIM file

    - by Joe Baltimore
    First-timer here. I've searched around here but haven't found a question like the one I have; apologies if I missed it.

    The challenge at hand: produce a "raw disk image file" from a given WIM file. What I am pursuing so far is to use imagex.exe with the "/apply" operation to take the WIM and lay it down in a directory on a server. That seems to produce all the necessary "stuff" I need in that directory:

        imagex.exe /apply image.wim 1 R:\WimImagePoint

    How would I take that content and produce a "raw disk image file"? I'm told the definition of "raw disk image file" is a block-by-block copy of the disk, which I hope is the output of the "imagex.exe /apply" command I use currently, but stored in a single file I can hand back to another system in our solution. I would like to take the contents of R:\WimImagePoint and produce the elusive (to me) "raw disk image file".

    ISO is not what they want, nor is anything requiring WinPE. Any pointers? References to external utilities are welcome. I would like to avoid unmanaged-code solutions as much as possible, but will entertain them if that's the only route. Also, I am not married to the idea of imagex /apply as the starting point; it's just the comfort zone so far.
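
    One candidate route, since a fixed-size VHD is a raw sector image plus a 512-byte footer: apply the WIM into an attached VHD instead of a plain directory. A sketch using diskpart and imagex (size, paths and drive letter are illustrative):

        rem --- makevhd.txt, run with: diskpart /s makevhd.txt ---
        create vdisk file=C:\out\disk.vhd maximum=20480 type=fixed
        select vdisk file=C:\out\disk.vhd
        attach vdisk
        create partition primary
        format fs=ntfs quick
        assign letter=V

        rem --- then lay the image into the mounted volume ---
        imagex.exe /apply image.wim 1 V:\

    Dropping the 512-byte footer from disk.vhd would leave the raw block image (this footer detail is from the published VHD format specification; worth verifying against it).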


  • .NET not processing an XML file in IIS

    - by Stuart McIntosh
    We have 2 servers: one already configured with .NET, which works fine, and a new one which appears to be configured the same, but when I open an XML page in Internet Explorer it complains about the <% tag. We have IIS on Windows Server 2003 SP2, and the website is configured with .NET 1.1.4322. In the ISAPI extensions I have set the .xml extension to use c:\windows\microsoft.net\framework\v1.1.4322\aspnet_isapi.dll. But the page:

        <property name="documentmaxage" value="0"/>
        <property name="documentmaxstale" value="0"/>
        <var name="m_Prompt_Path" />
        <form id="InitVoiceXmlDoc">
            <block>
                <assign name="m_Prompt_Path" expr="&quot;<% Response.Write(Request.QueryString["m_Prompt_Path"]); %>&quot;"/>
            </block>
        </form>

    gives the error:

        The XML page cannot be displayed
        Cannot view XML input using XSL style sheet. Please correct the error and then click the Refresh button, or try again later.
        The character '<' cannot be used in an attribute value. Error processing resource 'http://localhost:11119/fails.xml'. Lin... &quo...

    We have the same config on another server which works fine, so are there other options, apart from the ISAPI extensions, that I need to look at? If I suffix the page .aspx, of course, it works fine.
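
    One place worth comparing between the two servers: the ISAPI mapping only hands .xml requests to ASP.NET; inside ASP.NET, the extension must also be mapped to the page handler, otherwise the file is returned as-is and the browser chokes on the raw <% %>. A sketch of the web.config entry (.NET 1.1-era syntax; PageHandlerFactory is the handler that executes inline server code):

        <system.web>
          <httpHandlers>
            <add verb="*" path="*.xml" type="System.Web.UI.PageHandlerFactory" />
          </httpHandlers>
        </system.web>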

