Search Results

Search found 10328 results on 414 pages for 'behavior tree'.


  • Processing of Group Policy failed only on 2008 servers, with Name Resolution failure on the current domain controller

    - by Ken Wolfrom
    Spent the last 3 months doing an upgrade from a 2003 domain to a 2008R2 domain. Our last DC (of 5 total) was rebuilt and brought online. After it was put online, some of our 2008 and 2008R2 servers (10 now) started getting these errors in the event logs:

        Description: The processing of Group Policy failed. Windows could not resolve the user name. This could be caused by one or more of the following:
        a) Name Resolution failure on the current domain controller.
        b) Active Directory Replication Latency (an account created on another domain controller has not replicated to the current domain controller).

    We can duplicate this if we drop to a command prompt and run GPUPDATE manually. When our users attempt to access a shared drive (\\directory\shared) on an affected server, they get this error: "THERE ARE CURRENTLY NO LOGON SERVERS AVAILABLE TO SERVICE THE LOGON REQUEST."

    This only affects the 2008 OS, on a random set of about 10 servers out of some 30 with that OS. The services on those machines are running OK, and we are able to log in with domain\user at the consoles and via RDP. We can log onto an affected machine, get to \\domainname\sysvol, and see the GPOs. We have checked the replication topology of the domain, and it states that all servers can replicate with no errors.

    We went back to the last DC, demoted it, removed DNS, removed it from the domain, and waited 24 hours; the issue still persists. We picked one server, removed it from the domain, rebooted, and added it back to the domain with no problems, but it still shows this behavior.

    Bottom line: we have some servers on which the domain will not let any UDP client/server apps or GPOs process, but the TCP-related items seem to work fine; HTTP, TCP calls, and SQL and Oracle DB connections all work. Any input on possible reasons for this issue and fixes? It is only affecting the 2008 servers on a 2008R2 domain.
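
    A few standard diagnostics can narrow down which DC and which name-resolution path is failing. A sketch using stock Windows tools (example.local and DCNAME are placeholders for the real domain and DC names):

        :: which DC is this server actually using?
        nltest /dsgetdc:example.local

        :: can it find the domain's LDAP SRV records in DNS?
        nslookup -type=SRV _ldap._tcp.dc._msdcs.example.local

        :: is the machine's secure channel to the domain healthy?
        nltest /sc_verify:example.local

        :: DNS health of a suspect domain controller
        dcdiag /s:DCNAME /test:dns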

  • How do I install Apache Portable Runtime?

    - by apache
    My Apache was installed with yum install apache, and now I'm trying to install a Subversion server from source, following the instructions here. But when I try to configure, I get an error:

        [root@vps303 subversion-1.6.9]# ./configure
        configure: Configuring Subversion 1.6.9
        configure: creating config.nice
        checking for gcc... gcc
        checking for C compiler default output file name... a.out
        checking whether the C compiler works... yes
        ...
        checking for APR... no
        configure: WARNING: APR not found
        The Apache Portable Runtime (APR) library cannot be found.
        Please install APR on this system and supply the appropriate
        --with-apr option to 'configure', or get it with SVN and put it
        in a subdirectory of this source:
           svn co http://svn.apache.org/repos/asf/apr/apr/branches/1.2.x apr
        Run that right here in the top level of the Subversion tree.
        Afterwards, run apr/buildconf in that subdirectory and then run
        configure again here. Whichever of the above you do, you probably
        need to do something similar for apr-util, either providing both
        --with-apr and --with-apr-util to 'configure', or getting both
        from SVN with:
           svn co http://svn.apache.org/repos/asf/apr/apr-util/branches/1.2.x apr-util
        configure: error: no suitable apr found

    How do I get around this problem? BTW, will both client and server software be installed by compiling from source?
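
    Since Apache itself came from yum, the simplest route is usually to install the APR development packages from the same repositories instead of building APR from source. A sketch (apr-devel and apr-util-devel are the standard RHEL/CentOS package names; paths may differ on other distributions):

        yum install apr-devel apr-util-devel
        ./configure --with-apr=/usr/bin/apr-1-config --with-apr-util=/usr/bin/apu-1-config

    As for the side question: building Subversion from source installs both the client binaries and the server pieces (svnserve, plus the Apache modules if you configure them in).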

  • How to use non-free drivers during debian install

    - by blokeley
    I'm trying to install Debian stable using UNetbootin. The install process fails with "network autoconfiguration failed", probably because the Ethernet driver isn't working. My Lenovo U350 has a Broadcom BCM57780, which does not seem to be supported out of the box: there are various bug reports here, here and here, but I don't know if the fix has made it into Debian (6) stable. One discussion says that you have to use an Ethernet driver from the firmware-linux-nonfree package. I'm not sure that this is correct, because the BCM57780 is not in the list of drivers in firmware-linux-nonfree.

    The specific question tree is:
    - Is the BCM57780 supported in Debian stable? If so, what could be wrong? Should I install Debian unstable instead?
    - If not, do I need to use firmware-linux-nonfree during installation and, if so, how do I do this?

    Please note: I've used Ubuntu and Debian loads in the past, but please post line-by-line guidance rather than some cryptic abbreviation of any instructions. Thanks in advance for any help.

    Updates:
    - Debian stable with non-free drivers did not work.
    - Debian unstable (free drivers only) did not work.
    - Tried loading firmware-iwlwifi_0.28_all.deb from another USB stick to get wireless working rather than the BCM57780. The .deb file was found, but the network configuration still failed!

    That's it, I'm giving up. Unfortunately I'll use Ubuntu, even though the Unity user interface will be very unstable for the next couple of years :(
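
    For reference, the Debian installer can pick up missing firmware from a second USB stick. A sketch of preparing one on another working Linux machine (the package version and URL are assumptions based on the 0.28 version mentioned above; check packages.debian.org for the current one):

        # fetch and unpack the non-free firmware package
        wget http://ftp.debian.org/debian/pool/non-free/f/firmware-nonfree/firmware-linux-nonfree_0.28_all.deb
        dpkg-deb -x firmware-linux-nonfree_0.28_all.deb fw/

        # copy the firmware files into a /firmware directory on a FAT-formatted stick
        mount /dev/sdb1 /mnt        # adjust the device to your stick
        mkdir -p /mnt/firmware
        cp -r fw/lib/firmware/. /mnt/firmware/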

  • Utilu IE Collection IE6 crashes under Windows 7

    - by Aron
    I've installed the Utilu IE Collection under Windows 7 64-bit, but am unable to successfully run IE6 (specifically, "Internet Explorer 6.0 (6.00.2900.2180)"). I haven't tried other versions of IE from the package, so their behavior is unknown.

    It starts up fine, but the address bar isn't shown. (Attempted to post an image, but I'm a newbie on SU and it wouldn't let me. It can be found here.) View > Toolbars shows 'Address Bar' already selected. If I try to (de-)select Address Bar in that list, it crashes:

        Problem signature:
          Problem Event Name:       APPCRASH
          Application Name:         iexplore.exe
          Application Version:      6.0.2900.2180
          Application Timestamp:    41107b81
          Fault Module Name:        StackHash_e162
          Fault Module Version:     0.0.0.0
          Fault Module Timestamp:   00000000
          Exception Code:           c0000005
          Exception Offset:         00000000
          OS Version:               6.1.7600.2.0.0.256.48
          Locale ID:                1033
          Additional Information 1: e162
          Additional Information 2: e1625455580a00d3fda64a8cdf366fbb
          Additional Information 3: 58c7
          Additional Information 4: 58c75753c6745fad91929ab6430a83fa

    That's not all that's wrong (the 'Search' pane has no contents as well, probably more), but it's the most glaring problem. I've seen some allusions to a registry change needed to fix this, but haven't been able to find any details. Anybody? I'm running the latest version of the collection, 1.7.0.6.

  • Is my current htaccess setting hurting SEO?

    - by user656002
    I have a site that redirects to https. I do this to leverage wildcard SSL for my password-protected pages. Everything seems to work fine in testing: for example, whether you type in http or www, you always get redirected to the SSL https site.

    That said, I have about 200-300 external backlinks, many of them high quality, yet Google Webmaster Tools (along with SEOmoz) shows I have just 4... Huh? I'm embarrassed to say I just discovered this. This has led me to hypothesize that maybe my settings in .htaccess are messed up, so Google isn't recognizing a link because it's recorded on another site as http instead of https. Maybe?

    At any rate, here is my simple .htaccess setting for 301-redirecting www to http (the https redirect must be done inside the virtual host file, I think). I don't have anything in the .htaccess file for https:

        RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]

    Like I said, everything works fine for the redirect over https, so I'd rather not screw up what works. On the other hand, something is very wrong with Google finding all my backlinks, so I need to fix something... I'm just wondering whether Google isn't picking up my backlinks from other websites because they record me as http while I'm at https. Maybe Google doesn't care and it's some other issue. Am I barking up the right tree? If so, any quick fixes? Thanks as always!
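
    For what it's worth, a single rule pair can send every non-canonical variant (http, https with www, etc.) to one canonical https host in a single hop, which keeps whatever link equity exists pointed at one URL form. A sketch, assuming example.com and mod_rewrite (the virtual-host redirect would need to agree with it):

        RewriteEngine On
        # anything that is not https://example.com/... gets one 301 to the canonical form
        RewriteCond %{HTTPS} off [OR]
        RewriteCond %{HTTP_HOST} ^www\. [NC]
        RewriteRule ^(.*)$ https://example.com/$1 [L,R=301]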

  • AT&T Filtering FTP traffic?

    - by xpda
    Using an AT&T DSL, I cannot ftp upload or ftp download a few files of a large 1500 set. The problem is the file name. I can change a few characters of the file name, and they upload fine. I can change the file names from upper to lower case and they upload fine. If I change back to the original file name, it will not upload again. When it doesn't upload, it starts, transfers about 5% of a 5-10 meg file, and then times out. I have uploaded one of the files under a different name, changed the name back to the original, and it will not download via ftp. It will download onto a browser, and it will ftp download just fine with a different name. It just will not download with ftp. I have reproduced this uploading to three different servers on 1and1 and Amazon EC2. When I try it on a non-AT&T ISP client, it works OK. Here is a file that did not upload until I had renamed it. (I have changed it back to the original name): "http://xpda.com/nautnew/11302 STOVER POINT TO PORT BROWNSVILLE SIDE A.png" This problem is unrelated to connection, speed, and file content. Only things I can see that makes a difference are the file name and ATT DSL. Does ATT have some kind of ftp file filtering? Is there anything else that could cause this behavior?

  • Find out what fonts are being sent to a printer

    - by user38307
    I have an issue where two computers running XP with identical print drivers show different behavior when printing over the parallel port to receipt printers. For one type of receipt, printing is instant. For another kind, printing is delayed by ten seconds on most machines, but not on the other. This happens even if I swap out printers.

    I believe the delay is because this computer has a different set of fonts installed (it is used for graphic design). The printers have built-in fonts, and if you do not use one of the built-in fonts, the printer has to build up an image in memory rather than just spitting out its own fonts. For a particular kind of receipt with special fonts, on a particular computer, the computer is sending a font which the receipt printer does not have built in.

    My question is: is there a way to find out what fonts are being sent to the printer? This would let me narrow down what I need to modify in the Windows font folder. Thank you!

  • How do I find and kill a php loop (process)?

    - by Hoytman
    I have a PHP script that I have been developing which calls an external API within a loop. It is being tested on a VPS running LAMP on Debian. I noticed this morning that the API was not responding to my script. When I called the provider, they told me that my server had been calling the API thousands of times per hour for the past 10 hours.

    I am assuming that a PHP script (which I had been working on the day before and testing on my VPS) entered an infinite loop during one of the executions and never came out (I have been testing it from the command prompt, not over the web). I have attempted to stop and start Apache, but the API support staff says that the calls are still coming in from my server address.

    How can I find and stop the process? Also, is there a possibility that the Apache stop/start solved the problem, but the API is still trying to sort through past calls? Please forgive me for not using my local test environment correctly.

    Edit: I do not know the process name; I need to discover the name (or PID) based on behavior.
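
    A minimal way to hunt down a runaway CLI process by behavior rather than name (standard Linux tools; the php grep pattern is an assumption, since a PHP CLI process normally shows 'php' somewhere in its command line):

        # candidate processes, with CPU usage and start time
        ps aux | grep -v grep | grep php

        # or watch for the busy one: press P to sort by CPU, c for full command lines
        top

        # live outbound connections, to catch the API calls in the act (run as root)
        netstat -tnp | grep ESTABLISHED

        # once identified, stop it by PID
        kill <pid>          # kill -9 <pid> if it ignores SIGTERM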

  • How to find the reason for a weekly downtime on an Ubuntu web server hosted by AWS?

    - by IceSheep
    We started monitoring our web server using Pingdom and found out that we have a downtime of a few minutes every Sunday at 0:00 UTC. The test runs every minute and checks if a successful HTTP response (code 200) is returned on port 80. The test fails due to a timeout (no response after 30 seconds).

    Here's what we've already checked, without success:

    - Since we run our web server behind a load balancer, I've set the Pingdom test on both the load balancer's public DNS and the web server's public DNS in order to find out if there's a problem with the AWS load balancer; both tests return the same result.
    - We set up Munin on our web server. Everything looked fine even after the failure. Since the last failure lasted only 2 minutes, I suppose Munin couldn't capture a potential problem (it only checks every 5 minutes).
    - I have checked /var/log/apache2/error.log and /var/log/syslog for suspicious entries.
    - I have checked /etc/cron.weekly and /etc/crontab for suspicious entries.
    - I have searched for files created or last modified between 0:00 and 0:15 using this method (nothing found):

          touch -t 201209020000 start
          touch -t 201209020015 end
          find / -newer start -and ! -newer end

    Has anybody experienced a similar problem? Any proposals on how to find the reason for this behavior? It's Ubuntu 10.04 LTS running on an AWS m1.large instance. Thanks!
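
    Two further places worth checking for a strictly weekly midnight event (standard Ubuntu paths; the logrotate guess is an assumption based on the timing, since the stock Apache logrotate job rotates weekly and reloads Apache from its postrotate script):

        # cron executions around the failure window (adjust the date pattern)
        grep CRON /var/log/syslog | grep 'Sep  2 00:0'

        # what the Apache logrotate job does, and when logrotate last ran
        cat /etc/logrotate.d/apache2
        cat /var/lib/logrotate/status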

  • How can I get write permission for the Web (Inetpub) directory on a new Win 7 machine?

    - by marcipollo
    I mirror my Web site on my laptop, and am trying to move the mirror site to a new laptop. I copied the files to the Inetpub directory and can view them perfectly, but they are read-only (the check mark is grey, not black), and I cannot change the permission. When I uncheck the read-only attribute on the Inetpub directory and click "Apply", it displays a dialog box stating that I need administrative permission to change the attributes (I am logged in as an administrator). When I click "Continue", it pops up another dialog box saying access is denied to the attributes of the file:

        c:\inetpub\custerr\en-us\500-100.asp

    That dialog box has an "Ignore" button, and if I click that, it appears to work through the directory tree setting the permissions. It leaves all of the files (leaves) set to read-write, but the directories remain read-only. I am using 64-bit Windows 7, and I stopped the IIS service while doing all of this. Might it have something to do with the fact that I copied the files from a different machine in the workgroup (my old laptop)?
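
    One thing that may help untangle this: the read-only flag on directories is largely cosmetic in Windows, and what usually bites after copying from another machine is the NTFS ACL that came along with the files. A sketch of resetting both from an elevated command prompt (C:\inetpub is assumed; YourUser is a placeholder for your account):

        :: clear the read-only attribute on files and directories recursively
        attrib -r C:\inetpub\* /s /d

        :: reset ACLs to the inherited defaults, then grant your account modify rights
        icacls C:\inetpub /reset /t
        icacls C:\inetpub /grant YourUser:(OI)(CI)M /t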

  • Urgent: how to deny read access to an ExecCGI directory

    - by Malvolio
    First, I can't believe that that isn't the default behavior. Second, yikes! I don't know how long my code's been hanging out there, with all sorts of cool secret stuff, just waiting for some hacker who knows Apache better than I do.

    EDIT (and apology): Well, this is sort of embarrassing. Here's what happened: we had some Python scripts available to the web at /aux/file.py, which were, not surprisingly, at /var/www/http/aux. Separately, we were running an app server, and Apache proxies through to it at /servlets/. A contractor had constructed the WAR file by bundling up all the generated files, including the Python files (which are in a directory also called aux, not surprisingly), so if you typed in /servlets/aux/file.py, the web server would ask the app server for it and the app server would just supply the file. It was the latter URL that I happened to type in by accident this morning and, lo, the source appeared.

    Until I realized the sheer unlikelihood of what I had done, the situation was rating about 8.3 on the sphincter scale. After a tense half-hour or so I realized that it had nothing to do with the CGI (and that serving files that were also executable would be not only foolish but also impossible), and I was able to address the real problems.

    So -- sorry, everybody. Let the scorn-fest commence.
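
    For anyone landing here with the question in the title, a belt-and-braces sketch for an Apache 2.2-style config (the directives are standard; the paths are placeholders): let the CGI handler own the scripts in the one directory that should execute them, and refuse to serve script source anywhere else:

        <Directory /var/www/http/aux>
            Options +ExecCGI
            AddHandler cgi-script .py
        </Directory>

        # elsewhere, block direct reads of the source tree entirely
        <Directory /path/to/source>
            Order deny,allow
            Deny from all
        </Directory>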

  • ASP.NET, IIS7 and IE8 caching?

    - by jdege
    We're suddenly having problems with some of our sites serving old versions of .css and .js files to the browser. Generally, these problems go away when the user clears the browser cache. Is there something we can do, either in the code or in IIS7, to convince the browser not to use the cached files?

    In our weirdest case, we have one customer whose users hit our site and get an old version of a js file. They clear the cache, load the page, get the current version, and the page runs fine. Then they load the page again, and suddenly have the old version again. Any ideas as to how that might be happening? I can think of three:

    1. The browser is somehow holding on to the old version when we clear the cache, and is putting it back in the cache before the second page load.
    2. One of our servers has an old version of the file, and while the first page load after a cache clear pulls it from one of the servers with the current version, second and subsequent page loads pull it from the server that has the old version.
    3. The first load after a cache clear goes straight to our servers, while subsequent loads pull the file from the cache on the customer's web proxy.

    I have to say, all three of those scenarios seem outlandishly unlikely, but it's a repeatable behavior. Any ideas?
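
    Whatever the root cause turns out to be, making the URL change whenever the file changes sidesteps every cache in the chain (browser, proxy, and any stale server copy). A sketch of the two usual pieces in IIS7 (the clientCache element is standard IIS7 configuration; the ?v= token is an illustrative convention, not an existing API):

        <!-- web.config: cap how long browsers and proxies may reuse static files -->
        <system.webServer>
          <staticContent>
            <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="0.01:00:00" />
          </staticContent>
        </system.webServer>

        <!-- markup: bump the token on each deployment so old copies are never re-served -->
        <script src="/js/app.js?v=20120901"></script>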

  • Revamping an old and unstable IT-solution for a customer?

    - by cmbrnt
    I've been given the cumbersome task of totally redoing the IT infrastructure for a customer's office. They are currently running Windows XP all over, with one computer acting as a file server with no control over which users have access to which files, and so on. To top it off, this file server also functions as a workstation, which means it gets rebooted every time the user notices some sluggish behavior or experiences problems with Flash games. To say the least, this isn't working for them.

    Now, I've got a very slim budget, but I need to set up a new server, and I wish to run Windows Server 2008 on it. I also need the ability to access the network remotely via VPN. Would it be a good idea to install VMware ESXi 4.1 onto the new server, and then run Windows Server 2008 as well as a separate Debian install for OpenVPN on it? I don't like the Domain Controller for the future AD also running a VPN server, because of stability issues when something goes to hell with either of them. There will be no redundancy, though.

    However, I'm not sure if there is something to gain by installing a VPN solution on the Windows Server itself when it comes to accessing file shares on the network via VPN. I don't know how to enable users logging in via the VPN to access the remote files, since they will be accessing the network from their own home computers (which is indeed a really bad idea, but this is what I've got to work with). They won't be logged in to the Windows domain, but rather their home workgroups. I need to be able to grant access to files in certain directories based on the logged-in AD user, but not every computer will necessarily be configured to log into the domain. I'm not sure how to explain this in a good way, but I'd be happy to clarify if something's not clear.

    Any help would be great, because I've got a feeling that I can't do this without introducing a bunch of costly new rules when it comes to their IT solution. I'd rather leave that untouched and go on my merry way to the next assignment.

  • How can I use apt-get to resolve package dependencies when there are multiple versions in the repository?

    - by user1165144
    I've a package a-package.deb which depends on b-package.deb in version 1.0. Everything works fine. But now a b-package in version 1.1 gets added to the repository. I'd expect apt-get to install a-package and version 1.0 of b-package. What really happens is that a-package won't get installed:

        # apt-get install a-package
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming. The following information may help
        to resolve the situation:

        The following packages have unmet dependencies:
         a-package : Depends: b-package (= 1.0) but 1.1 is to be installed
        E: Unable to correct problems, you have held broken packages.

    Is there a workaround to fix this behavior? Is there other software I could use that can handle the dependencies as defined?
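
    apt-get resolves a bare package name to its candidate (normally newest) version and will not fall back to an older one on its own, but you can name the required version explicitly in the same command. A sketch using the package names from the question:

        apt-get install a-package b-package=1.0

    Pinning b-package at 1.0 in /etc/apt/preferences.d would make the choice permanent rather than per-invocation.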

  • Sharepoint web part fails intermittently

    - by pringly
    I have a MOSS 2007 environment with two web servers and a DB server, load balanced between the two web servers. I deployed a web part recently, which worked fine for a while but failed on web server 2 after a day. When it fails, it gets the error message:

        'A Web Part or Web Form Control on this Page cannot be displayed or imported. The type could not be found or it is not registered as safe'

    Once it has failed, it will stay that way until an IIS reset is done. The other web server never fails. I tried to force the second web server to fail to recreate the issue and have been unable to do it: I placed it under heavy HTTP traffic and it handled it fine, put it back in the pool, and it failed again after about 7 hours.

    Also, if I remove the .dll for the web part from the affected web server, the web part doesn't stop working. Is this normal behavior? I checked the bin directory for the site and the global assembly, and there is no other copy of the .dll anywhere else on the server. And when checking the web part gallery, the web part still appears in the gallery after it has failed, but when trying to add a new web part, the .dll won't be listed.

    I have no idea how to continue troubleshooting from here, or even how to fix it. Any ideas?

  • Changing farm account in Sharepoint 2010

    - by user55709
    After changing the farm account to a domain user account, I get the following error when trying to access the Central Administration page: "Cannot connect to configuration database".

    After I realized the headache might not be worth it, I decided on a reinstall using the following SP user account guidelines: http://technet.microsoft.com/en-us/library/ee662513.aspx

    After getting everything up, I am getting an error when using the designated farm account on the Central Admin website's Manage Service Accounts page: "Access denied". If it is the farm administrator, why would I not be able to manage service accounts? I am able to access the other parts of the admin site. Also, when logging in with the farm account, it lists me as a "system account", not the domain account which I used to log in. Am I missing something, or is this normal behavior? Am I not supposed to log in with the farm account?

    When I log in with the setup account (also a domain account), I can access everything with no errors on the site. The only difference between the two accounts is that the setup account has local admin privileges on the SharePoint farm server. If you notice, those privileges are not necessary for the farm account according to the article cited.

  • Windows Displays Double the Actual Installed Physical Memory

    - by Andrew Barber
    I have a server on which I've installed Windows Web 2008 R2, and it reports double the physical memory actually installed. In msinfo32, "Installed Physical Memory" shows as 2x whatever the actual installed amount is, though "Total Physical Memory" shows the correct amount. The "System" info window shows installed memory as 2x, with the correct amount in parentheses listed as the "usable" amount.

    This server mistakenly had Windows Web 2008 (32-bit) installed on it just previously, and that OS reported the same faulty information as Win2K8R2 is reporting. The BIOS reports the correct amount, memtest was run on this server before installation, and a previous Windows 2000 instance installed on this system also reported the correct amount, as I recall. Server operation seems fine as well (it's only trying to use the correct amount of memory).

    The server is a generic pizzabox running on a SuperMicro X6DVL-EG with dual Xeon 3.2s. The memory installed is four matching mt18vddf12872g-335c3 sticks (1GB PC2700 DDR ECC REG CL2.5). This behavior occurs whether two or all four are installed.

    So, has anyone seen something like this before? Any idea what's causing it, and how concerned I should be about it? Everything else seems good so far, and I'll be upgrading the memory before putting the server into service, but I don't want to spend too much time/money/effort on the server if something odd is going wrong here.

    UPDATE: There was a question I ran into regarding memory sparing in the BIOS and a possible (buggy) effect thereof; however, flipping that bit back and forth in the BIOS revealed that isn't the issue. Still a bit flummoxed about this one, though I still have seen no negative impacts.

    Post-Answer Update (January 13, 2011): Upgrading the system with new, larger memory has fixed this issue.

  • External HDD connecting via USB disconnects wireless LAN connection

    - by Kensai
    Strange problem. I have this MEDION Akoya PC that has a dedicated bay to slide in an external HDD sold separately. It's very handy indeed, because the slot provides a fast USB 3 connection and power to the HDD unit, without extra cables.

    All works fine except for one show-stopper: it disconnects me from the router once I slide in the unit and it powers up. The moment I connect the unit, the three or four WiFi networks I normally see in my neighborhood disappear, and my own connection to the router loses its signal strength (no Internet traffic is possible). After a while it throws me off that one as well, never to reconnect as long as the unit is powered. Once I disconnect the HDD, the various signals come back and it automatically reconnects to my own.

    What gives? Are we looking at a serious design fault by MEDION here? Does the spinning of the HDD on top of the PC cause electromagnetic interference strong enough to throw off my WiFi connectivity? Is it a simple USB problem? Some kind of strange hardware conflict? Where should I look?

  • How to protect folder privacy against unethical network administrators? [closed]

    - by Trevor Trovalds
    I just need a technical solution to keep my group's shared passwords, projects, work, etc. safe. Our network has Active Directory with public/groups/users shares and NTFS permissions, under Windows Server 2003, which will soon migrate to Windows Server 2008 R2. Our IT crowd is small, consisting of 2 DBAs, 4 designers, 6 developers (including me), 2 netadmins and (a lot of) tech supporters; everyone has local admin rights. Those 2 network admins weren't the ones who set the network up; they just took over recently when the previous ones quit.

    We usually find them laughing at private content from users stored in the groups AD share, sabotaging documents that don't match their personal tastes and, finally, this week we found out they stole a project we (developers and DBAs) were finishing and, long before, had presented it to the CEO as theirs without us knowing.

    I'm a systems analyst, and initially my group decided to store critical content, like shared passwords, inside encrypted .zip files. Unfortunately we couldn't do the same for the other hundreds of folders and files, which included the stolen project, because the zipping process would take too long for every update. We also tried an encrypted Subversion repository under SSL, but there are many dummies (~38 atm) involved in the projects who have trouble using TortoiseSVN when contributing, and very often we had to fix messed-up updates.

    Well, I think these two give the idea of what we've been trying to reach. So, is there a practical "individual" protection for our extensive data, or can my hope already be euthanized?

    P.S.: Seriously, at the place where I live/work, political corruption has gone the wildest, so reporting them is likely impracticable. Moreover, both netadmins have a strong "political bond" with the CEO and the President, hence their lousy behavior and our failed attempts to report them.
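
    For the shared-passwords piece specifically, one low-friction option is an archive format with built-in AES encryption that also hides file names, so the update cycle stays quick and nothing readable sits on the share. A sketch with 7-Zip (the flags are real 7z options; the archive name, path and password are placeholders):

        :: -mhe=on also encrypts the header, so even the file list is unreadable without the password
        7z a -t7z -mhe=on -pPASSWORD secrets.7z D:\groups\dev\passwords\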

  • Load balancing is unusual in Apache round-robin mode when one of the Tomcat servers is brought down

    - by srayker
    We are facing unusual behavior with round-robin load balancing on Apache when one of the Tomcat servers is brought down.

    Our setup: we have 2 Apache web servers on the front end using the mod_jk module, load balancing with round robin for load distribution. We have also enabled session stickiness. Behind them are 4 Tomcat servers on which the applications run.

    Sometimes under heavy load, if there is slowness in our DB tier, we find that eventually one of the Tomcats goes into a hung state and needs a restart. The moment we bounce that Tomcat, we see a spike in requests on one of the other servers, which then also goes into a hung state and needs a restart. Eventually all the servers hit the same condition.

    What beats me is why Apache transfers the whole load to one server instead of distributing it. We are now trying worker.balancer.method=B to see if this helps resolve our issue. Any thoughts on resolving the issue are greatly appreciated.

    regards, srayker
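
    For reference, a sketch of the relevant mod_jk workers.properties knobs (the directives are standard mod_jk ones; worker and host names are placeholders):

        worker.list=balancer
        worker.balancer.type=lb
        worker.balancer.balance_workers=tc1,tc2,tc3,tc4
        worker.balancer.sticky_session=1
        # B = busyness: prefer the worker with the fewest in-flight requests, which
        # spreads a recovering node's backlog instead of dumping it on one server
        worker.balancer.method=B

        worker.tc1.type=ajp13
        worker.tc1.host=tomcat1.example.com
        worker.tc1.port=8009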

  • Applications are being opened by IE instead of running normally

    - by Star
    I rewrote the question to add everything that I have tried so far. Many of my applications are being opened by Internet Explorer (not all). For example, when I run firefox.exe (from a shortcut), IE runs instead, with the following URL: http: // %22d/ Browser/firefox.exe%22 (I added spaces to prevent link creation). The shortcut target is: "D:\Browser\firefox.exe".

    - When I attempted to open firefox.exe from its folder, the results were the same as the previous test.
    - I attempted to open it from cmd: I navigated to the Firefox path, then typed firefox.exe. The result was the same, except that the URL was: http: // Firefox.exe/
    - When I just typed firefox, the resulting URL was: http: // Firefox/ (is it some kind of parameter or something??)
    - Trying the same with Chrome produced the same results as the previous tests.
    - I tried creating a new user (administrator), but the problem is still there.
    - I tried every registry key with exe in it (not sure if I tried them all): no change.
    - I tried removing IE, but it came back by itself somehow; meanwhile, while IE was removed, Firefox and its fellow apps gave me the "Open with" window.
    - I tried reinstalling the applications, but it was just no use.

    Timeline (as requested by @Daredev): I don't know when it happened, because the computer belongs to the company I work for and it was like that since I got it (the IT there gave up on the problem long ago!). Applications that were already installed: Firefox and XPS Viewer. Applications that stopped working after the problem: everything that uses browsing (MS Help Viewer, XPS Viewer, Firefox (even after I reinstalled it), Opera, Chrome), or so I thought, but after installing Maxthon and Comodo Dragon this theory was blown away.

    System info:
    1. Windows XP Professional Service Pack 3
    2. System fully patched: yes
    3. Anti-virus up to date: yes
    4. Same behavior when booting into safe mode: yes
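
    The classic cause of this exact symptom is a corrupted .exe file association, where the exefile open command no longer passes the program through unchanged and IE ends up handling it. A hedged check and fix from an interactive elevated command prompt (these are the standard keys and the stock default value; in a .bat file the percent signs would need doubling):

        reg query "HKCR\exefile\shell\open\command"
        :: the (Default) value should be exactly:  "%1" %*
        reg add "HKCR\exefile\shell\open\command" /ve /t REG_SZ /d "\"%1\" %*" /f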

  • Nagios: turn off service checks/display on down hosts

    - by Alien Life Form
    I want to tweak Nagios in such a way that all checking stops (with services not displayed, or displayed as unknown) for any down node. Said differently, I only want to see one alert for a down host instead of 1 (down) + n (1 for every service). Note that I am interested in service display/status, not only in turning off notifications.

    Rationale: we use the Nagios Firefox/Chrome plugin to monitor status, and Nagios' behavior is too noisy, giving readings like these (because every node has 20 services):

        3 down, 1 unreachable, 4 warnings, 87 critical

    This means that the 7 critical services on up nodes (where the problem is with the service) are swamped in a slab of red services which are critical only because they sit on a node that's down/unreachable. What I'd rather like to see is:

        3 down, 1 unreachable, 80 unknown, 4 warnings, 7 critical

    Or even:

        3 down, 1 unreachable, 4 warnings, 7 critical

    I have looked into service dependencies, but I did not find a way to express "make all services on a live host dependent on the status of the host check". I found the problem discussed here, where one of the participants thought it was a Nagios bug, and here, where one of the participants thought it was "as designed". As things are, I am just interested in the effect, much less in the design philosophy.

    Note that this Nagios is checking hundreds of nodes, so the maintainability of the solution is also important. TIA and cheers.
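
    One widely used workaround (plain Nagios has no built-in "depend on the host check" option) is to give every host a PING service and make all other services depend on it, so their checks are skipped while the host is unreachable. A sketch of the object definition (hostgroup and service names are placeholders; the regex line assumes use_regexp_matching=1 in nagios.cfg, and you may need to exclude PING itself to avoid a circular dependency):

        define servicedependency {
            hostgroup_name                  all-hosts
            service_description             PING
            dependent_hostgroup_name        all-hosts
            dependent_service_description   .*      ; regex: every other service
            execution_failure_criteria      c,u     ; skip checks while PING is CRITICAL/UNKNOWN
            notification_failure_criteria   c,u
        }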

  • apt unable to install particular version of package (but gives no error)

    - by Arc2009
    I'd like to install a particular version of libstdc++6 with the following command:

        # apt-get install libstdc++6=4.9.0-8 -V
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
           libstdc++6 (4.8.2-16)
        0 upgraded, 0 newly installed, 0 to remove and 216 not upgraded.

    It gives no error, but apt keeps the version that was already installed, and it also refers to this package as "extra". There are no apt preferences set in /etc/apt/preferences.d, and the desired version is definitely available through our local mirror (if I run "apt-get download libstdc++6=4.9.0-8", it downloads exactly the desired version).

    System info:

        # cat /etc/issue.net
        Debian GNU/Linux jessie/sid
        # uname -a
        Linux www27 3.13-1-amd64 #1 SMP Debian 3.13.7-1 (2014-03-25) x86_64 GNU/Linux
        # dpkg -l | egrep -i "apt|dpkg"
        ii apt 0.9.16.1 amd64 commandline package manager
        ii dpkg 1.17.6 amd64 Debian package management system

    Any suggestions?
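
    If the goal is to hold libstdc++6 at 4.9.0-8 regardless of which version the resolver prefers, an apt pin makes that explicit (standard preferences syntax; the file name is arbitrary):

        # /etc/apt/preferences.d/libstdcxx
        Package: libstdc++6
        Pin: version 4.9.0-8
        Pin-Priority: 1001

    A priority above 1000 lets apt switch to the pinned version even if it is not the newest; apt-cache policy libstdc++6 shows whether the pin took effect.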

  • Too many connections to Sql Server 2008

    - by Luis Forero
    I have an application in C# on .NET Framework 4.0. Like many apps, this one connects to a database to get information; in my case the database is SQL Server 2008 Express, running on my machine. In my data layer I'm using Enterprise Library 5.0.

    When I publish my app on my local machine (classic app pool; Windows Professional, IIS 7.5), the application works fine. I'm using this query to check the number of connections my application creates while I'm testing it:

        SELECT db_name(dbid) as DatabaseName,
               count(dbid) as NoOfConnections,
               loginame as LoginName
        FROM sys.sysprocesses
        WHERE dbid > 0
          AND db_name(dbid) = 'MyDataBase'
        GROUP BY dbid, loginame

    When I start testing, the number of connections starts growing, but at some point the maximum number of connections is 26. I think that's OK, because the app works.

    When I publish the app to TestMachine1:
    • XP Mode virtual machine (Windows XP Professional)
    • IIS 5.1
    it works fine and the behavior is the same: the number of connections to the database grows to 24 or 26, and after that it stays there no matter what I do in the application.

    The problem: when I publish to TestMachine2 (classic app pool):
    • Windows Server 2008 R2
    • IIS 7.5
    and start to test the application, the number of connections to the database starts to grow, but this time it grows very rapidly and doesn't stop at 24 or 26; it keeps growing until it reaches 100, and the application stops working at that point.

    I have checked for any difference in the publications, especially between Windows Professional and Windows Server, and they seem to have the same parameters and configurations. Any clues why this could be happening? Any suggestions?
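
    Connections climbing to exactly 100 and then failing is the classic signature of leaked connections: the connection pool's default Max Pool Size is 100, and a connection that is opened but never disposed only returns to the pool when the garbage collector happens to finalize it, which can differ between machines. A sketch of the pattern to audit for in the data layer (a minimal illustration, not your Enterprise Library code; the same rule applies to anything that hands out connection or reader objects):

        using System.Data.SqlClient;

        class ConnectionAudit
        {
            static void Query(string connectionString)
            {
                // using guarantees Dispose() even on exceptions, returning the
                // connection to the pool immediately instead of waiting for the GC
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("SELECT 1", conn))
                {
                    conn.Open();
                    cmd.ExecuteScalar();
                }
            }
        }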

  • Notepad and Regedit do not launch from C:\Windows or C:\Windows\System32, but will from C:\Windows\SysWOW64

    - by lordhog
    My work machine was recently updated from Windows XP 64-bit to Windows 7 64-bit, and I have found a few odd things that I cannot figure out.

    If I type notepad.exe or regedit.exe in the Run dialog, the applications will not launch. When executing regedit.exe from the Run dialog, UAC does display, asking for admin privileges, but the application never launches. Also, if I go to the Start menu and click on Notepad from there, Notepad never launches either. Other programs seem to be OK, like mspaint, wordpad, etc.

    Now, if I launch Notepad or Regedit from C:\Windows\SysWOW64, then these applications will run, though I think these are the 32-bit versions and not the 64-bit ones.

    I am not sure what is going on. Has anyone seen this behavior before, and how do I fix it? I have Windows 7 64-bit at home and I don't have this issue. I even copied Notepad.exe from home, placed it in the C:\ folder, and tried to launch it from there, and it still doesn't work.

    Regards, Mark
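
    One thing worth ruling out (an educated guess, not a confirmed diagnosis): an Image File Execution Options entry can silently redirect a named executable to a "Debugger" that no longer exists, and it matches by file name, which would also explain why the copied Notepad.exe fails from C:\. A check from an elevated command prompt (standard keys; if a Debugger value you did not set shows up, deleting it is the usual fix):

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\notepad.exe"
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\regedit.exe"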
