Search Results

Search found 11331 results on 454 pages for 'resource monitor'.

  • MySQL cmd prompt: import data.sql

    - by udhaya
    I want to import an SQL file using the Windows command prompt. I open cmd, navigate to the xampp/mysql/bin folder and run mysql, and this is what happens:

        D:\Program Files\xampp\mysql\bin>mysql
        ERROR 1045 (28000): Access denied for user 'ODBC'@'localhost' (using password: NO)

        D:\Program Files\xampp\mysql\bin>mysql -u root -p -h localhost dev1base < dev1base.sql
        Enter password:
        D:\Program Files\xampp\mysql\bin>

        D:\Program Files\xampp\mysql\bin>mysql -u root
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 104
        Server version: 5.0.51a Source distribution

        Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

        mysql>
        mysql> -h localhost dev1base < dev1base.sql
            ->
            ->
            ->
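
    For what it's worth, a sketch of the two usual ways to run such an import (assuming the dev1base database already exists). The key point is that the < redirection is shell syntax; typed at the mysql> prompt it is just the start of an unfinished SQL statement, which is why the client keeps printing -> continuation arrows:

        REM From the Windows shell, redirect the file into the client:
        D:\Program Files\xampp\mysql\bin>mysql -u root -p dev1base < dev1base.sql

        REM Or from inside the mysql client, use SOURCE:
        mysql> USE dev1base;
        mysql> SOURCE dev1base.sql;

    If the second transcript line above really did return silently to the prompt, the import may in fact have succeeded; running SHOW TABLES; in dev1base is a quick way to confirm.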

  • Avoiding DNS timeouts when a DNS server fails

    - by Neil Katin
    We have a small datacenter with about a hundred hosts pointing to 3 internal DNS servers (BIND 9). Our problem comes when one of the internal DNS servers becomes unavailable: at that point all the clients that point to that server start performing very slowly. The problem seems to be that the stock Linux resolver doesn't really have the concept of "failing over" to a different DNS server. You can adjust the timeout and number of retries it uses (and set rotate so it will work through the list), but no matter what settings one uses, our services perform much more slowly if a primary DNS server becomes unavailable. At the moment this is one of the largest sources of service disruptions for us. My ideal answer would be something like "RTFM: tweak /etc/resolv.conf like this...", but if that's an option I haven't seen it. I was wondering how other folks handle this issue. I can see 3 possible types of solutions:

    1. Use linux-ha/Pacemaker and failover IPs (so the DNS VIPs are "always" available). Alas, we don't have a good fencing infrastructure, and without fencing Pacemaker doesn't work very well (in my experience, Pacemaker without fencing lowers availability).
    2. Run a local DNS server on each node, and have resolv.conf point to localhost. This would work, but it would give us a lot more services to monitor and manage.
    3. Run a local cache on each node. Folks seem to consider nscd "broken", but dnrd seems to have the right feature set: it marks DNS servers as up or down, and won't use 'down' servers.

    Anycasting seems to work only at the IP routing level, and depends on route updates for server failure. Multicasting seemed like it would be a perfect answer, but BIND does not support broadcasting or multicasting, and the docs I could find seem to suggest that multicast DNS is aimed more at service discovery and auto-configuration than at regular DNS resolving. Am I missing an obvious solution?
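
    For reference, a minimal sketch of the resolver tuning mentioned above (these are real glibc resolver options; the addresses are placeholders):

        nameserver 10.0.0.1
        nameserver 10.0.0.2
        nameserver 10.0.0.3
        options timeout:1 attempts:2 rotate

    Even at timeout:1, each fresh process still burns a full second on its first query to a dead server, which is why tuning alone softens the problem rather than solving it.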

  • Spotlight actually searching every file on "This Mac"

    - by Cawas
    I know of 2 ways to search for any file on your machine using Finder (some say it's Spotlight) and no Terminal. To preempt answers/comments about Terminal: I consider it either for scripting or as a last resort; it's not practical for a lot of uses. For instance, if you want to find something to attach to a mail, or embed in iTunes or any other app, you can just drag and drop one or many of the results -- definitely not practical from Terminal. There are many use cases, but the focus here is the graphical user interface. The 2 ways basically are:

    1. Press Cmd + Opt + Spacebar and type in your search. Press the + button, select "System files" and "are included". This is so far my preferred way, but I'm not sure it goes through every file.
    2. Open Finder, press Cmd + Shift + G and/or select just one folder. Type in your search and select the folder rather than "This Mac". This will bring up files not shown in "This Mac" if you select a folder outside of the default scope.

    Thing is, neither of those is really convenient, and neither has the nice presentation of regular Spotlight, which you get from Cmd + Spacebar and just typing. And, as far as I've heard, Spotlight's default behavior in Tiger was actually to find files anywhere. So, is there any way to make the process significantly simpler? Maybe some tweak, configuration, or a really good Spotlight alternative? I'd rather keep it simple and tweak Spotlight.

  • Converting DisplayPort and/or HDMI to DVI-D?

    - by Jeff Atwood
    Newer Radeon video cards come with four ports standard: DVI (x2), HDMI, and DisplayPort. If I want to run three 24" monitors, all of which are DVI only, from this video card -- is it possible to convert either the HDMI or DisplayPort to DVI? If so, how? And which one is easier/cheaper to convert? I did a little research and it looks like there isn't a simple "dongle" method. I found this DisplayPort to DVI-D Dual Link Adapter, but it's $120; at that point it's almost cheaper to buy a new monitor that accepts HDMI or DisplayPort input! There's also an HDMI to DVI-D adapter at Monoprice, but I'm not sure it will work, either. AnandTech seems to imply that you do need the DisplayPort-to-DVI:

        The only catch to this specific port layout is that the card still only has enough TMDS transmitters for two ports. So you can use 2x DVI or 1x DVI + HDMI, but not 2x DVI + HDMI. For 3 DVI-derived ports, you will need an active DisplayPort-to-DVI adapter.

  • Best practices to block social sites

    - by adopilot
    In our company we have around 100 workstations with internet access, and day by day the situation gets worse from the perspective of people using that access for private jobs and wasting time on social sites. Honestly, I'm not in favor of blocking sites like Facebook, YouTube, and the like, but day by day my colleagues don't finish their tasks, and whenever I look at their monitors they are running IE or Mozilla, chatting, and things like that. Separately, I'd like to block YouTube at times when our internet access is very slow. Here are my questions:

    1. Do other companies block social sites?
    2. Do I need a dedicated device for this, like a hardware firewall or a super-expensive router? Or can I do it with my existing FreeBSD 6.1 self-made router with two LAN cards and NAT configured to act as a router?

    I was trying to do this with ipfw as a router firewall, but without success. My rules look like:

        ipfw add 25 deny tcp from 192.168.0.0/20 to www.facebook.com
        ipfw add 25 deny udp from 192.168.0.0/20 to www.facebook.
        ipfw add 25 deny tcp from 192.168.0.0/20 to www.dernek.
        ipfw add 25 deny udp from 192.168.0.0/20 to www.dernek.
        ipfw add 25 deny tcp from 192.168.0.0/20 to www.youtube.
        ipfw add 25 deny udp from 192.168.0.0/20 to www.youtube.com
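
    A sketch of why hostname rules tend not to work, plus one way around it on FreeBSD's ipfw: hostnames are resolved once, to a single address, when the rule is loaded, while large sites answer from many addresses. The table mechanism below is standard ipfw; the CIDR blocks are placeholders you would have to look up and keep current:

        # Collect target networks in a lookup table
        ipfw table 1 add 69.63.176.0/20     # placeholder: a Facebook range
        ipfw table 1 add 208.65.152.0/22    # placeholder: a YouTube range

        # One rule covers everything in the table
        ipfw add 25 deny ip from 192.168.0.0/20 to 'table(1)'

    Because those address lists churn constantly, most shops do this kind of blocking in an HTTP proxy instead (e.g. Squid with a domain ACL), where the decision is made on the hostname the browser asked for rather than on IPs.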

  • MySQL Master-Master w/ multiple read slave cost effective setup in AWS

    - by Ross
    I've been evaluating Amazon Web Services RDS for MySQL and costing out potential scenarios: a simple multi-AZ read/write deployment vs. a multi-AZ MySQL master (hot standby) with additional read-only slaves. The thing I'm trying to cost-optimize is their reserved-instance vs. on-demand pricing.

    Situation 1: purchase a reserved multi-AZ extra-large high-memory (17GB RAM) instance for $5200/yr and have my application query the master all the time. The problem is, if I don't need all the resources of the 17GB RAM all the time -- and therefore especially not a hot standby at that size -- what savings could a better topology create? For example, situation 2 below.

    Situation 2: purchase a reserved multi-AZ setup using smaller master instances than above for the master-master hot-standby pair, receiving the writes only. Then create and load-balance several read-only slaves off the master, and add/remove and/or scale the read slaves up and down based on demand. This might only cost $1000 plus the on-demand usage of the read slaves.

    My thinking is: if I have a variable read-intensive application load with a low write load, the single-level topology in situation 1 means I'm paying for a lot of resources at the write level of the topology when I don't need them there. My hope is that situation 2 yields cost savings from smaller reserved instances at the master-master level, while letting me scale the read level up, down, and out according to demand. Does anyone see a downside to doing this, or know of some reason it isn't possible with RDS? Any other thoughts or advice always welcome, of course. Thanks in advance, R
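
    A sketch of the elastic part of situation 2 with the current AWS CLI (the identifiers and instance class are placeholders; RDS creates MySQL read replicas from the primary, and you simply delete them when demand drops):

        # Add a read slave when read load grows
        aws rds create-db-instance-read-replica \
            --db-instance-identifier myapp-read-1 \
            --source-db-instance-identifier myapp-master \
            --db-instance-class db.m1.large

        # Drop it again when demand falls off
        aws rds delete-db-instance --db-instance-identifier myapp-read-1 --skip-final-snapshot

    One caveat: RDS doesn't load-balance replicas for you, so the application (or a proxy in front of MySQL) has to split reads from writes and spread the reads itself.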

  • How can I fix my Vista PC's screen resolution and refresh rate?

    - by Antony Scott
    I have a media PC running MediaPortal hooked up to my HDTV via HDMI. The TV is a couple of years old now, so it only supports 1080i, which is 1920x1080@25Hz. It's connected to the PC through an HDMI-capable AV receiver. If I power up the amp (waiting for it to boot fully), followed by the TV, and finally the PC, all is well and I get a picture. If I deviate from that sequence, don't wait for the amp to boot up fully, or even switch the amp to another video input (for example, my PS3), the PC sees this and defaults the screen resolution/refresh rate to 1920x1080@60Hz -- so I end up with a blank screen. To fix this I have to use UltraVNC from another PC and change the refresh rate back to 25Hz. So, is there a way to turn off that auto-detection, or to manually define what resolutions/refresh rates the monitor can do? I'm using the on-board Radeon 3200 video and do not have any of the AMD software installed, as it seems to cause problems with video playback. So I'm looking for a native Vista fix, or possibly some third-party software.

  • DEB: "Provides:" field ignored

    - by Creshal
    I need to replace a package with a custom one, which gets its own name (foo-origpackage). To allow it to be used as a drop-in replacement, I added the Provides: origpackage line to the control file. apt-cache show foo-origpackage lists the "Provides" entry just fine. However, when I want to install a package depending on origpackage, it fails ("Package origpackage not installed"). Is there some distinction between "real" and virtual packages I'm missing?

    EDIT: To be precise, what I want to replace is xen-utils-common for Squeeze. My tao-xen-utils-common has the following control file:

        Source: tao-xen-utils-common
        Section: kernel
        Priority: optional
        Maintainer: Creshal <[email protected]>
        Build-Depends: debhelper
        Standards-Version: 3.8.0
        Homepage: http://tao.at

        Package: tao-xen-utils-common
        Architecture: all
        Depends: gawk, lsb-base, udev, xenstore-utils, tao-firewall
        Provides: xen-utils-common
        Conflicts: xen-utils-common
        Replaces: xen-utils-common
        Description: Xen administrative tools - common files (modified)
         The userspace tools to manage a system virtualized through the
         Xen virtual machine monitor. Modified for use with TAO Firewall.

    Installing xen-utils-4.0 fails, however:

        foo@bar# apt-cache showpkg tao-xen-utils-common
        Package: tao-xen-utils-common
        Versions:
        4.0.0-1tao1 (/var/lib/apt/lists/repo.tao.at_dists_stable_main_binary-amd64_Packages) (/var/lib/dpkg/status)
         Description Language:
                         File: /var/lib/apt/lists/repo.tao.at_dists_stable_main_binary-amd64_Packages
                          MD5: 7c2503f563fca13b33b4eb3cbcb3c129

        Reverse Depends:
          tao-firewall,tao-xen-utils-common
          tao-firewall,tao-xen-utils-common
        Dependencies:
        4.0.0-1tao1 - gawk (0 (null)) lsb-base (0 (null)) udev (0 (null)) xenstore-utils (0 (null)) tao-firewall (0 (null)) xen-utils-common (0 (null)) xen-utils-common (0 (null))
        Provides:
        4.0.0-1tao1 - xen-utils-common
        Reverse Provides:

        foo@bar# apt-get install xen-utils-4.0
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          xen-utils-common
        Suggested packages:
          xen-docs-4.0
        The following packages will be REMOVED:
          tao-xen-utils-common
        The following NEW packages will be installed:
          xen-utils-4.0 xen-utils-common

    Edit:

        foo@bar# apt-cache policy xen-utils-4.0
        xen-utils-4.0:
          Installed: (none)
          Candidate: 4.0.1-4
          Version table:
             4.0.1-4 0
                500 http://ftp.at.debian.org/debian/ stable/main amd64 Packages
                500 http://security.debian.org/ stable/updates/main amd64 Packages
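
    One Debian-policy detail worth checking here (standard dpkg/apt behavior, not specific to these packages): an unversioned Provides can never satisfy a versioned dependency, and the dpkg shipped with Squeeze has no versioned-Provides support at all. So if xen-utils-4.0 declares something like Depends: xen-utils-common (>= 4.0.0-1), apt will ignore the virtual package and insist on the real one. A quick way to check:

        apt-cache show xen-utils-4.0 | grep -i ^Depends

    If the dependency turns out to be versioned, the usual workaround is an equivs-built dummy package that carries the real name xen-utils-common at a high enough version number, rather than a Provides line.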

  • Apache mod_proxy with SSL not redirecting

    - by simonszu
    I have a custom server running behind an Apache reverse proxy. Since the custom server can only handle HTTP traffic, I am trying to use Apache to wrap proper SSL around it, and to add some kind of HTTP authentication. So I enabled mod_proxy and mod_ssl and modified sites-available/default-ssl. The config is as follows:

        <Location /server>
            order deny,allow
            allow from all
            AuthType Basic
            AuthName "Please log in"
            AuthUserFile /etc/apache2/htpasswd
            Require valid-user
            ProxyPass http://192.168.1.102:8181/server
            ProxyPassReverse http://192.168.1.102:8181/server
        </Location>

    The custom server is accessible from the internal network via the location specified in the ProxyPass directive. However, when the proxy is accessed from the outside, it presents the login prompt, and after successfully authenticating I get a blank page with the words "The resource can be found at http://192.168.1.102:8181/server". When I type the external URL again in an already-authenticated browser instance, I am properly redirected to the server frontend. The access.log is full of entries stating that my browser makes successful GET requests and that the proxy is happily serving the /server resource. However, what's served isn't the server's frontend, but this blank page with those words on it.
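
    That blank page is the body of a redirect the backend issued using its own internal address, and ProxyPassReverse only rewrites Location headers that exactly match the URL it was given. A hedged sketch of the usual fixes (ProxyPreserveHost is a standard mod_proxy directive; the extra ProxyPassReverse line is an assumption about what the backend puts in its redirects):

        # Forward the original Host: header so the backend builds
        # redirects against the public name in the first place
        ProxyPreserveHost On

        <Location /server>
            ...
            ProxyPass http://192.168.1.102:8181/server
            ProxyPassReverse http://192.168.1.102:8181/server
            # If the backend redirects to a hostname rather than the IP,
            # mirror that exact form too (hypothetical hostname):
            ProxyPassReverse http://internal-name:8181/server
        </Location>

    Watching the Location header with curl -sD- against the backend directly would show which form actually needs rewriting.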

  • Sniffing at work - how to detect it

    - by coffeeaddict
    Because the place I work at has some real issues (people), especially in IT and with the owner, I wonder if we are being sniffed. On a Vista 64-bit machine, is there any way to tell:

    1. From the system logs, some identification that someone might be logging into my PC, such as an admin?
    2. Something in the logs that would flag that I'm being monitored some other way?
    3. How can I be sure that my Gmail, Hotmail, and chat are not being sniffed? I know there are things like Simp, etc.; I'm talking about specific hidden system signs, either in the registry or in logs.

    Obviously I'm not going to raise any suspicion by asking our network admin -- I don't trust anyone at this company. Is there a good way to monitor for this as an end user? Could someone log in and basically watch me work, and if so, would there be any goodies left behind for me to find out this has happened -- other than visual signs, which would not be present -- maybe some running processes?
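
    A couple of end-user checks that don't need the admin's help (both are stock Windows tools; they surface symptoms, not proof):

        REM List processes that have network connections open (run elevated for -b)
        netstat -abno

        REM Show recent logon events recorded in the Security log (4624 = successful logon)
        wevtutil qe Security /q:"*[System[(EventID=4624)]]" /c:10 /rd:true /f:text

    The caveat: sniffing done at the network level (a mirror port, a proxy, the gateway itself) leaves no trace whatsoever on your machine, so a clean local result means little.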

  • What would cause an IIS6 website to be unavailable remotely randomly for a few minutes at a time?

    - by jskunkle
    The website is served by IIS 6 on Windows Server 2003. We never saw this problem once in months of beta. We made the new site live yesterday; it's getting more traffic than in beta, but not that much -- resource utilization and speed on the server are fine. Today the site has been unavailable remotely a few (4?) times, for a few minutes at a time. If you visit any page on the site, nothing is ever returned and eventually the request times out. While this is happening, I can connect to the server via remote desktop, and the site loads fine from the live URL when running a browser on the server locally. Other websites on the server continue to function fine the entire time (using the same instance of IIS, different app pools). Other computers on the same network can't access the website either. Other than not serving content, the server seems to behave normally -- scheduled jobs in our custom job system continue to run, etc. We've looked at the IIS logs quickly and we don't see any traffic out of the ordinary -- no traffic spikes, etc. Any ideas? Thanks, Shane
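
    Two cheap things to capture the next time it happens (standard Server 2003 tools and paths). "Hangs remotely, fine locally, IIS logs quiet" often points below IIS, at HTTP.sys or the TCP stack, and HTTP.sys keeps its own error log that never shows up in the site logs:

        REM Look for piles of half-open or queued connections on port 80
        netstat -an | find ":80"

        REM HTTP.sys-level rejections and timeouts (connection limit, queue full, etc.)
        type %windir%\System32\LogFiles\HTTPERR\httperr1.log

    If HTTPERR shows entries such as Connections_Refused or Timer_ConnectionIdle during the outages, that narrows things considerably.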

  • I accidentally hijacked my localhost

    - by Zach L
    Opening localhost in the browser brings up a local webpage (examplePage) after I played with some config files a while back, and I can't figure out how to restore the default behavior.

    Background: I have XAMPP installed on my Windows 7 machine, and a webpage at c:/xampp/htdocs/examplePage. A couple weeks ago I was on a mission to get site-root-relative URLs (/resource) to work, so I played around with a bunch of Apache conf files, including httpd.conf and httpd-vhosts.conf, and was also messing with the Windows hosts file. I gave up at some point, didn't document exactly what I did, and have since probably forgotten some of it. Many of my changes stemmed from suggestions in this StackOverflow post.

    What I've tried:

    1. I commented out my additions to the hosts file.
    2. I turned off XAMPP (thus hopefully negating any Apache config file effect).
    3. I reverted to my original DocumentRoot in httpd.conf anyway (xampp/htdocs).

    localhost still displays examplePage -- even with XAMPP turned on, my reverted DocumentRoot isn't taking effect. Does anyone know what I may have done and how I can fix it?

    Update: It's been resolved -- thank you everyone so much. In Task Manager there were a couple of instances of httpd.exe (Apache HTTP Server). I ended those, opened XAMPP, and restarted Apache. All references to examplePage in my .conf files that I could find had been commented out or removed; I imagine the old versions were still in effect for some reason, and manually ending the Apache processes fixed it. As a point of interest, it's still a mystery why those processes were running -- I cannot reproduce that situation. I must've stumbled upon a XAMPP bug of some sort.
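
    For anyone hitting the same symptom, a sketch of the check-and-kill sequence from the update (stock Windows commands; 1234 stands in for whatever PID netstat reports):

        REM See what is actually answering on port 80
        netstat -ano | findstr :80

        REM Match the PID to a process, then end it
        tasklist /FI "PID eq 1234"
        taskkill /PID 1234 /F

    An orphaned httpd.exe keeps serving the configuration it was started with, even after the files on disk are reverted -- which is exactly the "my changes have no effect" behavior described above.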

  • Automounting Active Directory home drives on a Linux server on login

    - by Ethan
    I've got a CentOS 5.7 box authenticating against Active Directory through PBIS Open (the new Likewise Open), which works well. Now I'm trying to get the server to automount each user's AD home directory, located at //ad.server.dom/Shares/home directories (yes, there's a space in the path; I didn't set this up). Each user has a directory in there with the same name as the user. I've tried to get pam_mount working, but it has a series of issues on RedHat and friends, and I can't seem to get it going. The directory does need to be automounted for the server to perform its role. My reading on automount seems to suggest that there's no way to get it to do its thing with per-user authentication, though I'm happy to be proved wrong. I've looked at this resource, but it requires RedHat (thus CentOS) 6 or higher, and newer packages than I have. I can manually (as root) mount the AD directory using the command

        mount.cifs "//ad.server.dom/Shares/home directories/testuser" /home/local/AD/testuser/nfs_mount/ -o username=testuser

    and when I log in as testuser, I can see all of the sample files in the nfs_mount directory. Any tips toward the right direction would be highly appreciated. This is going to be on a server at a college, so it needs to be fairly stable, and would lead toward more Linux adoption there.
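
    A hedged sketch of an autofs route, since autofs can at least do per-user key substitution even though it can't prompt for each user's password (the map syntax is standard autofs; the compromise is a root-readable service-account credentials file rather than each user's own password, and the space in the share name is the awkward part -- the backslash escape below may need adjusting):

        # /etc/auto.master
        /home/local/AD  /etc/auto.ad  --timeout=60

        # /etc/auto.ad -- '&' expands to the lookup key (the username)
        *  -fstype=cifs,credentials=/etc/ad.creds  ://ad.server.dom/Shares/home\ directories/&

    If a service account's credentials aren't acceptable, pam_mount really is the standard answer for mounting with the user's own password at login, wart-ridden as it is on EL5.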

  • VMware - ACE, Workstation - how to manage remote clients?

    - by tom smith
    Hi. I'm exploring VMware products/services and have a few questions. As I understand it, you can use VMware Workstation to create a VM of a target machine/box/OS; let's call this VM "foo". If I have 100 client PCs in my dept, and I want to install the VM (foo) on each client and also manage the remote VM instances of foo, how can I accomplish this? Let's assume that the client machines are running Windows 7 and have the VMware Player app installed. I'm looking to do the following kinds of actions on the remote client machines:

    - Update the foo VM/image with new copies
    - Make sure that every VM "foo" has the same user, but a unique password
    - Monitor the traffic/status of each client VM "foo" on each client
    - Start/stop each client VM "foo" from the master console
    - Etc...

    Can this be accomplished? How would I do it, and what services/products would I need? I've tried talking to a few of the pre-sales guys at VMware and got nowhere, other than being told to email my questions! Googling shed more light, but I still have questions. So, if you have detailed VMware understanding, or pointers to consultants or resellers who can help, all pointers are greatly appreciated. Thanks -tom

  • Windows 7 - Windows XP - sharing - why isn't it working?

    - by durumdara
    Hi! This seems to be a "hardware" rather than a "software"/"programming" question, but I need to use this share in my programs, so it is close to programming. We had an XP-based wireless network: the server is XP Professional, the clients are XP Home notebooks. This was working well with folder sharing (with user rights, not simple sharing). Then we replaced one of the notebooks with a Win7/x64 notebook. At first it could reach the server, and the other client too. Later I went to other sites and connected to other servers and networks, and when I returned to this network, I found I could no longer connect to this server. I see none of its resources, and when I try to double-click the computer, I get a login window where anything I type is rejected -- I can never log in. The interesting part:

    - Another XP Home machine can see the server, and can log in as guest or as another user.
    - The server can see the XP Home notebook.
    - The Win7 machine can see the notebook's shared folders, and XP Home can see the Win7 shared folders.
    - The server can see the Win7 folders, BUT: the Win7 machine cannot see the server's folders -- it cannot see the resources at all.

    The Win7 machine is in the "Work" network profile; the workgroup name is not MSHOME. I've tried everything on the server -- removing the MS client, restoring it with simple sharing, setting a guest password, etc. -- but I've lost the ability to access this server from Win7. Does anyone have any idea what I should look at, or what I need to set, to access these resources and use them in my programs? Thanks for every info, link: dd
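
    One classic culprit to rule out, as a hedged sketch: Win7 defaults to NTLMv2-only authentication, which some older XP servers never negotiate successfully, and the "every password is rejected" login loop is the typical signature of that mismatch. The registry value below is the standard knob; setting it to 1 relaxes Win7 to send LM/NTLM responses (revert it if it doesn't help):

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f

    Log off and back on (or reboot) after changing it, then try browsing to the server again.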

  • Multicast image restoration with adaptive speed

    - by Clinton Blackmore
    I'm curious to know if there are any tools for restoring disk images (or even transferring files) via multicast -- for any platform, especially if the project has source available -- where the multicast rate adjusts itself on the fly. On the Mac, all multicast solutions I am aware of (such as Deploy Studio, and NetRestore before it) make use of multicast ASR (Apple Software Restore), which has one glaring deficiency: you have to set the multicast speed before you start sending a disk image over the network, and that speed is locked in. Either your clients can keep up and restore, or they can't.* It seems to me that it must be possible for the multicast server to adjust the data rate, so you basically say "start sending this image", clients connect, and if they can't keep up they tell the server so it slows down. (Likewise, I'd expect the server to try speeding up if no client is having difficulty keeping up, and I'd expect to be able to cap the maximum throughput so that other network activities aren't starved of resources.) So, what sort of tools are out there? For Linux? Windows? Is there something for the Mac I've overlooked? [It just kills me that, by the time you get multicast up and going at a speed that will restore a lab, you could have unicasted the data to all the computers and been done.]

    * There is a little leeway involved. I think individual clients can say "I missed a little bit of data" and get it, and they can opt to listen in the next time the image is sent over the network, but on the whole, if they missed it the first go-round, you have to image the machine again, and there is no time savings.
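
    One cross-platform candidate worth a look is udpcast (open source, Linux-centric, and the engine under several cloning tools), whose sender paces itself to the slowest connected receiver rather than to a preset rate -- a minimal sketch, with the interface, device, and limits as placeholders:

        # Sender: wait for 20 receivers, cap the rate at 500 Mbps
        udp-sender --interface eth0 --min-receivers 20 --max-bitrate 500m --file lab.img

        # Each client:
        udp-receiver --file /dev/sda

    The flip side of that flow control is the one you'd expect: a single slow client drags the whole session down, which is why some setups still prefer a fixed rate plus retransmission.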

  • Nvidia Linux Driver Huge Resolution

    - by darxsys
    I'm trying to set up a working CUDA SDK on my Linux Mint. I'm new to Linux and everything connected with it, so I tried following some steps for installing CUDA. First, I downloaded the Linux driver, version 295.41, from http://developer.nvidia.com/cuda/cuda-downloads. After that, I barely found a way to run it; I did it like this:

    1. Typed sudo init 1 in a terminal and switched to root.
    2. Typed service mdm stop.
    3. Ran the *.run file downloaded from the link above.

    It started installing the driver, gave some warning messages which I ignored, and finished. After installation I typed init 5 and it came back to the GUI screen, BUT everything is huge. I restarted: still huge. My screen resolution is 640x480 on a 17-inch laptop display. I tried running Nvidia X Server Settings, but it says: "You do not appear to be using Nvidia X Driver. Please edit your X configuration file." I tried that; nothing happened. I can't change the resolution because that Nvidia Settings tool gives no options. Then I googled some things and installed some packages -- nothing. The biggest problem is I don't understand what's really going on. My laptop is a Samsung with an i7 and an Nvidia GT 650M with Optimus. I can't even install Bumblebee, but that is something I will try once I manage to get my resolution back to default. Please help!
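
    A hedged first step for getting back to a usable desktop (the --uninstall flag is a standard option of NVIDIA's .run installer; the mode name below is a guess -- run plain xrandr first to see what modes are offered). On Optimus laptops the panel is wired to the Intel GPU, so X must keep using the Intel driver, and letting the NVIDIA installer replace Mesa's GL libraries is what typically breaks the session in exactly this way:

        # Roll the driver back out, then restart X (or reboot)
        sudo sh ./NVIDIA-Linux-x86_64-295.41.run --uninstall

        # After X is back on the Intel driver, list and set modes
        xrandr
        xrandr -s 1920x1080

    Bumblebee afterward is the right instinct: it leaves the Intel driver in charge of the display and hands individual programs to the discrete GPU.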

  • How to get a higher resolution on Ubuntu 11.04 using an Intel chipset

    - by Saif Bechan
    I have a bit of a slow PC here, so I decided to put Ubuntu 11.04 on it. It used to run Windows Vista at a resolution of 1280x1024, so both my hardware and monitor support it. Now I'm on Ubuntu but can only get 1024x768, and the screen is not that bright -- it's like when you don't have the right drivers on a Windows machine. I'm new to Linux, so I do not know what to do. I have an onboard Intel i965 chipset. Maybe this is some useful information; I read something about it on a forum:

        $ lspci
        00:00.0 Host bridge: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller (rev 02)
        00:02.0 VGA compatible controller: Intel Corporation 82G33/G31 Express Integrated Graphics Controller (rev 02)
        00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 01)
        00:1c.0 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 1 (rev 01)
        00:1c.1 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 2 (rev 01)
        00:1d.0 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 (rev 01)
        00:1d.1 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 01)
        00:1d.2 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 01)
        00:1d.3 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 01)
        00:1d.7 USB Controller: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 01)
        00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1)
        00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01)
        00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 01)
        00:1f.2 IDE interface: Intel Corporation N10/ICH7 Family SATA IDE Controller (rev 01)
        00:1f.3 SMBus: Intel Corporation N10/ICH 7 Family SMBus Controller (rev 01)
        02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01)
        03:03.0 FireWire (IEEE 1394): VIA Technologies, Inc. VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller (rev c0)

    Can someone please tell me how I can get the screen better?

        saif@sodium:~$ xrandr
        Screen 0: minimum 320 x 200, current 1024 x 768, maximum 4096 x 4096
        VGA1 connected 1024x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
           1024x768       60.0*
           800x600        60.3     56.2
           848x480        60.0
           640x480        59.9
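
    Since VGA1 above only advertises modes up to 1024x768 (and reports 0mm x 0mm, i.e. no EDID came over the VGA cable), a sketch of adding the missing mode by hand -- the standard cvt/xrandr workflow; regenerate the modeline with cvt rather than trusting the numbers here:

        cvt 1280 1024
        # cvt prints a Modeline; feed its numbers to xrandr:
        xrandr --newmode "1280x1024_60.00" 109.00 1280 1368 1496 1712 1024 1027 1034 1063 -hsync +vsync
        xrandr --addmode VGA1 "1280x1024_60.00"
        xrandr --output VGA1 --mode "1280x1024_60.00"

    If that brings up the right picture, the commands can be made permanent via /etc/X11/xorg.conf or a small script run at session start.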

  • How to diagnose a hang when creating a new folder in explorer.exe

    - by Jack Ukleja
    I have been having issues with explorer.exe hanging when I create a new folder. If I use Analyze Wait Chain in the Resource Monitor, it says "One or more threads of explorer.exe are waiting to finish network I/O". Looking at the offending thread in Process Explorer reveals nothing interesting:

        ntdll.dll!ZwWaitForMultipleObjects+0xa
        KERNELBASE.dll!GetCurrentThread+0x36
        kernel32.dll!WaitForMultipleObjectsEx+0xb3
        USER32.dll!PeekMessageW+0x1cd
        USER32.dll!MsgWaitForMultipleObjectsEx+0x2a
        USER32.dll!MsgWaitForMultipleObjects+0x20
        SHELL32.dll!SHAppBarMessage+0x41e
        SHELL32.dll!DragAcceptFiles+0x2a3c
        SHELL32.dll!DragAcceptFiles+0x2a4f
        SHELL32.dll!Ordinal211+0x124
        SHELL32.dll!SHChangeNotification_Unlock+0x12f4
        USER32.dll!GetSystemMetrics+0x2b1
        USER32.dll!IsDialogMessageW+0x19b
        USER32.dll!IsDialogMessageW+0x1e1
        ntdll.dll!KiUserCallbackDispatcher+0x1f
        USER32.dll!PeekMessageW+0xba
        USER32.dll!PeekMessageW+0x89
        SHELL32.dll!SHChangeNotification_Unlock+0xd9f
        SHELL32.dll!Ordinal885+0x1407
        SHLWAPI.dll!SHRegGetUSValueW+0x306
        kernel32.dll!BaseThreadInitThunk+0xd
        ntdll.dll!RtlUserThreadStart+0x21

    While I was looking at the explorer.exe threads, I noticed a fair few that mention ETW (Event Tracing for Windows), so explorer.exe evidently uses tracing. I decided to try TraceView.exe to listen in on the explorer.exe traces. The problem is that TraceView requires some difficult-to-come-by stuff: either PDBs, or CTL files plus .TMF files. I tried using the explorer.pdb that comes with the Windows SDK, but that did not work -- I do not see explorer.exe in the "named providers" -- and I have no idea where to find the CTL or TMF files for explorer.exe. So the question is: is there a way to view the ETW trace messages from Explorer? Or shall I just not bother, and go back to the age-old technique of disabling every Explorer extension one by one in the hope it's one of them? (I prefer the former, as I like to get to the bottom of things!)
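
    A hedged alternative that sidesteps the TMF/PDB hunt entirely: instead of reading Explorer's private trace messages, capture a kernel ETW trace around a reproduction with xperf (Windows Performance Toolkit; the kernel flags and -stackwalk option below are standard) and look at what the hung thread was blocked on:

        xperf -on PROC_THREAD+LOADER+CSWITCH+FILE_IO -stackwalk CSwitch
        REM ... create a folder, let it hang, then stop:
        xperf -d explorer-hang.etl
        xperf explorer-hang.etl

    The context-switch stacks in the viewer generally show which module's code the thread entered before it started waiting on network I/O, which is usually enough to name the offending shell extension without disabling them one by one.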

  • Why do I get a DegradedArray event with mdadm

    - by azera
    Hello. Just so we're clear on what's happening:

    - I bought 4 new SATA II drives, with the intent of using them in a RAID 5.
    - All drives are fully recognized by both my BIOS and my Linux box (Gentoo).
    - I created a RAID 5 array and fiddled a bit with it to understand how it works, how to monitor it, etc.
    - At some point, this triggered a DegradedArray event, even though the array is brand new.
    - I tried stopping the array and recreating a new array with the same drives, but the new array starts degraded too.

    Here is what I used to create it:

        mdadm --create -l5 -n4 /dev/md/md0-r5 /dev/sdb /dev/sdd /dev/sde /dev/sdf

    Here is the output from /proc/mdstat and mdadm --detail --scan:

        Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md127 : active raid5 sdf[4] sde[2] sdd[1] sdb[0]
              4395415488 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
              [>....................]  recovery =  2.8% (41689732/1465138496) finish=890.3min speed=26645K/sec

        unused devices: <none>

        ARRAY /dev/md/md0-r5 metadata=0.90 spares=1 UUID=453e2833:81f22a74:64188b84:66721085

    As such I have a couple of questions:

    1. Does a RAID 5 array always start in degraded mode at first?
    2. Why does sdf have the number 4 in brackets instead of 3? Why does mdadm see a spare disk, and why is the 4th drive marked with _ instead of U? (Bad configuration?)
    3. How can I recreate the array from scratch? Do I have to format each drive on its own before recreating it?

    Thanks for any help; I'm not sure what I should do at the moment.
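
    For what it's worth, question 1 is documented mdadm behavior: a newly created RAID 5 is assembled as degraded-plus-spare and then synced onto the last member, which is why sdf shows up as device number 4 with [UUU_] and a recovery line -- it is the disk still being rebuilt, not a fault. A couple of standard commands for keeping an eye on it:

        # Live view of the rebuild
        watch -n 5 cat /proc/mdstat

        # Per-device state, including which disk is the rebuilding spare
        mdadm --detail /dev/md127

    Once the recovery hits 100% the array should report [4/4] [UUUU], with no recreation or per-drive formatting needed.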

  • Win 2008 R2 - copying TO disk is very slow, copying FROM is more or less okay

    - by avs099
    I have Windows 2008 R2 SP1 with 4 identical SATA disks (Seagate Barracuda 7200) in a RAID 5 array. It has 4GB of memory; all recent updates are installed. Problem: when I copy a large file from one folder to another, I get about 10MB/s average speed. When I read this file from a network share via a 1Gbps connection, I get about 25-30MB/s. Both numbers seem low to me, but I'm particularly frustrated with the low write speed. There is no antivirus and no Hyper-V; it's just a file server, and when I do my tests nobody else reads from or writes to it (we have only 4 people in the team, so I'm sure). Not sure if it matters, but there is only one logical disk, "C", with all available space (1400GB). I'm not an admin at all, so I have no idea where to look or what other information to provide. I did run Performance Monitor with "% idle time", "avg bytes read", and "avg bytes write"; the graph shows obvious spikes that I can't explain. Any idea? Please let me know if you need me to provide more information -- what counters I should check, etc. I'm very eager to get this solved. Thank you.

    UPDATE: We have another Windows 2008 R2 SP1 server with two RAID 1 arrays: one is disk C (where Windows is installed), the other is disk E. It runs Hyper-V and has no antivirus. I noticed the following behavior when copying a large file (a few GB):

    - C to C: about 50MB/s
    - C to E: about 55MB/s
    - E to E: 8MB/s!!!
    - E to C: 8MB/s!!!

    What could cause this? The E drive is a RAID 1 array of the same Seagate Barracuda 1TB drives.
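
    One counter set worth capturing while a slow copy runs, as a sketch (typeperf and these PhysicalDisk counters are standard Windows tools): high "Avg. Disk sec/Write" with a short queue points at the disks or their write cache, while low latency plus a saturated queue points at the RAID layer above them.

        typeperf "\PhysicalDisk(*)\Avg. Disk sec/Write" "\PhysicalDisk(*)\Avg. Disk sec/Read" "\PhysicalDisk(*)\Current Disk Queue Length" -si 1

    On software or motherboard "fake" RAID 5, roughly 10MB/s sequential writes with the write cache disabled is a common signature, since every write costs a read-modify-write of the parity stripe; the per-disk write-cache checkbox lives in Device Manager under each disk's Policies tab.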

  • Simultaneous process mysteriously ending

    - by Matt
    I'm trying to run a large air-quality model, written in FORTRAN, set up with bash scripts, and run in a work queue (Slurm). The first stage of the modeling is to run an "entry" model; this runs with MPI in the work queue, but only on one process. At one point in the logs there's a mysterious FORTRAN STOP, and later the model fails because something wasn't set up properly. This FORTRAN STOP isn't from the main process, which continues running. This is a huge model, but as far as I know there should not be any other processes running at the same time. It consistently fails at the exact same spot. (I can move the failure point by adding debug output, but the debug is in the main process.) How can I determine what this process is? I've tried adding a call to strace -feprocess $SHELL in the run script, but I'm new to this, so if it has offered any info, I haven't been able to use it yet. There is no trace output around the FORTRAN STOP, and the whole thing happens so fast that I can't observe it with ps. Is there a way I can monitor all the processes being initiated from the time the work queue starts? Or some other way I can figure out what is failing? This is running on CentOS 6.4, with Slurm, compiled with PGI 13.
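
    A sketch of a more targeted strace invocation than wrapping $SHELL (standard strace options; the script name is a placeholder). Following forks with one log file per child makes even millisecond-lived helper processes visible:

        strace -ff -e trace=process -o /tmp/trace ./run_entry_model.sh
        # one /tmp/trace.PID file per process afterwards; see who exited and how
        grep -H 'exit_group' /tmp/trace.* | sort

    -e trace=process restricts logging to fork/exec/exit/wait, which is exactly the lifecycle view needed here: the trace.PID file whose execve line names an unexpected binary is the process printing the FORTRAN STOP.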

  • Firefox 3.6 and above: always show one tab even if all tabs are closed, like Firefox 3.0

    - by Jayapal Chandran
    I got very used to Firefox 3.0. There, to free memory, I could close all tabs and the main Firefox window would still not close. But in Firefox 3.6 and later, if I close all tabs, Firefox exits entirely; that was not the case with 3.0. How do I stop Firefox from closing the main window when I close the last tab?

    Also, the autocomplete dropdown in the address bar of Firefox 3.6 and later is a dark blue, which, with my environment and the monitor glare, I find genuinely uncomfortable. Black on white is a neutral, good combination, and since I have been working in Firefox 3.0 (and earlier versions) for a long time, this color change and other uncomfortable defaults annoy me. How can the color be changed to be like Firefox 3.0?

    (To check CSS3 I need to use Firefox 3.5 or greater. Besides, I like Firefox because it tracks the W3C's recommendations, so I can learn and test new recommendations from W3C.)
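
    For the dropdown color, a hedged userChrome.css sketch (userChrome.css in the profile's chrome folder is the standard way to restyle Firefox's own UI; the selector is an assumption about 3.6's internal markup, so it may need adjusting with the DOM Inspector add-on):

        /* <profile>/chrome/userChrome.css */
        @namespace url("http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul");

        /* hypothetical selector for the location-bar results list */
        .autocomplete-richlistbox {
          background-color: white !important;
          color: black !important;
        }

    Firefox reads the file at startup, so a restart is needed after each change.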

  • MySQL problem with many concurrent connections

    - by user48303
    Hi, here's a six-core machine with 32GB RAM. I've installed MySQL 5.1.47 (backport). The config is nearly standard, except max_connections, which is set to 2000. In front of it is PHP 5.3/FastCGI on nginx, serving a very simple PHP application; nginx itself can handle thousands of parallel requests on this machine. The application accesses MySQL via mysqli. When using non-persistent connections in mysqli, there is a problem at around 100 concurrent connections:

        [error] 14074#0: *296 FastCGI sent in stderr: "PHP Warning: mysqli::mysqli(): [2002] Resource temporarily unavailable (trying to connect via unix:///tmp/mysqld.sock) in /var/www/libs/db.php on line 7

    I've no idea how to solve this, and connecting to MySQL via TCP is terribly slow. The interesting thing is that when using persistent connections (adding 'p:' to the hostname in mysqli), the first 5000-10000 requests fail with the same error as above, until max connections (on the webserver, set to 1500) is reached. After those first requests, MySQL keeps its 1500 connections open and all is fine, so that I can make my 1500 concurrent requests. Huh? Is it possible that this is a problem with PHP FastCGI?
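
    A hedged sketch of the knobs that usually govern EAGAIN ("Resource temporarily unavailable") on MySQL's unix socket under connect storms -- the socket's listen backlog overflowing, which is separate from max_connections (back_log is a real mysqld variable and somaxconn a real Linux sysctl; the sizes are starting points, not gospel):

        # my.cnf
        [mysqld]
        back_log = 512

        # the kernel caps listen backlogs too
        sysctl -w net.core.somaxconn=512

    This reading also fits the persistent-connection behavior: once the pool is warmed up, almost no new connects happen, so the backlog stops overflowing.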

  • How to configure mod_proxy_balancer to gracefully fail under high load

    - by bramp
    We have a system with one Apache instance in front of multiple Tomcats, which in turn connect to various databases. We balance load to the Tomcats with mod_proxy_balancer. Currently we receive 100 requests a second; the load on the Apache server is quite low, but due to database-heavy operations on the Tomcats, the load there is roughly 25% of what I estimate they can handle. In a few weeks there is an event happening, and we estimate that our requests will jump significantly, maybe by a factor of 10. I'm doing everything I can to reduce the load on our Tomcats, but I know we are going to run out of capacity, so I would like to fail gracefully. By this I mean that instead of trying to deal with too many connections which all time out, I would like Apache to somehow monitor average response time, and as soon as the response time from Tomcat climbs above some threshold, serve an error page. That way users who are lucky still get a page rendered quickly, and those who are unlucky get an error page quickly -- instead of everyone waiting far too long for their page, eventually everyone timing out, and the database being swamped with queries whose results are never used. Hopefully this makes sense; I'm looking for suggestions on how I could achieve it. Thanks.
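
    Apache can't key on average response time directly, but a hedged sketch of the closest built-in behavior (timeout and retry are standard mod_proxy/BalancerMember parameters; hostnames and numbers are placeholders): a short backend timeout plus a friendly 503 page turns slow-backend requests into fast error pages rather than long hangs.

        <Proxy balancer://tomcats>
            BalancerMember http://tomcat1:8080 timeout=10 retry=30
            BalancerMember http://tomcat2:8080 timeout=10 retry=30
        </Proxy>
        ProxyPass /app balancer://tomcats/ timeout=10

        # served instantly once the proxy gives up on a backend
        ErrorDocument 503 /overloaded.html

    Pairing that with a hard cap on each Tomcat connector's maxThreads (and a small acceptCount) makes the backends shed excess load quickly instead of queueing it, which is what drives the fast-failure behavior you want.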
