Search Results

Search found 11594 results on 464 pages for 'pointer events'.

  • How to minimize the risk of employees spreading critical information?

    - by Industrial
    Hi everyone, what's common sense when it comes to minimising the risk of employees spreading critical information to rival companies? As of today, it's clear that not even the US government and military can be sure that their data stays safely within their doors. So I understand that my question should probably instead be written as "What is common sense to make it harder for employees to spread business-critical information?" If anyone really wants to spread information, they will find a way; that's the way life works and always has. If we make the scenario a bit more realistic by narrowing the workforce down to regular John Does rather than Linux-loving sysadmins, what would be good precautions to at least make it harder for employees to send business-critical information to the competition? As far as I can tell, there are a few obvious measures, each with clear pros and cons:
      - Block services such as Dropbox and similar, preventing anyone from sending gigabytes of data over the wire.
      - Ensure that only files below a set size can be sent as email (?)
      - Set up VLANs between departments to make it harder for kleptomaniacs and curious people to snoop around.
      - Plug all removable media units: CD/DVD, floppy drives and USB.
      - Make sure that no changes to the hardware configuration can be made (?)
      - Monitor network traffic for anomalous events (how?)
    What is realistic to do in the real world? How do big companies handle this? Sure, we can take the former employee to court and sue, but by then the damage has already been done... Thanks a lot

    Read the article

  • nginx proxying different servers for different subdomains

    - by The.Anti.9
    I just set up an nginx server. On the same computer as nginx, I have Apache running on port 8000 (this was previously set up), and I want the bare domain and the www. subdomain to go to the local Apache instance. But I want the stuff. subdomain to go to the server where I keep all my miscellaneous files (pictures, documents, etc.), which is also listening on port 80 at the IP 192.168.1.102. I tried configuring it, but when I go to my domain, I just get the "Welcome to nginx!" page. Here's what I have:

        user www-data;
        worker_processes 1;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            sendfile on;
            #tcp_nopush on;

            #keepalive_timeout 0;
            keepalive_timeout 65;
            tcp_nodelay on;

            gzip on;

            include /etc/nginx/conf.d/*.conf;

            server {
                listen 80;
                server_name theanti9.com www.theanti9.com;
                access_log /var/log/nginx/access.log;

                location / {
                    proxy_pass http://localhost:8000;
                }
            }

            server {
                listen 80;
                server_name stuff.theanti9.com;
                access_log /var/log/nginx/access.log;

                location / {
                    proxy_pass http://192.168.1.102:80;
                }
            }
        }

    I'm not really sure what's wrong. Any suggestions?
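    A quick way to narrow this down (a diagnostic sketch, not a definitive fix; the hostnames are taken from the question) is to ask nginx which server block actually answers for each name. Getting the "Welcome to nginx!" page usually means a default server block is still winning, or the file being edited is not the one nginx loads:

        sudo nginx -t                                       # shows which config file nginx loads, and that it parses
        curl -s -H "Host: www.theanti9.com"   http://127.0.0.1/ | head -n 5
        curl -s -H "Host: stuff.theanti9.com" http://127.0.0.1/ | head -n 5
        # If both requests still return the default page, look for a stray
        # default vhost pulled in by the conf.d include:
        grep -Rl "Welcome to nginx" /etc/nginx /usr/share/nginx 2>/dev/null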

    Read the article

  • Users are getting a temporary profile

    - by Serhiy
    A bit about the current setup: the AD servers are Windows Server 2008 R2 (all of them), with a couple of locations configured as Sites. Each location has DFS on its AD server. Roaming profiles are not used or configured. Users have their home folder mapped as an S: drive to a DFS shared folder; for example, the profile tab for a user has: Home Folder - Connect - S: to \\domain.com\dc\users\%username%. We also have the Desktop, Documents and Downloads folders redirected to \\domain.com\dc\users. Everything was fine. Suddenly (today), users in most locations lost their local profile (on both XP and Windows 7 desktops) and got temporary profiles. It also looks like the local profile was created today (judging by the folder properties). I checked the events on a couple of machines and there are no errors related to profiles or the logon process, and I don't see issues in the event logs on the servers either. Basically, I have run out of ideas as to what is wrong and why the machines lost their local profiles. PS: Laptop users do not have their folders redirected, but lost their profiles as well.

    Read the article

  • Only one domain is not resolving via Windows DNS server at multiple locations, but is at others

    - by Brett G
    I'm having quite a weird issue. We had mail delivery problems to a specific domain, and after looking closer I realized that DNS for that domain isn't resolving via the in-house Windows 2003 SP2 DNS server:

        C:\>nslookup foodmix.net
        Server:  DC.DOMAIN.com
        Address:  10.1.1.1

        DNS request timed out.
            timeout was 2 seconds.
        DNS request timed out.
            timeout was 2 seconds.
        *** Request to DC.DOMAIN.com timed-out

    (DC.DOMAIN.com and 10.1.1.1 are generic values replacing the actual ones.) Even if I run this nslookup from the DC.DOMAIN.com server itself, I get the same result. However, all other requests work as they should. I had a sysadmin friend try this lookup on servers at several companies he consults for (also Windows 2003 AD servers); the weird thing is that some of them had exactly the same issue, yet public DNS servers resolve the domain fine. I have tried clearing the DNS cache, restarting the server, restarting the services, etc. Nothing has worked. One possibly related entry in the DNS Server event log is event ID 5504 with the description: "The DNS server encountered an invalid domain name in a packet from 192.33.4.12. The packet will be rejected. The event data contains the DNS packet. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp." In the data section of that event I can see ns2.webhostingstar.com mentioned, which happens to be the nameserver for the domain in question. Several discussion threads and a Microsoft KB article point to disabling EDNS. I have done this via "dnscmd /config /enableednsprobes 0" and it has not fixed the issue.
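    One way to test the EDNS theory from any Linux box (a diagnostic sketch only; the dig flags shown are standard) is to query the domain's authoritative nameserver, which the 5504 event already points at, with and without EDNS:

        dig @ns2.webhostingstar.com foodmix.net MX +norecurse +noedns
        dig @ns2.webhostingstar.com foodmix.net MX +norecurse +edns=0
        # If the plain query answers but the EDNS query times out or returns
        # FORMERR, the authoritative server (or a firewall in front of it) is
        # mishandling EDNS, which would explain why only some resolvers fail.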

    Read the article

  • How to set an executable white list?

    - by izabera
    Under Linux, is it possible to set a whitelist of executables for a certain group of users? I need them to be unable to use, for example, make, gcc and executables on removable disks. How can this be done? Edit: let me explain better. I'm dealing with a high school IT system: young geeks that (during lessons) want to play, surf the net, and damage those computers however they can. The major step towards this goal was to remove the system they're familiar with and install Ubuntu on all the computers. This actually works quite well, but recent events proved that it is not enough. I want to allow them to execute certain safe programs, like OpenOffice, and to deny any other program, whether it is preinstalled software, something they carry on USB drives, a downloaded program or a script they write on site. It's possible to remove the 'x' permission from any file on the PC, but of course that would be impractical; furthermore, they would still be able to run anything they download. I thought the best solution would be a whitelist of safe programs, denying everything else, but I don't really know how to do it. Any idea is helpful.
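    One common partial answer, sketched below on the assumption that the students can only write to their home directories, /tmp and removable media: mount every user-writable location noexec, so only software installed by root in the system paths can run. Paths and devices here are illustrative, and a determined user can still bypass noexec (for example via interpreters), so treat this as hardening rather than a true whitelist:

        # /etc/fstab entries (adjust devices and filesystems to the actual layout):
        #   /dev/sda3   /home   ext4    defaults,nodev,nosuid,noexec   0  2
        #   tmpfs       /tmp    tmpfs   defaults,nodev,nosuid,noexec   0  0
        # Quick audit: is anything user-writable still mounted executable?
        mount | grep -E ' /home | /tmp | /media' | grep -v noexec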

    Read the article

  • nginx: JS file loads inconsistently on every refresh

    - by poymode
    I have an nginx problem where a JS file in a Rails app loads inconsistently. Whenever I access the JS file in the browser and refresh the page, the scrollbar changes length, meaning sometimes it loads half the JS file, sometimes the whole thing and sometimes just a part of it. The JS file is 71K. My nginx server is on a different machine, separate from the Rails app. When I access the JS file directly through the app server, say 10.48.30.150:3000/javascripts/file.js, it works fine and never shows a half-loaded page, but when I go through the nginx server that upstreams the Rails app, I get the inconsistent page loads. Here is my nginx http conf:

        error_log /usr/local/nginx/logs/error.log;
        pid /usr/local/nginx/logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            server_names_hash_bucket_size 256;

            access_log /usr/local/nginx/logs/access.log;

            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 0;
            tcp_nodelay on;

            #gzip on;
            #gzip_min_length 4096;
            #gzip_buffers 16 8k;
            #gzip_types application/x-javascript text/css text/plain;

            large_client_header_buffers 4 8k;
            client_max_body_size 2G;

            include /usr/local/nginx/conf.d/*.conf;
        }
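    A frequent cause of responses that arrive truncated only when they pass through an nginx proxy is nginx failing to write the buffered upstream response to its proxy temp directory. That is only a guess for this setup, but it is cheap to check:

        sudo tail -n 50 /usr/local/nginx/logs/error.log | grep -i -E 'proxy_temp|denied|failed'
        ls -ld /usr/local/nginx/proxy_temp
        # A line like "open() .../proxy_temp/... failed (13: Permission denied)"
        # means the worker user cannot spool larger responses to disk; making the
        # directory writable by the worker user normally stops the truncation.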

    Read the article

  • Rails 3 + Nginx + Passenger -- Routing index

    - by Bijan
    I have no index.html file in my public folder; my Rails routes file handles the root route, and it works fine when I run 'rails server' on my machine. I'm trying to deploy the app, and I have Passenger and nginx running. When I run rails server on my local machine it works fine, but on the production server nginx just tries to serve a static file when I access the site. Here's my nginx conf:

        worker_processes 1;

        #pid logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /usr/lib/ruby/gems/1.9.1/gems/passenger-3.0.2;
            passenger_ruby /usr/bin/ruby;

            include mime.types;
            default_type application/octet-stream;

            sendfile on;
            keepalive_timeout 65;

            server {
                listen 80;
                server_name mmjconsult.com;
                root /www/mmjs/public;
                access_log logs/host.access.log;
                passenger_enabled on;
            }
        }

    Thank you for any help. I really appreciate it.
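    A few quick checks, assuming the stock Passenger 3 tooling is installed (diagnostics rather than a fix), to confirm whether requests are actually reaching Passenger or are being served as plain static files:

        sudo passenger-memory-stats               # is a Rails application group running at all?
        sudo tail -n 50 /opt/nginx/logs/error.log # adjust to wherever this nginx build writes its error log;
                                                  # Passenger reports app spawn failures there
        curl -I http://127.0.0.1/                 # responses served by Passenger normally carry an
                                                  # "X-Powered-By: Phusion Passenger" header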

    Read the article

  • Laptop seemingly randomly "freezes" to the point of no longer executing applications

    - by Aierou
    After upgrading to Windows 8 Pro on my Samsung Series 7 Chronos NP700Z5C-S04US (which may or may not be relevant), my computer began randomly refusing to execute any service or application, and the clock stops updating, until a hard shutdown is performed. This seems to occur after periods of inactivity and I have no idea of the cause. Measures I have already taken to try to stop this:
      - Googling potential answers to this problem, obviously
      - Updating all drivers
      - Researching all events that occurred around the time of the failure to respond (with no results)
      - Applying "bcdedit /set disabledynamictick no", which was a hotfix for what seemed to be the same error, but was not
    Some more, potentially related, information about the error:
      - No BSOD (in fact, I haven't experienced a BSOD at all with Windows 8)
      - The computer has trouble shutting down/restarting most of the time (it hangs at the point where it should completely turn off)
      - New sound instances cannot play, but previously loaded containers keep functioning properly
      - As mentioned, the clock freezes at the time of the error
      - USB devices keep functioning
      - Servers I was running stop responding on my end, but stay online
    If you require more information, please request it specifically and I will be happy to oblige. Thanks.

    Read the article

  • Server 2008 email on Event variables

    - by Jeff Miles
    One of the new features of Server 2008 is the ability to attach a task to a specific event in the event logs. One of the available actions is to send an email through an SMTP server. This works great, but it would be ideal if the event contents could be placed in the message body. I have tried using $eventdescription and %eventdescription%, but those were just shots in the dark. Any amount of googling produces no results. Does anyone know if this is possible? Update: Sparks' suggestion below is a step in the right direction, I believe, but that method doesn't seem to work for all values. For example, I can pull the RecordID, Severity and Channel as shown, but I can't use the same method to retrieve the EventID or, most importantly, the description. Here's the raw XML from one event:

        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="DFSR" />
            <EventID Qualifiers="16384">4412</EventID>
            <Level>4</Level>
            <Task>0</Task>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2009-05-14T18:18:09.000Z" />
            <EventRecordID>45692</EventRecordID>
            <Channel>DFS Replication</Channel>
            <Computer>servername.domain.com</Computer>
            <Security />
          </System>
          <EventData>
            <Data>9046C3F4-843E-4A53-B941-4B20764072E5</Data>
            <Data>D:\departments\Geomatics\Plan Quality\Data Processing\CG3533017 2009-05-13 KT FIXED</Data>
            <Data>D:\departments</Data>
            <Data>{26D5F604-E603-4F87-8EC3-DE9A945DA8FD}-v927199</Data>
            <Data>Departments</Data>
            <Data>domain.ca\files\departments</Data>
            <Data>B8242CE2-F5EB-47DA-BA5B-1DD2F7EE3AB9</Data>
            <Data>DFAA7A54-66CB-4C31-81A0-0F861382C32C</Data>
            <Data>CG3533017 2009-05-13-{26D5F604-E603-4F87-8EC3-DE9A945DA8FD}-v927199</Data>
          </EventData>
        </Event>

    I have tried using a ValueQuery for EventData, but it returns no data.

    Read the article

  • Solutions on how to use an OS X calendar as a more perfect time tracking solution for 5-10 users in a small agency?

    - by jnthnclrk
    I really like OS X's iCal. Entering events is easy with the mouse, and it also gives you a very real visual sense of how long tasks take to complete. We often work remotely in our organisation, so we use a few shared calendars between key individuals to give us an overview of hours worked, availability and schedule conflicts without too much disruption to our various hectic workflows. It really is a neat solution, especially on shared tasks: how many times have you tasked a remote colleague and then lost the thread on whether that task was completed or not? With shared calendars you get a much clearer idea of what your people are working on without having to pick up the phone or compose a chat. However, there are a few areas where this approach fails:
      - iCloud syncing often needs to be re-jiggered
      - The "view only" option on shared calendars does not seem to work, which leaves all shared calendars editable by others
      - There is no decent reporting with this workflow
      - There is no task categorisation or tagging
      - Things get very busy in iCal when working with more than 2 shared calendars
    I've looked at a few task management apps like Basecamp and Harvest, but nothing appears to let me edit my calendar natively and then sync with a third party. I'm interested in solutions that improve the above workflow and let us elegantly increase the number of users.

    Read the article

  • Push, parse & import "selected" data, text, info blobs from Webpages/ Emails as Event/ Appointment to standard Calendar directly or as .ics file?

    - by Alex S
    Is there any tool, plugin, extension or script to push "selected" data, text or information blobs from web pages, emails etc., have them parsed, and import them as a structured event or appointment (e.g. an .ics file) into a standard calendar like Outlook, Google Calendar or iCal? If not, what existing tools, extensions, scripting or coding could I build on top of to do this? I come across a lot of unstructured information on web pages, in emails, in Facebook events etc. where I just want to add that information to my calendar. Instead of entering it all by hand every time, there should be an easy way to have the information parsed, organised and imported into a calendar: either sent directly to the calendar from the source, or translated to a standard format such as .ics that can be imported and saved easily. I would love to see suggestions for this that cover one or more of the following: on Windows with Chrome and Outlook; on iPhone/iPad into its Calendar. PS: I'll come back and see if I can add more information to this question, and to answer it as well; I have not found a solution yet.
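    For reference, the target format is small enough to generate from almost any script or extension. A minimal hand-written .ics event (all field values below are purely illustrative) that Outlook, Google Calendar and iCal will all import looks like this:

        event.ics:
        BEGIN:VCALENDAR
        VERSION:2.0
        PRODID:-//example//clip-to-calendar//EN
        BEGIN:VEVENT
        UID:20140105T120000Z-0001@example.local
        DTSTAMP:20140105T120000Z
        DTSTART:20140110T180000Z
        DTEND:20140110T200000Z
        SUMMARY:Event parsed from a web page
        DESCRIPTION:Selected text or details parsed from the source
        LOCATION:Somewhere
        END:VEVENT
        END:VCALENDAR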

    Read the article

  • hosting company blocking google bots and crawlers [closed]

    - by Jayapal Chandran
    Hi, I have had a site for the past three years and it has been very active for the past two. The site was doing well until the hosting company started blocking Google bots. Many of my pages used to appear on the first page of Google search results; after the blocking started, my links no longer showed up on the first page, instead appearing five pages in or not at all. Can hosting companies really be so careless that they block crawlers and don't mention it to their users? They protect themselves while putting their customers' websites at stake. I display Google ads, and this month I have earned only half the usual amount in the first ten days. I have asked other hosting companies, like Blue Host and Monster Host, about transferring my domain, on the condition that they will not block Google bots and indirectly kill the business. Any kind of help is welcome: how can I claim what I lost from the hosting company, and which hosting companies are considerate towards their users (by informing them of events like changing the IP or blocking the Google bot)? I worked really hard to bring my site up, and these people crashed it in a few days. :-(
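    One way to gather evidence before escalating or moving hosts (a sketch; example.com stands in for the real domain, which isn't given) is to compare how the site answers a normal browser versus the Googlebot user agent, ideally from a machine outside the hosting network:

        curl -s -o /dev/null -w '%{http_code}\n' http://example.com/
        curl -s -o /dev/null -w '%{http_code}\n' \
             -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" \
             http://example.com/
        curl -s http://example.com/robots.txt
        # A 200 for the first request but a 403/timeout for the Googlebot UA,
        # or a robots.txt you did not write, points at blocking by the host.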

    Read the article

  • Kerberos issues after new server of same name joined to domain

    - by MentalBlock
    Environment: Windows Server 2012, 2 domain controllers, 1 domain. A server called Sharepoint1 was joined to the domain (running SharePoint 2013 using NTLM). A fresh install of Sharepoint1 (OS and SharePoint) was then performed, set up for Kerberos, and joined to the domain using the same name. Two SPNs were added, HTTP/sharepoint1 and HTTP/sharepoint1.somedomain.net, for the SPFarm account. Active Directory shows a single, non-duplicate computer account with a create date from the first server and a modify date from the second server's creation. A separate server, also on the domain, has this server added to All Servers in Server Manager; that server logs a local error exactly like this one from Technet (Kerberos error 4 - KRB_AP_ERR_MODIFIED). Question: can someone help me understand whether the problem is:
      - The computer account is still the old account, causing a Kerberos ticket mismatch (granted, some housekeeping in AD might have prevented this)
      - (In my limited understanding of Kerberos and SPNs) the SPFarm account used for the SPNs is somehow mismatched with the HTTP calls made by the remote server management tools services in Windows Server 2012
      - Something completely different?
    I am leaning towards the first one, since I tested the same SPNs on another server and it didn't seem to cause the same issue. If that is the case, can it be easily and safely repaired? Is there a proper way to either reset the account or, better yet, delete and re-add it? Although it sounds simple enough with some PowerShell or clicking around in AD Users and Computers, I am uncertain what impact this might have on an existing server, particularly one running SharePoint. What is the safest and simplest way to proceed? Thanks!

    Read the article

  • NGINX: How do I calculate an optimal no. of worker processes and worker connections?

    - by bodacious
    Our web app is running on a Linode 2048 server at the moment (about 2048 MB, i.e. 2 GB, of RAM). The MySQL database is on another Linode of its own, so this server is really only handling nginx and the Rails application. The application itself uses about 185976 (RSS; presumably KB, so roughly 180 MB) of memory per instance. Our traffic is < 1000 per day and the pages are mostly cached, so there are few hits to the Rails app itself. My question is: how can I calculate optimal nginx config settings for my app? Below is the current config:

        worker_processes 1;

        # pid of nginx master process
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            passenger_root /home/user/.rvm/gems/ree-1.8.7-2011.01@URTV/gems/passenger-3.0.3;
            passenger_ruby /home/user/.rvm/rubies/ree-1.8.7-2011.01/bin/ruby;

            include mime.types;
            default_type application/octet-stream;

            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;

            # gzip settings
            gzip on;
            gzip_http_version 1.0;
            gzip_comp_level 2;
            gzip_vary on;
            gzip_proxied any;
            gzip_types text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            # load extra modules from the vhosts directory
            include /opt/nginx/vhosts/*.conf;
        }

    Any advice would be appreciated! :)
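    The usual rule of thumb (a starting point rather than a formula that fits every app): one worker process per CPU core, and worker_connections no higher than the per-process file-descriptor limit. On the server itself:

        grep -c ^processor /proc/cpuinfo    # number of cores -> worker_processes
        ulimit -n                           # fd limit -> ceiling for worker_connections
        # Theoretical max clients is roughly worker_processes * worker_connections;
        # halve that estimate when nginx is also proxying to a backend, since each
        # proxied request holds two connections open.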

    Read the article

  • linux shutdown hang with wifi cifs mounts

    - by Sirex
    Since Fedora 15 (and now with 16) it seems that wireless clients take a long while to shut down when they have network filesystems mounted at shutdown time. I've pushed out a CIFS mount via Puppet, and all clients have it, including those on wireless. If, say, a laptop is on a wired connection it shuts down just fine, but if it is on wifi at the time (and has no wired connection) it hangs at the Fedora logo. I'm not sure if it hangs indefinitely or just for a really long while, but I'll give it a test when I shut this machine down in a second. Needless to say it's pretty annoying, so is there a way of making the machine shut down even if network connectivity has been lost by unmount time, or an official way to reorder events so the wireless card is kept up until after the unmount happens during the shutdown process (short of writing a custom shutdown script, which is a bit of a kludge)? It does this on multiple machines, and they all started doing it when we went from Fedora 14 to 15. It was such an obvious issue that I'd kind of assumed someone must have reported it or there was an easy fix, but I've not discovered anything yet. Additional info: I can confirm that manually unmounting the shares and then shutting down (sudo shutdown or the Xfce shutdown button) works fine; it only hangs if the mounts are still mounted. The Puppet config that sets up the mount looks like this (now with the _netdev option, which is indeed pushed to clients successfully but makes no difference):

        file { "/mnt/share":
            ensure => directory,
        }

        mount { "/mnt/share":
            atboot   => true,
            ensure   => mounted,
            remounts => false,
            fstype   => cifs,
            device   => "//srv/share",
            options  => "user,gid=shareusers,uid=${user},file_mode=0700,dir_mode=0700,credentials=/root/.smbcreds,_netdev",
            require  => [ File["/mnt/share"], Group["shareusers"] ],
        }
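    One workaround sketch, assuming a systemd-based Fedora 15/16 install (the path and unit wiring are illustrative, not a packaged fix): lazily unmount all CIFS filesystems early in the shutdown sequence, before the wireless interface is torn down, so the unmount can never block on an unreachable server. The script below could live at /usr/local/sbin/umount-cifs:

        #!/bin/bash
        # Lazy-unmount every mounted CIFS filesystem so shutdown cannot hang on one
        awk '$3 == "cifs" {print $2}' /proc/mounts | while read -r mnt; do
            umount -l "$mnt"
        done
        # Hook this in as the ExecStop= of a oneshot systemd unit ordered
        # After=network.target; units are stopped in reverse order, so the script
        # runs while the wireless link is still up.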

    Read the article

  • How can I make my Super keys (Windows Key) behave more like Ctrl/Alt/Shift in Linux

    - by deltaray
    After using Ctrl + the arrow keys for 13 years to switch virtual desktops in X, I was recently convinced to switch to using the Super keys instead (the Windows key and the context-menu key, which I've remapped). This all works fine for the most part. However, something is still picking up the key events these keys send, as if they were normal alphanumeric keys. For example, I first noticed in a Google Docs spreadsheet that if I press the Windows key alone over a cell, it starts editing that cell. It doesn't insert anything; it just sends a key event that Firefox sees and starts editing the cell. This caused problems on a collaborative document I was working on, because of the way Google Docs works it led to me accidentally erasing the data in a few fields before I realised what was going on. I like using the Super keys, but I want them to behave more like a Ctrl or Alt key: a modifier that doesn't send anything until a second key is pressed. My setup is the following:
      - Ubuntu 10.10
      - XFCE 4
      - Microsoft Natural Ergo 4000 keyboard (with the logo scratched out)
    The following is my .Xmodmap file:

        remove Lock = Caps_Lock
        keycode 66 = Escape
        ! The below maps my other windows context menu key.
        keycode 135 = Super_R

    Edit: as requested, here is the relevant output from xev for a key press and release of Super_L (the left Windows key):

        KeyPress event, serial 34, synthetic NO, window 0x8200001,
            root 0x15d, subw 0x0, time 2428849342, (177,174), root:(182,228),
            state 0x10, keycode 133 (keysym 0xffeb, Super_L), same_screen YES,
            XLookupString gives 0 bytes:
            XmbLookupString gives 0 bytes:
            XFilterEvent returns: False

        KeyRelease event, serial 34, synthetic NO, window 0x8200001,
            root 0x15d, subw 0x0, time 2428849430, (177,174), root:(182,228),
            state 0x50, keycode 133 (keysym 0xffeb, Super_L), same_screen YES,
            XLookupString gives 0 bytes:
            XFilterEvent returns: False
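    One thing worth checking (a sketch based on the symptom, not a guaranteed fix): Super_L and Super_R should be attached to a modifier bucket, normally mod4. If a remap has left them off the modifier map, applications receive the press as an ordinary key, which matches what Google Docs is doing here.

        xmodmap -pm | grep -i super            # are Super_L/Super_R listed under mod4?
        # If they are missing, re-attach them (add the line to ~/.Xmodmap to make it stick):
        xmodmap -e "add mod4 = Super_L Super_R"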

    Read the article

  • What is the risk of introducing non standard image machines to a corporate environment

    - by Troy Hunt
    I'm after some feedback from those in the managed desktop or network security space on the risks of introducing machines that are not built from a standard desktop image into a large corporate environment. This particular context relates to the standard corporate image (32-bit Windows XP) in a large multi-national not being suitable for a particular segment of users. In short, I'm looking at what hurdles we might come across by proposing the introduction of machines which are built and maintained by a handful of software developers and not based on the corporate desktop image (the proposal being 64-bit Windows 7). I suspect the barriers are primarily around virus definition updates, the rollout of service packs and patches, and the compatibility of existing applications with the newer OS. In terms of viruses and software updates: if these machines used common virus protection software with automated updates, and Windows Update for service packs and patches, is there still a viable risk to the corporate environment? For that matter, are large corporate environments normally vulnerable to the introduction of a machine not based on a standard image? I'm trying to get my head around how real the risk of infection and other adverse events is from machines being plugged into the network. There are multiple scenarios beyond the example above where this might happen (e.g. a vendor plugging in a machine for internet access during a presentation). Would a large corporate network normally be sufficiently hardened against such innocuous activity? I appreciate the theory as to why policies such as standard desktop images exist; I'm just interested in the actual, practical risk and how much a network should be protected by means other than what is managed on individual PCs.

    Read the article

  • How to access Virtual machine using powershell script

    - by Sheetal
    I want to access a virtual machine using a PowerShell script. For that I used the command below:

        Enter-PSSession -ComputerName sheetal-VDD -Credential compose04.com\abc.xyz1

    where sheetal-VDD is the hostname of the virtual machine, compose04.com is its domain name and abc.xyz1 is the username on the virtual machine. After entering the command it asks for the password, and when the password is entered I get the error below:

        Enter-PSSession : Connecting to remote server failed with the following error message : WinRM cannot process the
        request. The following error occurred while using Kerberos authentication: There are currently no logon servers
        available to service the logon request.
         Possible causes are:
          -The user name or password specified are invalid.
          -Kerberos is used when no authentication method and no user name are specified.
          -Kerberos accepts domain user names, but not local user names.
          -The Service Principal Name (SPN) for the remote computer name and port does not exist.
          -The client and remote computers are in different domains and there is no trust between the two domains.
         After checking for the above issues, try the following:
          -Check the Event Viewer for events related to authentication.
          -Change the authentication method; add the destination computer to the WinRM TrustedHosts configuration setting
           or use HTTPS transport. Note that computers in the TrustedHosts list might not be authenticated.
          -For more information about WinRM configuration, run the following command: winrm help config.
        For more information, see the about_Remote_Troubleshooting Help topic.
        At line:1 char:16
        + Enter-PSSession <<<<  -computername sheetal-VDD -credential compose04.com\Sheetal.Varpe
            + CategoryInfo          : InvalidArgument: (sheetal-VDD:String) [Enter-PSSession], PSRemotingTransportException
            + FullyQualifiedErrorId : CreateRemoteRunspaceFailed

    Can someone help me out with this?

    Read the article

  • Chrome Lockups Windows 7 64-bit

    - by Mike Chess
    I'm running Google Chrome (6.0.427.0 dev) on Windows 7 Home Premium 64-bit (AMD Phenom 3.00 GHz, 8 GB RAM). The computer locks up hard after Chrome has been running for about five minutes. The lockup happens whether Chrome is actually being used to browse web sites or is just idling. No programs can be started or interacted with when this happens, and the computer must be power-cycled to recover. The lockup happens regardless of which web sites are being browsed. The system event logs do not show any events around the time of the lockup. All other applications run just fine on this system. In fact, Chrome ran without issue for several months on this machine (it was brand new in March 2010), and I run the same version of Chrome on other computers (Windows XP SP3) without issue. I've come to really like Chrome and use it as my default browser whenever possible. What could be causing Chrome to lock up the system like this? Does Chrome have any logs that aren't part of the Windows event log? Does Chrome have a debug command-line switch that might reveal more about what happens?

    Read the article

  • How to track things that SHOULD happen, but might not have

    - by Kamiel Wanrooij
    I am running into a couple of issues with some applications we've deployed and maintain. I have the feeling we have approached this with some anti-patterns up to now, and I would like to see how to make this more flexible and stable. In one situation, we have a server at a client which pushes data to us to parse every night (yes, Windows Task Scheduler). This is highly unstable, however, so about once a month it doesn't happen, for reasons outside our control. This heavily impacts our business, since we then run with stale data. In another scenario we have a lot of background job processes that should be running. We already keep them up using bluepill (http://www.github.com/arya/bluepill), but obviously restarts happen, both automatic and manual, and people forget things or systems mess up. What I would like to track is events that should occur or things that should exist, like the existence of a process, the execution of a program, or the creation or age of a file, and be alerted when they don't happen or don't exist. We develop most things in Ruby on Rails, use New Relic, bluepill and Munin, and run on Ubuntu. I've been toying with counting ps aux | grep processname | wc -l in Munin scripts, or capturing the age of a file and raising alerts over 24-26 hours, stuff like that. Is there better tooling to track things that should happen, and to raise alerts when they don't? P.S. I know some things are suboptimal, like having to define bluepill for applications manually and then forgetting to do so. The same goes for the push-based approach of the first application; a dedicated daemon on the client side that we control, and whose connection to us we can track, might be a much better solution.
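    The pattern described here, alerting on the absence of an event, can be prototyped with nothing more than cron before reaching for bigger tooling. A minimal sketch (file path, process name and mail address are all illustrative):

        #!/bin/bash
        # Alert if last night's import never arrived or is older than 26 hours
        f=/data/incoming/nightly-dump.csv
        if [ ! -e "$f" ] || [ $(( $(date +%s) - $(stat -c %Y "$f") )) -gt $((26 * 3600)) ]; then
            echo "nightly dump missing or stale" | mail -s "import did not run" ops@example.com
        fi
        # Alert if a background worker that should always exist is gone
        pgrep -f my_background_worker >/dev/null || \
            echo "worker not running" | mail -s "worker missing" ops@example.com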

    Read the article

  • How to delete removed devices from a mdadm RAID1?

    - by Kabuto
    I had to replace two hard drives in my RAID1. After adding the two new partitions, the old ones still show up as removed while the new ones are only added as spares. I've had no luck removing the partitions marked as removed. Here's the RAID in question; note the two devices (0 and 1) with state "removed".

        $ mdadm --detail /dev/md1
        mdadm: metadata format 00.90 unknown, ignored.
        mdadm: metadata format 00.90 unknown, ignored.
        /dev/md1:
                Version : 00.90
          Creation Time : Thu May 20 12:32:25 2010
             Raid Level : raid1
             Array Size : 1454645504 (1387.26 GiB 1489.56 GB)
          Used Dev Size : 1454645504 (1387.26 GiB 1489.56 GB)
           Raid Devices : 3
          Total Devices : 3
        Preferred Minor : 1
            Persistence : Superblock is persistent

            Update Time : Tue Nov 12 21:30:39 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 3
         Failed Devices : 0
          Spare Devices : 2

                   UUID : 10d7d9be:a8a50b8e:788182fa:2238f1e4
                 Events : 0.8717546

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       0        0        1      removed
               2       8       34        2      active sync   /dev/sdc2

               3       8       18        -      spare   /dev/sdb2
               4       8        2        -      spare   /dev/sda2

    How do I get rid of these devices and add the new partitions as active RAID devices?

    Update: I seem to have gotten rid of them. My RAID is resyncing, but the two drives are still marked as spares and are numbers 3 and 4, which looks wrong. I'll have to wait for the resync to finish. All I did was fix the metadata error by editing my mdadm.conf and rebooting. I had tried rebooting before, but this time it worked for whatever reason.

            Number   Major   Minor   RaidDevice State
               3       8        2        0      spare rebuilding   /dev/sda2
               4       8       18        1      spare rebuilding   /dev/sdb2
               2       8       34        2      active sync   /dev/sdc2
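    For the original question, the usual cleanup looks roughly like the following (a sketch; verify device names against your own array first, and note that this particular case was ultimately resolved by fixing the metadata line in mdadm.conf and rebooting):

        mdadm /dev/md1 --remove failed      # drop members marked faulty
        mdadm /dev/md1 --remove detached    # drop members whose devices no longer exist
        mdadm --detail /dev/md1             # the "removed" slots should now be gone
        # If the new partitions still sit as spares on a degraded 3-device array,
        # re-asserting the device count lets them start rebuilding:
        mdadm --grow /dev/md1 --raid-devices=3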

    Read the article

  • Scaling a node.js application, nginx as a base server, but varnish or redis for caching?

    - by AntelopeSalad
    I'm not close to being well versed in nginx or Varnish, but this is my setup at the moment: I have a node.js server running which serves JSON, HTML templates, and socket.io events. In front of node I have nginx, which serves all static content (CSS, JS, etc.). At this point I would like to cache both static and dynamic content in memory. It's my understanding that Varnish can cache static content quite well without requiring any changes to my application code. I also think it's capable of caching dynamic content, but there cannot be any cookie headers? I use Redis at the moment for session data and planned to use it for other things in the future, like keeping track of non-crucial but fun stats. I just have no idea how I should handle caching everything on the site. I think it comes down to these options, but there might be more:
      - Throw Varnish in front of nginx and let Varnish cache static pages; no app code changes. Redis would cache dynamic DB calls, which would require modifying my app code.
      - Ignore Varnish completely and let Redis handle caching everything, then use one of the nginx-redis modules. I'm not sure if this would require a lot of app code changes (for the static files).
    I'm not having any luck finding benchmarks that compare nginx+varnish vs nginx+redis, and I'm too inexperienced to bench it myself (high chance of my configs being awful). I'm basically looking for the solution that would be the most efficient in terms of requests/sec and scalable in the future (throw new hardware at the problem + maybe adjust some values in a config = new servers up and running semi-painlessly).
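    On the "too inexperienced to bench it myself" point: a rough comparison is still better than none, and ApacheBench alone is enough to rank the candidate stacks (the URL is a placeholder; run the load generator from a second machine so it doesn't compete with the server for CPU):

        ab -n 5000 -c 100 http://example.com/some/typical/page
        # Run once with plain nginx -> node, once with Varnish in front, and once
        # with an nginx-redis cache module, then compare the "Requests per second"
        # and latency percentiles that ab prints for each run.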

    Read the article

  • Log backups "stalling" on SQL 2008?

    - by MattK
    I have inherited a box running SQL Server 2008 on Windows 2003, and have had a few events where largish (35 GB) log backups "stall", both before and after the installation of SQL 2008 SP1. The server log-ships to a standby, so regular log backups are taken at 15-minute intervals. However, after an index reorganisation causes the log to grow to about 35 GB (on a DB with about 17 GB of data), the next log backup runs to ~95% completion and then seems to stop. The process shows as suspended, with a wait state of BACKUPIO. CPU, read and write activity on the SPID also does not change, and the process stays in this state for hours, when normally a backup of this size completes in about 20 minutes. This server has a single RAID-1 volume, so the source database files and the destination backup files are on the same volume. However, I cannot determine whether another process is blocking the backup. The backup SPID cannot be killed, and the only way to terminate the log backup and clear the lock on the backup file is to cycle the SQL Server service. There was one event where the backup terminated completely, with an error that another process had locked the backup file, but no details about what that process was. Can anyone suggest a cause or a diagnostic approach for this situation?

    Read the article
