Search Results

Search found 4735 results on 190 pages for 'handling interruptions'.


  • Subversion vision and roadmap

    - by gbjbaanb
    Recently C Michael Pilato of the core subversion team posted a mail to the subversion dev mailing list suggesting a vision and roadmap for the future of Subversion. Naturally, he wanted as much feedback and response as possible which is why I'm posting this here - to elicit some suggestions and contributions from you, the administrators of Subversion. Any comments are welcome, and I shall feed back a synopsis with a link to this question to the dev mailing list. Similarly, I've created a post on StackOverflow to get feedback from the programmer/user side of things too. So, without further ado: Vision The first thing on his "vision statement" is: Subversion has no future as a DVCS tool. Let's just get that out there. At least two very successful such tools exist already, and to squeeze another horse into that race would be a poor investment of energy and talent. There's no need to suggest distributed features for subversion. If you want a DVCS, there should be no ill-feeling if you migrate to Git, Mercurial or Bazaar. As he says, it's pointless trying to make SVN like them when they already exist, especially when there are different usage patterns that SVN should be targeting. The vision for Subversion is: Subversion exists to be universally recognized and adopted as an open-source, centralized version control system characterized by its reliability as a safe haven for valuable data; the simplicity of its model and usage; and its ability to support the needs of a wide variety of users and projects, from individuals to large-scale enterprise operations. Roadmap Several ideas were suggested as being "very nice to have" and are offered as the starting point of a future roadmap. These are: Obliterate Shelve/Checkpoint Repository-dictated Configuration Rename Tracking Improved Merging Improved Tree Conflict Handling Enterprise Authentication Mechanisms Forward History Searching Log Message Templates If anyone has suggestions to add, or comments on these, the subversion community would welcome all of them. Community And lastly, there was a call for more people to become involved with Subversion development. As with most OSS projects it can be daunting to join, but there is now a push for more to be done to help. If you feel like you can contribute, please do so.

    Read the article

  • Spanning-tree setup with incompatible switches

    - by wfaulk
    I have a set of eight HP ProCurve 2910al-48G Ethernet switches at my datacenter that are set up in a star topology with no physical loops. I want to partially mesh the switches for redundancy and manage the loops with a spanning-tree protocol. However, our connection to the datacenter is provided by two uplinks, each to a Cisco 3750. The datacenter's switches are handling the redundant connection using PVST spanning-tree, which is a Cisco-proprietary spanning-tree implementation that my HP switches do not support. It appears that my switches are not participating in the datacenter's spanning-tree domain, but are blindly passing the BPDUs between the two switchports on my side, which enables the datacenter's switches to recognize the loop and put one of the uplinks into the Blocking state. This is somewhat supposition, but I can confirm that, while my switches say that both of the uplink ports are forwarding, only one is passing any real quantity of data. (I am assuming that I cannot get the datacenter to move away from PVST. I don't know that I'd want them to make that significant of a change anyway.) The datacenter has also sent me this output from their switches (which I have expurgated of any identifiable info): 3750G-1#sh spanning-tree vlan nnn VLAN0nnn Spanning tree enabled protocol ieee Root ID Priority 10 Address 00d0.0114.xxxx Cost 4 Port 5 (GigabitEthernet1/0/5) Hello Time 2 sec Max Age 20 sec Forward Delay 15 sec Bridge ID Priority 32mmm (priority 32768 sys-id-ext nnn) Address 0018.73d3.yyyy Hello Time 2 sec Max Age 20 sec Forward Delay 15 sec Aging Time 300 sec Interface Role Sts Cost Prio.Nbr Type ------------------- ---- --- --------- -------- -------------------------------- Gi1/0/5 Root FWD 4 128.5 P2p Gi1/0/6 Altn BLK 4 128.6 P2p Gi1/0/8 Altn BLK 4 128.8 P2p and: 3750G-2#sh spanning-tree vlan nnn VLAN0nnn Spanning tree enabled protocol ieee Root ID Priority 10 Address 00d0.0114.xxxx Cost 4 Port 6 (GigabitEthernet1/0/6) Hello Time 2 sec Max Age 20 sec Forward Delay 15 sec Bridge ID Priority 32mmm (priority 32768 sys-id-ext nnn) Address 000f.f71e.zzzz Hello Time 2 sec Max Age 20 sec Forward Delay 15 sec Aging Time 300 sec Interface Role Sts Cost Prio.Nbr Type ------------------- ---- --- --------- -------- -------------------------------- Gi1/0/1 Desg FWD 4 128.1 P2p Gi1/0/5 Altn BLK 4 128.5 P2p Gi1/0/6 Root FWD 4 128.6 P2p Gi1/0/8 Desg FWD 4 128.8 P2p The uplinks to my switches are on Gi1/0/8 on both of their switches. The uplink ports are configured with a single tagged VLAN. I am also using a number of other tagged VLANs in my switch infrastructure. And, to be clear, I am passing the tagged VLAN I'm receiving from the datacenter to other ports on other switches in my infrastructure. My question is: how do I configure my switches so that I can use a spanning tree protocol inside my switch infrastructure without breaking the datacenter's spanning tree that I cannot participate in?

    Read the article

  • How to iptables forward ppp0 to eth0

    - by HPHPHP2012
    I need your help to get the routing set up properly. I have a server with eth0 (external interface) and eth1 (internal interface); eth1 is merged into the bridge br0 (172.16.1.1). I've installed pptpd and successfully configured it, so I have a ppp0 interface (192.168.91.1) and my VPN clients connect successfully. Now I need to work out how to allow my VPN clients to use the internet connection (eth0). Below are my configuration files; any help is much appreciated! Thank you! P.S. VPN clients are Windows XP, Windows 7, Mac OS X Lion, Ubuntu 12.04, iOS 5.x cat /etc/pptpd.conf #local server ip address localip 192.168.91.1 #remote addresses remoteip 192.168.91.11-254,192.168.91.10 #translating ip addresses on this interface bcrelay br0 cat /etc/ppp/pptpd-options name pptpd refuse-pap refuse-chap refuse-mschap require-mschap-v2 require-mppe-128 ms-dns 8.8.8.8 ms-dns 8.8.4.4 nodefaultroute lock nobsdcomp auth logfile /var/log/pptpd.log cat /etc/nat-up #!/bin/sh SERVER_IP="aaa.aaa.aaa.aaa" LOCAL_IP="172.16.1.1" #eth0 with public ip PUBLIC="eth0" #br0 is internal bridge on eth1 interface INTERNAL="br0" #vpn VPN="ppp0" #local LOCAL="lo" iptables -F iptables -X iptables -t nat -F iptables -t nat -X iptables -t mangle -F iptables -t mangle -X iptables -P INPUT DROP iptables -P FORWARD DROP iptables -P OUTPUT ACCEPT echo 1 > /proc/sys/net/ipv4/ip_forward iptables -A INPUT -i $LOCAL -j ACCEPT iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT iptables -A INPUT -m state --state NEW ! -i $PUBLIC -j ACCEPT ####CLEAR CONFIG#### #iptables -A FORWARD -i $PUBLIC -o $INTERNAL -m state --state ESTABLISHED,RELATED -j ACCEPT #iptables -A FORWARD -i $PUBLIC -o $INTERNAL -j ACCEPT #iptables -A FORWARD -i $INTERNAL -o $PUBLIC -j ACCEPT #iptables -t nat -A POSTROUTING -j MASQUERADE ####THIS PART IS NOT HANDLING IT#### iptables -A FORWARD -i $PUBLIC -o $VPN -m state --state ESTABLISHED,RELATED -j ACCEPT iptables -A FORWARD -i $PUBLIC -o $VPN -j ACCEPT iptables -A FORWARD -s 192.168.91.0/24 -o $PUBLIC -j ACCEPT iptables -t nat -A POSTROUTING -s 192.168.91.0/24 -o $PUBLIC -j MASQUERADE # VPN - PPTPD iptables -A INPUT -p gre -s 0/0 -j ACCEPT iptables -A OUTPUT -p gre -m state --state RELATED,ESTABLISHED -j ACCEPT iptables -A INPUT -p tcp -s 0/0 --dport 1723 -j ACCEPT #SSH iptables -A INPUT -p tcp --dport 2222 -j ACCEPT iptables -A OUTPUT -p tcp --sport 2222 -j ACCEPT #BLACKLIST BLOCKDB="/etc/ip.blocked" IPS=$(grep -Ev "^#" $BLOCKDB) for i in $IPS do iptables -A INPUT -s $i -j DROP iptables -A OUTPUT -d $i -j DROP done
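    A minimal sketch of the forwarding rules typically needed for this kind of PPTP setup, assuming the same interface names and the 192.168.91.0/24 client pool as above; note that matching on ppp+ covers every PPTP session, whereas the script above only ever references ppp0, so a second or third connected client would not be forwarded:

      # allow VPN clients (any ppp interface) out to the internet
      iptables -A FORWARD -i ppp+ -o eth0 -j ACCEPT
      # allow the replies back in
      iptables -A FORWARD -i eth0 -o ppp+ -m state --state ESTABLISHED,RELATED -j ACCEPT
      # NAT the client subnet behind the public address
      iptables -t nat -A POSTROUTING -s 192.168.91.0/24 -o eth0 -j MASQUERADE
      # make sure forwarding is actually enabled
      echo 1 > /proc/sys/net/ipv4/ip_forward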

    Read the article

  • nginx - Redirect specific page paths to https while keeping everything else on http (in a single server call)?

    - by Kris Anderson
    From what I've gathered so far it's clear that running if statements in nginx should be avoided at all costs. Most of the examples I've found so far regarding specific page redirects involve multiple servers being used. But, isn't that a bit wasteful? I'm not sure, but I would think multiple servers to accomplish this would be somewhat slower than a single server when under heavy load. My current server call is this: server { listen 10.0.0.60:80; listen 10.0.0.60:443 default ssl; #other code } What I want to do is redirect certain http requests to https requests. For example, I want /login/ and /my-account/ to always be forced to use SSL. If you're on /help/ though, I want that served over the default http. Is there a way to accomplish this within a single server call? Or is there no downside to using 2 server calls to get this working? nginx seems to be under pretty active development and a lot of the older guides I've followed were from times when you couldn't listen to requests for port 80 and 443 within the same server call. But now that nginx has been updated to support that (I'm running 1.2.4), I'm wondering if there's a "best practice" way of handling this today. Any help would be greatly appreciated. EDIT: I did find this guide: http://redant.com.au/blog/manage-ssl-redirection-in-nginx-using-maps-and-save-the-universe/ and I updated my code as follows: map $uri $my_preferred_proto { default "http"; ~^/#/user/login "https"; } server { listen 10.0.0.60:80; ## listen for ipv4; this line is default and implied listen 10.0.0.60:443 default ssl; if ($my_preferred_proto = "none") { set $my_preferred_proto $scheme; } if ($my_preferred_proto != $scheme) { return 301 $my_preferred_proto://mysite.com$request_uri; } It's not working though. When I change the default to https everything is redirected to SSL so it does somewhat work. But the redirect of /#/user/login is not redirecting to HTTPS. Any ideas? Also, is this a good way to go about this?
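    One detail worth checking: the fragment part of a URL (everything after #) is never sent to the server, so a map keyed on $uri can never see /#/user/login; the map entry would have to match the real request path (for example /user/login). A quick, hedged way to confirm which paths actually trigger the redirect, assuming the site answers on mysite.com:

      # these paths should answer with a 301 and an https:// Location header if the map matches
      curl -sI http://mysite.com/login/       | grep -iE '^(HTTP|Location)'
      curl -sI http://mysite.com/my-account/  | grep -iE '^(HTTP|Location)'
      # this one should stay on plain http (no Location header expected)
      curl -sI http://mysite.com/help/        | grep -iE '^(HTTP|Location)'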

    Read the article

  • Gateway GT5220 Boot/POST Failure

    - by John Rudy
    I have a Gateway GT5220 I'm troubleshooting. It is, in fact, the machine I just gave my father for his birthday a couple months ago. (Prior to that, it was my home PC. My home PC is now the MacBook on which I'm writing this.) Before going any further, I suspect that the answer will be, "It's worse than that, it's dead, Jim, it's dead, Jim, it's dead, Jim." At least, mobo and/or CPU. The initial symptoms were as follows: Turn on power All fans fire up (thus making it so I can't hear if the hard drive is spinning or not, nor are my hands sensitive enough anymore to feel it) No LEDs remained lit on the front panel. (Initially, the hard drive indicator flashed briefly.) No beep, no video, no nothing. Following some advice I found here, I tried to "drain the stored power." After following those steps, the new symptoms were: Turn on power All fans fire up The front panel LEDs remained lit! After about 20, maybe 30 seconds, we had video! Sort of. We got to the Gateway splash/POST screen, which appeared thoroughly corrupted. How corrupted? Well, I imagine it's what a POST screen would look like after reading the wrong passage out of the Necronomicon: It stayed there. I gave it at least 5, maybe 6 minutes, and it didn't move. So I shut her down, started her up again, and now (this is where we currently stand, symptomatically) we have this: Turn on power All fans fire up The front panel LEDs remain lit No video, no beep, no nothing. I'm a software guy; haven't done real hardware troubleshooting in years. My gut tells me that the mobo and/or CPU is fried, and unfortunately my gut didn't get to be as big as it is being wrong all the time. :( In addition to the link above, I have read all of the following (trying to save you some LMGTFY trouble): Gateway Support POST Error Messages and Handling About a zillion (useless) POST beep code sites A kioskea.net post indicating that most likely we're at what I consider "total loss" (mobo and/or CPU) My questions: Are there any conditions other than mobo/CPU that could cause symptoms like these? Is it worth my time to try the next hardware troubleshooting step?(IE, remove all non-critical hardware from the machine, try to boot, systematically replace one by one until we find the failing component) Which mobos will fit in the Gateway GT5220 case (with rear ports correctly aligned)? (Why this is not a dupe: I wouldn't have posted this question if it hadn't been for the funkadelic possessed video display on the one occasion we got video out. I think that justified this not being an exact dupe. Of course, if the community overrules, I will understand.)

    Read the article

  • Apache and MySQL taking all the memory? Maximum connections?

    - by lpfavreau
    I've had one of our servers going down (network-wise) recently, while keeping its uptime (so it looks like the server is not losing power). I've asked my hosting company to investigate and I've been told, after investigation, that Apache and MySQL were at all times using 80% of the memory and peaking at 95%, and that I might need to add some more RAM to the server. One of their justifications for adding more RAM was that I was using the default max connections setting (125 for MySQL and 150 for Apache) and that for handling those 150 simultaneous connections, I would need at least 3Gb of memory instead of the 1Gb I have at the moment. Now, I understand that tweaking the max connections might be better than me leaving the default setting although I didn't feel it was a concern at the moment, having had servers with the same configuration handle more traffic than the current 1 or 2 visitors before the lunch, telling myself I'd tweak it depending on the visits pattern later. I've also always known Apache was more memory hungry under default settings than its competitors such as nginx and lighttpd. Nonetheless, looking at the stats of my machine, I'm trying to see how my hosting company got those numbers. I'm getting: # free -m total used free shared buffers cached Mem: 1000 944 56 0 148 725 -/+ buffers/cache: 71 929 Swap: 1953 0 1953 Which I guess means that yes, the server is reserving around 95% of its memory at the moment, but I also thought it meant that only 71 out of the 1000 total were really used by the applications at the moment, looking at the buffers/cache row. Also I don't see any swapping: # vmstat 60 procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 0 57612 151704 742596 0 0 1 1 3 11 0 0 100 0 0 0 0 57604 151704 742596 0 0 0 1 1 24 0 0 100 0 0 0 0 57604 151704 742596 0 0 0 2 1 18 0 0 100 0 0 0 0 57604 151704 742596 0 0 0 0 1 13 0 0 100 0 And finally, while requesting a page: top - 08:33:19 up 3 days, 13:11, 2 users, load average: 0.06, 0.02, 0.00 Tasks: 81 total, 1 running, 80 sleeping, 0 stopped, 0 zombie Cpu(s): 1.3%us, 0.3%sy, 0.0%ni, 98.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 1024616k total, 976744k used, 47872k free, 151716k buffers Swap: 2000052k total, 0k used, 2000052k free, 742596k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 24914 www-data 20 0 26296 8640 3724 S 2 0.8 0:00.06 apache2 23785 mysql 20 0 125m 18m 5268 S 1 1.9 0:04.54 mysqld 24491 www-data 20 0 25828 7488 3180 S 1 0.7 0:00.02 apache2 1 root 20 0 2844 1688 544 S 0 0.2 0:01.30 init ... So, I'd like to know, experts of serverfault: Do I really need more RAM at the moment? How do they calculate that for 150 simultaneous connections I'd need 3Gb? Thanks for your help!
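    As a rough sanity check of the hosting company's math, you can measure the average resident size of an Apache child yourself and multiply by MaxClients; this is only a sketch (column 8 is RSS in KB with the standard procps ps on Linux, and the process names match the top output above), but it shows where a "150 connections x ~20 MB = ~3 GB" style estimate comes from:

      # average resident memory (MB) per Apache worker
      ps -ylC apache2 --sort=rss | awk 'NR>1 {sum+=$8; n++} END {printf "avg %.1f MB over %d procs\n", sum/n/1024, n}'
      # MySQL resident size for comparison
      ps -ylC mysqld | awk 'NR>1 {print $8/1024 " MB"}'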

    Read the article

  • Windows 7 file-based backup service

    - by Ben Voigt
    I'm looking for a good replacement for Lazy Mirror, since it doesn't support Windows 7 well. Pros: One of the things I really loved about Lazy Mirror is that it always maintains a "full" backup, but does so by only copying modified files. As each file was copied, the old version got archived (moved to an out-of-the-way location). So after mirroring ran, there'd be a complete copy of the file system, which could even be booted if necessary. At the same time, extra space on the backup media was used to store as many older versions of files as possible, without wasting space storing multiple copies of the same version. It seems that with Windows 7 backup, there'd be wasted space storing the same data in both the system image and file backup. It was completely file-based, but also aware of the registry (it had a feature to dump the live registry to hive files in the correct format). The backups were normal NTFS filesystems, no special tool was needed to read them. It automatically cleaned out the oldest previous versions when space ran out (unlike Windows 7 backup which apparently simply starts failing when the backup media fills.) It copied all file attributes including security. Cons: It doesn't deal well with junction points, symbolic links, and hard links. It didn't run as a service without lots of help from firesrv or srvany, and then you couldn't interact with the GUI. Running as a service was necessary to be able to mirror protected OS files. It didn't have open file handling, except for registry hives. I guess that the file-by-file archive and replacement could leave mismatched sets of files, if the mirror was interrupted. This would be the advantage of incremental backup techniques that require old full backup + all intermediate incremental backups to restore. But I don't see this as presenting much of a problem, you'd really only have a boot failure if you had a mixture of pre- and post-service pack files, and I can run a full image backup using another tool before applying a service pack. Does anyone know of a tool that does both full-system backup and storage of old versions of files like Lazy Mirror did (without storing the same data multiple times), and also can run as a service in Windows 7? Free is best of course, but a reasonably priced paid program would also be considered. (It would be absolutely awesome if it also triggered a backup/mirror pass when a particular external drive was plugged in and generated popup warnings if backups hadn't been run recently.)

    Read the article

  • lighttpd: weird behavior on multiple rewrite rule matches

    - by netmikey
    I have a 20-rewrite.conf for my php application looking like this: $HTTP["host"] =~ "www.mydomain.com" { url.rewrite-once += ( "^/(img|css)/.*" => "$0", ".*" => "/my_app.php" ) } I want to be able to put the webserver in kind of a "maintenance" mode while I update my application from scm. To do this, my idea was to enable an additional rewrite configuration file before this one. The 16-rewrite-maintenance.conf file looks like this: url.rewrite-once += ( "^/(img|css)/.*" => "$0", ".*" => "/maintenance_app.php" ) Now, on the maintenance page, I have a logo that doesn't get loaded. I get a 404 error. Lighttpd debug says the following: 2012-12-13 20:28:06: (response.c.300) -- splitting Request-URI 2012-12-13 20:28:06: (response.c.301) Request-URI : /img/content/logo.png 2012-12-13 20:28:06: (response.c.302) URI-scheme : http 2012-12-13 20:28:06: (response.c.303) URI-authority: localhost 2012-12-13 20:28:06: (response.c.304) URI-path : /img/content/logo.png 2012-12-13 20:28:06: (response.c.305) URI-query : 2012-12-13 20:28:06: (response.c.300) -- splitting Request-URI 2012-12-13 20:28:06: (response.c.301) Request-URI : /img/content/logo.png, /img/content/logo.png 2012-12-13 20:28:06: (response.c.302) URI-scheme : http 2012-12-13 20:28:06: (response.c.303) URI-authority: localhost 2012-12-13 20:28:06: (response.c.304) URI-path : /img/content/logo.png, /img/content/logo.png 2012-12-13 20:28:06: (response.c.305) URI-query : 2012-12-13 20:28:06: (response.c.349) -- sanatising URI 2012-12-13 20:28:06: (response.c.350) URI-path : /img/content/logo.png, /img/content/logo.png 2012-12-13 20:28:06: (mod_access.c.135) -- mod_access_uri_handler called 2012-12-13 20:28:06: (response.c.470) -- before doc_root 2012-12-13 20:28:06: (response.c.471) Doc-Root : /www 2012-12-13 20:28:06: (response.c.472) Rel-Path : /img/content/logo.png, /img/content/logo.png 2012-12-13 20:28:06: (response.c.473) Path : 2012-12-13 20:28:06: (response.c.521) -- after doc_root 2012-12-13 20:28:06: (response.c.522) Doc-Root : /www 2012-12-13 20:28:06: (response.c.523) Rel-Path : /img/content/logo.png, /img/content/logo.png 2012-12-13 20:28:06: (response.c.524) Path : /www/img/content/logo.png, /img/content/logo.png 2012-12-13 20:28:06: (response.c.541) -- logical -> physical 2012-12-13 20:28:06: (response.c.542) Doc-Root : /www 2012-12-13 20:28:06: (response.c.543) Rel-Path : /img/content/logo.png, /img/content/logo.png 2012-12-13 20:28:06: (response.c.544) Path : /www/img/content/logo.png, /img/content/logo.png 2012-12-13 20:28:06: (response.c.561) -- handling physical path 2012-12-13 20:28:06: (response.c.562) Path : /www/img/content/logo.png, /img/content/logo.png 2012-12-13 20:28:06: (response.c.618) -- file not found 2012-12-13 20:28:06: (response.c.619) Path : /www/img/content/logo.png, /img/content/logo.png Any clue on why lighttpd matches both rules (from my application rewrite config and from my maintenance rewrite config) and concatenates them with a comma - that doesn't seem to make any sense?! Shouldn't it stop after the first match with rewrite-once?

    Read the article

  • Solution to route/proxy SNMP Traps (or Netflow, generic UDP, etc) for network monitoring?

    - by Christopher Cashell
    I'm implementing a network monitoring solution for a very large network (approximately 5000 network devices). We'd like to have all devices on our network send SNMP traps to a single box (technically this will probably be an HA pair of boxes) and then have that box pass the SNMP traps on to the real processing boxes. This will allow us to have multiple back-end boxes handling traps, and to distribute load among those back end boxes. One key feature that we need is the ability to forward the traps to a specific box depending on the source address of the trap. Any suggestions for the best way to handle this? Among the things we've considered are: Using snmptrapd to accept the traps, and have it pass them off to a custom written perl handler script to rewrite the trap and send it to the proper processing box Using some sort of load balancing software running on a Linux box to handle this (having some difficulty finding many load balancing programs that will handle UDP) Using a Load Balancing Appliance (F5, etc) Using IPTables on a Linux box to route the SNMP traps with NATing We've currently implemented and are testing the last solution, with a Linux box with IPTables configured to receive the traps, and then depending on the source address of the trap, rewrite it with a destination nat (DNAT) so the packet gets sent to the proper server. For example: # Range: 10.0.0.0/19 Site: abc01 Destination: foo01 iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.0.0.0/19 -j DNAT --to-destination 10.1.2.3 # Range: 10.0.33.0/21 Site: abc01 Destination: foo01 iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.0.33.0/21 -j DNAT --to-destination 10.1.2.3 # Range: 10.1.0.0/16 Site: xyz01 Destination: bar01 iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.1.0.0/16 -j DNAT --to-destination 10.3.2.1 This should work with excellent efficiency for basic trap routing, but it leaves us completely limited to what we can match and filter on with IPTables, so we're concerned about flexibility for the future. Another feature that we'd really like, but isn't quite a "must have" is the ability to duplicate or mirror the UDP packets. Being able to take one incoming trap and route it to multiple destinations would be very useful. Has anyone tried any of the possible solutions above for SNMP traps (or Netflow, general UDP, etc) load balancing? Or can anyone think of any other alternatives to solve this?
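    On the "duplicate or mirror the UDP packets" wish: iptables has a TEE target (mainline since kernel 2.6.35, earlier via xtables-addons) that clones matching packets to a second destination while the original still flows through the DNAT rules. A hedged sketch, reusing the example addresses above with an assumed second collector at 10.1.2.4; note that --gateway must be a directly reachable next hop on the local segment:

      # clone traps from this range to a second collector before the DNAT happens
      iptables -t mangle -A PREROUTING -p udp --dport 162 -s 10.0.0.0/19 -j TEE --gateway 10.1.2.4
      # the existing DNAT still routes the original packet to foo01
      iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.0.0.0/19 -j DNAT --to-destination 10.1.2.3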

    Read the article

  • Write STDOUT & STDERR to a logfile, also write STDERR to screen

    - by Stefan Lasiewski
    I would like to run several commands, and capture all output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone). Here's an example. The following command will run three commands, and will write all output (STDOUT and STDERR) into a single logfile. { command1 && command2 && command3 ; } > logfile.log 2>&1 Here is what I want to do with the output of these commands: STDERR and STDOUT for all commands goes to a logfile, in case I need it later--- I usually won't look in here unless there are problems. Print STDERR to the screen (or optionally, pipe to /bin/mail), so that any error stands out and doesn't get ignored. It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this: { command1 && command2 && command3 ; } > logfile.log 2>&1 || mailx -s "There was an error" [email protected] The problem I run into is that STDERR loses context during I/O redirection. A '2>&1' will convert STDERR into STDOUT, and therefore I cannot view errors if I do 2> error.log Here are a couple juicier examples. Let's pretend that I am running some familiar build commands, but I don't want the entire build to stop just because of one error so I use the '--keep-going' flag. { ./configure && make --keep-going && make install ; } > build.log 2>&1 Or, here's a simple (And perhaps sloppy) build and deploy script, which will keep going in the event of an error. { ./configure && make --keep-going && make install && rsync -av --keep-going /foo devhost:/foo ; } > build-and-deploy.log 2>&1 I think what I want involves some sort of Bash I/O Redirection, but I can't figure this out.
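    One possible approach (a sketch, assuming bash, since process substitution is not plain POSIX sh): send STDOUT straight to the logfile and route STDERR through tee, which appends it to the same logfile and echoes it back to the real STDERR on screen. The compound command's exit status is preserved, so the || mailx handler still fires; the address below is a placeholder, and interleaving order of the two streams in the logfile is not guaranteed:

      { command1 && command2 && command3 ; } > logfile.log 2> >(tee -a logfile.log >&2) \
          || mailx -s "There was an error" ops@example.com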

    Read the article

  • Core i7 c1e and speedstepping - BSOD on shutdown

    - by DeaconDesperado
    I'm having an interesting problem with my recent Core i7 Digital Audio workstation build that I am curious to see if others have encountered. First, here are the specs on the machine. ASUS P6TD Deluxe Intel X58 Socket LGA1366 MB Intel Core i7-950 3.06Ghz 8M LGA1366 CPU CORSAIR DOMINATOR 6GB (3 x 2GB) 240-Pin DDR3 SDRAM DDR3 1600 Western Digital Caviar Black WD5001AALS 500GB Plus a couple ASUS optical drives and a 750W Corsair PSU. Running Windows 7 x64. All this is connected to the nefarious Digi 002 firewire audio interface for use with Pro Tools. I mostly followed the specs posted by many other i7 users in the digidesign community who pooled their collective knowledge in this thread. Now after completing my build, I fell victim to the "UD5 squeal" described at that forum thread. So taking the advice posted, I disabled c1e advanced halt state and Intel speed stepping (I would likely have done this anyway to maintain a stable clock; power consumption isn't really a relevant concern on this machine.) I enabled XMP to set the ram timings properly as well. What I am experiencing is a BSOD upon shutdown, but only immediately after windows fully exits and ends all processes. The error is a MACHINE_CHECK_EXCEPTION 0x000000. The funny thing is that it is extremely intermittent and only occurs if the shutdown immediately followed a period of relative idleness. It does not generate a minidump, I suspect because windows monitoring has terminated by the time this error occurs. No damage is evident and one can simply turn off manually and the system will act as though a proper shutdown had occurred. If anything it is an annoyance, I just want to be certain it is not affecting my long term stability. I have read that the i7 950 does not like DRAM voltages past 1.65, but that they are acceptable if they are within .5 of the BLCK setting. I have tried disabling XMP and setting all timings to auto and the problem still manifests in an identical way. I suspect that the CPU idleness preceding shutdown is the determining factor, as c1e and speedstepping are both settings intended to modify handling of this state. Any suggestions or prior experiences would be greatly appreciated. EDIT: The behavior very closely resembles what's described in this thread: http://www.tomshardware.com/forum/12003-63-shut-problem-windows The benign nature of it is identical. I can't seem to download the hotfix cited there however.

    Read the article

  • windows server 2008 r2 remote desktop issue with roaming clients

    - by Patrick D'Haese
    I have the following situation: a Dell Windows Server 2008 R2 computer, with remote desktop services installed. I have installed a java application making use of a PostgreSql database, and made this application available for clients using RDP. Clients are standard Win XP pc's and Psion Neo handheld devices running Windows CE 5 Pro. The application works fine for clients on standard XP pc's connected directly via cat 5E Ethernet cable to a Dell Powerconnect 2816 switch. The Psion Neo clients connect wirelessly to the network via Motorola AP6532 access points. These access points are connected via a POE adapter to the same switch as the XP pc's. The Psion devices can connect without any problem and very quickly to the server and to the application using RDP. So far, so good. When the Psion devices move around in the warehouse, and they roam from one access point to the other, the RDP session on the client freezes for approx 1 minute, and then it automatically resumes the session. This freezing is very annoying for the users. Can anyone help in solving this issue? Update (August 9) : After re-installing the access points we have a working situation, but only when connecting to the RDP host : * via a Win Xp SP3 laptop * via a Symbol MC9190 Win CE 6 mobile device When roaming we notice a small hiccup of less than 1 second, which is very acceptable. With the Psion Neo it's still not working; when roaming the screen freezes from 2 to 30 seconds. The RDP client on the win xp sp3 laptop and the symbol mc9190 is version 6.0. The RDP client on the neo is version 5.2. I have changed the security layer on the RDP host to RDP security layer (based on forums on the internet), because older RDP clients seem to have issues with the RDP 7.1 protocol on the Windows Server 2008 R2. Psion advised us to do some network logging activity on the different devices. We made this logging via wireshark, and based on this the conclusion of Psion is that the server fails in handling tcp-requests. Can anyone give me a second opinion by analysing the Wireshark logs? Thanks in advance. Regards Patrick

    Read the article

  • Mod_rewrite with UTF-8 accent, multiviews , .htaccess

    - by GuruJR
    Problem: mod_rewrite, multiviews & Apache config. Introduction: The website is in French and I had problems with Unicode encoding and mod_rewrite within PHP without multiviews. The old server was not handling UTF-8 correctly (somewhere between PHP, Apache mod_rewrite or MySQL). I updated the server to Ubuntu 11.04; the process was destructive: I lost all files in /var/www/ (the site was mainly 2 files, index.php & static.php), lost the site-specific .htaccess file, lost the MySQL dbs and lost the old apache.conf. What I have done so far - what works: set up GnuTLS for SSL, Listen 443 = port.conf; created 2 vhosts in one file for :80 and :443 = website.conf; enforced SSL by redirecting :80 to :443 with a mod_rewrite redirect; tried to set UTF-8 everywhere (charset and collation, db connection, mb_settings, names utf-8 and utf8_unicode_ci, in PHP, MySQL and Apache); to be sure to serve files as UTF-8 I enabled multiviews and renamed the files index.php.utf8.fr and static.php.utf8.fr. With multiviews enabled, multibyte accents in URLs work, as does SSL TLS 1.0. What doesn't work: with multiviews enabled, mod_rewrite works for only one of my RewriteRules; with multiviews disabled, I lose access to the document root as "Forbidden"; with multiviews disabled, I lose multibyte (single-character accent) handling. The Apache default server config is full of settings (what can I safely remove?). These are my configuration files so far: :80 vhost file (this one works; you can use this to force a redirect to https) RewriteEngine On RewriteCond %{HTTPS} off RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} LanguagePriority fr :443 Vhost file (GnuTls is working) DocumentRoot /var/www/x ServerName example.com ServerAlias www.example.com <Directory "/var/www/x"> allow from all Options FollowSymLinks +MultiViews AddLanguage fr .fr AddCharset UTF-8 .utf8 LanguagePriority fr </Directory> GnuTLSEnable on GnuTLSPriorities SECURE:+VERS-TLS1.1:+AES-256-CBC:+RSA:+SHA1:+COMP-NULL GnuTLSCertificateFile /path/to/certificate.crt GnuTLSKeyFile /path/to/certificate.key <Directory "/var/www/x/base"> </Directory> Basic .htaccess file AddDefaultCharset utf-8 Options FollowSymLinks +MultiViews RewriteEngine on RewriteRule ^api/$ /index.php.utf8.fr?v=4 [L,NC,R] RewriteRule ^contrib/$ /index.php.utf8.fr?v=2 [L,NC,R] RewriteRule ^coop/$ /index.php.utf8.fr?v=3 [L,NC,R] RewriteRule ^crowd/$ /index.php.utf8.fr?v=2 [L,NC,R] RewriteRule ^([^/]*)/([^/]*)$ /static.php.utf8.fr?VALUEONE=$2&VALUETWO=$1 [L] So my question is: what's wrong, what am I missing, and are there extra settings in the Apache defaults that I need to remove in order to be sure all parts are using UTF-8 at all times and that my mod_rewrite rules work with accents? Thank you all in advance for your help; I will follow this question closely to add any needed information.

    Read the article

  • Suspicious process running under user named

    - by Amit
    I get a lot of emails reporting this and I want this issue to auto correct itself. These process are run by my server and are a result of updates, session deletion and other legitimate session handling reported as false positives. Here's a sample report: Time: Sat Oct 20 00:00:03 2012 -0400 PID: 20077 Account: named Uptime: 326117 seconds Executable: /usr/sbin/nsd\00507d27e9\0053\00\00\00\00\00 (deleted) The file system shows this process is running an executable file that has been deleted. This typically happens when the original file has been replaced by a new file when the application is updated. To prevent this being reported again, restart the process that runs this excecutable file. See csf.conf and the PT_DELETED text for more information about the security implications of processes running deleted executable files. Command Line (often faked in exploits): /usr/sbin/nsd -c /etc/nsd/nsd.conf Network connections by the process (if any): udp: xx.xx.xxx.xx:53 -> 0.0.0.0:0 udp: 127.0.0.1:53 -> 0.0.0.0:0 udp: xx.xx.xxx.xx:53 -> 0.0.0.0:0 tcp: xx.xx.xxx.xx:53 -> 0.0.0.0:0 tcp: 127.0.0.1:53 -> 0.0.0.0:0 tcp: xx.xx.xxx.xx:53 -> 0.0.0.0:0 Files open by the process (if any): /dev/null /dev/null /dev/null Memory maps by the process (if any): 0045e000-00479000 r-xp 00000000 fd:00 2582025 /lib/ld-2.5.so 00479000-0047a000 r--p 0001a000 fd:00 2582025 /lib/ld-2.5.so 0047a000-0047b000 rw-p 0001b000 fd:00 2582025 /lib/ld-2.5.so 0047d000-005d5000 r-xp 00000000 fd:00 2582073 /lib/i686/nosegneg/libc-2.5.so 005d5000-005d7000 r--p 00157000 fd:00 2582073 /lib/i686/nosegneg/libc-2.5.so 005d7000-005d8000 rw-p 00159000 fd:00 2582073 /lib/i686/nosegneg/libc-2.5.so 005d8000-005db000 rw-p 005d8000 00:00 0 005dd000-005e0000 r-xp 00000000 fd:00 2582087 /lib/libdl-2.5.so 005e0000-005e1000 r--p 00002000 fd:00 2582087 /lib/libdl-2.5.so 005e1000-005e2000 rw-p 00003000 fd:00 2582087 /lib/libdl-2.5.so 0062b000-0063d000 r-xp 00000000 fd:00 2582079 /lib/libz.so.1.2.3 0063d000-0063e000 rw-p 00011000 fd:00 2582079 /lib/libz.so.1.2.3 00855000-0085f000 r-xp 00000000 fd:00 2582022 /lib/libnss_files-2.5.so 0085f000-00860000 r--p 00009000 fd:00 2582022 /lib/libnss_files-2.5.so 00860000-00861000 rw-p 0000a000 fd:00 2582022 /lib/libnss_files-2.5.so 00ac0000-00bea000 r-xp 00000000 fd:00 2582166 /lib/libcrypto.so.0.9.8e 00bea000-00bfe000 rw-p 00129000 fd:00 2582166 /lib/libcrypto.so.0.9.8e 00bfe000-00c01000 rw-p 00bfe000 00:00 0 00e68000-00e69000 r-xp 00e68000 00:00 0 [vdso] 08048000-08074000 r-xp 00000000 fd:00 927261 /usr/sbin/nsd 08074000-08079000 rw-p 0002b000 fd:00 927261 /usr/sbin/nsd 08079000-0808c000 rw-p 08079000 00:00 0 08a20000-08a67000 rw-p 08a20000 00:00 0 b7f8d000-b7ff2000 rw-p b7f8d000 00:00 0 b7ffd000-b7ffe000 rw-p b7ffd000 00:00 0 bfa6d000-bfa91000 rw-p bffda000 00:00 0 [stack] Would /etc/nsd/restart or kill -1 20077 solve the problem?
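    For what it's worth, a couple of quick checks are possible here (a sketch; the PID and the init script path are taken from the report above and may differ on this system): /proc/<pid>/exe will show "(deleted)" while the old binary image is still running, and a plain SIGHUP generally makes nsd reload its zone data rather than re-execute the new binary, so a full restart is usually what clears the warning:

      # confirm the running binary really is a deleted file
      ls -l /proc/20077/exe
      # restart nsd so it runs the updated executable (service name assumed)
      /etc/init.d/nsd restart 2>/dev/null || service nsd restart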

    Read the article

  • Automatic layout of manual network mapping

    - by Paul
    So I have a small business network mainly consisting of two routed layer-2 domains with a total of ca. 100 devices spread over ca. 2000m² production and office spaces. Typical problems to solve using the graph would be: Over what (cable) path is a PC connected to the server? Where to expect devices connected to a switch port? I want to generate a graph of the physical network topology: Nodes are endpoint devices, switch ports, wall outlets, patch panel ports etc. Edges are cable connections. Ideally, grouping edges (or segments) that pass through the same bundle could be grouped. Also I would like to augment the graph data with automatically gathered data (monitoring state, MAC address, Switch port <- MAC entries to build up parts of the map). At the moment I use graphviz for this inside a Confluence wiki like that: layout = "neato" overlap = scale subgraph { rankdir = "TB" subgraph cluster_r1pf1 { r1pf1 [label="{ Rack 1 PF 1 | { <p1>P1 | <p2>P2 | <p3>P3} }", shape=record] } subgraph cluster_switch1 { switch1 [label="{ Rack 1 Switch 1 | { <p1> P1 | <p1> P1 | <p3> P3} }", shape=record] } r1pf1:p1 -> switch1:p1 (obviously there are dozens of entries omitted here) Problem is: I have a hard time to influence graphviz to generate a bearable layout. Edges overlap so bad that you can't read the diagram anymore. The question is: What other tools (be it interactive like Visio, Omnigraffle or I/O-oriented like graphviz) exist that would allow an easily versionable (as in: Operates on a text file) documentation that is both machine and human readable and editable? Why not OmniGraffle or Visio? Well we don't have Macs and Visio is not available at the moment. To buy it I would need good arguments. Automation would be one of that. But last time I looked, versioning Visio files or even thinking about automatic handling was a nightmare. Related: Network Mapping Tools basically asks the same with a focus on generating the complete graph automatically (but without the need to document cabling connections) Recommendations for automatic computer inventory brings up links of "all-in-one" solutions

    Read the article

  • What is the current state of Ubuntu's transition from init scripts to Upstart? [migrated]

    - by Adam Eberlin
    What is the current state of Ubuntu's transition from init.d scripts to upstart? I was curious, so I compared the contents of /etc/init.d/ to /etc/init/ on one of our development machines, which is running Ubuntu 12.04 LTS Server. # /etc/init.d/ # /etc/init/ acpid acpid.conf apache2 --------------------------- apparmor --------------------------- apport apport.conf atd atd.conf bind9 --------------------------- bootlogd --------------------------- cgroup-lite cgroup-lite.conf --------------------------- console.conf console-setup console-setup.conf --------------------------- container-detect.conf --------------------------- control-alt-delete.conf cron cron.conf dbus dbus.conf dmesg dmesg.conf dns-clean --------------------------- friendly-recovery --------------------------- --------------------------- failsafe.conf --------------------------- flush-early-job-log.conf --------------------------- friendly-recovery.conf grub-common --------------------------- halt --------------------------- hostname hostname.conf hwclock hwclock.conf hwclock-save hwclock-save.conf irqbalance irqbalance.conf killprocs --------------------------- lxc lxc.conf lxc-net lxc-net.conf module-init-tools module-init-tools.conf --------------------------- mountall.conf --------------------------- mountall-net.conf --------------------------- mountall-reboot.conf --------------------------- mountall-shell.conf --------------------------- mounted-debugfs.conf --------------------------- mounted-dev.conf --------------------------- mounted-proc.conf --------------------------- mounted-run.conf --------------------------- mounted-tmp.conf --------------------------- mounted-var.conf networking networking.conf network-interface network-interface.conf network-interface-container network-interface-container.conf network-interface-security network-interface-security.conf newrelic-sysmond --------------------------- ondemand --------------------------- plymouth plymouth.conf plymouth-log plymouth-log.conf plymouth-splash plymouth-splash.conf plymouth-stop plymouth-stop.conf plymouth-upstart-bridge plymouth-upstart-bridge.conf postgresql --------------------------- pppd-dns --------------------------- procps procps.conf rc rc.conf rc.local --------------------------- rcS rcS.conf --------------------------- rc-sysinit.conf reboot --------------------------- resolvconf resolvconf.conf rsync --------------------------- rsyslog rsyslog.conf screen-cleanup screen-cleanup.conf sendsigs --------------------------- setvtrgb setvtrgb.conf --------------------------- shutdown.conf single --------------------------- skeleton --------------------------- ssh ssh.conf stop-bootlogd --------------------------- stop-bootlogd-single --------------------------- sudo --------------------------- --------------------------- tty1.conf --------------------------- tty2.conf --------------------------- tty3.conf --------------------------- tty4.conf --------------------------- tty5.conf --------------------------- tty6.conf udev udev.conf udev-fallback-graphics udev-fallback-graphics.conf udev-finish udev-finish.conf udevmonitor udevmonitor.conf udevtrigger udevtrigger.conf ufw ufw.conf umountfs --------------------------- umountnfs.sh --------------------------- umountroot --------------------------- --------------------------- upstart-socket-bridge.conf --------------------------- upstart-udev-bridge.conf urandom --------------------------- --------------------------- ureadahead.conf --------------------------- ureadahead-other.conf 
--------------------------- wait-for-state.conf whoopsie whoopsie.conf To be honest, I'm not entirely sure if I'm interpreting the division of responsibilities properly, as I didn't expect to see any overlap (of what framework handles which services). So I was quite surprised to learn that there was a significant amount of overlap in service references, in addition to being unable to discern which of the two was intended to be the primary service framework. Why does there seem to be a fair amount of redundancy in individual service handling between init.d and upstart? Is something else at play here that I'm missing? What is preventing upstart from completely taking over for init.d? Is there some functionality that certain daemons require which upstart does not yet have, which are preventing some services from converting? Or is it something else entirely?
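    One practical way to tell which framework owns a given service on 12.04: Upstart jobs are the /etc/init/*.conf files and answer to initctl/status, while anything present only in /etc/init.d is still SysV; much of the apparent overlap is because many /etc/init.d entries are just compatibility shims symlinked to /lib/init/upstart-job. A small sketch, assuming a stock 12.04 box:

      # list all Upstart jobs and their state
      initctl list
      # check one service: Upstart answers if /etc/init/<name>.conf exists
      status cron
      # show which init.d entries are really just pointers at upstart-job
      ls -l /etc/init.d | grep upstart-job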

    Read the article

  • Windows 2003 Storage Server Hanging on Large File Transfers

    - by user25272
    In one of our offices we have a Dell PowerVault 745N NAS device which acts as the main file server. It's running 32-bit Windows 2003 Storage Server SP2 with 3GB RAM. The server holds around 60 users' HOME folders, which are mapped via AD. The office clients are a mix of XP SP3, Vista and Windows 7. Occasionally the server will completely hang when transferring large files. When the hang happens the console becomes unresponsive with only the mouse active and blank wallpaper. Sometimes stopping the copy frees the server, sometimes not. The hanging can last around 20 minutes. During this time other servers also become unresponsive with blank wallpaper at the console. If you do manage to get onto another server the taskbar and run commands are unresponsive. This also extends to the client computers, sometimes with Explorer crashing. I'm guessing this is due to the HOME folder mapping. Eventually the NAS server will free up and everything will be back to normal. The server is configured as follows: PERC 4/DC DATA 2 - 12 SCSI HDD - RAID5 SHADOWCOPY 2 SCSI HDD - RAID1 CERC SATA DATA 11 4 SATA HDD - RAID5 OS 4 SATA HDD - RAID5 All the drivers and firmware are up to date. I've been through all the diagnostics with Dell and the hardware has come up clean including full HDD tests on the arrays. The server has NOD32 installed as the AV, but the hanging happens even when it is uninstalled. There are no errors in the event log when this happens and we don't have any errors logged on any of our ProCurve switches. DNS is fine on the domain and AD from what I can tell is running happily. There are no DFS or NFS shares setup either. All the shares are standard Windows. I've unchecked the "allow the computer to turn off this device to save power" box under Power Management on the NIC, set Link Speed and Duplex to "Auto-negotiate 1000", and increased the Receive Descriptors buffer from 256 to 352 (reserves more CPU resource for handling data). I've run network traces using network monitor and have found the following: 417 8.078125 {SMB:192, NbtSS:25, TCP:24, IPv4:23} 192.168.2.244 192.168.5.35 SMB SMB:R; Nt Create Andx - NT Status: System - Error, Code = (52) STATUS_OBJECT_NAME_NOT_FOUND I've tried different cabling, NICs and switch ports, all with the same result. Transferring files from other servers on the domain is fine. All I haven't done is run CHKDSK on the drives to look for any file system errors. On the Vista clients I have also run netsh interface tcp set global autotuning=disabled with no result. Could it be that the server has a faulty drive or that the I/O is too much for it to handle? Any ideas why the hang would cause issues with the other servers on the LAN? Many Thanks.

    Read the article

  • Nginx, proxy passing to Apache, and SSL

    - by Vic
    I have Nginx and Apache set up with Nginx proxy-passing everything to Apache except static resources. I have a server set up for port 80 like so: server { listen 80; server_name *.example1.com *.example2.com; [...] location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ { access_log off; expires max; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; add_header Vary: Accept-Encoding; } location / { proxy_pass http://127.0.0.1:8080; include /etc/nginx/conf.d/proxy.conf; } } And since we have multiple ssl sites (with different ssl certificates) I have a server{} block for each of them like so: server { listen 443 ssl; server_name *.example1.com; [...] location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ { access_log off; expires max; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; add_header Vary: Accept-Encoding; } location / { proxy_pass https://127.0.0.1:8443; include /etc/nginx/conf.d/proxy.conf; proxy_set_header X-Forwarded-Port 443; proxy_set_header X-Forwarded-Proto https; } } server { listen 443 ssl; server_name *.example2.com; [...] location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ { access_log off; expires max; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; add_header Vary: Accept-Encoding; } location / { proxy_pass https://127.0.0.1:8445; include /etc/nginx/conf.d/proxy.conf; proxy_set_header X-Forwarded-Port 443; proxy_set_header X-Forwarded-Proto https; } } First of all, I think there is a very obvious problem here, which is that I'm double-encrypting everything, first at the nginx level and then again by Apache. To make everything worse, I just started using Amazon's Elastic Load Balancer, so I added the certificate to the ELB and now SSL encryption is happening three times. That's gotta be horrible for performance. What is the sane way to handle this? Should I be forwarding https on the ELB - http on nginx - http on apache? Secondly, there is so much duplication above. Is the best method to not repeat myself to put all of the static asset handling in an include file and just include it in the server?

    Read the article

  • How to remove RAID flag on unstriped drive without losing data?

    - by Alex Folland
    I have a Gigabyte Z68X-UD4-B3 motherboard. It advertises this new thing called "XHD", which is like RAID but makes a SSD and traditional-style drive work together to enable high speed with high capacity. I don't want to use this feature, and I already have Windows 7 64 installed without using this feature. When I first installed my 2 hard drives (1 SSD and 1 traditional-style drive) in my machine and booted it up for the first time, it ran a program from the mobo that asked me if I wanted to set up XHD. Thinking it would go to some config screen, I said yes. It immediately started doing something with my drives and finished. I considered that strange, but figured it wouldn't matter when I simply install Windows onto my SSD only. I now have my BIOS and Windows running in AHCI mode with no RAID arrays and separate drives. My SSD is one of those new Corsair Force GT drives which loses power every so often, causing Windows to BSOD. I've figured everything out about this problem, including installing the latest firmware from Corsair, and the only way to fix it at this point is by installing Intel Rapid Storage Technology to control AHCI instead of Windows, since the Windows AHCI driver disables the drive's power every once in a while and can't be configured not to do so. I've tried installing Intel Rapid Storage Technology. When I reboot my machine after doing so, it BSODs just after the Windows logo. I've figured out this is because my SSD and my traditional drive are flagged as RAID, as seen in the "Intel Matrix Storage Manager" program found by switching the BIOS hard drive handling to "RAID" mode. This is due to the XHD auto-config program I mentioned earlier. Normally, the BIOS is set to AHCI, and when the drives boot in AHCI mode, they work perfectly. So, I've concluded the data is stored in AHCI mode but the drives' flags are set to RAID. I've figured out that I can accomplish my objective by using the "Intel Matrix Storage Manager" program on the mobo (with "Reset disks to non-RAID"), but doing so would cause it to completely wipe the drives I select. I want to simply toggle these flags from RAID to AHCI so Intel Rapid Storage Technology doesn't fail and cause a BSOD upon booting, but without wiping the drives.

    Read the article

  • Linux bonded Interfaces hanging periodically

    - by David
    I've several hosts that are showing problems with connectivity. When working from the command line, for example, typing is frozen for a second or so, then recovers - then it does it again. The most egregious example host would freeze (input) for 15-30 seconds, then recover and go out 5 seconds later. Switching cables didn't do anything - but removing one of the physical cables caused everything to clear up instantly (which why I think this is a network problem). Looking at the network I couldn't see any packets floating that would explain this. These ethernet interfaces (Gigabit Dell) were working normally previously, but since we moved the systems - and put them on a new set of switches - this has been a problem on multiple theoretically identically-configured hosts. The original switches were an HP Procurve 1810-24G and an HP Procurve 1800-24G connected with LLDP; the new switches are both Cisco SG 200-26, which I understand are rebranded Linksys switches. Is this caused by a problem with the switches? Is it the switch configurations? Are the Cisco switches incapable of handling this? I don't see where the configuration is located; I searched the usual /etc/sysconfig/network/devices but there's nothing in there about options (like mii polling) and nothing about the method of balancing the two. Searching scripts, I can't find anything in /etc/init.d/network either. The hosts are almost all Red Hat Enterprise Linux 5.x systems (5.6, 5.7) but some are Ubuntu Server 10.04.3 Lucid Lynx. I need help with both if it comes to that. UPDATE: We're also seeing some problems with servers on the original switches. The HP switches and the Cisco switches are also interconnected (temporarily); there is a cable run from one switch to the next. Pings on any of these hosts show about one ICMP packet out of every 5-6 getting dropped (timed out). Could there be an interaction between the two switches? Oh, and the hosts are using bonding with Balance-RR as the method.

    Read the article

  • High CPU usage - symptoms moving from server to server after bouncing

    - by grt3kl
    First off, I apologize if I didn't include enough information to properly troubleshoot this issue. This sort of thing isn't my specialty, so it is a learning process. If there's something I need to provide, please let me know and I'll be happy to do what I can. The images associated with my question are at the bottom of this post. We are dealing with a clustered environment of four WebLogic 9.2 Java application servers. The cluster utilizes a round-robin load algorithm. Other details include: Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_12-b04) BEA JRockit(R) (build R27.4.0-90_CR352234-91983-1.5.0_12-20071115-1605-linux-x86_64, compiled mode) Basically, I started looking at the servers' performance because our customers are seeing lots of lag at various times of the day. Our servers should easily handle the loads they are given, so it's not clear what's going on. Using HP Performance Manager, I generated some graphs that indicate that the CPU usage is completely out of whack. It seems that, at any given point, one or more of the servers has a CPU utilization of over 50%. I know this isn't particularly high, but I would say it is a red flag based on the CPU utilization of the other servers in the WebLogic cluster. Interesting things to note: The high CPU utilization was occurring only on server02 for several weeks. The server crashed (extremely rare; we are not sure if it's related to this) and upon starting it back up, the CPU utilization was normal on all 4 servers. We restarted all 4 managed servers and the application server (on server01) yesterday, on 2/28. As you can see, server03 and server04 picked up the behavior that was seen on server02 before. The CPU utilization is a Java process owned by the application user (appown). The number of transactions is consistent across all servers. It doesn't seem like any one server is actually handling more than another. If anyone has any ideas or can at least point me in the right direction, that would be great. Again, please let me know if there is any additional information I should post. Thanks!

    Read the article

  • Apache VirtualHost Blockhole (Eats All Requests on All Ports on an IP)

    - by Synetech inc.
    I’m exhausted. I just spent the last two hours chasing a goose that I have been after on-and-off for the past year. Here is the goal, put as succinctly as possible. Step 1: HOSTS File: 127.0.0.5 NastyAdServer.com 127.0.0.5 xssServer.com 127.0.0.5 SQLInjector.com 127.0.0.5 PornAds.com 127.0.0.5 OtherBadSites.com … Step 2: Apache httpd.conf <VirtualHost 127.0.0.5:80> ServerName adkiller DocumentRoot adkiller RewriteEngine On RewriteRule (\.(gif|jpg|png|jpeg)$) /p.png [L] RewriteRule (.*) /ad.htm [L] </VirtualHost> So basically what happens is that the HOSTS file redirects designated domains to the localhost, but to a specific loopback IP address. Apache listens for any requests on this address and serves either a transparent pixel graphic, or else an empty HTML file. Thus, any page or graphic on any of the bad sites is replaced with nothing (in other words an ad/malware/porn/etc. blocker). This works great as is (and has been for me for years now). The problem is that these bad things are no longer limited to just HTTP traffic. For example: <script src="http://NastyAdServer.com:99"> or <iframe src="https://PornAds.com/ad.html"> or a Trojan using ftp://spammaster.com/[email protected];[email protected];[email protected] or an app “phoning home” with private info in a crafted ICMP packet by pinging CardStealer.ru:99 Handling HTTPS is a relatively minor bump. I can create a separate VirtualHost just like the one above, replacing port 80 with 443, and adding in SSL directives. This leaves the other ports to be dealt with. I tried using * for the port, but then I get overlap errors. I tried redirecting all request to the HTTPS server and visa-versa but neither worked; either the SSL requests wouldn’t redirect correctly or else the HTTP requests gave the You’re speaking plain HTTP to an SSL-enabled server port… error. Further, I cannot figure out a way to test if other ports are being successfully redirected (I could try using a browser, but what about FTP, ICMP, etc.?) I realize that I could just use a port-blocker (eg ProtoWall, PeerBlock, etc.), but there’s two issues with that. First, I am blocking domains with this method, not IP addresses, so to use a port-blocker, I would have to get each and every domain’s IP, and update theme frequently. Second, using this method, I can have Apache keep logs of all the ad/malware/spam/etc. requests for future analysis (my current AdKiller logs are already 466MB right now). I appreciate any help in successfully setting up an Apache VirtualHost blackhole. Thanks.

    Read the article

  • SharePoint 2010 – Central Admin tooling to create host header site collections

    - by eJugnoo
    Just like SharePoint 2007, you can create host-header based site collections in SharePoint 2010 as well. It means that you do not necessarily need to create a site collection under a managed path like /sites/; you can create multiple root-level site collections on the same web application/port by using host-header site collections. All you need to do is point your domain or sub-domain to your web application and create a matching site collection. But, just like in 2007, it is something that you do by using STSADM, and it is not available in the Central Admin UI in 2010 either. Yeah, though you can now also use PowerShell to create one:

    C:\PS>$w = Get-SPWebApplication http://sitename
    C:\PS>New-SPSite http://www.contoso.com -OwnerAlias "DOMAIN\jdoe" -HostHeaderWebApplication $w -Title "Contoso" -Template "STS#0"

    This example creates a host header site collection. Because the template is provided, the root Web of this site collection will be created.

    I’ve been playing with WCM in SharePoint 2010 more and more, and for that I preferred creating hosts file entries for desired domains and creating site collections by those headers – in my dev environment. I used PowerShell initially, but then got interested in building my own UI on Central Admin instead.

    Developed with Visual Studio 2010

    So I used the new Visual Studio 2010 tooling to create an empty SharePoint 2010 project. Added an application page (there is no option to add an _Admin page item in VS 2010 RC), which got created in the Layouts “mapped” folder. Created a new Admin mapped folder for the 14-“hive”, and moved my new page there instead. Yes, I didn’t change the base class for the page; it’s just that it runs under _admin, but it is indeed a LayoutsPageBase-inherited page.
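    Before moving on to the custom UI: for comparison with the PowerShell above, the pre-2010 STSADM route the post mentions would look roughly like the following. This is a sketch from memory of the 2007-era tooling; the URL, owner login/e-mail, and web-application URL are placeholders, so verify the exact switches with stsadm -help createsite on your farm:

    stsadm -o createsite -url http://www.contoso.com -ownerlogin DOMAIN\jdoe -owneremail jdoe@contoso.com -sitetemplate STS#0 -title "Contoso" -hhurl http://sitename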
    To introduce an action-link in the Central Admin console, I created the following element:

    <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
      <CustomAction
        Id="CreateSiteByHeader"
        Location="Microsoft.SharePoint.Administration.Applications"
        Title="Create site collections by host header"
        GroupId="SiteCollections"
        Sequence="15"
        RequiredAdmin="Delegated"
        Description="Create a new top-level web site, by host header" >
        <UrlAction Url="/_admin/OfficeToolbox/CreateSiteByHeader.aspx" />
      </CustomAction>
    </Elements>

    I used Reflector to understand any special code behind createpage.aspx, and created a new page for our purpose – CreateSiteByHeader.aspx. From there I quickly created a similar code-behind, without all the fancy Farm Config Wizard handling, and dealt with alternate implementations of sealed classes! The goal was to create a professional-looking, OOB-type experience. I also added Regex validation to ensure the user types a valid domain name as the header value. Below is the result…

    Release @ Codeplex

    I’ve released the WSP on OfficeToolbox @ Codeplex, and you can download it from here. Hope you find it useful… -- Sharad

    Read the article

  • C# 5 Async, Part 1: Simplifying Asynchrony – That for which we await

    - by Reed
    Today’s announcement at PDC of the future directions C# is taking excites me greatly. The new Visual Studio Async CTP is amazing. Asynchronous code – code which frustrates and demoralizes even the most advanced of developers – is taking a huge leap forward in terms of usability. This is handled by building on the Task functionality in .NET 4, as well as the addition of two new keywords to the C# language: async and await.

    The core of the new asynchronous functionality is built upon three key features. First is the Task functionality in .NET 4, based on Task and Task<TResult>. While Task was intended to be the primary means of asynchronous programming with .NET 4, the .NET Framework was still based mainly on the Asynchronous Pattern and the Event-based Asynchronous Pattern. The .NET Framework added functionality and guidance for wrapping existing APIs into a Task-based API, but the framework itself didn’t really adopt Task or Task<TResult> in any meaningful way. The CTP shows that, going forward, this is changing.

    One of the three key new features coming in C# is actually a .NET Framework feature. Nearly every asynchronous API in the .NET Framework has been wrapped into new, Task-based method calls. In the CTP, this is done via an external assembly (AsyncCtpLibrary.dll) which uses Extension Methods to wrap the existing APIs. However, going forward, this will be handled directly within the Framework. This will have a unifying effect throughout the .NET Framework, and it is the first building block of the new features for asynchronous programming: going forward, all asynchronous operations will work via a method that returns Task or Task<TResult>.

    The second key feature is the new async contextual keyword being added to the language. The async keyword is used to declare an asynchronous function, which is a method that returns void, a Task, or a Task<T>. Inside the asynchronous function, there must be at least one await expression. await is a new C# keyword that is used to automatically take a series of statements and break them up to potentially use discontinuous evaluation. This is done by using await on any expression that evaluates to a Task or Task<T>.

    For example, suppose we want to download a webpage as a string. There is a new method added to WebClient: Task<string> WebClient.DownloadStringTaskAsync(Uri). Since this returns a Task<string>, we can use it within an asynchronous function. Suppose, for example, that we wanted to do something similar to my asynchronous Task example – download a web page asynchronously and check to see if it supports XHTML 1.0, then report this into a TextBox.
    This could be done like so:

    private async void button1_Click(object sender, RoutedEventArgs e)
    {
        string url = "http://reedcopsey.com";
        string content = await new WebClient().DownloadStringTaskAsync(url);
        this.textBox1.Text = string.Format("Page {0} supports XHTML 1.0: {1}", url, content.Contains("XHTML 1.0"));
    }

    Let’s walk through what’s happening here, step by step. By adding the async contextual keyword to the method definition, we are able to use the await keyword on our WebClient.DownloadStringTaskAsync method call.

    When the user clicks this button, the new method (Task<string> WebClient.DownloadStringTaskAsync(string)) is called, which returns a Task<string>. By adding the await keyword, the runtime will call this method that returns Task<string>, and execution will return to the caller at this point. This means that our UI is not blocked while the webpage is downloaded. Instead, the UI thread will “await” at this point and let the WebClient do its thing asynchronously.

    When the WebClient finishes downloading the string, the user interface’s synchronization context will automatically be used to “pick up” where it left off, and the Task<string> returned from DownloadStringTaskAsync is automatically unwrapped and set into the content variable. At this point, we can use that and set our text box content.

    There are a couple of key points here: asynchronous functions are declared with the async keyword, and they contain one or more await expressions.

    In addition to the obvious benefits of shorter, simpler code, there are some subtle but tremendous benefits in this approach. When the execution of this asynchronous function continues after the first await statement, the initial synchronization context is used to continue the execution of this function. That means that we don’t have to explicitly marshal the call that sets textBox1.Text back to the UI thread – it’s handled automatically by the language and framework! Exception handling around asynchronous method calls also just works.

    I’d recommend every C# developer take a look at the documentation on the new Asynchronous Programming for C# and Visual Basic page, download the Visual Studio Async CTP, and try it out.
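    Since the post notes that exception handling around awaited calls “just works”, here is a minimal sketch of what that might look like, assuming the same CTP extension method (DownloadStringTaskAsync) used above. The button2_Click handler name, the URL, and the choice of catching WebException are illustrative assumptions, not part of the original post:

    // Requires the Async CTP (AsyncCtpLibrary.dll) and: using System.Net;
    private async void button2_Click(object sender, RoutedEventArgs e)
    {
        try
        {
            // Execution resumes on the UI synchronization context after the await,
            // so any exception from the download surfaces right here, on the UI thread.
            string content = await new WebClient().DownloadStringTaskAsync("http://example.com");
            this.textBox1.Text = "Downloaded " + content.Length + " characters";
        }
        catch (WebException ex)
        {
            // Handled like a normal synchronous failure; no manual Task unwrapping needed.
            this.textBox1.Text = "Download failed: " + ex.Message;
        }
    }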

    Read the article

  • Forcing an External Activation with Service Broker

    - by Davide Mauri
    In these last days I’ve been working quite a lot with Service Broker, a technology I’m really happy to work with, since it can give a lot of satisfaction. The scale-out solution one can easily build is simply astonishing.

    I’m helping a company to build a very scalable, yet almost inexpensive, invoicing system that has to be able to scale out using commodity hardware. To offload the work from the main server to satellite “compute nodes” (yes, I’ve borrowed this term from PDW) we’re using Service Broker and the External Activator application available in the SQL Server Feature Pack.

    For those who are not used to working with SSB, External Activation is a feature that allows you to intercept the arrival of a message in a queue right from your application code. http://msdn.microsoft.com/en-us/library/ms171617.aspx (Look for “Event-Based Activation”)

    In order to make life even easier, Microsoft released the External Activator application, which saves you from writing even this code. http://blogs.msdn.com/b/sql_service_broker/archive/tags/external+activator/

    The External Activator application can be configured to execute your own application, so that each time a message – an invoice in my case – arrives in the target queue, the invoking application is executed and the invoice is calculated. The very nice feature of External Activator is that it can automatically execute as many instances of the configured application as needed in order to process as many messages as your system can handle. This also helps a lot in creating a scale-out solution, leaving to the developer only a fraction of the problems that usually come with asynchronous programming. Developers are also shielded from Service Broker, since everything can be encapsulated in Stored Procedures, so that – for them – developing such a scale-out asynchronous solution is not much more complex than just executing a bunch of Stored Procedures.

    Now, if everything works correctly, you don’t have to bother about anything else. You put messages in the queue and your application, invoked by the External Activator, processes them. But what happens if, for some reason, your application fails to process the messages? For example, what if it crashes? The message is safe in the queue, so you just need to process it again. But your application is invoked by the External Activator application, so now the question is: how do you wake up that app?

    Service Broker will engage the activation process only if certain conditions are met: http://msdn.microsoft.com/en-us/library/ms171601.aspx

    But how can we invoke the activation process manually, without having to wait for another message to arrive (the arrival of a new message is a condition that can fire the activation process)?
    The “trick” is to do manually what the activation process does: send a system message to the queue in charge of handling External Activation messages:

    declare @conversationHandle uniqueidentifier;
    declare @n xml = N'
    <EVENT_INSTANCE>
      <EventType>QUEUE_ACTIVATION</EventType>
      <PostTime>' + CONVERT(CHAR(24), GETDATE(), 126) + '</PostTime>
      <SPID>' + CAST(@@SPID AS VARCHAR(9)) + '</SPID>
      <ServerName>[your_server_name]</ServerName>
      <LoginName>[your_login_name]</LoginName>
      <UserName>[your_user_name]</UserName>
      <DatabaseName>[your_database_name]</DatabaseName>
      <SchemaName>[your_queue_schema_name]</SchemaName>
      <ObjectName>[your_queue_name]</ObjectName>
      <ObjectType>QUEUE</ObjectType>
    </EVENT_INSTANCE>';

    begin dialog conversation @conversationHandle
        from service [<your_initiator_service_name>]
        to service '<your_event_notification_service>'
        on contract [http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]
        with encryption = off, lifetime = 6000;

    send on conversation @conversationHandle
        message type [http://schemas.microsoft.com/SQL/Notifications/EventNotification] (@n);

    end conversation @conversationHandle;

    That’s it! Put the code in a Stored Procedure and you can add to your application a button that says “Force Queue Processing” (or something similar) in order to start the activation process whenever you need it (which should not occur too frequently, but it may happen).

    PS: I know that the “fire-and-forget” technique (ending the conversation without waiting for an answer) is not a best practice, but in this case I don’t see how it can hurt, so I decided to stay very close to the KISS principle.
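    As a rough illustration of the “put it in a Stored Procedure” suggestion above, a minimal wrapper might look like the following. The procedure name is an assumption for this sketch, and the body simply re-uses the script above with its placeholder service and queue names:

    -- Hypothetical wrapper around the script above; replace the placeholders
    -- with the actual service, contract, and queue names of your deployment.
    CREATE PROCEDURE dbo.usp_ForceExternalActivation
    AS
    BEGIN
        SET NOCOUNT ON;
        -- ...body: the DECLARE / BEGIN DIALOG / SEND / END CONVERSATION script shown above...
    END
    GO

    -- The application's "Force Queue Processing" button would then simply run:
    EXEC dbo.usp_ForceExternalActivation;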

    Read the article
