Search Results

Search found 9271 results on 371 pages for 'drop handler'.


  • iptables-restore: line 1 failed

    - by Doug
    Hello, I am new to servers, and I was following this guide (http://wiki.debian.org/iptables) when it failed on the first command instructed. Could anyone give me a hand?

      ~ZORO~:/etc# iptables-restore < /etc/iptables.test.rules
      iptables-restore: line 1 failed

    Edit: contents of iptables.test.rules:

      ~ZORO~:/etc# cat /etc/iptables.test.rules
      *filter
      # Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
      -A INPUT -i lo -j ACCEPT
      -A INPUT -i ! lo -d 127.0.0.0/8 -j REJECT
      # Accepts all established inbound connections
      -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      # Allows all outbound traffic
      # You could modify this to only allow certain traffic
      -A OUTPUT -j ACCEPT
      # Allows HTTP and HTTPS connections from anywhere (the normal ports for websites)
      -A INPUT -p tcp --dport 80 -j ACCEPT
      -A INPUT -p tcp --dport 443 -j ACCEPT
      # Allows SSH connections for script kiddies
      # THE -dport NUMBER IS THE SAME ONE YOU SET UP IN THE SSHD_CONFIG FILE
      -A INPUT -p tcp -m state --state NEW --dport 30000 -j ACCEPT
      # Now you should read up on iptables rules and consider whether ssh access
      # for everyone is really desired. Most likely you will only allow access from certain IPs.
      # Allow ping
      -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
      # log iptables denied calls (access via 'dmesg' command)
      -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
      # Reject all other inbound - default deny unless explicitly allowed policy:
      -A INPUT -j REJECT
      -A FORWARD -j REJECT
      COMMIT
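    A hedged debugging suggestion: iptables-restore can parse a rules file without applying it, and entering the first rules interactively usually surfaces the parser's actual complaint (as an aside, newer iptables versions want the negation written as `! -i lo` rather than `-i ! lo`):

      # parse-only run: reports syntax problems without touching the live rules
      iptables-restore --test < /etc/iptables.test.rules
      # or try the first rules by hand to get a specific error message
      iptables -A INPUT -i lo -j ACCEPT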

    Read the article

  • Xen PV packet loss

    - by Delphinator
    I'm having some serious issues with packet loss on one of my servers. This server is a somewhat old (P4-era) machine running Debian Squeeze and Xen 4.0. There are two domUs running on it (both also Debian Squeeze), one gateway and a fileserver. Unfortunately the processor has no virtualization extensions, so only PV can be used. While investigating why our network seems slower than it should be, I found some pretty bad packet loss (~25%). After further investigation and several experiments I did a measurement between the dom0 and one of the domUs:

      Server listening on UDP port 5001
      Receiving 1470 byte datagrams
      UDP buffer size: 110 KByte (default)
      ------------------------------------------------------------
      ------------------------------------------------------------
      Client connecting to dom0, UDP port 5001
      Sending 1470 byte datagrams
      UDP buffer size: 110 KByte (default)
      ------------------------------------------------------------
      [ 3] local 192.168.1.2 (domU) port 33817 connected with 192.168.1.100 (dom0) port 5001
      [ 4] local 192.168.1.2 (domU) port 5001 connected with 192.168.1.100 (dom0) port 48606
      [ ID] Interval       Transfer     Bandwidth
      [ 3]  0.0-10.0 sec   46.3 MBytes  38.7 Mbits/sec
      [ 3] Sent 33020 datagrams
      [ 3] Server Report:
      [ 3]  0.0-10.0 sec   46.2 MBytes  38.6 Mbits/sec  0.030 ms    89/33019 (0.27%)
      [ 3]  0.0-10.0 sec   1 datagrams received out-of-order
      [ 4]  0.0-10.2 sec   43.0 MBytes  35.3 Mbits/sec  13.074 ms  11575/42256 (27%)

    tl;dr: 27% packet loss from dom0 to domU with a 50Mbit UDP stream. The same thing happens from anywhere in the network. The problem gets better at lower bandwidths (0.047% at 5Mbit) and worse at higher ones (59% at 200Mbit). I did increase the CPU weight of the dom0, there is no swapping going on, and no physical networking hardware is involved. I never expected Xen (or anything related) to drop packets, and I'm completely clueless what to try next.
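    For anyone reproducing this, a hedged sketch of the bidirectional UDP test that produces output like the above (iperf 2.x flags; the addresses are the ones from the report, and -d runs the simultaneous reverse stream seen as connection [4]):

      # on the dom0 (192.168.1.100): receive UDP
      iperf -s -u
      # on the domU (192.168.1.2): 50 Mbit/s UDP for 10 s with a simultaneous reverse stream
      iperf -c 192.168.1.100 -u -b 50M -d -t 10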

    Read the article

  • Win2008/IIS7/fx2.0 - 500.19 error

    - by Keith Barrows
    I installed new boxes at the beginning of the week: 1) a web server on Win2008 x64, IIS 7 + all updates; 2) a DB server on Win2008 x64, SQL 2008 Ent + all updates. I configured my websites, set up host headers and DNS entries, worked through some problems with my handlers and finally got it all running Wednesday morning. Our team has been using it since then. This morning I came in and every one of us is getting a 500 error.

      Error Summary: HTTP Error 500.19 - Internal Server Error
      The requested page cannot be accessed because the related configuration data for the page is invalid.

      Detailed Error Information
      Module:        IIS Web Core
      Notification:  Unknown
      Handler:       Not yet determined
      Error Code:    0x80070005
      Config Error:  Cannot read configuration file due to insufficient permissions
      Config File:   \\?\C:\RivWorks\dev\web.config
      Requested URL: http://dev.rivworks.com:80/login.aspx
      Physical Path:
      Logon Method:  Not yet determined
      Logon User:    Not yet determined
      Config Source: -1: 0:

    Links and More Information: This error occurs when there is a problem reading the configuration file for the Web server or Web application. In some cases, the event logs may contain more information about what caused this error. I've gone through the KB articles, made sure IIS_IUSRS had read permissions, and am now stumped. What bothers me is that IIS is looking in \\?\C:\ instead of just C:\. What is happening? TIA
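    Two hedged notes: the \\?\ prefix is just the Win32 extended-length path syntax IIS uses when reporting paths, not itself the fault, and 0x80070005 is a plain ACCESS DENIED, so the effective ACL on the exact file is worth re-checking. A sketch (BUILTIN\IIS_IUSRS is the stock IIS 7 group; adjust if the app pool runs under a custom identity):

      rem show the current ACL on the site's web.config
      icacls C:\RivWorks\dev\web.config
      rem grant read if the group is missing
      icacls C:\RivWorks\dev\web.config /grant "BUILTIN\IIS_IUSRS:(R)"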

    Read the article

  • Can MS Services for Unix be deployed and accessed from a shared drive?

    - by Ian C.
    I'm interested in experimenting with replacing our dependency on MKS with MS' Services for Unix toolset. I was wondering if anyone has any experience with deploying SFU on a shared drive? We like to, wherever possible, host our dev tools on one central NAS and call to the NAS to access the tools instead of rolling stuff out to each and every desktop. I'm not interested in the NFS support or ActiveState Perl. Really, none of the daemon technology is required here. I'm looking for replacements for the coreutils/binutils stuff you find in Linux (and MKS on Windows): sed, awk, csh, bash, grep, ls, find -- the meat-and-potatoes command line apps that our build and test scripts are built around. If I limit the install to just the Interix GNU Components (and maybe the Remote Connectivity components), will it run nicely from a shared location? To head off some questions: Yes, I've looked at Cygwin. Unfortunately its performance in our build and test environment is poor. It runs considerably slower than MKS and it's not a direct drop-in replacement for MKS (thanks to its internal pathing and limitations with commands like 'ps'), so it's a tougher sell. Yes, I'm looking at the MinGW offering in parallel to this.

    Read the article

  • Setting Outlook Web Access as default mail client in Firefox

    - by Barton Chittenden
    The company that I work for is nice enough to let me use Linux for work, but they use Outlook. I've been using Outlook Web Access (OWA) as my mail client, which is more or less acceptable. The only problem is that whenever I click on a mailto link or use the "Send Link" menu option in Firefox, I'm prompted to use Evolution. Since connecting to an Exchange server through Evolution seems to be sketchy at best, I would like to set OWA as my default mail client. I'm using Firefox 3.6.13. Here's what I've found so far: the default mail client can be set at Edit Menu -> Preferences -> Applications Tab -> mailto. When I click on the drop-down menu, one of the options is "Application Details". This shows two options by default, Gmail and Yahoo! Mail, along with how each service is launched:

      For Gmail:  https://mail.google.com/mail/?extsrc=mailto&url=%s
      For Yahoo!: http://compose.mail.yahoo.com/?To=%s

    I presume that Outlook Web Access has something similar. Based on the googling that I've done so far, I think this should look something like https://<server name>/owa/?cmd=compose... A little experimentation on my part shows that the following will compose a message:

      https://<email server>/owa/?ae=Item&a=New&t=IPM.Note

    but I still don't know how to specify the recipient, subject or body of the email to be composed. What I want to know is: a) does anyone know the URL parameters to compose a mailto in Outlook Web Access, including subject, recipient and body? else b) can someone give me a decent pointer to where to get this information?
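    Purely as a hedged sketch following the pattern of the Gmail/Yahoo! templates above: if OWA's compose URL accepts a recipient parameter (the parameter name here is an assumption to verify against your Exchange version, not documented fact), the Firefox handler entry would take this form:

      https://<email server>/owa/?ae=Item&a=New&t=IPM.Note&to=%s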

    Read the article

  • Redis as substitution for Memcache

    - by Boban P.
    We have a distributed web app, and for now, as the session handler, we use two separate instances of memcache in redundancy, so everything that is written to one memcache is also written to the other. Memcache is fairly easy to install, use, and maintain, but we have one problem: if one memcache fails, everything is fine; PHP communicates with the other instance, which has all the data (although half of the connections have a delay because they try to use the failed one, wait a little, and then contact the other memcache). When the failed instance comes back to life, it starts up empty. If an established session requests data from that instance, the session fails, the user logs out, and that happens to half of the users. So, we are thinking about switching to Redis for session handling, and maybe keeping memcache for caching only. My questions are:

    1. If we set up Redis instances as master-slave and the master fails, can Sentinel promote the slave to be the new master? And when the old master comes back to life, will it stay a slave or not?
    2. Does Redis call malloc at startup to allocate part of memory, like memcache or varnish, or does it call malloc for every key inserted? And what are the pros and cons of that?
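    On question 1, a minimal Sentinel sketch (the IP, quorum of 2, and timeouts are illustrative assumptions): Sentinel reconfigures a recovered old master to replicate from the newly promoted master rather than letting it reclaim the role.

      # sentinel.conf, run on at least three hosts for a meaningful quorum
      sentinel monitor mysessions 192.168.1.10 6379 2
      sentinel down-after-milliseconds mysessions 5000
      sentinel failover-timeout mysessions 60000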

    Read the article

  • Any non-custom way to manage iptables with fail2ban and libvirt+kvm?

    - by Peter Hansen
    I have an Ubuntu 9.04 server running libvirt/kvm and fail2ban (for SSH attacks). Both libvirt and fail2ban integrate with iptables in different ways. Libvirt uses (I think) some XML config and during startup (?) configures forwarding to the VM subnet. Fail2ban installs a custom chain (probably at init) and periodically modifies it to ban/unban probable attackers. I also need to install my own rules to forward various ports to servers running in VMs and on other machines, and set up rudimentary security (e.g. drop all INPUT traffic except the few ports I want open), and of course I'd like the ability to add/remove rules safely without restarting. It seems to me iptables is a powerful tool that's sorely lacking some sort of standardized way of juggling all this stuff. Every project, and every sysadmin, seems to do it differently! (And I think there's lots of "cargo cult" admin going on here, with people cloning crude approaches like "use iptables-save like so".) Short of figuring out the gory details of exactly how both of these (and potentially other) tools manipulate the netfilter tables, and developing my own scripts or just manually executing iptables commands, is there any way to safely work with iptables while not breaking the functionality of these other tools? Any nascent standards or projects defined to bring sanity to this area? Even a helpful web page I missed that might cover at least these two packages together?
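    One hedged convention that tends to coexist with both tools: keep hand-managed rules in a dedicated chain, so editing or flushing your own rules never touches the chains fail2ban and libvirt maintain for themselves. A sketch (chain name and ports are placeholders):

      # once, at boot: create the chain and hook it into INPUT
      iptables -N MYRULES
      iptables -A INPUT -j MYRULES
      # thereafter, rework only your own chain without restarting anything
      iptables -F MYRULES
      iptables -A MYRULES -p tcp --dport 22 -j ACCEPT
      iptables -A MYRULES -j DROP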

    Read the article

  • How can I change the default location/action of 'Open Outlook Data File' in Outlook 2010?

    - by Chadddada
    I have recently deployed a Remote Desktop Host server that functions as a remote Microsoft Office 2010 workspace for users. As part of locking down this server I have installed all programs on the D: drive and, through the use of Group Policy, hidden all the drives on the server from standard users. In addition to hiding these drives I am not allowing users to save anything locally (on the server) or open Libraries. However, one of the functions of the server is to provide the Outlook client. Often users will have the .PST file stored on a network location and want to open this in Outlook. Can I change the default action or location that File -> Open -> Open Outlook Data File looks in or tries to pull the file from? The default location seems to be under Users / Libraries. When clicking 'Open' you get a warning: "This operation has been cancelled due to restrictions in effect on this computer." Clicking OK drops the user into a small menu that shows attached network drives under Computer. Can I instead have the 'Open' click drop the users into a defined network drive, or just open Computer and allow them to select a share? I don't want them to see the error message. A solution that looks to have been used for Office 2000/03 is:

      Key:        HKEY_CURRENT_USER\Software\Microsoft\Office\<version>\Outlook
      Value name: ForceOSTPath
      Value type: REG_EXPAND_SZ
      Value:      path to your storage folder

    I am not sure if there is a better way to do this now, or if this even works with Office 2010.
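    For reference, a hedged sketch of that change targeted at Outlook 2010, whose registry version key is 14.0 (the UNC path is a placeholder, and whether ForceOSTPath also redirects the Open Outlook Data File dialog in 2010 is exactly the open question):

      reg add "HKCU\Software\Microsoft\Office\14.0\Outlook" /v ForceOSTPath /t REG_EXPAND_SZ /d "\\fileserver\outlook" /f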

    Read the article

  • Some guest networking and VMware Tools functionality broken with Sprint SmartView on the host

    - by Mads
    Using VMware Workstation 6.5.3 on Vista 64-bit. I started having problems with VMware networking about 6 months ago after upgrades to Sprint SmartView. I did not have problems previously, but I don't know if that is because I was lucky. The main symptoms of the problem when SmartView is installed are: I can no longer drag files from the host to copy them to the guest. When they are dragged, the disallowed cursor (the circle with a slash) shows in the guest. If I try to enable shared folders in the guest while it is running, I will not be able to see the shared files and will be informed that networking is not working. I can still ping guests from the host, and I can still access network services via NAT most of the time when connected via my USB broadband adapter. When I configure shared folders so they are "always enabled" (with a mapped drive), I can access files on the host via the mapped folders. I can also copy a file on the host and then paste it in the guest, as was suggested in some other threads concerning drag-and-drop problems that I found. The VMware Tools icon is showing in all cases and I don't see any obvious errors in the host's event viewer. If I uninstall SmartView, the problems disappear. If SmartView (current version is 2.28.0082) is reinstalled, I experience the same problems. I have tried uninstalling/reinstalling VMware and SmartView in various ways, but it appears that these problems occur consistently when SmartView is installed (not just when it is running or connected, but when it is present on the system). I'm wondering if this is a combination of software (WS 6.5.3, Vista64, and SmartView) that works for other people, which would indicate a problem peculiar to my configuration.

    Read the article

  • Server 2012, Jumbo Frames - should I expect problems?

    - by TomTom
    OK, this might sound stupid, but is there any downside in practice to just enabling jumbo frames? From what I understand: any switch or Ethernet adapter that sees a jumbo frame it cannot handle will just drop it. TCP is not a problem, as the maximum segment size is negotiated during connection setup. UDP is a theoretical problem, as a server may just send a large UDP packet that gets dropped on the way. Practically though, as UDP is packet based, I do not really think any software would send a UDP packet larger than 1500 bytes net without app-level configuration changes - at least this is how I do my own programming, as it is quite hard to arrive at a decent MTU size for that without testing yourself, so you fall back in programming to packets of at most 1500 bytes. The network in question is a standard small business network - we have now upgraded from an unmanaged 24-port switch to a 52-port switch with 4 10G ports (Netgear - quite cheap) and will move a file server to 10G, also for iSCSI serving. All my equipment at the Ethernet level can handle a minimum of 9000 bytes, and due to local firewalls I really want to get larger packets (less firewall processing), but the network is also NAT'ed to the internet. On top of that, different machines move around (download) large files (multi-gigabyte range) quite often for processing. The question is - can I expect problems when I just enable jumbo frames? Again, this is not total ignorance - I just don't see programs sending UDP packets of more than 1500 bytes (if that is a practical problem, please tell me), and for TCP the MSS is negotiated anyway. If there is a problem I can move to a dedicated VLAN, but this has its own share of problems, as basically most workstations must then be on both VLANs.
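    A hedged way to verify the "incapable gear just drops oversized frames" assumption end to end once jumbo frames are enabled: send non-fragmenting pings at the full MTU. 8972 payload bytes plus 28 bytes of IP/ICMP headers equals 9000:

      # Linux host
      ping -M do -s 8972 fileserver
      # Windows host
      ping -f -l 8972 fileserver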

    Read the article

  • Tomcat6 crashing at regular intervals on Ubuntu

    - by Milesh Rout
    I have installed Tomcat6 on Ubuntu, and when I run my web application the server crashes at regular intervals. I have tried a lot but am not finding the solution. I have increased the memory up to 2048 MB but still get this error. Following is the error I am getting; any help would be really appreciated.

      org.apache.tomcat.util.http.Parameters processParameters
      INFO: Invalid chunk starting at byte [312] and ending at byte [312] with a value of [null] ignored
      Exception in thread "Timer-1"
      Exception in thread "com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#0"
      Exception in thread "com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2"
      Exception in thread "com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1"
      Exception in thread "Timer-2"
      Exception in thread "http-8080-4"
      Exception in thread "http-8080-8"
      Exception in thread "http-8080-17"
      Exception in thread "org.hibernate.cache.StandardQueryCache.data"
      Exception in thread "org.hibernate.cache.UpdateTimestampsCache.data"
      Exception in thread "org.hibernate.cache.StandardQueryCache.data"
      Exception in thread "org.hibernate.cache.StandardQueryCache.data"
      Exception in thread "org.hibernate.cache.UpdateTimestampsCache.data"
      Exception in thread "org.hibernate.cache.StandardQueryCache.data"
      Exception in thread "org.hibernate.cache.StandardQueryCache.data"
      Exception in thread "org.hibernate.cache.UpdateTimestampsCache.data"
      Exception in thread "com.safenet.usermgmt.User.data"
      Exception in thread "http-8080-7"
      Exception in thread "http-8080-12"
      Exception in thread "http-8080-16"
      Exception in thread "http-8080-14"
      Exception in thread "http-8080-13"
      Exception in thread "http-8080-15"
      Exception in thread "http-8080-6"
      OpenJDK Client VM warning: Exception java.lang.OutOfMemoryError occurred dispatching signal SIGTERM to handler - the VM may need to be forcibly terminated
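    The last line is the telling one: java.lang.OutOfMemoryError. A hedged sketch of where the heap is raised on Ubuntu's tomcat6 package, plus flags to capture a heap dump for offline analysis (the file is the stock package location; verify on your install, and note that the 32-bit OpenJDK Client VM shown above caps usable heap well below what a 64-bit server VM allows):

      # /etc/default/tomcat6
      JAVA_OPTS="-Djava.awt.headless=true -Xmx2048m \
                 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/tomcat6"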

    Read the article

  • Adding a GET parameter to URL causes 404 error

    - by Adrian Grigore
    I'm trying to install the SyntaxHighlighter Evolved plugin on my WordPress blog. I've uploaded and activated the plugin, but it did not work. I looked into the page source and found that the plugin style is loaded from the following URL:

      http://devermind.com/wp-content/plugins/syntaxhighlighter/syntaxhighlighter/styles/shCore.css?ver=2.0.320

    This causes a 404 error (page not found). The strange thing, though, is that when I remove the GET parameters, the CSS loads OK:

      http://devermind.com/wp-content/plugins/syntaxhighlighter/syntaxhighlighter/styles/shCore.css

    What could be causing this problem and how can I fix it? Unfortunately I don't know how to make WordPress drop the GET parameters when loading the stylesheet. EDIT: As I just found out, this happens only in Firefox (3.0.11). IE loads both URLs above just fine. Not that this would be of any help though, so any suggestions would be appreciated. SECOND EDIT: I tried this on my laptop and it works fine with Firefox 3.0.8. So this really seems to be a browser problem after all.

    Read the article

  • How do I convert a video to GIF using ffmpeg, with reasonable quality?

    - by Kamil Hismatullin
    I'm converting a .flv movie to a .gif file with ffmpeg:

      ffmpeg -i input.flv -ss 00:00:00.000 -pix_fmt rgb24 -r 10 -s 320x240 -t 00:00:10.000 output.gif

    It works, but the output gif file has very low quality. Any ideas how I can improve the quality of the converted gif? Output of the command:

      $ ffmpeg -i input.flv -ss 00:00:00.000 -pix_fmt rgb24 -r 10 -s 320x240 -t 00:00:10.000 output.gif
      ffmpeg version 0.8.5-6:0.8.5-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers
        built on Jan 24 2013 14:52:53 with gcc 4.7.2
      *** THIS PROGRAM IS DEPRECATED ***
      This program is only provided for compatibility and will be removed in a future release. Please use avconv instead.
      Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.flv':
        Metadata:
          major_brand     : mp42
          minor_version   : 0
          compatible_brands: isommp42
          creation_time   : 2013-02-14 04:00:07
        Duration: 00:00:18.85, start: 0.000000, bitrate: 3098 kb/s
          Stream #0.0(und): Video: h264 (High), yuv420p, 1280x720, 2905 kb/s, 25 fps, 25 tbr, 50 tbn, 50 tbc
          Metadata:
            creation_time   : 1970-01-01 00:00:00
          Stream #0.1(und): Audio: aac, 44100 Hz, stereo, s16, 192 kb/s
          Metadata:
            creation_time   : 2013-02-14 04:00:07
      [buffer @ 0x92a8ea0] w:1280 h:720 pixfmt:yuv420p
      [scale @ 0x9215100] w:1280 h:720 fmt:yuv420p -> w:320 h:240 fmt:rgb24 flags:0x4
      Output #0, gif, to 'output.gif':
        Metadata:
          major_brand     : mp42
          minor_version   : 0
          compatible_brands: isommp42
          creation_time   : 2013-02-14 04:00:07
          encoder         : Lavf53.21.1
          Stream #0.0(und): Video: rawvideo, rgb24, 320x240, q=2-31, 200 kb/s, 90k tbn, 10 tbc
          Metadata:
            creation_time   : 1970-01-01 00:00:00
      Stream mapping:
        Stream #0.0 -> #0.0
      Press ctrl-c to stop encoding
      frame=  101 fps= 32 q=0.0 Lsize=    8686kB time=10.10 bitrate=7045.0kbits/s dup=0 drop=149
      video:22725kB audio:0kB global headers:0kB muxing overhead -61.778676%

    Thanks.
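    A hedged sketch of the approach later ffmpeg versions support for much better GIF quality, a two-pass custom palette (requires a real ffmpeg build with the palettegen/paletteuse filters, not the deprecated 0.8/avconv shim shown above):

      ffmpeg -i input.flv -vf "fps=10,scale=320:-1:flags=lanczos,palettegen" palette.png
      ffmpeg -i input.flv -i palette.png -filter_complex "fps=10,scale=320:-1:flags=lanczos[x];[x][1:v]paletteuse" output.gif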

    Read the article

  • Arch Linux drops me on my school network

    - by Kravlin
    I'm running a Lenovo X61 which I carry around my college for getting on the internet at various points in the day. The network has always been finicky, but recently it's gotten worse. I'll connect using iwconfig, get an IP from dhcpcd and log in to their system using vpnc. Sometimes I'll stay connected for hours, but most of the time my network traffic will drop to zero within 30 seconds and I'll be unable to do anything. My computer still believes it's connected; however, to try again I need to bring my wireless interface down, bring it back up and retry. It's gotten so bad that I keep a window on my computer pinging yahoo or google constantly in order to know whether I'm still able to get online. I know other people using Arch Linux who don't have the same problems, as well as people using Ubuntu who haven't had any problems either. It seems like my computer is a special case. Does anyone have any suggestions on how to fix it? dmesg doesn't show anything out of the ordinary, and I don't know where else to look for errors or other things to try. Edit: this doesn't happen on my home network. It's a problem that only happens at school.
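    For concreteness, a hedged sketch of the manual recovery cycle described above, using the same tools (interface name, ESSID and vpnc profile are placeholders):

      ifconfig wlan0 down
      ifconfig wlan0 up
      iwconfig wlan0 essid "SchoolNet"
      dhcpcd wlan0
      vpnc school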

    Read the article

  • Updating Samba From RPMs

    - by KnickerKicker
    My Red Hat Enterprise Edition 4 comes with Samba version 3.0.10, which does not have support for the "inherit owner" attribute that is essential in implementing a deny-delete Write Once Read Many share (for examples, search Google for a-shared-drop-box-using-samba). (BTW, if anybody knows an alternative way to do it without updating Samba, I'm all ears!) I am not all that comfortable building from source, and after hours of googling (no, I do not have a Red Hat subscription, so I cannot just run the up2date command), I found a whole bunch of RPMs on http://ftp.sernet.de/pub/samba/tested/rhel/4/i386/ (Samba 3.2.15 for RHEL 4). Next, I tried updating them with rpm -U --nodeps, but I got file conflict errors. So I went ahead and overwrote everything (or so I thought) by using rpm's --force option. But no good has come of all that: /usr/sbin/smbd -V still returns the old version. As of now, rpm -qa | grep samba returns:

      samba3-client-3.2.15-40.el4
      samba-3.0.10-1.4E.2
      samba-client-3.0.10-1.4E.2
      system-config-samba-1.2.21-1
      samba3-3.2.15-40.el4
      samba-common-3.0.10-1.4E.2
      samba3-winbind-3.2.15-40.el4

    I cannot remove the older ones because:

      samba-common >= 3.0.8-0.pre1.3 is needed by (installed) gnome-vfs2-smb-2.8.2-8.2.x86_64
      libsmbclient.so.0()(64bit) is needed by (installed) kdebase-3.3.1-5.8.x86_64
      libsmbclient.so.0()(64bit) is needed by (installed) gnome-vfs2-smb-2.8.2-8.2.x86_64

    Now that's a whole bunch of dependencies that I dare not touch :) Any and all pointers are welcome at this stage. Thanks in advance!
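    A hedged first check, since both package sets now coexist: confirm which package owns the smbd binary actually being executed, and where the samba3 packages put theirs, before removing anything:

      which smbd
      rpm -qf $(which smbd)
      # list every smbd shipped by the two package sets
      rpm -ql samba samba3 | grep 'smbd$'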

    Read the article

  • Bypassing Squid on FreeBSD with PF

    - by epema
    I have PF + squid31 on FreeBSD 9.0, and I want some hosts (aka goodguys) to bypass the proxy, so that torrents are not logged. Also, I am not sure about transparent mode: it means that I don't have to configure proxy settings on the client side, right? I have tried doing a redirect exemption:

      no rdr on $int_if inet proto {tcp,udp} from 192.168.1.233/32 to any

    However, no luck :( Here is a quick look at my conf files.

    /usr/local/etc/squid/squid.conf:

      http_port 192.168.1.1:8080 transparent

    /etc/rc.conf:

      gateway_enable="YES"
      pf_enable="YES"
      pf_rules="/usr/local/etc/pf.conf"
      pflog_enable="YES"
      squid_enable="YES"

    I have squid31 installed from ports with SQUID_PF ("Enable transparent proxying with PF") on.

    /usr/local/etc/pf.conf:

      int_if="re0"
      ext_if="bge0"
      localnet="{ 192.168.1.0/24 }"
      table <goodguys> const { "192.168.1.219", "192.168.1.233" }
      set block-policy drop
      set skip on lo0
      scrub in all fragment reassemble
      scrub out all random-id max-mss 1440
      block in on $ext_if
      pass out on $ext_if keep state
      block in on $int_if
      pass in on $int_if inet proto tcp from $int_if:network to $int_if port 8080 keep state
      pass in on $int_if inet proto udp from $int_if:network to $int_if port 21 keep state
      pass in on $int_if inet proto udp from $int_if:network to $int_if port 22 keep state
      pass in on $int_if inet proto udp from $int_if:network to $int_if port 53 keep state
      pass in on $int_if inet proto tcp from $int_if:network to any port { smtp, pop3 } keep state
      pass in on $int_if inet proto icmp from $int_if:network to $int_if keep state
      pass out on $int_if keep state

    What lines should I add to the conf files? I am assuming the problem is in the firewall (pf).
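    A hedged sketch of the rule pair such a setup usually needs (pre-4.7 pf syntax, as used on FreeBSD 9.0): the pf.conf above contains no rdr rule at all, so nothing is being diverted to squid in the first place, and a `no rdr` exemption only matters once one exists. And yes, transparent interception means port-80 clients need no browser proxy settings. The <goodguys> table is the one already defined above:

      # exempt the goodguys from redirection, then divert everyone else's web traffic to squid
      no rdr on $int_if inet proto tcp from <goodguys> to any port 80
      rdr on $int_if inet proto tcp from $int_if:network to any port 80 -> 192.168.1.1 port 8080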

    Read the article

  • Nexus One WiFi connection problems

    - by sunocky
    I have two new Nexus Ones for a research project, and for the project I need to keep a server running on the phone. But I soon found out that both phones have inconsistent WiFi connection problems at my home. They can connect to my WiFi network, but will drop off after a random amount of time. And in order to reconnect to my WiFi, I may need to reboot my router, or the phone will say "obtaining IP address" and then "Unsuccessful". I also own a G1 with firmware version 1.6, and it has no such connection problems. Well, to my surprise, the two Nexus Ones work fine on the WiFi network at my workplace, which uses a WEP-type connection; at my home, by the way, it is a WPA-type connection. Anyone know what the problem is with the Nexus One? Any suggestions on what I should do if I want to keep the WiFi connection alive all the time at my home? Thanks very much!

    Read the article

  • Guest can't access host windows network share

    - by Asteroza
    Hi folks, I've recently run into a strange problem after upgrading to VMware Player 3. Certain virtual machines (currently an XP and a Vista VM) seem to have lost the ability to access the host's (XP) network shared folders (SMB). Both VMs use bridged networking; the guest firewalls are up, and the host firewall is up. Host and guests use DHCP. All OSes are workgroup members. For the Vista VM I am not completely sure, but the XP VM did have access to the host's network shared folders after the Player upgrade. Then today it wouldn't work: the network path can't be found. Now here's the weird part. The host's network shared folders can be accessed properly by other PCs on the network (and as far as I know, no settings have been changed). The host is pingable from the guests, and name resolution works. The guests can access network shares on other PCs in the network, and access the internet. My Network Places shows the host PC, but double-clicking on it takes a long time before it finally times out with an error. Doing a Wireshark packet capture, the guest sends out the protocol negotiation and the host sends a response, but after that the guest behaves as if it didn't receive anything and does TCP retransmissions. Anybody have any idea what could be wrong? Yes, I know I can drag and drop files or set up the special VMware shared folders, but I want to access the host just like any other network-accessible shared folder. It just seems really odd when any other computer works, just not between the guest and host.

    Read the article

  • Can enabling a RAID controller's writeback cache harm overall performance?

    - by Nathan O'Sullivan
    I have an 8-drive RAID 10 setup connected to an Adaptec 5805Z, running CentOS 5.5 and the deadline scheduler. A basic dd read test shows 400 MB/sec, and a basic dd write test shows about the same. When I run the two simultaneously, I see the read speed drop to ~5 MB/sec while the write speed stays at more or less the same 400 MB/sec. The output of iostat -x, as you would expect, shows that very few read transactions are being executed while the disk is bombarded with writes. If I turn the controller's writeback cache off, I don't see a 50:50 split, but I do see a marked improvement, somewhere around 100 MB/s reads and 300 MB/s writes. I've also found that if I lower the nr_requests setting on the drive's queue (somewhere around 8 seems optimal) I can end up with 150 MB/sec reads and 150 MB/sec writes; i.e. a reduction in total throughput, but certainly more suitable for my workload. Is this a real phenomenon? Or is my synthetic test too simplistic? The reason this could happen seems clear enough: when the scheduler switches from reads to writes, it can run heaps of write requests because they all just land in the controller's cache, but they must be carried out at some point. I would guess the actual disk writes occur when the scheduler starts trying to perform reads again, resulting in very few read requests being executed. This seems a reasonable explanation, but it also seems like a massive drawback to using writeback cache on a system with non-trivial write loads. I've been searching for discussions around this all afternoon and found nothing. What am I missing?
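    For reproducibility, a hedged sketch of the kind of simultaneous test and queue tuning described above (device name and scratch path are placeholders; the direct flags keep the page cache from inflating the numbers):

      # terminal 1: sequential read
      dd if=/dev/sda of=/dev/null bs=1M count=4096 iflag=direct
      # terminal 2: sequential write
      dd if=/dev/zero of=/data/ddtest bs=1M count=4096 oflag=direct
      # shrink the queue depth on the block device
      echo 8 > /sys/block/sda/queue/nr_requests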

    Read the article

  • Cannot properly read files on the local server

    - by Andrew Bestic
    I'm running a RedHat 6.2 Amazon EC2 instance using stock Apache and IUS PHP53u+MySQL (+mbstring, +mysqli, +mcrypt), and phpMyAdmin from git. All configuration is near-vanilla, assuming the described installation procedure. I've been trying to import SQL files into the database using phpMyAdmin to read them from a directory on my server. phpMyAdmin lists the files fine in the drop-down, but returns a "File could not be read" error when actually trying to import. Furthermore, when trying to execute file_get_contents() on a file, it also returns a "failed to open stream: Permission denied" error. In fact, when my brother was attempting to import the SQL files using MySQL's SOURCE command as an authenticated MySQL user with ALL PRIVILEGES, he also got an error reading the file. It seems that we are unable to read/import these files by ANY method other than as root over SSH (although I can't say I've tried every possible method). I have never had this issue on regular CentOS (5, 6, 6.2) installations with the same LAMP stack configuration. Some things I've tried after searching Google and StackExchange:

    - CHMOD 0777 on both the directory and the files
    - CHOWN to root and to apache (the only two users I can think of that PHP would use)
    - Importing SQL files with total size under both upload_max_filesize and post_max_size
    - PHP open_basedir commented out, or set to "/var/www" (my sites are Apache VirtualHosts within that directory, and all the SQL files are deep within that directory)
    - PHP safe mode is OFF (it was never ON)

    At the moment I have worked around this for the smaller files by using the FILE UPLOAD method directly in phpMyAdmin, but this will not be suitable for uploading my 200+ MiB SQL files, as I don't have a stable Internet connection. Any light you could shed on this situation would be greatly appreciated. I'm fair with Linux, and for the things that do stump me, Google usually has an answer. Not this time, though!
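    Two hedged diagnostics that separate plain permission problems from other denial sources (the file path is a placeholder; note that RedHat-family systems often ship with SELinux enforcing, which produces exactly this kind of root-only readability):

      # reproduce the read as the web server's own user
      sudo -u apache cat /var/www/dumps/backup.sql
      # show owner/mode of every directory component along the path
      namei -mo /var/www/dumps/backup.sql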

    Read the article

  • My client's solution of a Windows SBS 2011 VM on an Ubuntu host and VirtualBox is pinning the host CPU

    - by Scott Stamp
    Here's my situation: I've got a client hosting two servers (one of them a VM), with the host providing VMware Zimbra and the guest running Windows Small Business Server 2011. Unfortunately, the person before me had configured this setup as follows.

    Host: Ubuntu Desktop Edition 10.04 (I know, again, not my choice), running VMware Zimbra
      8GB of RAM
      On-board RAID1 of two 320GB Seagate Barracuda drives for the OS
      Software RAID5 of four 500GB WD Caviar Black drives on MDADM for bulk storage (sorry, I don't know the model #)
      A relatively competent quad-core Intel Core i7 CPU from the Nehalem architecture (not suspicious of this as the bottleneck)

    Guest: Windows Small Business Server 2011
      4GB of RAM
      Host-equivalent CPU allocation
      VDI file for the OS hosted on the on-board RAID, VDI file for storage hosted on the on-board RAID

    For some reason, when running, the VM locks up while sitting nearly idle, and the VirtualBox process reports values of 240%+ in top (how is that even possible?!). Anyone have any ideas or suggestions? I'm totally stumped on this one. Happy to provide whatever logs you'd like to take a look at. Ideally I'd drop VirtualBox and provision this with VMware Workstation, but the client has objected to the (very nominal) costs involved. If hardware needs to be purchased to help, it will be, but we're considering upgrades a last resort at this time. Thanks in advance! *fingers crossed*

    Read the article

  • Configuration for httphandler in classic mode

    - by happyspider
    I have to install an HTTP handler that needs to run in classic mode. I have created an application in IIS that uses a classic app pool and put the handler assembly there. The vendor gave me a configuration in the deployment document that looks like this:

      <system.web>
        <globalization requestEncoding="iso-8859-1" responseEncoding="iso-8859-1" />
        <httpModules>
        </httpModules>
        <httpHandlers>
          <add verb="*" path="*" type="ProductName.ProductName, ProductName" />
        </httpHandlers>
      </system.web>
      <system.webServer>
        <validation validateIntegratedModeConfiguration="false"/>
        <handlers>
          <add name="someUnspecificName" path="*" verb="*" modules="IsapiModule"
               scriptProcessor="C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll"
               resourceType="Unspecified" requireAccess="None"
               preCondition="classicMode,runtimeVersionv2.0,bitness32" />
        </handlers>
      </system.webServer>

    The error I get when requesting a URL on the application is a 404, so I guess the handler is not used at all. Does this configuration look OK for a 64-bit system?
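    A hedged observation with a sketch: the vendor's entry is pinned to the 32-bit ISAPI (bitness32 plus the Framework, not Framework64, path), so on a 64-bit app pool without "Enable 32-Bit Applications" the precondition never matches and IIS falls through to a 404. A 64-bit variant would look like the following (verify against the vendor's requirements):

      <add name="someUnspecificName" path="*" verb="*" modules="IsapiModule"
           scriptProcessor="C:\Windows\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll"
           resourceType="Unspecified" requireAccess="None"
           preCondition="classicMode,runtimeVersionv2.0,bitness64" />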

    Read the article

  • Server 2003 Remote Desktop loses its virtual printer image of the local printer

    - by Charles Hart
    Server 2003 Remote Desktop provides service to stores served by several ISPs. The server loses its virtual printer image of the local printer (as seen from the remote store site), and a copy of the original local printer appears on the local computer with a different driver, without notice. Specifically: a remote desktop session is opened on a local computer that has a Brother HL-2140 USB printer connected and the associated software installed, with the correct driver shown under the "advanced" button. The server has the same Brother software and driver. An application running on the server attempts to print on the local printer connected to the local computer running Vista Pro or XP Pro. Either it works correctly (good), or it does not print (bad), or it prints on another local printer connected to another local computer logged into the server (bad and odd). When it doesn't print (or prints somewhere else), we ask the customer to look for the (virtual) printer using the Remote Desktop view of the server, and the printer is gone. Then we ask the customer to look at the printers folder on the local computer. There are several possibilities:

    1. The printer is there, but the driver has mysteriously changed in the drop-down to an "MDX" entry; we have the customer select the other (proper) Brother driver, and all is well again, as now the virtual printer on the server (which again matches the local printer) reappears, and printing can resume.
    2. A "copy" of the printer mysteriously appears in the local printers folder, and after we delete it the virtual printer on the server reappears and printing can resume.

    Note that in both cases 1 and 2, the server sometimes sends the print job elsewhere, to some other local computer. Meanwhile, the log file reports endless errors and the server eventually crashes, sometimes twice a day. I'm puzzled what changes the local printer driver, and I'm puzzled what loads the copy 2 or copy 3 of the printer in the local printers folder. This entire scenario occurs randomly on any of 40+ local computers in eight different locations on different ISPs, all sharing one domain.

    Read the article

  • How to remove Character Style in Word 2003

    - by joe
    I am copying text into a Word document whose styles are protected. Some of the text has a Character style applied to it that I would like to remove. I can change the style and it looks okay visually, but when you click inside the text itself you can see in the style menu that the Character style is still applied. I try to change the style using the style drop-down, but the Character style won't go away, even if I change it to the Paragraph style. Does anyone know what I am talking about and/or have any techniques for removing Character styles? I am looking forward to your responses. Let me know if I need to clarify. Thanks! UPDATE: My issue is that the Paragraph style is changing correctly, but the Character style remains applied, even though the text is displayed as if the style were not applied. See http://office.microsoft.com/en-us/word/HA011876141033.aspx if you are not sure what I mean by Character styles. From that page: "There are four types of styles - paragraph, character, list, and table (list and table styles are new as of Word 2002). However, the majority of styles you'll use are paragraph styles."

    Read the article

  • Cisco access switch is dropping a large number of endpoints

    - by user135458
    This afternoon, with no changes to the network, a switch suddenly started dropping large numbers of connections. These connections would come back up a few minutes later, then another area connected to the switch would drop off. This is an older 4006 chassis switch, which could in and of itself be a problem, but I'm looking to see what else you all would look for in trying to find a root cause. The switch is connected via ports 1/1 and 1/2 in an EtherChannel to a VSS core (1/1/42 and 2/1/42). Both sides are up and working; however, the CPU on the switch will spike up to 99%, and that's when CRC errors start to hit the VSS core on one of those interfaces and endpoints start dropping off. We tried new transceivers and SFPs on each side of the link, same result. When we tried swapping the fiber patch cables on the access switch, the CRC errors did not follow the fiber cables; they stayed with port 1/2 on the access switch. So port 1/2 on the supervisor module looks like the culprit. We actually tried to create a new member of the EtherChannel by taking a fiber-to-Cat5 media converter and making that a member of the port-channel, but when we plugged it in you couldn't even reach the switch. I'm guessing that's unrelated and a problem with the media converter. As of right now we have left it in a state with only one fiber cable running to one side of the VSS core (1/1 on the access switch to 2/1/42). I've sent some info to TAC and they are looking into the situation, but does anyone else have any commands I could run or some troubleshooting I could look into in the meantime?
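    For anyone following along, a hedged sketch of the counters worth watching while TAC investigates (IOS-style syntax; an older 4006 supervisor may be running CatOS instead, where the command names differ):

      show interfaces port-channel 1
      show interfaces gigabitEthernet 1/2 counters errors
      show processes cpu sorted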

    Read the article
