Search Results

Search found 21352 results on 855 pages for 'bit shift'.

  • Why is my mono/XSP site loading slow?

    - by acidzombie24
    I have two sites on the same server. One loads perfectly and incredibly fast. The other is an equally complex site, except with a bit less JavaScript and zero images. It takes several seconds to load and there is a 1 in 3 chance that I get an HTTP 500 error. I grabbed the latest 2.6.x versions of mono, mod_mono and xsp (libgdiplus-2.6.7, xsp-2.6.5, mod_mono-2.6.3 and mono 2.6.7).

    This is what's in apache error.log:

        [Mon Jan 03 19:33:40 2011] [error] (70014)End of file found: read_data failed
        [Mon Jan 03 19:33:40 2011] [error] Command stream corrupted, last command was 1
        [Mon Jan 03 19:34:52 2011] [error] (70014)End of file found: read_data failed
        [Mon Jan 03 19:34:52 2011] [error] (70014)End of file found: read_data failed
        [Mon Jan 03 19:34:52 2011] [error] Command stream corrupted, last command was 1
        [Mon Jan 03 19:34:52 2011] [error] Command stream corrupted, last command was 1

    This is the page error:

        Internal Server Error
        The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, webmaster@localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log.
        Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny9 with Suhosin-Patch mod_mono/2.6.3 Server

  • Wireless connection drops when wired computer starts a game.

    - by Skadlig
    Starting this week I have had a strange problem on my network. Some background on my setup:

      - Internet is provided by an ADSL modem.
      - A D-Link DIR-600 router is hooked up to the ADSL modem.
      - My computer is hooked up to the router using a cat-5 cable.
      - My wife's computer is hooked up using a wireless USB dongle, a TP-Link TL-WN821N.
      - Both computers run Windows 7 64-bit Home Premium.

    Up until this week everything was normal; we could, for instance, play Dungeons & Dragons Online together without any network issues. Now every time I start DDO or any other network game, for instance L4D, the whole wireless network drops. I have confirmed that it's not just her computer by using a Samsung Galaxy Spica Android phone. Shutting down the game on my end restores the wireless connection automagically. My wife can start DDO without the net dropping, but if I plug a wireless network card into my computer and start up the game, the connection drops.

    So it seems like something my computer, and my computer only, does when starting a game makes the wireless connection write a sad note and kill itself, but for the life of me I can't figure out what that might be. I could hook her computer up using cat-5 but I would prefer not to do that. Does anyone have any suggestions as to what the problem might be, what I can do to fix it, or what I should do to get more data regarding what is happening?

  • Bridging my laptop's wireless and wired adaptors

    - by stacey.richards
    I would like to be able to connect a desktop computer that does not have a wireless adapter to my wireless network. I could just run a network cable from my ADSL/wireless router to the desktop computer, but sometimes this is not practical. What I would really like to do is bridge my laptop's wireless and wired adapters in such a way that I can run a network cable from my laptop to a switch and another network cable from the switch to a desktop computer, so that the desktop computer can access the Internet through my ADSL/wireless router via my laptop:

        +--------------------+
        |ADSL/wireless router|
        +--------------------+
                  |
        +-------------------------+
        |laptop's wireless adaptor|
        |                         |
        |laptop's wired adaptor   |
        +-------------------------+
                  |
               +------+
               |switch|
               +------+
                  |
        +-----------------------+
        |desktop's wired adapter|
        +-----------------------+

    A bit of Googling suggests that I can do this by bridging my laptop's wireless and wired adapters. In Windows XP's Network Connections I select both the Local Area Connection and the Wireless Network Connection, right click and select Bridge Connections. From what I gather, this (layer 2?) bridge will examine the MAC address of traffic coming from the wireless network and pass it through to the wired network if it suspects that a network adapter with that MAC address may be on the wired side, and vice-versa. If this is the case, I would assume that when the desktop computer attempts to get an IP address from a DHCP server (which is running on the ADSL/wireless router), it would send a DHCP broadcast packet which would pass through the laptop's bridge to the router, and the reply would return through the laptop's bridge back to the desktop. This doesn't happen.

    With some more Googling I found some instructions on how this can be done with Linux. I reboot into Ubuntu 9.10 and type the following:

        sudo apt-get install bridge-utils
        sudo brctl addbr br0
        sudo brctl addif br0 wlan0
        sudo brctl addif br0 eth0
        sudo ifconfig wlan0 0.0.0.0
        sudo ifconfig eth0 0.0.0.0

    Once again, the desktop cannot reach the ADSL/wireless router. I suspect that I'm missing some simple important step. Can anyone shed some light on this for me?
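    One possibly missing step, shown here as a minimal sketch on the Ubuntu side (interface names wlan0/eth0 are taken from the commands above): the bridge interface itself is never brought up, and neither member keeps an address, so nothing ever forwards. Something like this completes the sequence:

        sudo brctl addbr br0
        sudo brctl addif br0 wlan0
        sudo brctl addif br0 eth0
        sudo ifconfig wlan0 0.0.0.0 up
        sudo ifconfig eth0 0.0.0.0 up
        sudo ifconfig br0 up          # bring the bridge itself up
        sudo dhclient br0             # optional: give the laptop an address on the bridge

    Even with a correct bridge, note that many wireless drivers and access points refuse to forward frames whose source MAC is not the station's own while the card is in ordinary client mode, so the desktop's traffic may still be dropped on the wireless side.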

  • OpenLDAP startup problems after upgrade

    - by Craig Efrein
    I am trying to synchronize an LDAP slave and master server. The master server is running OpenLDAP 2.3.43-12 and the slave server is running OpenLDAP 2.4.23. I copied over the files in /var/lib/ldap, started the server and got this error:

        Oct 22 16:16:41 xe-ldap-slave1 slapd[12111]: bdb(dc=myserver,dc=fr): Program version 4.7 doesn't match environment version 4.4
        Oct 22 16:16:41 xe-ldap-slave1 slapd[12111]: bdb_db_open: database "dc=myserver,dc=fr" cannot be opened, err -30971. Restore from backup!
        Oct 22 16:16:41 xe-ldap-slave1 slapd[12111]: bdb(dc=myserver,dc=fr): txn_checkpoint interface requires an environment configured for the transaction subsystem
        Oct 22 16:16:41 xe-ldap-slave1 slapd[12111]: bdb_db_close: database "dc=myserver,dc=fr": txn_checkpoint failed: Invalid argument (22).
        Oct 22 16:16:41 xe-ldap-slave1 slapd[12111]: backend_startup_one (type=bdb, suffix="dc=myserver,dc=fr"): bi_db_open failed! (-30971)
        Oct 22 16:16:41 xe-ldap-slave1 slapd[12111]: bdb_db_close: database "dc=myserver,dc=fr": alock_close failed

    I have used the db_upgrade command to upgrade the database files on the new slave server, but I still get the same error when starting slapd.

    The master server is CentOS 5.5 32-bit with OpenLDAP 2.3.43-12. The slave server is CentOS 6.3 64-bit with OpenLDAP 2.4.23. Everything was installed using yum.

    What is the proper method to synchronize database files from an LDAP master server to a slave server when the slave server is more recent than the master?

    I have followed the suggestion from 84104, but I am getting an error on the slave. Here is the error on the slave:

        Oct 23 18:28:30 xe-ldap-slave1 slapd[1415]: slap_client_connect: URI=ldaps://ldap0.lan.myserver.com:636 DN="cn=syncuser,dc=myserver,dc=fr" ldap_sasl_bind_s failed (-1)
        Oct 23 18:28:30 xe-ldap-slave1 slapd[1415]: do_syncrepl: rid=003 rc -1 retrying

    Here is the error on the master:

        Oct 23 18:29:30 ldap0 slapd[15265]: conn=201 fd=35 ACCEPT from IP=192.168.150.100:47690 (IP=0.0.0.0:636)
        Oct 23 18:29:30 ldap0 slapd[15265]: conn=201 fd=35 closed (TLS negotiation failure)

    I can do an ldapsearch on the master just fine with the user configured for synchronization from the new slave server:

        ldapsearch -LLL -x -H ldaps://192.168.150.99:636 -W -b dc=myserver,dc=fr -D "cn=syncuser,dc=myserver,dc=fr"
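    For the version mismatch itself, the usual path (a sketch, assuming Red Hat-style default paths; raw BDB files are not portable between Berkeley DB versions) is to export on the master and re-import on the slave rather than copying /var/lib/ldap directly:

        # on the master (2.3.x): export the whole directory to LDIF
        slapcat -l export.ldif

        # on the slave (2.4.x): stop slapd, clear the old database files, re-import
        service slapd stop
        rm -rf /var/lib/ldap/*
        slapadd -l export.ldif
        chown -R ldap:ldap /var/lib/ldap
        service slapd start

    The separate TLS negotiation failure on the syncrepl bind usually means the slave does not trust the certificate the master presents on port 636; pointing the slave's client-side CA configuration (TLS_CACERT in /etc/openldap/ldap.conf) at the master's CA certificate is a common first check.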

  • Cluster File System

    - by Ben
    We are looking to choose a clustered file system for our in-house application. Let me first outline my requirements. We have a storage array and 2 servers at present. We get data files from remote servers onto our storage, and on both servers we run our application to access that data and produce a final result as per our requirements. In the future, maybe after 3-4 months, we may add more servers to the current cluster pool to handle more data load from remote locations.

    So my requirement is to mount the same storage partition on 2-3 servers (it might be 4-5 more servers in the future); my application reads data from the storage partition and writes back to it. Is there any bottleneck or limitation in RHCS, GFS2 or anything else? We are new to RHCS + GFS. Is there a better or lighter-weight approach to our requirement? What is the best OS version for this; how about RHEL 6.4 64-bit? Please share a case study or guide reference from past experience with such environments.

    Regards, Ben

  • Drobo-like linux file server - how do I do it?

    - by John Hunt
    I've been pondering for a long time about how I can set up a server which operates much like the Drobo storage appliance. The reason I don't actually want a Drobo is that I've heard scare stories, plus I'd like to do this on the cheap.

    So ideally I'm looking for something like LVM, so I can create a logical volume that spans many hard disks of varying sizes... obviously that only offers redundancy if I put the LV on a RAID array (as far as I know). I have, however, been reading about technologies such as Microsoft's Drive Extender, which duplicates files at the filesystem level and makes sure that the mirrored files are on a different physical disk. Does anyone know of or recommend a filesystem or method like this, as it will hopefully make much better use of the space available than RAID ever could?

    Performance isn't an issue; I'd just really like to make the most of the hard disks I have lying around whilst having a bit of redundancy in case a disk dies. I understand full well that this is no replacement for a backup, but I'll only be storing files of medium importance and using the NAS itself as a backup of my main PC and other systems. Thanks in advance! I'm hoping zfs or btrfs or something can do something clever for me :)
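    As a rough sketch of the btrfs route (device names /dev/sdb, /dev/sdc, /dev/sdd are placeholders, and btrfs's multi-device RAID was still maturing at the time): a pool that keeps two copies of every block on different disks, and that can later absorb disks of any size:

        # mirror both data and metadata across the member disks
        sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd
        sudo mkdir -p /srv/pool
        sudo mount /dev/sdb /srv/pool        # mounting any member mounts the pool

        # later: add another disk of any size and spread existing data over it
        sudo btrfs device add /dev/sde /srv/pool
        sudo btrfs filesystem balance /srv/pool

    Unlike conventional RAID1, space is allocated in chunks with two copies placed on whichever disks have room, which is what lets mixed-size disks be used reasonably efficiently.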

  • MySQL Config on Large Machine

    - by Jonathon
    We have a Windows 2003 Enterprise Edition server (64-bit) running only MySQL 5.1.45 64-bit. It has 16 GB RAM and 10 TB of hard-drive space in RAID 10. We are having horrible performance from mysqld (85-100% CPU utilization). We were running a smaller machine with better performance, so I am assuming our my.ini file is not correct for our current machine. The my.ini file is as follows:

        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        [mysqld]
        port=3306
        basedir="D:/MySQL/"
        datadir="D:/MySQL/data"
        default-character-set=latin1
        default-storage-engine=MYISAM
        sql-mode=""
        skip-innodb
        skip-locking
        max_allowed_packet = 1M
        max_connections=800
        myisam_max_sort_file_size=5G
        myisam_sort_buffer_size=500M
        table_open_cache = 512
        table_cache=8000
        tmp_table_size=30M
        query_cache_size=50M
        thread_cache_size=128
        key_buffer_size=3072M
        read_buffer_size=2M
        read_rnd_buffer_size=16M
        sort_buffer_size=2M
        #replication settings (this is the master)
        log-bin=log
        server-id = 1

    Does anyone see anything wrong with this setup? For a machine with this much RAM, why in the world would mysqld eat up so much CPU? I know we can optimize some queries, etc., but it did run okay on a smaller machine, so I am pretty sure it is the config. Thanks in advance for any help.
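    A sketch of a quick way to see whether this is configuration or queries before touching my.ini (assuming the mysql command-line client can reach the server locally; the statements work the same from a Windows command prompt):

        mysql -u root -p -e "SHOW FULL PROCESSLIST"
        mysql -u root -p -e "SET GLOBAL slow_query_log = 'ON'"
        mysql -u root -p -e "SET GLOBAL long_query_time = 1"
        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Key_%'"

    SHOW FULL PROCESSLIST shows what the busy threads are actually doing, the slow query log catches anything taking more than a second, and the Key_% counters indicate whether the 3 GB MyISAM key buffer is being used effectively.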

  • iperf max udp multicast performance peaking at 10Mbit/s?

    - by Tom Frey
    I'm trying to test UDP multicast throughput via iperf, but it seems like it's not sending more than 10 Mbit/s from my dev machine:

        C:\> iperf -c 224.0.166.111 -u -T 1 -t 100 -i 1 -b 1000000000
        ------------------------------------------------------------
        Client connecting to 224.0.166.111, UDP port 5001
        Sending 1470 byte datagrams
        Setting multicast TTL to 1
        UDP buffer size: 8.00 KByte (default)
        ------------------------------------------------------------
        [156] local 192.168.1.99 port 49693 connected with 224.0.166.111 port 5001
        [ ID]  Interval      Transfer     Bandwidth
        [156]  0.0- 1.0 sec  1.22 MBytes  10.2 Mbits/sec
        [156]  1.0- 2.0 sec  1.14 MBytes  9.57 Mbits/sec
        [156]  2.0- 3.0 sec  1.14 MBytes  9.55 Mbits/sec
        [156]  3.0- 4.0 sec  1.14 MBytes  9.56 Mbits/sec
        [156]  4.0- 5.0 sec  1.14 MBytes  9.56 Mbits/sec
        [156]  5.0- 6.0 sec  1.15 MBytes  9.62 Mbits/sec
        [156]  6.0- 7.0 sec  1.14 MBytes  9.53 Mbits/sec

    When I run it on another server, I'm getting ~80 Mbit/s, which is quite a bit better but still not anywhere near the 1 Gbps limit that I should be getting:

        C:\> iperf -c 224.0.166.111 -u -T 1 -t 100 -i 1 -b 1000000000
        ------------------------------------------------------------
        Client connecting to 224.0.166.111, UDP port 5001
        Sending 1470 byte datagrams
        Setting multicast TTL to 1
        UDP buffer size: 8.00 KByte (default)
        ------------------------------------------------------------
        [180] local 10.0.101.102 port 51559 connected with 224.0.166.111 port 5001
        [ ID]  Interval      Transfer     Bandwidth
        [180]  0.0- 1.0 sec  8.60 MBytes  72.1 Mbits/sec
        [180]  1.0- 2.0 sec  8.73 MBytes  73.2 Mbits/sec
        [180]  2.0- 3.0 sec  8.76 MBytes  73.5 Mbits/sec
        [180]  3.0- 4.0 sec  9.58 MBytes  80.3 Mbits/sec
        [180]  4.0- 5.0 sec  9.95 MBytes  83.4 Mbits/sec
        [180]  5.0- 6.0 sec  10.5 MBytes  87.9 Mbits/sec
        [180]  6.0- 7.0 sec  10.9 MBytes  91.1 Mbits/sec
        [180]  7.0- 8.0 sec  11.2 MBytes  94.0 Mbits/sec

    Does anybody have any idea why this is not achieving close to link limits (1 Gbps)? Thanks, Tom
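    One variable worth isolating (a sketch, not a confirmed fix): both traces show the default 8 KByte UDP buffer, which commonly caps UDP throughput well below line rate. Enlarging the socket buffer on both sides, for example:

        iperf -s -u -B 224.0.166.111 -i 1 -w 1M
        iperf -c 224.0.166.111 -u -T 1 -t 100 -i 1 -b 1000000000 -w 1M

    often moves the ceiling considerably; if it does not, the next suspects are the NIC/driver on the dev machine and any switch doing multicast rate limiting.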

  • What is the easiest no-fail way to publish an ASP.NET app?

    - by Maestro1024
    What is the easiest no-fail way to publish an ASP.NET app? Sorry, a bit of an open-ended question, but I am having issues deploying an ASP.NET report project and any solution that gets the site up is fine. I am running Win7/SQL 2008 and want to publish an ASP.NET report site that I created in VS 2008. The website launches when I run it in debug in Visual Studio, but I want to publish the site so that it can be seen on the LAN.

    I published the files off to a folder, started up the IIS manager, added a new site and pointed it at that folder. I set the permissions on the folder to share to everyone. However, when I go to the DNS name I put in for the website, it does not launch. Any ideas on this?

    I see websites out there talking about a web sharing tab on the folder properties, but I do not see that when I go to folders. Why might that be? Another avenue I have not pursued yet is publishing directly to a website. Has anyone tried that? Is that better or worse than publishing to the filesystem?

  • Cannot set video resolution above 640x480 after installing Windows XP SP2

    - by waanders
    I've installed Windows XP SP2 on a computer (there was no SP on it at all). Now the display settings are set back to 640x480 and 4-bit color, and I can't change it; it's the only option in the Settings tab of the Display dialog of Windows. The screen looks awful now. How can I solve this problem?

    UPDATE: Seems to be a problem with the video driver (thanks @Karan and @Hennes). I ran Speccy (PC-Wizard freezes the computer) and this is part of the log file:

        Summary
          Operating System: Microsoft Windows XP Professional 32-bit SP3
          CPU: Intel Celeron Willamette 0.18um Technology
          RAM: 512 MB DDR @ 133MHz (2.5-3-3-6)
          Motherboard: COMPAQ 0838h (FC-478)
          Graphics: Standard Monitor (640x480@1Hz)
          Hard Drives: 19.0GB Maxtor 2B020H1 (PATA)
          Optical Drives: No optical disk drives detected
          Audio: No audio card detected
        ...
        Graphics
          Monitor Name: Standard Monitor on
          Current Resolution: 640x480 pixels
          Work Resolution: 640x450 pixels
          State: enabled, primary
          Monitor Width: 640
          Monitor Height: 480
          Monitor BPP: 4 bits per pixel
          Monitor Frequency: 1 Hz
          Device: \\.\DISPLAY1
          OpenGL Version: 1.1.0
          Vendor: Microsoft Corporation
          Renderer: GDI Generic
          GLU Version: 1.2.2.0 Microsoft Corporation
          Values: GL_MAX_LIGHTS 8, GL_MAX_TEXTURE_SIZE 1024, GL_MAX_TEXTURE_STACK_DEPTH 10
          GL Extensions: GL_WIN_swap_hint GL_EXT_bgra GL_EXT_paletted_texture GL_EXT_bgra

  • Creating an EC2 image on Amazon fails at mkfs.ext3

    - by Dave Orr
    I'm trying to create an image of my EC2 instance in Amazon's cloud. It's been a bit of an adventure so far. I did manage to install Amazon's ec2-api-tools, which was harder than it seemed like it should have been. Then I ran:

        ec2-bundle-vol -d /mnt -k pk-{key}.pem -c cert-{cert}.pem -u {uid} -s 1536

    Which returned:

        Copying / into the image file /mnt/image...
        Excluding:
             /sys/kernel/debug
             /sys/kernel/security
             /sys
             /proc
             /dev/pts
             /dev
             /dev
             /media
             /mnt
             /proc
             /sys
             /etc/udev/rules.d/70-persistent-net.rules
             /etc/udev/rules.d/z25_persistent-net.rules
             /mnt/image
             /mnt/img-mnt
        1+0 records in
        1+0 records out
        1048576 bytes (1.0 MB) copied, 0.00677357 s, 155 MB/s
        mkfs.ext3: option requires an argument -- 'L'
        Usage: mkfs.ext3 [-c|-l filename] [-b block-size] [-f fragment-size]
            [-i bytes-per-inode] [-I inode-size] [-J journal-options]
            [-G meta group size] [-N number-of-inodes]
            [-m reserved-blocks-percentage] [-o creator-os]
            [-g blocks-per-group] [-L volume-label] [-M last-mounted-directory]
            [-O feature[,...]] [-r fs-revision] [-E extended-option[,...]]
            [-T fs-type] [-U UUID] [-jnqvFKSV] device [blocks-count]
        ERROR: execution failed: "mkfs.ext3 -F /mnt/image -U 1c001580-9118-4a50-9a25-dcf02be6d25f -L "

    So mkfs.ext3 wants -L, which is a volume name. But ec2-bundle-vol doesn't seem to take a volume name as an argument, and the docs (http://docs.amazonwebservices.com/AmazonEC2/gsg/2006-06-26/creating-an-image.html) don't seem to think one should be needed. Certainly their sample command:

        # ec2-bundle-vol -d /mnt -k ~root/pk-HKZYKTAIG2ECMXYIBH3HXV4ZBZQ55CLO.pem -u 495219933132 -s 1536

    doesn't specify anything. So... any help? What am I missing?

  • CloneZilla Broke My System? Ubuntu Installation Lost After Running CloneZilla

    - by nicorellius
    I just read through this post and tried to get my installation back using this answer, to no avail. What happened to me is this: I spent an hour or more reading through the CloneZilla docs. I thought I was ready to test it out, so I burned a disc with the ISO image on it and ran it. The system I used was Ubuntu 10.04, 32-bit. Everything seemed to go fine. I made a clone of my first partition and copied it to my second partition. I followed the instructions, removed the disc and rebooted my system.

    At this point, I would expect to have two bootable Linux installations, identical to one another. However, upon booting, I got this error message:

        error: no such device: 4cf1a6ef-xxxx-xxxx-xxxx-4e3a3ce92bcd
        error: file not found

    I booted from a Live Ubuntu disc and was able to see my two partitions: 4cf1(1) and 4cf1(2) (abbreviated, because the volumes have long numbers to identify them). The 50 GB partition, on which the original Ubuntu installation sits, is the first one, and the second partition (175 GB) has the same number with an "_" at the end. I could browse the disc partitions and see the files, but I'm not sure what to do next.

    I know there is a way to restore my GRUB loader and actually boot either of these installations, but my Linux know-how is limited. Can I edit the boot loader file to fix this problem? The only clue I have is that CloneZilla said something about making a new GRUB, but I thought it was going to basically modify it so I could boot either installation. Not sure what happened. I am going to look through this post for the time being to see if I can learn anything to help my problem. But I thought that, since this happened as a result of using CloneZilla, it may be a unique question for this board.
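    A minimal sketch of getting the original installation booting again from the live CD (assuming the original root partition is /dev/sda1 and the boot disk is /dev/sda; check with sudo fdisk -l and adjust):

        sudo mount /dev/sda1 /mnt
        sudo grub-install --root-directory=/mnt /dev/sda
        sudo umount /mnt
        # after rebooting into the restored system, rebuild the menu entries
        sudo update-grub

    The "no such device: <UUID>" message is GRUB looking for a filesystem UUID it can no longer match, which is consistent with the clone having changed things, so reinstalling GRUB against the partition you actually want to boot is the usual first step.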

  • Backup plan for linux webserver in small business?

    - by radman
    Hi, I am currently in the process of writing a backup plan for the webserver in use by my business. I am very new to this area and have a few ideas about how things should work, but am unsure of what tools to use and what sort of restore process is appropriate. I'm looking for something relatively simple; it doesn't have to be 100% paranoid, just enough to give me a reliable backup. Speed is not of the essence and there is not going to be a live fallback in place. The backup will be onto a single HDD that will be stored onsite (no option for offsite as yet). Backups will be taking place weekly. I am constrained by both time and money, which is why I'm aiming for a good-enough solution.

    Is taking an image of the webserver system drive periodically and using that as the backup appropriate? Should I be testing that the backups restore correctly every time that I perform one? This is a bit broad, but what setup would you use if you were in my place, given the services I am running? Should I add additional machines and split the services? Any advice is much appreciated! See below for server details.

    Webserver
      Platform: Linux (Ubuntu Server)
      Running: mail server, svn server, MediaWiki, WordPress, Apache web server
      Hardware: single 500 GB SATA drive
      Architecture: single machine behind a router (with firewall), accessible to the internet
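    A sketch of a simple weekly job in this spirit (paths, repository locations and credentials are placeholders; it assumes the backup disk is mounted at /mnt/backup):

        #!/bin/sh
        # weekly-backup.sh - good-enough weekly backup, not a hardened solution
        DEST=/mnt/backup/$(date +%Y-%m-%d)
        mkdir -p "$DEST"

        # dump databases instead of copying live database files
        mysqldump --all-databases > "$DEST/mysql-all.sql"

        # hot-copy the svn repository so the copy is consistent
        svnadmin hotcopy /srv/svn/repo "$DEST/svn-repo"

        # configuration and web content
        rsync -a /etc /var/www "$DEST/"

    Whatever form the backup takes, restoring at least the database dump and one site onto a scratch machine once in a while is the only real test that it works.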

  • No clue for high load average on top

    - by Oz.
    We have several machines on Amazon (EC2) of the type c1.xlarge with 16 CPUs, running the Amazon AMI. Details on the machine:

      - 7 GB of memory
      - 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
      - 1690 GB of instance storage
      - 64-bit platform
      - I/O Performance: High
      - API name: c1.xlarge

    One out of the several machines has been showing a high load average since we ran the last yum upgrade a couple of weeks ago. We have not yet updated the other machines, and everything looks normal on them. The strange thing is that the top command shows no hint of the cause of the load. CPUs are 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st (see below). Mem has about 1.5 GB free. Any idea what it could be, or where else we can check? Many thanks for the help.

        # top
        top - 07:57:42 up 4:18, 1 user, load average: 1.36, 1.45, 1.47
        Tasks: 131 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
        Cpu(s): 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 7120092k total, 5644920k used, 1475172k free, 532888k buffers
        Swap: 0k total, 0k used, 0k free, 3463936k cached

          PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
         1557 mysql  20  0 1829m 374m 6448 S 14.3  5.4 11:15.09 mysqld
         6655 apache 20  0  416m  49m 3744 S  9.3  0.7  0:04.85 httpd
        27683 apache 20  0  421m  54m 3708 S  9.0  0.8  0:00.99 httpd
         6682 apache 20  0  424m  57m 3788 S  8.3  0.8  0:03.81 httpd
        16816 apache 20  0  419m  51m 3760 S  4.3  0.7  0:04.09 httpd
        22182 apache 20  0  417m  50m 3756 S  1.7  0.7  0:06.34 httpd
          219 root   20  0     0    0    0 S  0.3  0.0  0:00.34 kworker/7:1
          699 root   20  0     0    0    0 S  0.3  0.0  0:00.40 kworker/3:1
            1 root   20  0 19376 1508 1212 S  0.0  0.0  0:00.29 init
            2 root   20  0     0    0    0 S  0.0  0.0  0:00.00 kthreadd
            3 root   20  0     0    0    0 S  0.0  0.0  0:00.71 ksoftirqd/0
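    One thing the idle CPU figures do not rule out (a sketch of a quick check): on Linux the load average also counts tasks in uninterruptible sleep (state D), typically blocked on I/O, so a load of ~1.4 with 94% idle often means something is stuck waiting rather than computing:

        # list tasks currently in uninterruptible sleep
        ps -eo state,pid,cmd | awk '$1 == "D"'

        # watch the blocked-task (b) column and I/O wait for a couple of minutes
        vmstat 5 24

    If a particular process keeps showing up in D state, that is the place to dig (EBS or instance-store latency is a frequent culprit on EC2).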

  • How can I get VirtualBox to run at 1366x768?

    - by Joe White
    I'm trying to run Windows 8 in VirtualBox. My laptop's display is exactly 1366x768. Windows 8 disables some of its features if the resolution is less than 1366x768, so I need to run the guest OS fullscreen. The problem is, VirtualBox refuses to run the guest at 1366x768. When VirtualBox is "fullscreen", the guest is only 1360x768 -- six pixels too narrow. So there's a three-pixel black bar at the left and right sides of the display. This user had the same problem, but the accepted answer is "install the Guest Additions", which I've already done; that got me to 1360, but not to 1366. According to the VirtualBox ticket tracker, there used to be a bug where the guest's screen width would be rounded down to the nearest multiple of 8, but they claim to have fixed the bug in version 3.2.12. I'm using version 4.1.18 and seeing the same problem they claim to have fixed, so either they broke it again, they were wrong about ever having fixed it, or my problem is something else entirely. This answer suggested giving the VM 128MB of video memory, and claimed no problems getting 1366x768 afterward. When I created the VM, its display memory was already defaulted to 128 MB. I tried increasing it to 256MB, but with no effect: the guest is still six pixels too narrow. My host OS is Windows 7 64-bit, and I'm running VirtualBox 4.1.18. How can I get VirtualBox to run my guest OS fullscreen at my display's native resolution of 1366x768?
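    A sketch of the usual workaround (the VM name "Windows 8" is a placeholder): register 1366x768 as a custom video mode for the guest with VBoxManage, then select it from inside Windows once the Guest Additions are installed:

        VBoxManage setextradata "Windows 8" CustomVideoMode1 1366x768x32

    After restarting the VM, the extra mode should appear in the guest's display settings, which sidesteps whatever rounding the automatic resize is doing.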

  • Connecting remotely to an SQL server inside a LAN

    - by vondip
    Hello everyone, I am using SQL Server 2008 inside my home LAN. I've configured it to accept remote connections and I can now connect to the server from other PCs inside the LAN. The problem arises when I try connecting to the server from a computer outside of my home LAN. I've disabled my router's firewall and I've configured a virtual server on port 1433 forwarding to the correct LAN IP. What's wrong? Why is it not working? Thank you very much for your help!

    Edit: This is the error I keep getting:

        A network related or instance specific error occurred while establishing connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that the SQL Server is configured to allow remote connections. (provider: Sql network interfaces, error: 25 - Connection string is not valid)

    OK, these are my router's details: Edimax BR-6204Wg. I am not sure how I am supposed to browse google.com. Can you be a bit more specific?

  • When connecting to PPTP on CentOS via Windows 7 VPN, I get error 2147943625

    - by Charlie Dyason
    "The remote computer refused the network connection" is the phrase that has been my arch enemy for the past week now. I recently "bought" a VPS server. I gave up trying to configure it with OpenVPN, as all the issues were making me lose my mind, so I tried the easier way with PPTP, but I figure both are leading to a dead end...

    I followed this post (many others too, but this is the unlucky one): http://blog.secaserver.com/2011/10/install-vpn-pptp-server-centos-6/ and it all goes well with the setup. However, I run into this error when connecting to the VPN in Windows 7 (screenshot of the error omitted).

    So I do not know what I have done wrong... When connecting, netstat -apn | grep -w 1723 before connecting:

        tcp    0    0 0.0.0.0:1723            0.0.0.0:*            LISTEN       1137/pptpd

    After the error came I tried again:

        tcp    0    0 0.0.0.0:1723            0.0.0.0:*            LISTEN       1137/pptpd
        tcp    0    0 41.185.26.238:1723      41.13.212.47:49607   TIME_WAIT    -

    iptables:

        # Generated by iptables-save v1.4.7 on Fri Nov 1 18:14:53 2013
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [63:8868]
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -i eth0 -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
        -A INPUT -i eth0 -p tcp -m tcp --dport 1723 -j ACCEPT
        -A INPUT -i eth0 -p gre -j ACCEPT
        -A FORWARD -i ppp+ -o eth0 -j ACCEPT
        -A FORWARD -i eth0 -o ppp+ -j ACCEPT
        -A INPUT -j REJECT --reject-with icmp-host-prohibited
        -A FORWARD -j REJECT --reject-with icmp-host-prohibited
        COMMIT
        # Completed on Fri Nov 1 18:14:53 2013
        # Generated by iptables-save v1.4.7 on Fri Nov 1 18:14:53 2013
        *nat
        :PREROUTING ACCEPT [96:12732]
        :POSTROUTING ACCEPT [0:0]
        :OUTPUT ACCEPT [31:2179]
        -A POSTROUTING -o eth0 -j MASQUERADE
        COMMIT
        # Completed on Fri Nov 1 18:14:53 2013

    options.pptpd (the only change was adding require-mppe):

        # BSD licensed ppp-2.4.2 upstream with MPPE only, kernel module ppp_mppe.o
        # {{{
        refuse-pap
        refuse-chap
        refuse-mschap
        # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
        # Challenge Handshake Authentication Protocol, Version 2] authentication.
        require-mschap-v2
        require-mppe
        # Require MPPE 128-bit encryption
        # (note that MPPE requires the use of MSCHAP-V2 during authentication)
        require-mppe-128
        # }}}

    I checked the iptables rules and everything looks normal: all the ACCEPT rules come before the REJECTs. I also checked the username and password in the chap-secrets file. I am really puzzled...
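    Two things worth verifying on the CentOS side (a sketch, not a confirmed fix): that IP forwarding is enabled for the ppp clients and that the GRE/PPTP connection-tracking helpers are loaded, since a blocked or untracked GRE stream produces exactly this kind of refused connection:

        # enable forwarding (add net.ipv4.ip_forward = 1 to /etc/sysctl.conf to persist)
        sysctl -w net.ipv4.ip_forward=1

        # load the PPTP/GRE conntrack and NAT helper modules and confirm they are there
        modprobe nf_conntrack_pptp
        modprobe nf_nat_pptp
        lsmod | grep pptp

    If the VPS is container-based (OpenVZ and similar), it is also worth confirming with the provider that the ppp device and GRE are available inside the container at all, since many plans disable them by default.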

  • How to make a Linux software RAID1 detect disc corruption?

    - by Paul
    This is one of those nightmare days: a virtualized server running on a Linux software RAID1 runs a VM that exhibits random segfaults in seemingly random code chunks. While debugging, I find that a file gives different md5sums on each and every run. Digging deeper I find this: the raw disc partitions that make up the RAID1 mirror contain 2 bit-differences, and ca. 9 sectors are completely empty on one disc and filled with data on the other disc. Obviously Linux gives back a sector from a non-deterministically chosen disc of the mirror set. So sometimes the same sector is returned OK, sometimes the corrupted one is given back.

    The docs say:

        RAID cannot and is not supposed to guard against data corruption on the media. Therefore, it doesn't make any sense either, to purposely corrupt data (using dd for example) on a disk to see how the RAID system will handle that. It is most likely (unless you corrupt the RAID superblock) that the RAID layer will never find out about the corruption, but your filesystem on the RAID device will be corrupted.

    Thanks. That will help me sleep. :-/ Is there a way to have Linux at least detect this corruption by using sector checksumming or something like that? Would this be detected in a RAID5 setup? Is this the moment I wish I used ZFS or btrfs (once it becomes usable without uber-admin capabilities)?
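    For what it is worth, md can at least be asked to compare the two halves of a mirror on demand (a sketch; it reports that the halves disagree, but cannot tell which copy is the good one because there is no per-sector checksum):

        # scrub the array: read both halves and count the differences
        echo check > /sys/block/md0/md/sync_action

        # once the check completes, a non-zero value means mismatched sectors
        cat /sys/block/md0/md/mismatch_cnt

    Many distributions run exactly this as a periodic cron job, which would have flagged the mismatch earlier, but true end-to-end detection of which copy is correct does need filesystem-level checksumming of the ZFS/btrfs kind.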

  • Windows Firewall Software to Filter Transit Traffic

    - by soonts
    I need to test my networking code for the Nintendo Wii under conditions where some specific Internet server is not available. The Wii is connected to my PC with a crossover ethernet cable. The PC has 2 NICs. The PC is connected to a hardware router with an ethernet cable. The hardware router serves as NAT and has the Internet connection on its uplink. I set the Wii to be in the same LAN as the PC by using a Windows XP network bridge. I can observe the Wii's network traffic using e.g. the Wireshark sniffer.

    Is there a software firewall that can selectively filter out transit traffic (e.g. block outgoing TCP connections to 123.45.67.89 on port 443)? I tried Outpost Pro 2009 and Comodo. Outpost firewall blocks all transit traffic with its implicit "block transit packet" rule. If the transit traffic is explicitly allowed by creating a system-wide low-level rule, then it's allowed completely and no other filter can selectively block it. Comodo firewall only processes rules when the packet has localhost's IP as either source or destination, allowing the rest of the traffic. Any ideas? Thanks in advance!

    P.S. The platform is Windows XP 32-bit; no other OSes are allowed. Windows ICS (Internet Connection Sharing) doesn't work since the Wii is unable to connect; besides, I don't like the idea of adding one more level of NAT.

  • How to Load Balance 2 Internet Connections on a Windows 7 machine?

    - by Jimmy Chandra
    It's sort of related to this particular question, but that one is on Mac. I am looking for a similar solution on Windows 7. I have 2 network connections:

      - (Connection A) Wireless terminal connecting to ISP A (3G / EVDO internet provider)
      - (Connection B) Broadband wired connection connecting to ISP B (cable internet provider)

    Both have access to the internet. When I try connecting to a website and check the Networking tab in my Task Manager, I only see the network traffic being routed to Connection A. Is there a way to make the computer utilize both networks (in the sense of using all the bandwidth available from both the cable ISP and the 3G / EVDO ISP) at the same time? If so, what do I need to do to set this up... on Windows 7?

    Here is a bit more info on my network connections (ipconfig /all):

        PPP adapter Wireless Terminal:
          IPv4: aa.bb.ccc.ddd (preferred)
          Subnet mask: 255.255.255.255
          Default gateway: 0.0.0.0
          DNS: aa.ee.f.ggg, aa.ee.f.hhh
          Primary WINS: jjj.ii.k.l
          Secondary WINS: jjj.ii.k.m

        Ethernet adapter LAN:
          IPv4: 192.168.1.100 (connected by wire to a router that itself connects to a cable modem)
          Subnet mask: 255.255.255.0
          Default gateway: 192.168.1.1 (the wireless router)
          DHCP: 192.168.1.1 (the wireless router)
          DNS: xxx.yy.zz.ww, rr.sss.t.uuu

    For my own privacy, I don't believe the actual numbers matter; the patterns are representative of the IP numbering scheme...

  • Unattended Kickstart Install

    - by Eric
    I've looked around quite a bit and have seen similar setups and questions, but none seem to work for me. I'm using the following command to create a custom ISO:

        /usr/bin/livecd-creator --config=/usr/share/livecd-tools/test.ks --fslabel=TestAppliance --cache=/var/cache/live

    This works great and it creates the ISO with all of the packages and configs I want on it. My issue is that I want the install to be unattended. However, every time I start the CD, it asks for all of the info such as keyboard, time zone, root password, etc. These are my basic settings in my kickstart script prior to the packages section:

        cdrom
        install
        autopart
        autostep
        xconfig --startxonboot
        rootpw testpassword
        lang en_US.UTF-8
        keyboard us
        timezone --utc America/New_York
        auth --useshadow --enablemd5
        selinux --disabled
        services --enabled=iptables,rsyslog,sshd,ntpd,NetworkManager,network --disabled=sendmail,cups,firstboot,ip6tables
        clearpart --all

    So after looking around, I was told that I need to modify my isolinux.cfg file to either do "ks=http://X.X.X.X/location/to/test.ks" or "ks=cdrom:/test.ks". I've tried both methods and it still forces me to go through the install process. When I tail the apache logs on the server, I see that the ISO never even tries to get the file. Below is the exact syntax I'm trying in my isolinux.cfg file:

        label http
          menu label HTTP
          kernel vmlinuz0
          append initrd=initrd0.img ks=http://192.168.56.101/files/test.ks ksdevice=eth0

        label localks
          menu label LocalKS
          kernel vmlinuz0
          append initrd=initrd0.img ks=cdrom:/test.ks

        label install0
          menu label Install
          kernel vmlinuz0
          append initrd=initrd0.img root=live:CDLABEL=PerimeterAppliance rootfstype=auto ro liveimg liveinst noswap rd_NO_LUKS rd_NO_MD rd_NO_DM
          menu default
        EOF_boot_menu

    The first 2 give me a "dracut: fatal: no or empty root=" error until I give it a root= option, and then it just skips the kickstart completely. The last one is my default option that works fine, but just requires a lot of user input. Any help would be greatly appreciated.
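    As a sketch of the usual fix for the dracut error (assuming test.ks is actually present on the ISO and the CDLABEL matches the --fslabel used at build time): keep the complete set of live-boot arguments from the working install0 entry and add the ks= parameter to them, rather than replacing them:

        label autoinstall
          menu label AutoInstall
          kernel vmlinuz0
          append initrd=initrd0.img root=live:CDLABEL=PerimeterAppliance rootfstype=auto ro liveimg liveinst noswap rd_NO_LUKS rd_NO_MD rd_NO_DM ks=cdrom:/test.ks
          menu default

    The root=live:... arguments are what let dracut find and mount the live image; ks= only tells the installer where the kickstart lives, so the two sets of options need to appear together.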

  • ps aux as non-root doesn't show all processes

    - by JMW
    Hi, I'm using an Ubuntu 10.04 server. When I run ps aux as root, I see all processes; when I run ps aux as non-root, I see JUST the processes of the current user. After a bit of research I found the following:

        root@m85:~# ls -al /proc/
        total 4
        dr-xr-xr-x 122 root   root      0 2010-12-23 14:08 .
        drwxr-xr-x  22 root   root   4096 2010-12-23 13:30 ..
        dr-x------   6 root   root      0 2010-12-23 14:08 1
        dr-x------   6 root   root      0 2010-12-23 14:08 10
        dr-x------   6 root   root      0 2010-12-23 14:08 1212
        dr-x------   6 root   root      0 2010-12-23 14:08 1227
        dr-x------   6 root   root      0 2010-12-23 14:08 1242
        dr-x------   6 zabbix zabbix    0 2010-12-24 23:52 12747
        [...]

    My first idea was that it got mounted in a weird way, but /etc/fstab is OK and it doesn't seem to be mounted in a weird way. My second idea was that there might be a rootkit, but it's not a rootkit; rkhunter tells me that there is no rootkit installed. I don't know if it has been like this since the machine was installed or whether it came with an update. I've just installed zabbix-agent on the machine and realized that it didn't work properly.

    What could have caused such strange permissions (500) and how can I set them back to a normal level (555)? Crazy, I've never seen something like that... Thanks in advance for any help, and merry Christmas :) See you

  • What options to use for an Accurate Bacula backup?

    - by Kiss Stefan
    It's actually 2 questions in one. The first is a bit more theoretical. When specifying Accurate options, how does Bacula figure out if a file needs to be backed up? Is it a simple AND? As in, if the options are

        Accurate = sm5

    Bacula will not back up the file if ((size = old size) AND (modtime = old modtime) AND (md5 = old md5))? Is that correct? Do any of the options take precedence? As in, would a file be skipped if its modification time is different but it has the same md5sum? Are there any implied options that you cannot ignore?

    Practical case (Bacula 5.0.1): I have to back up an svn repo. In order to be able to make incremental backups as simply as possible, I am hotcopying it (client run before job) to another location that Bacula will back up (and then deleting it with client run after job). Now in the fileset I have

        Accurate = spnd5

    This should tell Bacula to take into consideration size, permission bits, number of links, decreases in size and md5sum. However, an incremental also includes a full copy of the svn repo. What am I doing wrong? It seems that it takes creation time into account even though I have not specified it.

  • My linux server "Number of processes created" and "Context switches" are growing incredibly fast

    - by Jorge Fuentes González
    I have a strange behaviour on my server. It is an OpenVZ VPS (I think it is OpenVZ, because /proc/user_beancounters exists and df -h returns a /dev/simfs drive; also ifconfig returns venet0). When I do cat /proc/stat, I can see how each second about 50-100 processes are created and about 800k-1200k context switches happen! All that is with the server completely idle, no traffic and no programs running. top shows 0 load average and 100% idle CPU. I've stopped all non-needed services (httpd, mysqld, sendmail, nagios, named...) and the problem still happens.

    I run ps -ALf each second too and I don't see any changes; only a new ps process is created each time and the PID is just the same as before + 1, so new processes are not being created. So I thought that the process counter growing in cat /proc/stat must be threads (yes, it seems that the processes field in /proc/stat counts thread creation too, as this states: http://webcache.googleusercontent.com/search?q=cache:8NLgzKEzHQQJ:www.linuxhowtos.org/System/procstat.htm&hl=es&tbo=d&gl=es&strip=1).

    I've changed to the /proc dir and done cat [PID]/status for all PIDs listed with ls (including kernel ones), and in no process are voluntary_ctxt_switches or nonvoluntary_ctxt_switches growing at the same speed as cat /proc/stat does (just a few tens per second); Threads stays the same also. I've done strace -p PID on all processes too, so I can see if any process is creating threads or something, but the only process that has a bit of movement is ssh, and that movement is read/write operations because of the data being sent to my terminal.

    After that, I've done vmstat -s and saw that forks are growing at the same speed as processes in /proc/stat do. As http://linux.die.net/man/2/fork says, each fork() creates a new PID, but my server's PIDs are not growing! The last thing I can think of is that all the process data that /proc/stat and vmstat -s show is shared with all the other VPSes stored on the same machine, but I don't know if that is correct... If someone can throw some light on this I would be really grateful.
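    A sketch of one more measurement that can narrow this down (assuming the sysstat package is installable inside the container): sar and pidstat attribute task creation and context switching either system-wide or per task, which /proc/stat alone cannot do:

        # system-wide task-creation (proc/s) and context-switch (cswch/s) rates
        sar -w 5 12

        # per-task voluntary and involuntary context switches
        pidstat -w 5

    If sar shows the high rates but pidstat never attributes them to any task visible inside the container, that supports the theory that the counters in /proc/stat are host-wide on this OpenVZ node rather than specific to the VPS.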

  • Cannot connect to remote mail server for sending emails in ASP.NET

    - by Dave
    I want to migrate a web application from a Windows Server 2003 to a Windows Server 2008 R2. Everything works fine except sending emails from the application. If I configure the application to use the SMTP server on "localhost" it works, but changing it to the "real" host name (e.g. mail.example.org) no mail is sent. The error message says that the remote server needs a secure connection or SMTP authentication. But since it works when using "localhost" instead of the host name, I doubt that this is the problem. Also it's unlikely to be a problem with the mail server; I also tried it with another one. So to me it seems like the firewall is blocking the outgoing connection to the mail server. I tried to open port 25, but it still did not work. Maybe I just did it the wrong way.

    Update: To clarify my setup:

      - I have a Windows Server 2008 R2 with hMailServer installed (set up for some of the hosted domains).
      - For the website I'm talking about I need to use an external mail server (totally different hosting provider).

    Apparently I was a bit off track. It seems to work when connecting to the local mail server, either with the host name "localhost" or "mail.somedomain.com" (where somedomain.com is set up in my mail server). But when using the host name of the external mail server ("mail.externaldomain.com") it seems like it tries to connect to the local server again, although this domain is not set up in the mail server.

    Thanks to Evan Anderson for the tip to use telnet - why have I not thought of that myself? :-) Note, the website www.externaldomain.com is hosted on my server but the DNS entries are maintained by the other hosting provider. "externaldomain.com" is the only entry which points to my server; all other records (MX, subdomains) point to the other server. So I think the question is now: how do I get my server to connect to the external mail server? Do I have to configure this in my mail server, or is it a Windows Server thing?
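    A sketch of a quick check from the 2008 R2 box itself (commands as run from a Windows prompt), since "connects to the local server even when the external name is used" usually means the name is resolving locally to the wrong address (hosts file, a local DNS zone, or the local mail server claiming the domain):

        nslookup mail.externaldomain.com
        ping -n 1 mail.externaldomain.com
        telnet mail.externaldomain.com 25

    If nslookup or ping return the server's own IP, the fix is on the name-resolution side; if they return the external provider's IP and the SMTP banner on port 25 is the provider's, then the application's SMTP configuration is the next suspect.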
