Search Results

Search found 27932 results on 1118 pages for 'finite state machine'.


  • GoDaddy one-page hosting

    - by liv a
    Disclaimer: not sure this is the right place for this kind of question, sorry in advance; just point me to the right place and I'll move it. In GoDaddy, when paying only for a domain, without hosting, they state you can get one-page hosting for free, but that option only opens their web builder. I want to create a nicely designed landing page where the content is static. Is there a way to make my domain point to a WordPress one-pager or a self-created one-page HTML landing page?

    Read the article

  • Why is 10-year-old software still so slow, even today?

    - by Cawas
    I just noted this question due to a game (which happens to be Diablo 2), but the matter of fact is: why can't my brand-new MacBook Pro, made in 2009 with the latest technology (though it's the cheapest one), rival my computer that ran this game much faster back in 2000? Really, it was much faster on my AMD K6 450 back in those days, and I could even run two clients at the same time with no slowdown. I've always had the feeling this machine was slow, but this is a very odd way to attest to it. Granted, the machine is smaller, runs on WiFi and "boots" way faster thanks to sleep mode. But other than that, what have we evolved after all?! I'm pretty sure this shouldn't be the graphics card's fault. Sure, if I buy the latest technology it will run fast, and probably most people here can confirm this and won't even understand my question. But the thing is, all the hardware is supposedly much faster and better than the stuff from 10 years ago. The software and operating system became more complex, but also more refined. Now I'm trying a piece of software that is actually 10 years old and it's not getting any better results! Why?

    Read the article

  • No signal on monitor after plugging it into a Linux box

    - by yaroot
    I use my old computer as a NAS, so I removed the monitor after I installed Linux on it (disconnected the VGA cable). I use SSH to control the machine and it works fine. Until, some day, after a kernel/software upgrade or messing up some configs, I cannot connect to it through SSH; then I have to plug the monitor back in, but the monitor says "No input signal". So I have to restart the computer WITH the monitor connected, and the monitor's back! I think the computer/Linux kernel doesn't detect the monitor plug-in event. So how can I start my Linux box without a monitor, but still plug my monitor (VGA) back in and use the console when something goes wrong? Edit: just one PCI-e video card, which has DVI, VGA and TV-out (S-Video). Edit 2: Xorg is not running. I just need the console (Ctrl+Alt+F1). The problem is, if the machine booted without a monitor connected, it won't give me a terminal after I attach the VGA cable while it's running. Clearly a monitor is not auto-detected the way a USB device is. I'm wondering how to get the monitor auto-detected.
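    If the card's driver supports kernel mode setting (KMS), a hedged workaround sketch is to force the VGA connector on at boot, so the console keeps rendering even with nothing attached. The connector name VGA-1 and the mode are assumptions; check /sys/class/drm on your machine first.

        # appended to the kernel command line in the bootloader config;
        # the trailing 'e' forces the output to be treated as connected/enabled
        video=VGA-1:1024x768@60e

    With that in place, plugging the monitor back in mid-session should show the live text console, since the kernel never turned the output off.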

    Read the article

  • CIFS mounted drive setting "sticky bit" on all files, cannot change permissions or modify files

    - by mattmcmanus
    I have a folder mounted on an Ubuntu 8.10 server through cifs that I simply cannot change the permissions on once mounted. Here is a breakdown of what's going on: all files within the mounted folder automatically have their permissions set to -rwxrwSrwx, regardless of whether the file was created on the Windows server or on the Linux machine. I have the same directory mounted on two other Linux servers (both running 9.10 instead of 8.10) with no problems at all. They all use the same fstab options and the same credentials:

        //server/folder /media/backups cifs credentials=/etc/samba/.arcadia_cred,noexec,noserverino 0 0

    I've run chmod a million different ways, all of which report successfully changing the permissions. However, nothing changes. The issue began after I updated from 8.04 to 8.10. Any idea why this may be happening on one machine? Since it started after an upgrade, I'm not sure what the best thing to do is. Any help you could give would be great! None of my automated backup scripts are working because of this!
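    A hedged workaround while the root cause stays unclear: have the client impose its own modes instead of trusting what the 8.10 cifs module derives from the server. The nounix option disables the CIFS Unix extensions (the usual source of spurious mode bits), and file_mode/dir_mode pin the permissions; the mode values below are assumptions to adapt:

        //server/folder /media/backups cifs credentials=/etc/samba/.arcadia_cred,noexec,noserverino,nounix,file_mode=0644,dir_mode=0755 0 0

    Since chmod already reports success but changes nothing, the server is discarding mode changes anyway, so pinning the modes client-side costs nothing.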

    Read the article

  • HAProxy + Percona XtraDB Cluster

    - by rottmanj
    I am attempting to set up HAProxy in conjunction with Percona XtraDB Cluster on a series of 3 EC2 instances. I have found a few tutorials online dealing with this specific issue, but I am a bit stuck. Both the Percona servers and the HAProxy servers are running Ubuntu 12.04, and the HAProxy version is 1.4.18. When I start HAProxy I get the following error:

        Server pxc-back/db01 is DOWN, reason: Socket error, check duration: 2ms.

    I am not really sure what the issue could be. I have verified the following:

    - EC2 security group ports are open
    - Pored over my config files looking for issues; I currently do not see any
    - Ensured that xinetd is installed
    - Ensured that I am using the correct IP address of the MySQL server

    Any help with this is greatly appreciated. Here are my current configs.

    Load balancer, /etc/haproxy/haproxy.cfg:

        global
                log 127.0.0.1 local0
                log 127.0.0.1 local1 notice
                maxconn 4096
                user haproxy
                group haproxy
                debug
                #quiet
                daemon

        defaults
                log global
                mode http
                option tcplog
                option dontlognull
                retries 3
                option redispatch
                maxconn 2000
                contimeout 5000
                clitimeout 50000
                srvtimeout 50000

        frontend pxc-front
                bind 0.0.0.0:3307
                mode tcp
                default_backend pxc-back

        frontend stats-front
                bind 0.0.0.0:22002
                mode http
                default_backend stats-back

        backend pxc-back
                mode tcp
                balance leastconn
                option httpchk
                server db01 10.86.154.105:3306 check port 9200 inter 12000 rise 3 fall 3

        backend stats-back
                mode http
                balance roundrobin
                stats uri /haproxy/stats

    MySQL server, /etc/xinetd.d/mysqlchk:

        # default: on
        # description: mysqlchk
        service mysqlchk
        {
                # this is a config for xinetd, place it in /etc/xinetd.d/
                disable         = no
                flags           = REUSE
                socket_type     = stream
                port            = 9200
                wait            = no
                user            = nobody
                server          = /usr/bin/clustercheck
                log_on_failure  += USERID
                #only_from      = 0.0.0.0/0
                # recommended to put the IPs that need
                # to connect exclusively (security purposes)
                per_source      = UNLIMITED
        }

    MySQL server, /etc/services - added the line:

        mysqlchk        9200/tcp        # mysqlchk

    MySQL server, /usr/bin/clustercheck:

        #!/bin/bash
        #
        # Script to make a proxy (ie HAProxy) capable of monitoring Percona XtraDB Cluster nodes properly
        #
        # Author: Olaf van Zandwijk <[email protected]>
        # Documentation and download: https://github.com/olafz/percona-clustercheck
        #
        # Based on the original script from Unai Rodriguez
        #

        MYSQL_USERNAME="testuser"
        MYSQL_PASSWORD=""
        ERR_FILE="/dev/null"
        AVAILABLE_WHEN_DONOR=0

        #
        # Perform the query to check the wsrep_local_state
        #
        WSREP_STATUS=`mysql --user=${MYSQL_USERNAME} --password=${MYSQL_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_local_state';" 2>${ERR_FILE} | awk '{if (NR!=1){print $2}}' 2>${ERR_FILE}`

        if [[ "${WSREP_STATUS}" == "4" ]] || [[ "${WSREP_STATUS}" == "2" && ${AVAILABLE_WHEN_DONOR} == 1 ]]
        then
                # Percona XtraDB Cluster node local state is 'Synced' => return HTTP 200
                /bin/echo -en "HTTP/1.1 200 OK\r\n"
                /bin/echo -en "Content-Type: text/plain\r\n"
                /bin/echo -en "\r\n"
                /bin/echo -en "Percona XtraDB Cluster Node is synced.\r\n"
                /bin/echo -en "\r\n"
        else
                # Percona XtraDB Cluster node local state is not 'Synced' => return HTTP 503
                /bin/echo -en "HTTP/1.1 503 Service Unavailable\r\n"
                /bin/echo -en "Content-Type: text/plain\r\n"
                /bin/echo -en "\r\n"
                /bin/echo -en "Percona XtraDB Cluster Node is not synced.\r\n"
                /bin/echo -en "\r\n"
        fi
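    A quick way to isolate whether the health check or HAProxy is at fault (a hedged suggestion; the IP is taken from the config above): hit the xinetd-served check directly from the HAProxy box.

        # should print the HTTP 200/503 response produced by clustercheck;
        # if this hangs or resets, the problem is xinetd/clustercheck, not HAProxy
        curl -v http://10.86.154.105:9200/

    If the connection resets immediately, also confirm clustercheck runs standalone (sudo -u nobody /usr/bin/clustercheck): a blank MYSQL_PASSWORD combined with a MySQL user that requires one would make every check fail with exactly this symptom.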

    Read the article

  • Bare-metal virtualisation for the desktop

    - by Andrew Taylor
    Hi. Does anyone have any knowledge about bare-metal virtualisation products? I'm interested in building a new desktop machine for home. I've been looking at the Intel quad-core processors and I'd like to put 8GB of RAM in there, which got me thinking about making the most of the available resources. I thought if I could get a good 64-bit machine and put some bare-metal virtualisation on it, then alongside a primary system I'd be able to bring up extra virtualised systems as and when I needed them. I know most of the bare-metal systems are designed for the server market, but is there anything out there that works well for a desktop? What are the caveats? I presume I won't be able to make the most of any video card I buy, but what about just getting a decent screen resolution - will that be a problem? I run a single 24" screen. What about DVD/CD writing - is that possible? I'd like to re-rip my CD collection, and I was hoping the quad 64-bit goodness would help me out with the encoding. I currently use a Mac and couldn't go back to Windows, so that leaves Linux; I was thinking a primary OS of Ubuntu. Does this make a difference? Thanks, Andrew

    Read the article

  • Unix Permissions: Enable access to files no matter the user?

    - by TK Kocheran
    I've been using Linux for a long time and I still am completely in the dark about how file permissions really work. With that in mind, does anyone have any books or thorough guides I could read to really understand things completely? I've done my fair share of sysadminning, so I know the easy stuff like making directories readable and writable, making files executable, and changing the owner of a file, but on sharing files across users, I'm lost. Here's my main problem. I have a number of machines across which I intend to synchronize my music library. I've been using Unison for a while now and it's a great choice as I can easily run it over SSH on my local network which I just set up. Win-win. Up until this point, I've been synchronizing computers using a 2TB external hard drive. (computer 1 unisons to HD, computer 2 unisons to HD, etc.) This is tedious at best, especially since I encrypted the drive, making it a huge hassle to hook it up to all of my machines and sync it. Anyway, the drive is running ext4 (in TrueCrypt), so it maintains all Unix filesystem info like owners and groups. I just set up a new machine and just Unison'd it to get the music on it, and I realized that now, all of my permissions are fubar. I had to run Unison as root since that was the only way I could get the files to come off of the external drive. Apparently, since I'm using a different user name on this machine than my usual "rfkrocktk" across all machines, this essentially throws a huge wrench in the gears. Here's my use case. This laptop has two effective users, "leandra" and "rfkrocktk". I want to share music between these two users, so I symlinked /home/rfkrocktk/Music to point to /home/leandra/Music. How do I (a) allow both users access to read/write/delete files in this folder, and (b) keep everything nicely in sync without messing up file ownership?
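    For (a), a common pattern is a shared group plus setgid directories, so files created by either user stay accessible to both. A minimal sketch, assuming a hypothetical group named "music":

        # create the group and add both users (they must log in again afterwards)
        sudo groupadd music
        sudo usermod -aG music leandra
        sudo usermod -aG music rfkrocktk
        # hand the library to the group; g+rwX makes files read/write
        # for the group and directories traversable
        sudo chgrp -R music /home/leandra/Music
        sudo chmod -R g+rwX /home/leandra/Music
        # setgid on directories: new files inherit the "music" group
        sudo find /home/leandra/Music -type d -exec chmod g+s {} \;

    For (b), running Unison as the owning user on each side (rather than root) and setting perms = 0 in the Unison profile, so it stops propagating ownership-sensitive permission bits, are the usual knobs; which combination fits depends on whether the uids match across machines.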

    Read the article

  • How does OSPF control flooding?

    - by iamrohitbanga
    What method is used by the OSPF protocol to prevent the looping of flooded link-state advertisement packets? The packet header does not contain any timestamp, so how do the routers recognize that it is the same advertisement they sent before?

    Read the article

  • Our embedded Linux system won't recognize a USB device if it is plugged in before power-up. Suggestions?

    - by Blaine
    We are developing on a small embedded device. This device is a Gumstix Overo board running OpenEmbedded Linux. Our development is almost completely done, and we have run into the strangest of bugs, which we can't figure out. We have a USB device (a spectrophotometer) that has a USB 2.0 connection and an external power supply for the light source. Typical behavior is that you plug in the power supply, then the USB connection to a host. When the USB connection is detected by the device, the device boots up and enables the light source and fan. The device is then able to be used by the host system. The problem is that if the device is plugged into the Gumstix before we turn on the Gumstix, the USB device apparently is not probed by the system (and hence does not turn on). Under a normal situation, when the connection is initialized by plugging in the USB cable, the spectro turns itself on and becomes available to the system (this can typically be seen via "lsusb"). Neither of these things is happening. There is no device detected via "lsusb" and no dmesg errors of any kind that we can see. It is as if the device is not plugged in. The device does show up and work fine if we unplug the USB cable and plug it back in once the system is booted up. It turns on and shows up on the USB bus, and we can access it with our driver. On any other desktop or laptop, it does not matter if the host system is on or off when we plug in the spectrometer. This behavior is what I would consider "normal": the USB bus is probed and initialized at boot time, and the USB devices come online. In other words, our system is fully functional as long as we plug in the USB device after the system is booted up. Unfortunately this isn't possible in our final product - everything comes on at once. Additional info:

    1) We have tried a flash drive attached to the system when the system is turned off. Booting up the system brings the flash drive online, as expected.
    2) There are no messages regarding the spectro or USB device (using dmesg). "lsusb" only lists the USB hubs / controllers. It is literally as if the device is not present and not plugged in.
    3) We have tried a brand new image from Gumstix and an older image from last year. Both images have this problem, and it exists on all 3 Gumstix devices we use.

    Does anyone have any suggestions? From what I can tell it isn't really possible to do a complete "reboot" of the USB system that fully emulates unplugging and replugging a USB device. I feel like what is happening is that there is no initial probe on the USB bus that would trigger the USB handshaking, but this is somehow specific to the spectro. This seems to be a kernel issue, or at least an issue in how the kernel initializes the USB subsystem, but I'm not really sure. I have tried the Gumstix mailing list, but there doesn't seem to be anyone who has seen this issue before. Any advice or suggestions on where to start looking would be fantastic. Thank you! Blaine

    Output etc.:

        $ uname -a
        Linux overo 2.6.33 #1 Tue Apr 27 08:35:38 PDT 2010 armv7l GNU/Linux

    When the system is up and running and the spectro is plugged in (working as intended), this is lsusb:

        Bus 001 Device 116: ID 2457:1022
        Device Descriptor:
          bLength                18
          bDescriptorType         1
          bcdUSB               2.00
          bDeviceClass            0 (Defined at Interface level)
          bDeviceSubClass         0
          bDeviceProtocol         0
          bMaxPacketSize0        64
          idVendor           0x2457
          idProduct          0x1022
          bcdDevice            0.02
          iManufacturer           1 USB4000 1.01.11
          iProduct                2 Ocean Optics USB4000
          iSerial                 0
          bNumConfigurations      1
          Configuration Descriptor:
            bLength                 9
            bDescriptorType         2
            wTotalLength           46
            bNumInterfaces          1
            bConfigurationValue     1
            iConfiguration          0
            bmAttributes         0x80 (Bus Powered)
            MaxPower              400mA
            Interface Descriptor:
              bLength                 9
              bDescriptorType         4
              bInterfaceNumber        0
              bAlternateSetting       0
              bNumEndpoints           4
              bInterfaceClass       255 Vendor Specific Class
              bInterfaceSubClass      0
              bInterfaceProtocol      0
              iInterface              0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x01 EP 1 OUT
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0200 1x 512 bytes
                bInterval               0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x82 EP 2 IN
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0200 1x 512 bytes
                bInterval               0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x86 EP 6 IN
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0200 1x 512 bytes
                bInterval               0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x81 EP 1 IN
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0200 1x 512 bytes
                bInterval               0
        Device Qualifier (for other device speed):
          bLength                10
          bDescriptorType         6
          bcdUSB               2.00
          bDeviceClass            0 (Defined at Interface level)
          bDeviceSubClass         0
          bDeviceProtocol         0
          bMaxPacketSize0        64
          bNumConfigurations      1
        Device Status:     0x0000
          (Bus Powered)

    dmesg output:

        usb usb1: usb auto-resume
        hub 1-0:1.0: hub_resume
        usb usb2: usb auto-resume
        ehci-omap ehci-omap.0: resume root hub
        hub 1-0:1.0: state 7 ports 1 chg 0000 evt 0000
        hub 2-0:1.0: hub_resume
        hub 2-0:1.0: state 7 ports 3 chg 0000 evt 0000
        hub 1-0:1.0: hub_suspend
        usb usb1: bus auto-suspend
        hub 2-0:1.0: hub_suspend
        usb usb2: bus auto-suspend
        ehci-omap ehci-omap.0: suspend root hub
        usb usb2: usb resume
        ehci-omap ehci-omap.0: resume root hub
        hub 2-0:1.0: hub_resume
        ehci-omap ehci-omap.0: GetStatus port 2 status 001803 POWER sig=j CSC CONNECT
        hub 2-0:1.0: port 2: status 0501 change 0001
        hub 2-0:1.0: state 7 ports 3 chg 0004 evt 0000
        hub 2-0:1.0: port 2, status 0501, change 0000, 480 Mb/s
        ehci-omap ehci-omap.0: port 2 high speed
        ehci-omap ehci-omap.0: GetStatus port 2 status 001005 POWER sig=se0 PE CONNECT
        usb 2-2: new high speed USB device using ehci-omap and address 2
        ehci-omap ehci-omap.0: port 2 high speed
        ehci-omap ehci-omap.0: GetStatus port 2 status 001005 POWER sig=se0 PE CONNECT
        usb 2-2: default language 0x0409
        usb 2-2: udev 2, busnum 2, minor = 129
        usb 2-2: New USB device found, idVendor=2457, idProduct=1022
        usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
        usb 2-2: Product: Ocean Optics USB4000
        usb 2-2: Manufacturer: USB4000 1.01.11
        usb 2-2: uevent
        usb 2-2: usb_probe_device
        usb 2-2: configuration #1 chosen from 1 choice
        usb 2-2: uevent
        usb 2-2: adding 2-2:1.0 (config #1, interface 0)
        usb 2-2:1.0: uevent
        drivers/usb/core/inode.c: creating file '002'

    dmesg has nothing to say, and lsusb simply lists nothing but the two default USB controllers / hubs if we plug the device in before the system is turned on.
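    In case it helps someone in the same spot, a hedged workaround sketch: force the OMAP EHCI controller to re-enumerate late in boot by unbinding and rebinding its platform driver. The "ehci-omap.0" name is taken from the dmesg output above; verify it on your system (ls /sys/bus/platform/drivers/ehci-omap/) before relying on it.

        # run late in the boot sequence (e.g. from an init script)
        echo ehci-omap.0 > /sys/bus/platform/drivers/ehci-omap/unbind
        sleep 1
        echo ehci-omap.0 > /sys/bus/platform/drivers/ehci-omap/bind

    This tears the root hub down and brings it back up, which is about the closest software equivalent to replugging every device on the bus.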

    Read the article

  • Struggling to set up NLB cluster

    - by Chris W
    I'm trying to set up NLB on a couple of Windows 2008 R2 virtual servers running on top of Hyper-V R2. The servers each have a single vNIC for LAN access (and a second vNIC for SAN access). I'm setting up the cluster to use multicast mode, and the vNICs are each set to allow MAC spoofing. Essentially, I'm finding that I can add SERVER1 as a host and it will pick up and respond to the cluster IP from a remote subnet. If I then 'stop' the node in NLB Manager, it still responds when I would expect it to stop answering on that IP. If I recreate the cluster and add SERVER2 as the first host, the wizard completes correctly and an IPCONFIG on the server shows that it now has the cluster IP, but I can't ping the cluster IP from a remote subnet - though I can from another machine on the same subnet. As a final test, with both servers in the cluster, pinging from another machine on the same subnet, I still get a response from the cluster IP when both nodes are stopped according to NLB Manager. The two VMs sit on the same physical blade and are built exactly the same, as they'll be used as SharePoint web front-end servers. I'm at a loss as to what could be wrong with the second VM that prevents it taking on the address as the sole node in the cluster, never mind the strange behaviour of the cluster when I stop/start nodes.

    Read the article

  • Vista WHS Client stopped resolving local names

    - by andrewcr
    I’m running Windows Home Server PP2 in my home, with 3 client computers: two XP and one Vista. I have a router that provides my local DHCP and the server has a static IP address. The other day the Vista machine hung, and on reboot stopped resolving local names. It will show the green home server client icon in the system tray, but if I attempt to log in to the console, I get a “This computer cannot connect to your home server” message. If I ping the server name from the command line, it does not resolve, and gives a “could not find host” message. Oddly enough, if I browse the network, I can see the server, but double clicking on it fails. The other machines on the local network have no problems seeing the server, and the Vista machine has no problems resolving names from the internet, it just can’t see any local machines. I’m aware that I can work around this by adding entries to my HOSTS file (it does work), but I’d like this to work the way it’s “supposed” to. I’m an experienced computer user and developer, but not a networking whiz. Can anyone tell me how local name resolution is supposed to work in my environment and/or suggest ways to troubleshoot this? Thanks, Andy
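    Local name resolution in a workgroup like this usually falls back to NetBIOS broadcasts (plus WINS if present), so a stale NetBIOS cache or a broken registration on the Vista box is the usual suspect. A hedged diagnostic sketch, from an elevated command prompt:

        :: flush the DNS resolver cache
        ipconfig /flushdns
        :: purge and reload the NetBIOS remote name cache
        nbtstat -R
        :: release and re-register this machine's NetBIOS names
        nbtstat -RR
        :: then retry (SERVER is a placeholder for your home server's name)
        ping SERVER

    If ping by name still fails while ping by IP works, checking that "NetBIOS over TCP/IP" is enabled on the Vista adapter (WINS tab of the IPv4 properties) would be the next step.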

    Read the article

  • Error importing large MySQL dump file which includes binary BLOBs in Windows

    - by Daniel Magliola
    I'm trying to import a MySQL dump file, which I got from my hosting company, into my Windows dev machine, and I'm running into problems. I'm importing this from the command line, and I'm getting a very weird error:

        ERROR 2005 (HY000) at line 3118: Unknown MySQL server host '+?*á±dÆ-N+Æ·h^ye"p-i+ Z+-$?P+Y.8+|?+l8/l¦¦î7æ¦X¦XE.ºG[ ;-ï?éµ?º+¦¦].?+f9d릦'+ÿG?-0à¡úè?-?ù??¥'+NÑ' (11004)

    I'm attaching the screenshot because I'm assuming the binary data will get lost... I'm not exactly sure what the problem is, but two potential issues are the size of the file (2 GB), which is not insanely large but not trivially small either, and the fact that many of these tables have JPG images in them (which is why the file is 2 GB, for the most part). Also, the dump was taken on a Linux machine and I'm importing it into Windows; I'm not sure if that could add to the problems (I understand it shouldn't). Now, that binary garbage is why I think the images in the file might be a problem, but I've been able to import similar dumps from the same hosting company in the past, so I'm not sure what the issue might be. Also, trying to look into this file (and line 3118 in particular) is kind of impossible given its size (I'm not really handy with Linux command-line tools like grep, sed, etc.). The file might be corrupted, but I'm not exactly sure how to check. What I downloaded was a .gz file, which I "tested" with WinRAR and it says it looks OK (I'm assuming gz has some kind of CRC). If you can think of a better way to test it, I'd love to try that. Any ideas what could be going on / how to get past this error? I'm not very attached to the data in particular, since I just want this as a copy for dev, so if I have to lose a few records, I'm fine with that, as long as the schema remains perfectly sound. Thanks! Daniel
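    Two hedged things to try, since a garbled "unknown host" error mid-import often means the client lost sync on an oversized row rather than real corruption: raise max_allowed_packet on both server and client, and, if you can re-export, ask for BLOBs as hex so the dump is pure ASCII. The file and database names are placeholders:

        # server side: set max_allowed_packet=512M under [mysqld] in my.ini, restart
        # client side, at import time:
        mysql --max-allowed-packet=512M -u root -p mydatabase < dump.sql

        # when re-exporting on the host (if the hosting company allows it):
        mysqldump --hex-blob --single-transaction mydatabase > dump.sql

    --hex-blob writes binary columns as 0x... literals, which survive any line-ending or charset mangling on the trip between Linux and Windows.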

    Read the article

  • Mount NFS subdirectory and still apply parent directory permissions

    - by Christophe Drevet
    An NFS server exports:

        /export/home   computers
        /export/cont1  computers

    On the filesystem, there are these permissions:

        $ ls -al /export/cont1
        drwxr-x---  6 root  group1 4096 2010-05-04 10:57 .
        drwxrwxrwx  5 root  root   4096 2010-05-07 14:52 ..
        drwxrwxrwx  2 root  root   4096 2010-05-06 20:33 .snapshot
        drwxr-xr-x  2 user1 group1 4096 2010-05-04 10:57 user1
        drwxr-xr-x  2 user2 group1 4096 2010-05-04 10:57 user2
        drwxr-xr-x  2 user3 group1 4096 2010-05-04 10:57 user3

    So user4, who is not in group1, can't access this directory or its subdirectories. Now, on their client machine, this user can do:

        $ sudo mount server:/export/cont1/user3 /mnt/temp

    and then access the directory without permissions on /export/cont1:

        $ id
        uid=7943(user4) gid=7943(user4) groupes=1189(group4)
        $ ls -al /mnt/temp/
        drwxr-xr-x 3 user3 group1 4096 2010-05-04 10:57 .
        drwxr-xr-x 7 root  root   4096 2010-05-04 11:02 ..
        -rw-r--r-- 1 user3 group1    6 2010-05-04 10:56 README

    Is there a way to apply the /export/cont1 permissions even if it is not mounted? The goal is to let users mount /home/user3 and only access it if they can access /export/cont1 on the NFS server. Said another way: how can I allow a machine to mount /export/cont1/user3 and still not allow user4 to access it? Maybe NFSv4 and Kerberos can help?
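    Since the question raises it: a hedged sketch of what the NFSv4 + Kerberos route could look like in /etc/exports. With sec=krb5, access is tied to the user's Kerberos ticket rather than the uid the client claims, so root on a client can no longer mount-and-impersonate; the paths and options are assumptions to adapt.

        # NFSv4 pseudo-root plus the real export, both Kerberos-only
        /export        *(sec=krb5,rw,sync,fsid=0,no_subtree_check)
        /export/cont1  *(sec=krb5,rw,sync,no_subtree_check)

    With plain AUTH_SYS, nothing enforced server-side can fully stop a client root from assuming user3's uid, which is the root of the behaviour described above.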

    Read the article

  • Dynamic forwarding with SOCKS5 proxy

    - by bh3244
    I'm building my own SOCKS5 client and HTTP library and am having trouble figuring out how things work with dynamic port forwarding. So far I can connect successfully with my SOCKS5 client, but from there on I am stuck. I am using the ssh -D command. Assuming I have my local machine "home" and my server "server", and I want to use "server" as the proxy for all connections, I understand I would type ssh -D "localport" "serverhostname" on my local machine "home". This command, as I understand it, has ssh accept connections speaking the SOCKS5 protocol. So now if I want to connect to google.com (74.125.224.72:80) and issue a GET for the front page, I assume I would send the SOCKS5 client request, the server would respond with a 0x00 "succeeded" reply, and from then on I am connected; I would send the HTTP GET request and the server would respond accordingly with the data. Now if I want to navigate to a different website, must I issue another SOCKS5 connection request for that site's IP/hostname? I'm confused whether that is how it's done, or whether there is a program listening on the local port of the "server" that handles outgoing and incoming data. To reiterate: do SOCKS5 proxies work by sending repeated SOCKS5 connection requests for different addresses, or is there just one connection to a local port on "server", with another program on "server" handling the outgoing connections to the internet by using that local port to send and receive data to/from "home"?
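    To the final question: there is no second program and no persistent per-site state on "server"; sshd itself acts as the SOCKS server, and every new destination is simply a new TCP connection from "home" to the local forwarded port, each carrying its own SOCKS5 greeting and CONNECT request. A hedged sketch of what that looks like in practice (port 1080 is an arbitrary choice):

        # on "home": open the dynamic tunnel
        ssh -D 1080 user@serverhostname

        # each of these makes a fresh connection to 127.0.0.1:1080 and performs
        # its own SOCKS5 handshake + CONNECT for the named host
        curl --socks5-hostname 127.0.0.1:1080 http://www.google.com/
        curl --socks5-hostname 127.0.0.1:1080 http://www.example.com/

    So yes: navigating to a different website means issuing another CONNECT over a new connection to the local port; ssh multiplexes all of them through the one encrypted channel to "server", which opens the actual outbound connections.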

    Read the article

  • Can I set up a test server and then transfer everything to a different production server?

    - by Justin
    Hello, I am going to be setting up a "real" server, but it's not being shipped for another week. I was planning on setting up most of the server's functionality using an extra workstation I have. I wanted to set up Windows Server 2003 or 2008, IIS, Terminal Services, the firewall, and antivirus on this regular machine. I'd also be installing software like WinZip and VMware that'll be used on the server. I can't ghost the machine as I've done in the past, because the motherboard/CPU/etc. will all be different. Is there any way to export all of the "server settings" or something like that, so I can move everything from test to production? Is there any software out there, open-source or not, that does something like this? Some things will have to wait, such as setting up the file server completely in its RAID configuration, but I'd like to get the simple server stuff and network setup out of the way. Has anyone done this before? Thanks in advance! Justin
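    There is no supported way to export "all server settings" wholesale to different hardware, but a hedged sketch of the closest supported route: build everything on the test box, then generalize the install with sysprep so the image can be laid down on the new hardware when it arrives (path as shipped with Windows Server 2008; on 2003, sysprep comes separately in deploy.cab):

        :: removes machine-specific SIDs and driver bindings so the
        :: image survives a move to different hardware
        C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown

    Then capture the disk with your imaging tool of choice and apply it to the real server. Individual roles can also be exported on their own (e.g. appcmd for IIS7 site configs), which may be less risky than moving the whole OS.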

    Read the article

  • Why won't Vyatta allow SMTP through my firewall?

    - by Solignis
    I am setting up a Vyatta router on VMware ESXi, but I seem to have hit a major snag: I could not get my firewall and NAT to work correctly. I am not sure what was wrong with NAT, but it "seems" to be working now. The firewall, however, is not allowing traffic from my WAN interface (eth0) to my LAN (eth1). I can confirm it's the firewall because I disabled all firewall rules and everything worked with just NAT. If I put the firewalls (WAN and LAN) back in place, nothing can get through to port 25. I am not really sure what the issue could be. I am using pretty basic firewall rules; I wrote the rules while looking at the Vyatta docs, so unless there is something odd with the documentation they "should" be working. Here are my NAT rules so far:

        vyatta@gateway# show service nat
        rule 20 {
            description "Zimbra SNAT #1"
            outbound-interface eth0
            outside-address {
                address 74.XXX.XXX.XXX
            }
            source {
                address 10.0.0.17
            }
            type source
        }
        rule 21 {
            description "Zimbra SMTP #1"
            destination {
                address 74.XXX.XXX.XXX
                port 25
            }
            inbound-interface eth0
            inside-address {
                address 10.0.0.17
            }
            protocol tcp
            type destination
        }
        rule 100 {
            description "Default LAN -> WAN"
            outbound-interface eth0
            outside-address {
                address 74.XXX.XXX.XXX
            }
            source {
                address 10.0.0.0/24
            }
            type source
        }

    Then here are my firewall rules; this is where I believe the problem is:

        vyatta@gateway# show firewall
        all-ping enable
        broadcast-ping disable
        conntrack-expect-table-size 4096
        conntrack-hash-size 4096
        conntrack-table-size 32768
        conntrack-tcp-loose enable
        ipv6-receive-redirects disable
        ipv6-src-route disable
        ip-src-route disable
        log-martians enable
        name LAN_in {
            rule 100 {
                action accept
                description "Default LAN -> any"
                protocol all
                source {
                    address 10.0.0.0/24
                }
            }
        }
        name LAN_out {
        }
        name LOCAL {
            rule 100 {
                action accept
                state {
                    established enable
                }
            }
        }
        name WAN_in {
            rule 20 {
                action accept
                description "Allow SMTP connections to MX01"
                destination {
                    address 74.XXX.XXX.XXX
                    port 25
                }
                protocol tcp
            }
            rule 100 {
                action accept
                description "Allow established connections back through"
                state {
                    established enable
                }
            }
        }
        name WAN_out {
        }
        receive-redirects disable
        send-redirects enable
        source-validation disable
        syn-cookies enable

    SIDENOTE: To test for open ports I have been using this website, http://www.yougetsignal.com/tools/open-ports/. It showed port 25 as open without the firewall rules and closed with the firewall rules.

    UPDATE: Just to see if the firewall was working properly, I made a rule to block SSH from the WAN interface. When I checked for port 22 on my primary WAN address it said the port was still open, even though I outright blocked it. Here is the rule I used:

        rule 21 {
            action reject
            destination {
                address 74.219.80.163
                port 22
            }
            protocol tcp
        }

    So now I am convinced either I am doing something wrong or the firewall is not working like it should.
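    One thing worth checking (a hedged guess, not a confirmed fix): in Vyatta, destination NAT is applied before inbound firewall rule sets are evaluated, so by the time WAN_in sees the SMTP packet its destination address has already been rewritten to the inside 10.0.0.17, not the public 74.XXX address that rule 20 matches on. A sketch of the adjusted rule:

        set firewall name WAN_in rule 20 action accept
        set firewall name WAN_in rule 20 protocol tcp
        set firewall name WAN_in rule 20 destination address 10.0.0.17
        set firewall name WAN_in rule 20 destination port 25
        commit

    The same ordering could explain the SSH test: a reject rule matching the pre-NAT public address may never be hit. It is also worth confirming each rule set is actually attached to its interface (show interfaces ethernet eth0 firewall).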

    Read the article

  • Why would Windows Task Scheduler spawn multiple instances of the same task that run into each other?

    - by swagner88
    Overview: I use Windows Task Scheduler to run automated tasks. Occasionally I will see that a task has randomly failed to perform its duties. When I check Task Scheduler to see what has occurred in the history log, I see that for some reason, when the tasks are triggered on their schedules, they spawn several instances of themselves simultaneously, which turns into a train wreck for the task: it either kills the other instances and tries to run the "first" one, or it does not run at all because it believes another instance of itself is already running. Sometimes this occurs in the same tasks, and occasionally it happens with others. The fix is just to end all instances and start the task manually. Question: Why would one single task with one single schedule decide to spawn multiple instances of itself simultaneously? Note: I've got a separate user account set to run the tasks instead of myself. That user is indeed an admin on the machine that runs the tasks, and the tasks are set to run whether or not the user is logged on. Also, the machine is Windows Server 2008 R2.
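    While hunting the root cause, the overlap itself can be suppressed per task. In Task Scheduler 2.0 the policy lives in the task's settings; a hedged sketch of the relevant fragment if you export the task to XML, edit, and re-import:

        <Settings>
          <!-- don't launch a second copy while one is still running -->
          <MultipleInstancesPolicy>IgnoreNew</MultipleInstancesPolicy>
        </Settings>

    The same setting is exposed in the GUI on the task's Settings tab ("If the task is already running, then the following rule applies"). It doesn't explain the duplicate triggering, but it turns a train wreck into a skipped run.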

    Read the article

  • Cannot connect to Solaris Server when Oracle GoldenGate process uses the port

    - by Abdallah Ghrb
    I'm trying to test Oracle GoldenGate to replicate data between two databases on the same server, so I installed two databases and two GoldenGate homes on the same machine. The GoldenGate processes are started from each home and are responsible for the replication: the process from home 1 is configured on port 7809 and the process from home 2 on port 7810. For successful replication, processes started from GoldenGate home 1 should communicate with processes started from GoldenGate home 2, but for some reason this is not happening. The GoldenGate log file has the following error:

        OGG-01223 Oracle GoldenGate Capture for Oracle, exthrr.prm: TCP/IP error 131 (Connection reset by peer).

    "Googling" this error says that the connection occurred but the host terminated it. I tried to telnet the machine on the port in use and got the following:

        bash-3.00$ telnet 10.10.3.124 7810
        Trying 10.10.3.124...
        Connected to 10.10.3.124.
        Escape character is '^]'.
        Connection to 10.10.3.124 closed by foreign host.

    Here the communication occurs, but only for around 3 seconds before it is closed by the host, which matches the explanation of the error in the GoldenGate log file. I tried to change the port, but I still get the same error. The problem happens only when the GoldenGate process is using the port; when another process is using the same port, I can telnet successfully.

    Read the article

  • Successful login with iscsiadm on target still doesn't create block device

    - by Halfgaar
    I've set up an experiment to test iscsitarget and the initiator, which at some point worked. Later, I turned the setup back on and, much to my dismay, the initiator machine stopped creating block devices for its successful logins. As far as I know, I haven't changed anything on either machine. Some details:

        # iscsiadm -m node --login
        Logging in to [iface: default, target: iqn.2010-12.nl.ytec.arbiter:arbiter.lun1, portal: 10.0.0.1,3260]
        Logging in to [iface: default, target: iqn.2010-12.nl.ytec.arbiter:arbiter.lun2, portal: 10.0.0.1,3260]
        Login to [iface: default, target: iqn.2010-12.nl.ytec.arbiter:arbiter.lun1, portal: 10.0.0.1,3260]: successful
        Login to [iface: default, target: iqn.2010-12.nl.ytec.arbiter:arbiter.lun2, portal: 10.0.0.1,3260]: successful

    Sessions:

        # iscsiadm -m session
        tcp: [3] 10.0.0.1:3260,1 iqn.2010-12.nl.ytec.arbiter:arbiter.lun1
        tcp: [4] 10.0.0.1:3260,1 iqn.2010-12.nl.ytec.arbiter:arbiter.lun2

    Netstat:

        # netstat -n -p | grep 3260
        tcp    0    0 10.0.0.2:48719    10.0.0.1:3260    ESTABLISHED 1078/iscsid
        tcp    0    0 10.0.0.2:48718    10.0.0.1:3260    ESTABLISHED 1078/iscsid

    /var/log/syslog doesn't give errors:

        Jan 27 11:41:49 vmnode001 kernel: [  378.041749] scsi7 : iSCSI Initiator over TCP/IP
        Jan 27 11:41:49 vmnode001 kernel: [  378.044180] scsi8 : iSCSI Initiator over TCP/IP

    lsscsi doesn't show my devices:

        [0:0:1:0]    cd/dvd  TSSTcorp DVD-ROM TS-L333A   D100  /dev/sr0
        [4:0:0:0]    disk    ATA      Hitachi HUA72105   A74A  -
        [4:0:1:0]    disk    ATA      Hitachi HUA72105   A74A  -
        [4:1:0:0]    disk    Dell     VIRTUAL DISK       1028  /dev/sda

    And there are no block devices in /dev for it:

        # ls -1 /dev/sd*
        /dev/sda
        /dev/sda1
        /dev/sda2
        /dev/sda3
        /dev/sda4

    I tried loading all the SCSI kernel modules I could find, but that doesn't seem to be the problem. I really don't get this; it used to work. I found people with similar problems (here and here) but no solution. The initiator is Debian Squeeze (testing), the target is Debian Lenny (stable). iscsitarget is 0.4.16+svn162-3.1+lenny1; open-iscsi (initiator) is 2.0.871.3-2squeeze1. Target kernel: 2.6.26-2-amd64; initiator kernel: 2.6.32-5-amd64.
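    A hedged first step when sessions exist but no disks appear: force a LUN rescan of the live sessions rather than logging in again. With open-iscsi 2.0.871 the following should work:

        # rescan every open session for new or changed LUNs
        iscsiadm -m session --rescan
        # or target a single session by the sid shown in the listing above ([3], [4])
        iscsiadm -m session -r 3 --rescan

    If the rescan produces nothing, comparing the target-side LUN definitions (/etc/ietd.conf on the Lenny box) against what the initiator logged into would be the next check; a session can log in cleanly to a target that currently exports no LUNs.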

    Read the article

  • Cannot access drive in Windows 7 after scandisk lockup, but can in safe mode

    - by Matt Thompson
    I ran scandisk on my external USB drive due to the inability to delete a few files. Windows asked me if I wanted to unmount the drive before the scan, warning me that it would be unusable until the scan was finished, and I said yes. During the scan, my machine locked up, and I was forced to reboot it. When it came up, I was unable to access the drive, getting an error that "L: is not accessible, access is denied". Computer Management sees the drive, and it shows the proper amount of disk space filled. I booted into safe mode and can access the drive with no problems, though I noticed that in Explorer all the folders have locks on them. I booted back into Windows, but still could not access the drive, getting the same error as above. However, if I right-click on the drive, select Properties, and go to Customize, then in the folder pictures area select Choose File, a window opens up that shows the root of the directory, with all the folders accessible - but again, each icon is a folder with a lock on it. I can even copy files from the drive to another. So the files are not gone, and Windows can obviously access the drive no matter what it thinks, so there has to be a problem with the flag Windows put on the drive when it ran the original scan that failed. I was able to run a scan both in safe mode with no problems and in Windows. In Windows, I received the "cannot access" error the first time I ran scandisk on it, but if I try again, it works fine. Any ideas on how to clear the flag that Windows set, so I can access the drive normally again?
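    Assuming the "flag" is really just ownership/ACL state left behind by the interrupted scan, a hedged sketch from an elevated command prompt (L: as above):

        :: retake ownership of the volume, recursively, answering yes to prompts
        takeown /f L:\ /r /d y
        :: then reset the ACLs on everything back to inherited defaults
        icacls L:\ /reset /t /c

    If the original chkdsk never finished, letting it run to completion first (chkdsk L: /f from the same elevated prompt) would be safer than fighting the permissions, since a half-repaired volume can keep re-locking itself.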

    Read the article

  • How to run commands when Mac is idle and when it resumes

    - by tig
    I want to run a script when my Mac goes idle (for example after 5 minutes; screen-saver start time is also OK) and another when it resumes from the idle state. I know that I can write a daemon using NSDistributedNotificationCenter with com.apple.screenIsLocked and com.apple.screenIsUnlocked, but I hope there is already a solution that doesn't require writing a new daemon. I need this, for example, to turn Transmission's speed limit on and off (as it is sometimes hard to work while it's hashing/downloading at full speed).
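    A hedged sketch that avoids writing any daemon: poll the system idle time from a script (run via cron or launchd) and flip Transmission's alternate speed limit over its remote interface. The RPC port and the 300-second threshold are assumptions:

        #!/bin/bash
        # HIDIdleTime is reported in nanoseconds; convert to whole seconds
        idle=$(ioreg -c IOHIDSystem | awk '/HIDIdleTime/ {print int($NF/1000000000); exit}')
        if [ "$idle" -ge 300 ]; then
            # machine is idle: lift the speed limit
            transmission-remote localhost:9091 --no-alt-speed
        else
            # machine is in use: throttle
            transmission-remote localhost:9091 --alt-speed
        fi

    transmission-remote needs the web/RPC interface enabled in Transmission's preferences; --alt-speed toggles the same "turtle mode" limits you would otherwise click in the UI.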

    Read the article

  • Incorrect directory permissions with OpenSSH on Cygwin on Windows Server 2008 SP2

    - by Davy Brion
    I ran into a weird directory permission problem when logged in to a Win2008SP2 (not R2) server through SSH. When I open a local Cygwin shell on the server, I can do this:

        myUser@myServer ~
        $ cd /cygdrive/c/Windows/System32/inetsrv/

        myUser@myServer /cygdrive/c/Windows/System32/inetsrv
        $ cd config

        myUser@myServer /cygdrive/c/Windows/System32/inetsrv/config
        $

    I have no issues accessing the 'config' directory when using a local Cygwin shell. 'myUser' has all necessary permissions to access the directory as well; in fact, 'myUser' is a local administrator on the machine. Listing the permissions of the config folder through the local Cygwin shell shows the following output:

        4 drwx------+ 1 SYSTEM SYSTEM 0 Aug  2 09:38 config

    But when I log into the server with an SSH client (in this case PuTTY), I run into the following problem:

        myUser@myServer ~
        $ cd /cygdrive/c/Windows/System32/inetsrv/

        myUser@myServer /cygdrive/c/Windows/System32/inetsrv
        $ cd config
        -bash: cd: config: Permission denied

    It also doesn't list the proper permissions through SSH:

        0 drwxr-x--- 1 ???????? ???????? 0 Aug  2 09:38 config

    When I look at the running processes on the server with Task Manager (over a remote desktop connection), it shows that all bash.exe processes are running under the 'myUser' account, so I don't understand why I can't access that particular directory through SSH but have no problems accessing it in a local Cygwin shell. I'm using OpenSSH 5.9p1-1. I'm not sure what the Cygwin version is... I used the latest setup.exe (version 2.738) of Cygwin, but I can't seem to find any other Cygwin-related version number. I doubt that it's related to SSH/Cygwin though, because when I connect from the Win2008SP2 server to my local Win7 machine through SSH (using the same OpenSSH/Cygwin versions) I can access the /cygdrive/c/Windows/System32/inetsrv/config folder without issues. Does anyone have an idea what the issue could be?

    Read the article

  • Migrating to Windows Server 2008 R2 Domain Controllers - a few Questions/Issues

    - by Chris
    OK, so here's our setup: we have two Windows 2003 domain controllers, and I am trying to replace them with Windows 2008 R2. The Win2k3 servers are DC01 and DC02; the Win2k8 servers are DC1 and DC2. I prepared the Windows Server 2003 forest schema for a domain controller that runs Windows Server 2008 or Windows Server 2008 R2. Then, with both of the new servers up as member servers, I dcpromo'd DC1 using the advanced option and added it successfully to my existing domain. Its roles are GC, DNS and Active Directory Domain Services. I transferred the PDC, RID pool manager and Infrastructure master FSMO roles to the new DC (DC1). The Schema master and Domain naming master are still on the old DC (DC01). The first issue I'm encountering is that when I dcpromo the second DC (DC2), select "Replicate data over the network from an existing domain controller", and choose the new DC (DC1) to replicate from, I get the following error:

        Failed to identify the requested replica partner (dc1.xxx.org) as a valid domain controller with a machine account for (DC2$). This is likely due to either the machine account not being replicated to this domain controller because of replication latency or the domain controller not advertising the Active Directory Domain Services. Please consider retrying the operation with \dc01.xxx.org as the replica partner. "The server is unwilling to process the request."

    Is this because the Schema master and Domain naming master roles are still on the old DC (DC01)? And if so, if I transfer the Schema master and Domain naming master roles to DC1, what is the risk of breaking my AD? I'm a little paranoid because this process HAS to be transparent: ANY downtime or interruption will result in me getting a verbal ass-kicking from my IT director. Both of the new servers' DNS settings point to the old DNS servers (DC01 and DC02), not to themselves, by the way. Thanks in advance. -Chris
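    For what it's worth, transferring the last two roles between healthy DCs is a graceful, non-disruptive operation (unlike a seizure); replication continues throughout, so by itself it shouldn't break AD. A hedged sketch of the classic ntdsutil sequence, run on DC1:

        ntdsutil
        roles
        connections
        connect to server DC1
        quit
        transfer schema master
        transfer naming master
        quit
        quit

    That said, the dcpromo error reads like plain replication latency: retrying with \\dc01.xxx.org as the replica partner, as the message itself suggests, or waiting for DC2's machine account to replicate to DC1, may make the FSMO question moot.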

    Read the article

  • Ubuntu server or Debian server (to run C++ apps developed on Ubuntu)

    - by skyeagle
    I have written a number of C++ server-side daemons for my website, using my Ubuntu 9.10 dev machine. The C++ apps mentioned are "GUI-less" daemons (and the libraries used by the daemons). I am now about to host my website and need to decide whether to go with Debian server or Ubuntu server. In a nutshell, here is the situation: I developed on Ubuntu desktop because I preferred the friendlier GUI. I would like to deploy on Debian server because of the (perceived?) robustness of Debian server over Ubuntu server (I may be totally wrong here - and in fact, this is really what this question is all about). If Debian server is indeed more robust than Ubuntu server, then I have no choice but to go with Debian server - BUT will my Ubuntu-developed C++ apps run on the server, or do I need to recompile them there? (I'd HATE to have to do that, because I want to keep the server machine clean and light - no GUI, no dev tools, etc.) This last question is really about binary compatibility between Ubuntu and Debian. I want the server to be robust, secure and stable, and to simply act as a server (i.e. LAMP and very little else - no GUI, etc.). Given that requirement, and the fact that I need to run my C++ apps (developed on Ubuntu 9.10), I need advice on which OS to choose for the server. Ideally, any advice will be backed with a reason. I am particularly interested in hearing from people who have been in an identical situation, or have done something similar.
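    Binary compatibility here mostly reduces to whether the Debian server carries the daemons' shared libraries at compatible versions, glibc being the usual sticking point since Ubuntu 9.10 tends to ship a newer one than Debian stable. A hedged check you can run before committing to either OS, with mydaemon as a placeholder for your binary:

        # which shared libraries does the binary want?
        ldd ./mydaemon
        # which glibc symbol versions does it actually require?
        objdump -T ./mydaemon | grep -o 'GLIBC_[0-9.]*' | sort -u

    If the highest GLIBC_x.y listed is available on the target (running /lib/libc.so.6 prints its version), the binary should run unmodified; statically linking the daemons, or building them in a chroot of the server's distribution, are the usual escapes if it isn't.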

    Read the article

  • VirtualBox communication between a Linux guest and a Windows 7 host

    - by J. Otto Tennant
    VirtualBox is running on Windows 7 as the host, with the two modifications installed (one is called Guest Additions; I don't remember the other). The virtual machine has "bridged" networking selected. I have Samba set up on the Linux guest machine (now, the problem may be here; it has been three or four years since I last did this). Neither guest nor host sees the other. From the Windows 7 command prompt, the IP address of the Linux guest pings. The IP address of another computer (a separate Windows 7 machine on the wireless network) pings from the Linux guest. (I have no idea what IP address the Windows 7 host itself has; the output of "netstat" does not seem to be useful.) So it seems to me that something should be working. The only workgroup on the LAN is inventively named WORKGROUP, so SMB4K should be seeing something. There must be a simple setup step that I am missing. (FWIW, there are two processes running smbd and no process running nmbd, yet YaST says that nmbd is set to run. I am not sure what this means.)
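    Given that pings work both ways, this smells like name resolution/browsing rather than connectivity, and browsing is nmbd's job; two smbd processes with no nmbd matches the symptom of nothing seeing anything by name. A few hedged checks from the Linux guest (the IP is a placeholder for the Windows 7 machine):

        # is anything listening on the NetBIOS name/datagram ports? nmbd should own 137/138
        sudo netstat -lunp | egrep ':13[78]'
        # start it if absent (init script name varies by distro: nmb on openSUSE, nmbd elsewhere)
        sudo /etc/init.d/nmb start
        # then test a direct NetBIOS node status query against the Windows 7 box
        nmblookup -A 192.168.1.20
        # and bypass browsing entirely with a direct connection by IP
        smbclient -L //192.168.1.20 -U yourwindowsuser

    If smbclient by IP works while browsing doesn't, the shares are fine and only the name/browse layer (nmbd, plus the Windows 7 firewall's file-sharing exceptions) needs fixing.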

    Read the article
