Search Results

Search found 66534 results on 2662 pages for 'document set'.

  • squid will not listen on specific IP

    - by Chris
    I have a public subnet with Squid installed. Let's say my subnet runs from 1.2.3.2 to 1.2.4.254. Either of the following settings works:

        http_port 50000
        http_port 1.2.3.2:50000

    But when I set:

        http_port 1.2.4.37:50000

    it does not work. It only works with the first public IP from my subnet. The error I get when it fails is the following:

        commBind: Cannot bind socket FD 10 to 1.2.4.37:50000: (99) Cannot assign requested address

    What is wrong with my configuration? @Zoredache: I have a dedicated server, and the whole subnet is defined as an IP range. When I use only the port, or the first IP, running netstat -ntlp | grep 50000 gives me:

        tcp 0 0 0.0.0.0:50000 0.0.0.0:* LISTEN 25532/(squid)

    When I use the specific IP, the netstat query shows nothing, because the server does not start. I don't want Squid to listen on all IPs, or on the first one; it should listen on one specific IP only.
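
    Errno 99 (EADDRNOTAVAIL) at bind time normally means the address is not assigned to any local interface, so a first check could look like this (a hedged sketch, assuming the public subnet sits on eth0):

        # does the kernel know about the address Squid should bind to?
        ip addr show | grep 1.2.4.37

        # if it is missing, add it to the interface carrying the public subnet
        ip addr add 1.2.4.37/24 dev eth0

    A routed "IP range" on a dedicated server often means the extra addresses are never configured on the interface itself, which would produce exactly this bind failure.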

  • Mac OS X: which folders should ClamXav Sentry watch?

    - by trolle3000
    I'm using ClamXav on my Mac. I've read this, and I am aware of the whole Macs-need-no-AV-but-they-do-anyway discussion. I guess that's why I would feel like a real jerk if I somehow managed to compromise my system! So ClamXav has been downloaded and ClamXav Sentry set up to start on log-in, but it doesn't really do anything until you tell it to. Specifically, you have to tell it which folders to watch for viruses, so I'm wondering: where are good places to look? Currently it's set up to watch the following places.

    In my home folder:

        ~/Downloads
        ~/Library/Caches
        ~/Library/Contextual Menu Items
        ~/Library/Cookies
        ~/Library/Internet Plug-Ins
        ~/Library/LaunchAgents

    In my system folder:

        /Library/Application Support
        /Library/Caches
        /Library/Contextual Menu Items
        /Library/Cookies
        /Library/Internet Plug-Ins
        /Library/LaunchAgents
        /Library/LaunchDaemons
        /Library/StartupItems

    Basically, this is 100% conjecture. All (or most of) these folders have something to do with the Internet or with things that start up automatically, so I'm guessing that's where viruses go. But still, the question: which folders should ClamXav Sentry watch, if any? FYI, I'm not using any mail applications, but please include those in your answer for anyone who might be interested.

  • FreeBSD secondary group not allowing folder deletion

    - by Jarrod Juleff
    TL;DR: I have a user that belongs to a group as a secondary group. This user can delete files with 664 permissions through that secondary group, but not directories with 775 permissions.

    Details: I have a user, let's call him ftpuser, that I use to upload and download files to my dev box. The user's primary group is "ftp" and he is also in the group "www" as a secondary group. My web server runs as user www and group www, and I have proftpd (also running as www:www) configured to drop all files into the needed directories as www:www (for file ownership), with permissions 664 on files and 775 on directories.

    My problem (tried with two FTP clients): the FTP client can delete the files, but not the folders. Filezilla returns "550 permission denied". The owner-only-delete flag (sticky bit) is not set, and I've triple-checked the permissions; they are indeed 775. It's driving me nuts to have to log into my server to manually delete folders every time. Some of the folders and files are created by one of my PHP scripts, but the permissions are set properly when I check the files' properties. Directory and file creation works phenomenally well; I can delete files, just not directories.

    Setup:

        FreeBSD 9.0 running in VirtualBox (32-bit all the way around)
        proftpd (running as www:www) as the FTP server (tried both Dreamweaver and Filezilla as clients)
        Basic AMP setup (Apache, MySQL, and PHP)
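
    Deleting a directory entry needs write and execute permission on the parent directory, so a quick sanity pass might look like this (a diagnostic sketch, not from the original post; the paths are hypothetical placeholders):

        # confirm the secondary group really is attached to the account
        id ftpuser

        # check owner, group and mode of the parent of a folder that refuses to delete
        ls -ld /path/to/parent /path/to/parent/stubborn-folder

        # reproduce outside FTP (only if the account has a usable shell):
        # if this succeeds, suspect proftpd not picking up supplementary
        # groups rather than the filesystem itself
        su -m ftpuser -c 'rmdir /path/to/parent/stubborn-folder'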

  • Connecting to MySQL Server from PHP Command Line (MAMP)

    - by Austin White
    First of all, I'm using Mac OS X 10.6, MAMP 1.9, PHP 5.3.4, and MySQL 5.1.44. I'm in the process of setting up a video encoding service for a site using Chris Boulton's PHP-Resque and Redis. Once the worker process is fired and the videos have been encoded, I need to save their locations to a MySQL database. The PHP script is being run from the shell, so that is where the issue begins. I import the MySQL settings, and when it attempts to connect I get the following errors:

        Warning: mysqli::mysqli(): php_network_getaddresses: getaddrinfo failed: nodename nor servname provided, or not known in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 24
        Warning: mysqli::mysqli(): [2002] php_network_getaddresses: getaddrinfo failed: nodename nor servn (trying to connect via tcp://MYSQL_SERVER:3306) in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 24
        Warning: mysqli::mysqli(): (HY000/2002): php_network_getaddresses: getaddrinfo failed: nodename nor servname provided, or not known in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 24
        Warning: mysqli::set_charset(): Couldn't fetch MySQLi_Extended in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 32

    I realize the error occurs because it's trying to connect to tcp://MYSQL_SERVER:3306, while MySQL is on port 8889. I've been reading about Mac OS X and MAMP errors regarding mysql.sock, and I've gone through multiple forums and tried various fixes, but none have worked. I've tried:

        PATH=/Applications/MAMP/Library/bin/:/Applications/MAMP/bin/php5.3/bin/:/opt/local/bin:/opt/local/sbin:$PATH

    and

        sudo ln -s /Applications/MAMP/tmp/mysql/mysql.sock /tmp/mysql.sock

    but neither has worked. I even ran a search on my machine for "3306" to find where it's being set, but because that's the normal default, I'm guessing it's not being set explicitly. Any clues on how to fix this rather challenging error?
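
    Since MAMP's MySQL listens on 8889 (and on its own socket), a quick way to confirm which endpoint is reachable from the same shell the script runs in is MAMP's bundled client (a hedged sketch; the credentials are placeholders):

        # TCP on MAMP's port
        /Applications/MAMP/Library/bin/mysql --host=127.0.0.1 --port=8889 -u root -p

        # or via MAMP's socket
        /Applications/MAMP/Library/bin/mysql --socket=/Applications/MAMP/tmp/mysql/mysql.sock -u root -p

    Whichever of the two works can then be passed explicitly to the mysqli constructor (host, user, password, database, port, socket) instead of relying on the MYSQL_SERVER name and the default 3306.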

  • Cannot send email outside of network using Postfix

    - by infmz
    I've set up an Ubuntu server with Request Tracker following this guide (the section about inbound mail is the relevant one). However, while I'm able to send mail to other users within the network/domain, I cannot seem to reach beyond it, such as my personal accounts etc. I have no idea what is causing this; I thought all it takes is for the system to relay mail through our Exchange server and deliver it the same way, but that hasn't been the case.

    I have found another server set up in a similar fashion (CentOS 5, Request Tracker, but using Sendmail); however, it is a dated server and whoever built it has kindly left no documentation on how it works, making it a pain to use as a reference system! :) At one point I was told I need to set up a relay between the local server's email address and our AD server, but this didn't seem to work. Sorry, I know next to nothing about mail servers, and my colleagues know nothing about Linux, so it's a hard one for me. Thank you!

    EDIT: Result of postconf -n, with details masked =)

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        inet_interfaces = all
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        mydestination = myhost.mydomain.com, localhost.mydomain.com, , localhost
        myhostname = myhost.mydomain.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        readme_directory = no
        recipient_delimiter = +
        relayhost = EXCHANGE IP
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes

    Sample log message:

        Sep 4 12:32:05 theedgesupport postfix/smtp[9152]: 2147B200B99: to=<[email protected]>, relay= RELAY IP :25, delay=0.1, delays=0.05/0/0/0.04, dsn=5.7.1, status=bounced (host HOST IP said: 550 5.7.1 Unable to relay for [email protected] (in reply to RCPT TO command))
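
    The bounce is coming from the Exchange side: "550 5.7.1 Unable to relay" means the relayhost accepted the connection but refuses to forward mail for external domains from this machine. A manual test against the relay (a sketch; the addresses are placeholders) shows whether relay permission is the problem:

        telnet EXCHANGE_IP 25
        EHLO myhost.mydomain.com
        MAIL FROM:<rt@mydomain.com>
        RCPT TO:<someone@example.org>
        QUIT

    If the RCPT TO for an external address is rejected with 550, the fix lives on the Exchange receive connector (grant relay permission to this server's IP), not in the Postfix configuration.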

  • Can't install SB750 RAID drivers in Windows 7 for two additional storage drives

    - by jf46
    Mobo: ASUS M4A79XTD (790X/SB750)
    OS: Windows 7 x64

    I currently have SATA 1-4 set to RAID and SATA 5-6 set to IDE. I have an SSD connected to SATA5 with Windows 7 installed on it, and that works fine. I also have configured a RAID 1 array of two 1TB HDDs, connected to SATA 1 and 2. These don't show up in Windows, and I'm having trouble getting the RAID driver installed. I even tried booting from the Windows DVD and repairing or installing Windows, but when I navigated to the relevant .sys files on my motherboard's driver CD, Windows setup told me that the files in question weren't relevant to my hardware.

    To be clear: I'm not trying to install Windows on a RAID. I have Windows installed on a separate disk on a separate SATA controller. I just want to get the SB750 RAID drivers installed so that the Windows disk utility can see my RAID 1 array, which is composed of two other disks. Do I need to wipe my SSD and reinstall Windows to get the RAID driver installed? That seems kind of ridiculous, and given what I described above, I'm not even sure it would work. Any help or guidance would be appreciated - thanks!

    Edit: Also, I've copied the driver files from the mobo CD into system32 and rebooted, no luck. Then I changed

        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\iaStorV\Start

    from 3 to 0, and that didn't work either.

  • Security updates in CentOS: which way is it?

    - by user119720
    Recently something has been bothering me regarding my Linux CentOS box. A client asked me to set up a CentOS machine in their environment to work as a server. One of their requirements is that the setup be as secure as possible. Everything is covered except security updates inside CentOS. So my questions are as follows:

    1. How do I apply the latest security patches and bug fixes in CentOS? While researching, I was told I can get security updates by running yum install yum-security, but after installing this plugin there seems to be no output from this method. It's as if the command no longer works.

    2. Can I apply the security patches through RPM packages? I couldn't find any site to download security patches, enhancements or bug fixes for CentOS, but I know that CentOS releases these updates through their announcements here. There is just a lack of documentation on how to apply these updates to my CentOS installation. For now the only way I know is to run yum update.

    I am hoping someone can help me clarify these matters. Thanks.
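
    For reference, the security-plugin workflow looks like this (a sketch of the standard commands on CentOS/RHEL 5). The catch, which likely explains the empty output: CentOS base and updates repositories do not ship the updateinfo security metadata the plugin relies on (that metadata is part of Red Hat's channels), so on stock CentOS these commands typically report nothing and running a plain yum update is the usual practice:

        # CentOS 5 name of the plugin
        yum install yum-security

        # list pending security updates (works on RHEL; empty against stock CentOS repos)
        yum list-security

        # apply only security updates
        yum update --security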

  • IIS FTP error: 426 Connection closed; transfer aborted.

    - by Jiaoziren
    I have an IIS FTP server set up on Windows 2003 SP2 (S1). Every day in the early morning, a script on another server (S2) runs and initiates an FTP transfer pulling log files from S1 to S2. The FTP client we're using is the built-in FTP.exe of Windows 2000 on S2. Recently we replaced S1 with a new server but kept the IP address; there are multiple IP addresses on the new S1. Ever since the new S1 was put in place, '426 Connection closed; transfer aborted.' errors have been occurring randomly. The log indicates that the transfer starts OK but the file cannot be transferred completely, as per the log below:

        mget access*.log
        200 Type set to A.
        200 PORT command successful.
        150 Opening ASCII mode data connection for access02232010.log(205777167 bytes).
        426 Connection closed; transfer aborted.
        ftp: 20454832 bytes received in 283.95Seconds 72.04Kbytes/sec.

    The firewall monitor suggested that the connection was set up in passive mode; however, I've been told that MS FTP.exe doesn't support passive mode, though I can see the 'entering passive mode' response from the server when typing 'quote pasv'. My network admin has told me to try the transfer in active mode, but I don't know how to force active mode on the client side. It's getting really frustrating. I wish someone here with the right knowledge/experience could shed some light on this. Cheers.

  • Problems installing Adobe Premiere Elements 8 Content features

    - by Walt Maken
    I recently purchased Adobe Photoshop & Premiere Elements 8. Photoshop Elements 8 installed fine, along with the additional material available via internet download. Premiere Elements 8 installed OK from the DVD. During the Premiere Elements 8 startup process the following message appears:

        A reduced set of content (Instant Movie Themes, Title and Menu Templates, etc.) has been installed. To install the full content set, please insert your Content DVD and run Setup.exe. If you do not have a Content DVD please visit http://www.adobe.com/go/pre_additional_downloads to download the content installer.

    Since I didn't have the Content DVD, I did the download, which took nearly 12 hours. The extraction appeared to complete at 100%, but then immediately gave the error message "A problem occurred while extracting some files. Check available space on your computer and the write privileges on the destination folder." Why would it show this error message if it had completed the extraction at 100%? What steps do I take now to get the content installed? Do I need to go through the 12-hour download again, or is there hopefully something I can do that makes downloading again unnecessary?

  • Password Policy seems to be ignored for new Domain on Windows Server 2008 R2

    - by Earl Sven
    I have set up a new Windows Server 2008 R2 domain controller and have attempted to configure the Default Domain Policy to permit all types of passwords. When I try to create a new (normal) user in the Active Directory Users and Computers application, I am prevented from doing so for password complexity/length reasons. The password policy options configured in the Default Domain Policy are not defined in the Default Domain Controllers Policy, but having run the Group Policy Modelling Wizard, these settings do not appear to be set for the Domain Controllers OU; should they not be inherited from the Default Domain Policy? Additionally, if I link the Default Domain Policy to the Domain Controllers OU, the Group Policy Modelling Wizard shows the expected values for complexity etc., but I still cannot create a new user with my desired password. The domain is running at the Windows Server 2008 R2 functional level. Any thoughts? Thanks!

    Update: here is the Account Policies/Password Policy section from the GPM Wizard:

        Policy                            Value                    Winning GPO
        Enforce password history          0 passwords remembered   Default Domain Policy
        Maximum password age              0 days                   Default Domain Policy
        Minimum password age              0 days                   Default Domain Policy
        Minimum password length           0 characters             Default Domain Policy
        Passwords must meet complexity    Disabled                 Default Domain Policy

    These results were taken from running the GPM Wizard at the Domain Controllers OU. I have typed them out by hand, as the system I am working on is standalone; this is why the table does not exactly match the wording in the Wizard. Are there any other policies that could override the above? Thanks!
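
    Two quick checks can save a lot of GPO spelunking (standard tools, nothing specific to this domain): password policy for domain accounts is only ever read from GPOs linked at the domain level, and net accounts shows the policy actually in force after a refresh:

        gpupdate /force
        net accounts

    If net accounts still shows the 2008 defaults (7-character minimum, complexity enabled), the Default Domain Policy change has not taken effect, or another domain-linked GPO with higher precedence is winning.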

  • Http-Only cookies in WebLogic: what versions support them/how and why are they supported?

    - by John
    We want to make all cookies set by our webapp HTTP-only. I only have a basic understanding of the benefits of doing this, but I'm told by security people that it's a Good Thing (tm). Our app is running under JDK 1.6.0_05 and WebLogic 10.3.0. After way too much digging around Oracle's website for documentation, I've found good evidence that the first version of WebLogic to support HTTP-only cookies is 10.3.1; by "support," I mean the cookie-http-only deployment-descriptor element. Before we go about upgrading, it'd be nice to have these questions answered:

    1a) Is it accurate that WebLogic 10.3.1 is the first version to support HTTP-only cookies, and that we're out of luck with 10.3.0?

    1b) If we do indeed need to upgrade, is there an easy way to do so under Windows? I've heard people mention an "upgrade jar" that you just stick in the classpath, but I can't find any mention of this by Oracle. Does an easy way exist, or do we need to do a full install of the new version?

    2) What does the cookie-http-only deployment-descriptor element do when enabled? Will it ensure all cookies set by the application carry the HttpOnly attribute? Will it do more or less? Is there anything I'll have to do programmatically?

    3) Is there anything in general I should know about HTTP-only cookies, getting my web app to take advantage of them, or other security concerns?
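
    For what it's worth, on 10.3.1+ the element named above lives in the session-descriptor of weblogic.xml; a minimal sketch (verify the element and namespace against the 10.3.1 schema before relying on it):

        <?xml version="1.0" encoding="UTF-8"?>
        <weblogic-web-app xmlns="http://www.bea.com/ns/weblogic/weblogic-web-app">
            <session-descriptor>
                <cookie-http-only>true</cookie-http-only>
            </session-descriptor>
        </weblogic-web-app>

    Note that this governs the container's session cookie (JSESSIONID); cookies your own code sets would still need the HttpOnly attribute appended manually on 10.3.x, since the Servlet 2.5 Cookie API has no setHttpOnly().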

  • PostgreSQL continuous archiving not running archive_command

    - by Whatsit
    I've been trying to set up continuous archiving for a simple test PostgreSQL 9.0 database, as per the documentation. In postgresql.conf I've set:

        wal_level = archive
        archive_mode = on
        archive_command = 'touch /home/myusername/backup/testtouch'
        archive_timeout = 30s

    ...and restarted PostgreSQL. The file listed by touch never appears. I can manually run the touch command and it works as expected. If I try to create a backup, it waits forever for the archive_command. In psql:

        postgres=# SELECT pg_start_backup('touchtest');
         pg_start_backup
        -----------------
         0/14000020
        (1 row)

        postgres=# SELECT pg_stop_backup();
        NOTICE:  pg_stop_backup cleanup done, waiting for required WAL segments to be archived
        WARNING:  pg_stop_backup still waiting for all required WAL segments to be archived (60 seconds elapsed)
        HINT:  Check that your archive_command is executing properly.  pg_stop_backup can be cancelled safely, but the database backup will not be usable without all the WAL segments.

    What would cause this? How can I troubleshoot it? Additional info: running on CentOS 5.4; PostgreSQL 9.0.2 installed as root.
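
    For comparison, the documented form of archive_command copies each finished WAL segment using the %p (path) and %f (file name) placeholders; a sketch of the conventional setup, using the backup directory from above:

        wal_level = archive
        archive_mode = on
        archive_command = 'test ! -f /home/myusername/backup/%f && cp %p /home/myusername/backup/%f'
        archive_timeout = 30s

    If even a plain touch never runs, it is worth confirming that the edits landed in the postgresql.conf the server actually loads (SHOW config_file; in psql), that the server was restarted rather than reloaded (archive_mode requires a restart), and that the server log shows no archive_command failures; the command runs as the server's service account, which must be able to write to the target directory, and since this instance was installed as root, the ownership of that backup directory is worth a look.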

  • Use shared 404 page for virtual hosts in Nginx

    - by Choy
    I'd like to have a shared 404 page to use across my virtual hosts. The following is my setup: two sites, each with their own config file in /sites-available/ and /sites-enabled/:

        www.foo.com
        bar.foo.com

    The www directory is set up as:

        www/
            foo.com/
                index.html
            bar.foo.com/
                index.html
            shared/
                404.html

    Both config files in /sites-available are the same except for the root and server name:

        root /var/www/bar.foo.com;
        index index.html index.htm index.php;
        server_name bar.foo.com;

        location / {
            try_files $uri $uri/ /index.php;
        }

        error_page 404 /404.html;
        location = /404.html {
            root /var/www/shared;
        }

    I've tried the above code and also tried setting error_page 404 /var/www/shared/404.html (without the following location block). I've also double-checked to make sure my permissions are set to 775 for all folders and files in www. When I try to access a non-existent page, Nginx serves the respective index.php of the virtual host I'm trying to access. Can anyone point out what I'm doing wrong? Thanks!
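
    The symptom is consistent with the try_files fallback: a request for a missing file never generates a 404, because it is internally rewritten to /index.php first, so error_page never fires. A hedged sketch of one way around it while keeping the shared page:

        # serve static files; send real misses to the shared 404 page
        location / {
            try_files $uri $uri/ =404;
        }

        error_page 404 /404.html;
        location = /404.html {
            root /var/www/shared;
            internal;
        }

    If requests must still fall through to index.php (a front controller), the 404 decision has to be made in PHP instead, since nginx no longer sees one.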

  • How do I create a VBA macro that will copy data from an entry sheet into a summary sheet by date

    - by Mukkman
    I'm trying to create a macro that will copy data from a data entry sheet into a summary sheet. The entry sheet is going to be cleared daily, so I can't just use a formula referencing it. I want the user to be able to enter a date, run a macro, and have the macro copy the data from the entry sheet into the cells for the corresponding date on the summary sheet. I've looked around and found bits and pieces of how to do this, but I can't put it all together.

    Update: thanks to the information below I was able to find some additional data. I have a pretty crude macro that works if the user manually selects the correct cell. Now I just need to figure out how to automatically select the cell corresponding to the current date.

        Sub Update_Deposits()
        '
        ' Update_Deposits Macro
        '
            Dim selectedDate As String
            Dim rangeFound As Range
            selectedDate = Sheets("Summary Sheet").Range("F3")
            Set rangeFound = Sheets("Deposits").Cells.Find(CDate(selectedDate))

            Dim Total1 As Double
            Dim Total2 As Double
            Dim Total3 As Double
            Dim Total4 As Double
            Dim Total5 As Double

            Total1 = Sheets("Summary Sheet").Range("E6")
            Total2 = Sheets("Summary Sheet").Range("E7")
            Total3 = Sheets("Summary Sheet").Range("E8")
            Total4 = Sheets("Summary Sheet").Range("E9")
            Total5 = Sheets("Summary Sheet").Range("E10")

            If Not (rangeFound Is Nothing) Then
                rangeFound.Offset(0, 2) = Total1
                rangeFound.Offset(0, 3) = Total2
                rangeFound.Offset(0, 4) = Total3
                rangeFound.Offset(0, 6) = Total4
                rangeFound.Offset(0, 7) = Total5
            End If
        '
        End Sub

  • Multiple IPs on firewall, are these virtual interfaces or what?

    - by Jakobud
    We have 5 static IP addresses from our ISP:

        XXX.XXX.XXX.180
        XXX.XXX.XXX.181
        XXX.XXX.XXX.182
        XXX.XXX.XXX.183
        XXX.XXX.XXX.184

    On our firewall box, the NIC connected to our cable modem appears to have all 5 IP addresses set on it. A previous IT guy set this thing up, and I'm not sure exactly what he did. Are these virtual interfaces on this NIC, or what? Here is my ip addr output for that NIC:

        rwd0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
            inet XXX.XXX.XXX.180/24 brd XXX.XXX.XXX.186 scope global rwd0
            inet XXX.XXX.XXX.181/29 brd XXX.XXX.XXX.186 scope global rwd0:FWB9
            inet XXX.XXX.XXX.182/29 brd XXX.XXX.XXX.186 scope global secondary rwd0:FWB10
            inet XXX.XXX.XXX.183/29 brd XXX.XXX.XXX.186 scope global secondary rwd0:FWB11
            inet XXX.XXX.XXX.184/29 brd XXX.XXX.XXX.186 scope global secondary rwd0:FWB12
            inet6 fe80::250:8bff:fe61:5734/64 scope link
               valid_lft forever preferred_lft forever

    I'm a bit new to firewalls and networking, so I'm just trying to figure out what he had going on here. I know he used Firewall Builder to configure the iptables rules; maybe that has something to do with the "FWB" I see in those names? So my questions are:

    1. What is going on here: virtual interfaces, or something else?

    2. If we want to put a second firewall in parallel with this one, but only handling traffic to XXX.XXX.XXX.182, how do we get rid of the static XXX.XXX.XXX.182 address on this existing firewall box?
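
    Those rwd0:FWB* entries are secondary addresses (old-style IP aliases) on the one physical NIC; Firewall Builder creates them with its own labels, hence "FWB". Removing one is a single command (a sketch; if Firewall Builder manages the box, also remove the address from its firewall object so it does not come back on the next policy install):

        # drop the .182 alias from the NIC (run as root; keep the mask that matches)
        ip addr del XXX.XXX.XXX.182/29 dev rwd0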

  • Windows XP mounting USB drive to same letter as previously mapped network drive

    - by GAThrawn
    Why does Windows always mount a USB drive as the next drive letter after the last physical drive, even when that letter is already taken by a mapped network drive, and is there any way to improve this behaviour?

    What happens is that I tend to use a few different flash drives on my PC, as well as a Blackberry and a personal phone that mount as USB drives when I plug them in to charge. Being on a corporate PC, I also have a number of mapped network drives (some set by login script, some set as persistent mappings in my profile). When I first log in, I have drive letters like this:

        C: - local drive
        D: - DVD drive
        G: - login-script mapped drive
        J: - login-script mapped drive

    When I plug the Blackberry in, it mounts two drives (one for onboard storage, one for the SD card) as E: and F:. If I then plug in another USB drive, it mounts as G:, even though that letter is already taken by a network mapped drive. This leaves me with the following drives:

        C: - local drive
        D: - DVD drive
        E: - USB drive (Blackberry)
        F: - USB drive (Blackberry)
        G: - login-script mapped drive
        [G: - USB drive - mounted but not visible in Explorer or command prompt]
        J: - login-script mapped drive

    I then have to go into Disk Management, find the new USB drive mounted to G:, and re-assign it to another letter, e.g. Z:. Once this is done, AutoPlay detects it and throws up its normal dialog, and it's browsable in Explorer. While this is OK to do if you only use one or two USB drives and have admin access to your PC with your login account, it's a total pain in the proverbial if you regularly use a whole load of different USB devices, and corporate policy means your normal login only has User access to workstations, so you have to use a different account for any privileged action.

    I realize that one possible reason for this is the difference between hardware, which is mounted and assigned drive letters at the system level, and mapped drives, which are done at the user level. USB devices that are already plugged in before login are obviously mounted before Windows knows which network drives may be mapped. But if you plug a USB device in after you're fully logged in and drives are mapped, surely Windows must know which letters are available?
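
    Windows remembers a volume's letter once you assign one, so a one-time reassignment to a high letter usually makes the clash go away for that particular device. With admin rights, diskpart does it without the Disk Management GUI (a sketch; the volume number is hypothetical, so check the list first):

        list volume
        rem suppose the USB stick shows up as volume 5
        select volume 5
        assign letter=Z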

  • Windows service runs file locally but not on server

    - by Ben
    I created a simple Windows service in .NET which runs a file. When I run the service locally, I see the file running in Task Manager just fine. However, when I run the service on the server, it won't run the file. I've checked the path to the file, which is fine. I also checked the permissions on the folder and file, and they are fine as well. Also, no exceptions are thrown. Below is the code used to launch the process which runs the file. I posted this first on Stack Overflow, and some people thought it was a config issue, so I moved it here. Any ideas?

        try
        {
            // TODO: Add code here to start your service.
            eventLog1.WriteEntry("VirtualCameraService started");

            // Create an instance of the Process class responsible for starting the new process.
            System.Diagnostics.Process process1 = new System.Diagnostics.Process();

            // Set the directory where the file resides
            process1.StartInfo.WorkingDirectory = "C:\\VirtualCameraServiceSetup\\";

            // Set the file name of the file to be opened
            process1.StartInfo.FileName = "VirtualCameraServiceProject.avc";

            // Start the process
            process1.Start();
        }
        catch (Exception ex)
        {
            eventLog1.WriteEntry("VirtualCameraService exception - " + ex.InnerException);
        }
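
    One common culprit (an assumption here, since no exception is logged): a service runs in session 0 under a service account with no interactive desktop, so starting a document like an .avc file, which relies on a per-user file association to find its player, can behave differently from a run under your own login session. Checking which account the service uses is a cheap first step:

        sc qc VirtualCameraService

    If it runs as LocalSystem, try launching the associated application's executable explicitly (with the .avc file as an argument) rather than relying on the file association, and bear in mind the process will have no visible window on the server even when it does start.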

  • Problem with TL-R480T+ and static routes

    - by Globulopolis
    I have some questions about this router. Before starting, here is the configuration specified by my provider:

        WAN1 (VPN)
            IP      - 192.168.172.84
            Mask    - 255.255.255.0
            Gateway - 192.168.172.253
            DNS     - 195.110.6.7

        WAN2
            Dynamic IP
            DHCP - 168.120.1.34
            Mask - 255.255.255.0

        Router IP:   192.168.1.1
        Computer IP: 192.168.1.7

    Routes:

        route -p add 192.168.0.0 mask 255.255.0.0 192.168.172.253
        route -p add 195.110.6.0 mask 255.255.254.0 192.168.172.253
        route -p add 88.135.112.0 mask 255.255.240.0 192.168.172.253
        route -p add 178.219.160.0 mask 255.255.240.0 192.168.172.253

    For the first provider I need to provide these routes. Because the router does not support different routes for different WAN interfaces, I put them in "Static routes". But when I try to save them I get an error:

        Destination IP address can not be set in a same subnet with the WAN or LAN IP address.

    If I change the IPs to local ones like 192.168.x.x, the router tells me:

        Gateway must be set in a same subnet with WAN or LAN IP address.

    Changing the mask on the WAN1 interface to 255.255.0.0 doesn't help. Any ideas?

    PS: Or maybe I should just email TP-Link support?

  • subdomains to different VMs on one IP address

    - by efbenson
    I have a setup at home with an ESXi server, a FreeNAS server, and several Windows 7 clients, and I own the domain refactoringme.com. I set the @ and www records to my (current) IP address. I then forwarded port 80 to my local Win2k3 server on my Linksys router and used host-name matching to run the 5 test sites I have. That all works.

    Now I want to use the TurnKey machines and move to dedicated VM servers: one for a wiki, one for SVN, etc. So how do I get www.refactoringme.com to go to one internal IP address and wiki.refactoringme.com to a different internal IP address, while they both use the same external IP address?

    I added the additional record for wiki to my domain and pointed it to @. I figured it had to involve a real firewall, so I installed pfSense on a VM and set it on the DMZ of my Linksys. From this point I haven't had any luck. I thought the answer might be in the DNS Forwarder or in the Rules sections, but neither has worked. Am I doing it wrong, or am I on the right track but missing something? Thanks for all the help.
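
    A firewall (pfSense included) forwards by IP and port and never sees the HTTP Host header, so it cannot split www.refactoringme.com and wiki.refactoringme.com arriving on the same public IP:80. The usual approach is to port-forward :80 to a single reverse proxy that routes on hostname; a minimal nginx sketch (the internal addresses are hypothetical placeholders):

        server {
            listen 80;
            server_name wiki.refactoringme.com;
            location / {
                proxy_set_header Host $host;
                proxy_pass http://192.168.1.20;   # hypothetical wiki VM
            }
        }

        server {
            listen 80;
            server_name www.refactoringme.com;
            location / {
                proxy_set_header Host $host;
                proxy_pass http://192.168.1.21;   # hypothetical www VM
            }
        }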

  • Apache times out after 2 minutes, when Apache TimeOut is 1200

    - by Robert Gowland
    We have a setup where the browser makes an HTTP request to box A, which in turn makes an HTTP request to box B. What we're encountering is that box A waits two minutes for box B to respond, and then the user sees:

        The page cannot be displayed
        Explanation: There is a problem with the page you are trying to reach and it cannot be displayed.
        Try the following:
        * Refresh page: Search for the page again by clicking the Refresh button. The timeout may have occurred due to Internet congestion.
        * Check spelling: Check that you typed the Web page address correctly. The address may have been mistyped.
        * Access from a link: If there is a link to the page you are looking for, try accessing the page from that link.
        Technical Information (for support personnel)
        * Error Code: 404 Not Found. The requested item could not be located. (12028)

    Watching the logs on box B, we see that it takes 5 minutes to do the work requested. The problem is that the Apache timeouts on both boxes are set to 1200 (20 min), not 120 (2 min). Any ideas where to look?
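
    Two places worth checking, offered as hedged guesses rather than facts from the question: if box A forwards the request via mod_proxy, the proxy leg has its own timeout separate from the core Timeout directive, and 12028 in that error page is a WinINet error code, which hints that a Windows-based proxy appliance between the browser and box A may be imposing its own two-minute limit. A sketch for the box A side, if mod_proxy is in play:

        # mod_proxy's timeout defaults to the server Timeout, but can be pinned explicitly
        ProxyTimeout 1200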

  • postfix concurrency limit with round robin dns

    - by goose
    Take the following internal round-robin DNS setup:

        mymta.com. IN A 172.31.1.1
        mymta.com. IN A 172.31.1.2
        mymta.com. IN A 172.31.1.3
        mymta.com. IN A 172.31.1.4
        mymta.com. IN A 172.31.1.5
        mymta.com. IN A 172.31.1.6
        mymta.com. IN A 172.31.1.7
        mymta.com. IN A 172.31.1.8
        mymta.com. IN A 172.31.1.9
        mymta.com. IN A 172.31.1.10

    Now assume the following Postfix setup (assume these are the only tweaks from the defaults in the Debian package).

    main.cf:

        smtp_connection_cache_destinations = mymta.com
        smtp_connection_cache_reuse_limit = 750
        smtp_destination_concurrency_limit = 75

    transport:

        * :[mymta.com]

    I would expect 75 concurrent connections spread across the 10 A records I've set in DNS. However, I'm seeing more than a few hundred connections to mymta.com, and I'm wondering if Postfix is "smart" enough to set up 75 concurrent connections for each IP address. Thoughts?

  • OSX root user keeps re-enabling itself on reboot

    - by geodave
    Running Snow Leopard. Completely inexplicably, I seem to have enabled the OS X root user by accident. I honestly have no idea how it happened, but if memory serves I was looking at the login pane (with my two user accounts) when I must have hit something, and suddenly the two accounts were replaced by one that just said "Other..." Clicking the "Other..." account allows me to type a username and password, but neither of the normal two accounts would work. Since I never set a root password, it wouldn't let me in that way either. So I booted into single-user mode and ran these commands:

        /sbin/mount -uw /
        fsck -fy
        launchctl load /System/Library/LaunchDaemons/com.apple.DirectoryServices.plist
        dscl . -passwd /Users/root newpassword

    and that let me log in as root. Then I went to System Preferences > Accounts > Login Options, clicked Join, opened Directory Utility, and in its Edit menu clicked "Disable Root User". Great, I thought, back to normal. Except after rebooting I still only have the "Other..." account visible, and the root password I set beforehand doesn't work anymore! I have to reboot into single-user mode and go through the whole process again just to get back into the system (as root).

    How on Earth did I accidentally enable this? I didn't even know about Directory Utility before now. And most importantly, why the heck would it be re-enabling the root user on boot? Thanks in advance for any help!
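
    For reference, Apple ships a dedicated tool for this: once logged in as root (or from any admin account), the sketch below should toggle the root account off without going through Directory Utility. If something still re-enables root at boot, login-window settings and startup items are the next things to audit.

        # disable the root account (prompts for credentials)
        dsenableroot -d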

  • Mapped network drive on logout

    - by Robuust
    I'm using a script to keep a mapped network connection alive, but of course the mapped connection is gone when I log out. The point is that I'm running this on Windows Server 2008 R2, where I use Remote Desktop to log in on the administrator account. However, it should remain logged in and not remove the mapped connection, as this script takes care of not being logged out of MS Office 365 SharePoint. Is there a way to keep the mapped network location (L:) available after logout, so the script can run to maintain the connection?

        # Create an IE object and navigate to my SharePoint site
        $ie = New-Object -ComObject InternetExplorer.Application
        $ie.navigate('https://xxx.sharepoint.com/')

        # Don't need the object anymore, so close it to free up some memory
        $ie.Quit()

        # Just in case there was a problem with the WebClient service,
        # stop and start it. You could potentially remove this part if
        # you want; I like it because it takes out a step of
        # troubleshooting if I'm having problems.
        Stop-Service WebClient
        Start-Service WebClient

        # Set the $Drive variable: this tells the command which drive
        # letter to map. Change it to whatever you want (a drive that is
        # already mapped will be overwritten, so be careful).
        $Drive = "L:"

        # The drive destination can be whatever you want;
        # it has to be a document library or folder, of course.
        $DrvDest = "https://xxx.sharepoint.com/files/"

        # Create the object to map the network drive, then map it
        $net = New-Object -ComObject WScript.Network
        $net.mapnetworkdrive($Drive,$DrvDest)

        # That is the end of the script; schedule it with Task Scheduler
        # to run every so often and you should be set.
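
    Drive mappings live inside a logon session, so they cannot outlive logoff. One workaround (an assumption, not part of the original setup): remove the dependency on an interactive session by letting Task Scheduler run the keep-alive script under the account whether or not it is logged on, and have the script re-map L: at the start of each run. A sketch, with a hypothetical script path and schedule:

        schtasks /create /tn "KeepSharePointAlive" ^
            /tr "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\keepalive.ps1" ^
            /sc minute /mo 30 /ru administrator /rp *

    The /rp * switch prompts for the account's password when the task is created, which is what allows the task to run without an open session.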

  • Rsync works manually but not in cron, between Mac OS X Server and Linux CentOS

    - by Brady
    I have a working rsync setup between Mac OS X Server and Linux CentOS, when run manually in a terminal: I enter the rsync command, it asks for the password, I enter it, and off it goes, runs, and completes. Knowing that works, I set out to fully automate it via cron.

    First I create an SSH key by running this command on the Mac server:

        ssh-keygen -t dsa -b 1024 -f /Users/admin/Documents/Backup/rsync-key

    entering the password and then confirming it. I then copy the rsync-key.pub file across to the Linux server, place it in the rsync user's .ssh folder, and rename it to authorized_keys:

        /home/philosophy/.ssh/authorized_keys

    I then make sure the authorized_keys file is chmod 600 and the folder chmod 700. Next I set up a shell script for cron to run:

        #!/bin/bash
        RSYNC=/usr/bin/rsync
        SSH=/usr/bin/ssh
        KEY=/Users/admin/Documents/Backup/rsync-key
        RUSER=philosophy
        RHOST=example.com
        RPATH=data/
        LPATH="/Volumes/G Technology G Speed eS/Backup"
        $RSYNC -avz --delete --progress -e "$SSH -i $KEY" "$LPATH" $RUSER@$RHOST:$RPATH

    I give the shell file execute permissions and then add the following to the crontab using crontab -e:

        29 12 * * * /Users/admin/Documents/Backup/backup.sh

    After the above command should have run, I check the cron log file and get this and nothing else:

        Feb 21 12:29:00 fileserver /usr/sbin/cron[80598]: (admin) CMD (/Users/admin/Documents/Backup/backup.sh)

    So I assume everything ran as it should. But when I check the remote server, no files have been copied across. If I run backup.sh in a terminal as normal, it still prompts for a password, but this time through the Mac keychain rather than in the console window. With the keychain I can save the password so it doesn't ask again, but I'm sure this saved password isn't picked up when run from cron. This is where I assume rsync in cron is failing: it needs a password to connect, but I thought the whole idea of making SSH keys was to prevent the use of a password. Have I missed a step or done something wrong here? Thanks, Scott
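
    The detail that gives it away is entering a password during ssh-keygen: that sets a passphrase on the private key, which the keychain can supply interactively but cron cannot. A key meant for cron needs an empty passphrase; a sketch using the paths from above:

        # regenerate the key with no passphrase (-N "")
        ssh-keygen -t dsa -b 1024 -N "" -f /Users/admin/Documents/Backup/rsync-key

        # re-copy rsync-key.pub into /home/philosophy/.ssh/authorized_keys,
        # then confirm a login works with no prompt at all:
        ssh -i /Users/admin/Documents/Backup/rsync-key philosophy@example.com echo ok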

  • Windows 7 network performance tuning for LAN

    - by Hubert Kario
    I want to tune the Windows 7 TCP stack for speed in a LAN environment. A bit of background info: I've got a Citrix XenServer set up with Windows 2008 R2, Windows 7, and Debian Lenny with the Citrix kernel; the Windows machines have Tools installed. The iperf server process is running on a different host, also Debian Lenny. The servers are otherwise idle, and tests were repeated a few times to confirm the results.

    While testing with iperf, 2008 R2 can achieve around 600-700 Mbps with no tuning whatsoever, but I can't find any guide or set of parameters that will make Windows 7 achieve anything over 150 Mbps without changing the TCP window size via iperf's -w parameter. I tried setting netsh autotuning to disabled, experimental, normal, and highlyrestricted - no change. Changing congestionprovider doesn't do anything, and neither do rss and chimney. Setting all the available settings to the same values as on the Windows 2008 R2 host doesn't help.

    To summarize:

        Windows 2008 R2, default settings:             600-700 Mbps
        Debian, default settings:                      600 Mbps
        Windows 7, default settings:                   120 Mbps
        Windows 7, default settings, iperf -w 65536:   400-500 Mbps

    While I blame the missing 400 Mbps of performance on the crappy Realtek NIC in the XenServer host (I can do ~980 Mbps from my laptop to the iperf server), that doesn't explain why Windows 7 can't achieve good performance without manually tuning the window size at the application level. So, how do I tune Windows 7?
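
    For anyone comparing, the receive-side settings mentioned above are driven from an elevated prompt like this ("autotuning" above refers to autotuninglevel). One thing worth checking when changing autotuninglevel appears to do nothing: Windows 7's window-scaling heuristics can silently override the configured level, and disabling them is the usual counter-move:

        netsh interface tcp show global
        netsh interface tcp show heuristics
        netsh interface tcp set heuristics disabled
        netsh interface tcp set global autotuninglevel=normal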
