Search Results

Search found 22298 results on 892 pages for 'default'.


  • Overriding routes on Openvpn client, iproute, iptables2

    - by sarvavijJana
    I am looking for a way to route packets based on their destination ports, switching between the regular internet connection and an established OpenVPN tunnel. This is my configuration: an OpenVPN server (I have no control over it) and an OpenVPN client running Ubuntu, with wlan0 (192.168.1.111) as the internet-connected interface. Several routes are applied by the server when the OpenVPN connection comes up:

        /sbin/route add -net 207.126.92.3 netmask 255.255.255.255 gw 192.168.1.1
        /sbin/route add -net 0.0.0.0 netmask 128.0.0.0 gw 5.5.0.1
        /sbin/route add -net 128.0.0.0 netmask 128.0.0.0 gw 5.5.0.1

    I need to route packets according to their destination ports: for example, ports 80 and 443 should go into the VPN, and everything else should go directly to the ISP connection via 192.168.1.1. This is what I have used during my attempts:

        iptables -A OUTPUT -t mangle -p tcp -m multiport ! --dports 80,443 -j MARK --set-xmark 0x1/0xffffffff
        ip rule add fwmark 0x1 table 100
        ip route add default via 192.168.1.1 table 100

    I was trying to apply these settings using the up/down options of the OpenVPN client configuration. All my attempts resulted in successful packet delivery and responses only via the VPN tunnel. For packets routed around the VPN I used some SNAT to get the proper source address:

        iptables -A POSTROUTING -t nat -o $IF -p tcp -m multiport --dports 80,443 -j SNAT --to $IF_IP

    These connections failed at the SYN-ACK stage, like this:

        "70","192.168.1.111","X.X.X.X","TCP","34314 > 81 [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=18664016 TSER=0 WS=7"
        "71","X.X.X.X","192.168.1.111","TCP","81 > 34314 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1428 TSV=531584430 TSER=18654692 WS=5"
        "72","X.X.X.X","192.168.1.111","TCP","81 > 34314 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1428 TSV=531584779 TSER=18654692 WS=5"
        "73","192.168.1.111","X.X.X.X","TCP","34343 > 81 [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=18673732 TSER=0 WS=7"

    I hope someone has already overcome such a situation or knows a better approach to fulfill these requirements. Please give me some good advice or a working solution.
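
    A minimal sketch of the split-routing approach described above, assuming the interface and addresses from the question (wlan0, 192.168.1.111, ISP gateway 192.168.1.1); the mark value and table number are arbitrary, and relaxing reverse-path filtering is usually needed so replies that arrive on wlan0 are not discarded:

        # mark everything except 80/443 so it bypasses the VPN's default route
        iptables -t mangle -A OUTPUT -p tcp -m multiport ! --dports 80,443 -j MARK --set-mark 0x1
        # send marked traffic through the ISP gateway in a dedicated routing table
        ip rule add fwmark 0x1 table 100
        ip route add default via 192.168.1.1 dev wlan0 table 100
        ip route flush cache
        # relax reverse-path filtering so the asymmetric replies are accepted
        sysctl -w net.ipv4.conf.all.rp_filter=0
        sysctl -w net.ipv4.conf.wlan0.rp_filter=0
        # rewrite the source address of marked packets leaving wlan0
        iptables -t nat -A POSTROUTING -o wlan0 -m mark --mark 0x1 -j SNAT --to-source 192.168.1.111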

    Read the article

  • Hyperic HQ- Monitor process statistics for 50+ processes on Linux machine

    - by Chris
    Is there an easy way to get metrics on all processes that start with the letters XYZ? I have about 80 processes that I have to monitor individually that all start with the prefix XYZ. I have created a query using the sigar shell: ps State.Name.sw=XYZ, which will give me a list of the processes that I want. What I need to do is define this list of processes through said query and collect and track statistics from the Process service: http://support.hyperic.com/display/hypcomm/Process+service What I need is 3 or 4 key statistics for each of the XYZ processes defined by my query to show up as graphs in the web front end. Note: the Hyperic HQ server is installed on a Windows machine and I'm monitoring a Linux box via an agent. Thanks, Chris Edit: Here is my try at a plugin that may give me what I want, but it's not being inventoried/detected by the Hyperic web UI. Simply pointing me to one of Hyperic's tutorials won't do. Thanks.

        <!DOCTYPE plugin [ <!ENTITY process-metrics SYSTEM "/pdk/plugins/process-metrics.xml">]>
        <plugin>
          <server name="ABCStats">
            <config>
              <option name="process.query" description="Process Query" default="State.Name.sw=XYZ"/>
            </config>
            <metric name="Availability" alias="Availability"
                    template="sigar:Type=ProcState,Arg=%process.query%:State"
                    category="AVAILABILITY" indicator="true" units="percentage"
                    collectionType="dynamic"/>
            &process-metrics;
            <plugin type="autoinventory"/>
            <plugin type="measurement" class="org.hyperic.hq.product.MeasurementPlugin"/>
          </server>
        </plugin>

    Read the article

  • What's the difference between Host and HostName in SSH Config?

    - by Bill Jobs
    The man page says this:

        Host      Restricts the following declarations (up to the next Host keyword) to be only for those hosts that match one of the patterns given after the keyword. If more than one pattern is provided, they should be separated by whitespace. A single `*' as a pattern can be used to provide global defaults for all hosts. The host is the hostname argument given on the command line (i.e. the name is not converted to a canonicalized host name before matching). A pattern entry may be negated by prefixing it with an exclamation mark (`!'). If a negated entry is matched, then the Host entry is ignored, regardless of whether any other patterns on the line match. Negated matches are therefore useful to provide exceptions for wildcard matches. See PATTERNS for more information on patterns.

        HostName  Specifies the real host name to log into. This can be used to specify nicknames or abbreviations for hosts. If the hostname contains the character sequence `%h', then this will be replaced with the host name specified on the command line (this is useful for manipulating unqualified names). The default is the name given on the command line. Numeric IP addresses are also permitted (both on the command line and in HostName specifications).

    For example, when I want to create an SSH config for GitHub, what should Host and HostName be, respectively?
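
    As an illustration only (the alias and key path below are placeholders, not from the question): Host is the nickname you type on the command line, HostName is the real host it resolves to, so a hedged sketch of a GitHub entry in ~/.ssh/config might look like this:

        # ~/.ssh/config
        Host gh
            HostName github.com
            User git
            IdentityFile ~/.ssh/id_rsa    # placeholder; point at your own key

        # then "ssh gh" or "git clone gh:user/repo.git" connects to github.com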

    Read the article

  • How to verify a self-signed certificate from another server using openssl?

    - by ntsue
    I am new to openssl and I am having some trouble verifying (from a client machine) an FTP server using SSL with a self-signed certificate. I generated the .cer file by going to my server in IIS and exporting the certificate without the private key. I believe that this is all that I should need on the client side, right? I use the following command to verify the certificate:

        openssl verify ftp.cer

    and the error that I get back is:

        error 20 at 0 depth lookup:unable to get local issuer certificate

    I tried this as well:

        openssl verify -CAfile ftp.cer ftp.cer

    but received the same error. From what I understand about SSL, this is happening because I have no chain of trust that connects to this server. By default, openssl did not install any trusted CAs, and this is fine; I would just like to tell it to trust this server. I tried various tutorials telling me how to add a certificate authority, including this one here, however the instructions are for Linux and include adding a symlink, and I am trying to do this on Windows. If anyone could provide any guidance on how to do this, or enlighten me if I am not understanding something correctly, I would greatly appreciate it. Thanks!
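
    A hedged debugging sketch (the hostname and explicit FTPS on port 21 are assumptions): if the exported file is genuinely self-signed its subject and issuer will be identical, and the -CAfile form should then succeed; if they differ, the export produced a leaf certificate and its issuer is what needs to be trusted. s_client can also show what the server actually presents:

        # is the file really self-signed? subject and issuer should match
        openssl x509 -in ftp.cer -noout -subject -issuer

        # see the certificate (chain) the server actually sends
        openssl s_client -connect ftp.example.com:21 -starttls ftp -showcerts

        # verify against the file itself once it is confirmed self-signed
        openssl verify -CAfile ftp.cer ftp.cer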

    Read the article

  • samba4 dc "network location cannot be reached"

    - by mitchell babies peters
    To clear the air: CentOS 6.4 (maybe 6.3) as the server, running Samba 4.0.10, trying to add a Windows 7 client that has connectivity to the server. This is what Windows shouts at me as it mocks my dependence on network infrastructure: "the network location cannot be reached." I have access to the domain controller (DC). I'm already using the DC as the domain name server (DNS), the name is correctly resolving, and it is correctly forwarding outbound traffic. I have nothing but self-taught experience with Active Directory (AD), so if I am missing something obvious, please shout it out, but keep the verbal abuse to a minimum. I searched for samba4 DC plus my error and found nothing relevant to my issue; if I missed something, please point me in that direction. The weekend is just starting as I write this, so I probably won't be back on to check this post for a day or three, but I might, because this mystery is killing me. I followed the samba4-as-a-DC guide here and I supplemented gaps with this. I have tested Kerberos and NTP, and set my DC as the clock source in my Windows client; it appears to be a very small fraction of a second off, so that shouldn't be it. Also, the firewall and SELinux are both off for testing. I have also tried disabling IPv6 and cleared the registry of IPv6 records (allegedly the default samba4 DC presents itself as Windows Server 2003, which allegedly does not support or tolerate the existence of IPv6 - fair warning, I heard this on the internet, so it is probably a lie). I have tried a few other things that I have forgotten, because I have been doing this for a day and a half now. Ideas welcome. Suggestions for alternatives are also welcome, as long as they are free; I was given a budget of $0 and told to implement Active Directory (with no prior knowledge of Active Directory at that point).
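
    As a hedged first check (the domain samdom.example.com and host dc1 below are placeholders for whatever was provisioned): a Windows 7 join that fails with "the network location cannot be reached" is frequently a DNS/SRV problem, so confirming the records and services the client relies on helps rule that in or out:

        # the client locates the DC through these SRV records
        host -t SRV _ldap._tcp.samdom.example.com
        host -t SRV _kerberos._udp.samdom.example.com
        host -t A dc1.samdom.example.com

        # confirm the DC answers SMB locally and that Kerberos issues a ticket
        smbclient -L localhost -U%
        kinit administrator@SAMDOM.EXAMPLE.COM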

    Read the article

  • Best Practical RT, sorting email into queues automatically using procmail

    - by user52095
    I'm trying to get incoming e-mail to automatically go directly into whichever queue/ticket it is related to, or create a new one if none exists and the right queue e-mail setup in the web interface is used. I will have too many queues to have two line items within mailgate per queue. A similar issue was discussed here (http://serverfault.com/questions/104779/procmail-pipe-to-program-otherwise-return-error-to-sender), but I thought it best to open a new case instead of tagging on to what appeared to be an answer to that person's query. I'm able to send and receive e-mail (via Postfix) to the default rt user, and this user successfully accepts all e-mail for the relevant domain. I have no idea where the e-mail goes - it's successfully delivered, but it does not update existing tickets (with a Subject line match) and it does not create any new ones. Here's an example from my ./procmail.log:

        procmail: [23048] Mon Aug 23 14:26:01 2010
        procmail: Assigning "MAILDOMAIN=rt.mydomain.com "
        procmail: Assigning "RT_MAILGATE=/opt/rt3/bin/rt-mailgate "
        procmail: Assigning "RT_URL=http://rt.mydomain.com/ "
        procmail: Assigning "LOGABSTRACT=all "
        procmail: Skipped " "
        procmail: Skipped " "
        procmail: Assigning "LASTFOLDER={ "
        procmail: Opening "{ "
        procmail: Acquiring kernel-lock
        procmail: Notified comsat: "rt@18337:./{ "
        From [email protected] Mon Aug 23 14:26:01 2010
         Subject: RE: [RT.mydomain.com #1] Test Ticket
          Folder: {                                    1616

    Does the "Notified comsat" portion mean that it notified RT? The contents of my ./procmailrc:

        #Preliminaries
        SHELL=/bin/sh            #Use the Bourne shell (check your path!)
        #MAILDIR=${HOME}         #First check what your mail directory is!
        MAILDIR="/var/mail/rt/"
        LOGFILE="home/rt//procmail.log"
        LOG="--- Logging ${LOGFILE} for ${LOGNAME}, "
        VERBOSE=yes
        MAILDOMAIN="rt.mydomain.com"
        RT_MAILGATE="/opt/rt3/bin/rt-mailgate"
        #RT_MAILGATE="/usr/local/bin/rt-mailgate"
        RT_URL="http://rt.mydomain.com/"
        LOGABSTRACT=all

        :0
        {
          # the following line extracts the recipient from Received-headers.
          # Simply using the To: does not work, as tickets are often created
          # by sending a CC/BCC to RT
          TO=`formail -c -xReceived: |grep $MAILDOMAIN |sed -e 's/.*for *<*\(.*\)>* *;.*$/\1/'`
          QUEUE=`echo $TO| $HOME/get_queue.pl`
          ACTION=`echo $TO| $HOME/get_action.pl`

          :0 h b w
          |/usr/bin/perl $RT_MAILGATE --queue $QUEUE --action $ACTION --url $RT_URL
        }

    I know that my get_queue.pl and get_action.pl scripts work, as those have been previously tested. Any help and/or guidance you can give would be greatly appreciated. Nicôle
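
    One hedged way to narrow this down (not from the question; the queue name and message path are placeholders): run rt-mailgate by hand against a saved copy of a test message with a hard-coded queue, so procmail is taken out of the loop and any error returned by the RT web interface is printed directly; a LOG= assignment inside the procmailrc block will likewise show what $TO, $QUEUE and $ACTION actually contain.

        # feed a saved message straight to RT, bypassing procmail
        /usr/bin/perl /opt/rt3/bin/rt-mailgate --queue General --action correspond \
            --url http://rt.mydomain.com/ --debug < /tmp/test-message.eml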

    Read the article

  • Internet connection & IIS stopped on windows xp after VMware server 2 installation

    - by Eduardo Xavier
    Hi, I'm running a local network. My IPs range from 192.168.1.2 to 192.168.1.15; all are static. My router's IP is 192.168.1.1, and I provide it as the default gateway and preferred DNS server on client machines. Everything worked fine in this scenario: I could use the internet and reach services on other machines. BUT then I installed VMware Server 2 on the Windows XP machine to host a Windows 2003 virtual machine (VM). I set the following configuration:

        Windows XP        => 192.168.1.11
        Windows 2003 (VM) => 192.168.1.12

    This approach worked just fine, as it used to with Microsoft Virtual PC. I can access MySQL and the IIS websites on the Windows 2003 virtual machine. BUT two things no longer work on the Windows XP host:

        - the internet connection (though I can still see the MAC address on the wireless router)
        - IIS (ping to 127.0.0.1 is OK, but I can hit neither localhost:8222 nor localhost)

    Does anyone know how to fix either of these? (At least the internet connection.)

    Read the article

  • Launch synergy client on boot in Mac OS X

    - by Herms
    I have a Mac as a secondary machine at work. Currently I use Synergy on my main machine to share its keyboard and mouse with the Mac. I created a launch agent for my user to launch Synergy when I log in, and that's working. However, this means I still have to pull out the Mac's keyboard and mouse in order to log in. I tried making a user daemon so that it would launch on boot, but I get the following errors in the console:

        LaunchSynergy[52] Tue Jul 14 12:41:44 testmacpro.local synergyc[52] <Warning>: 3891612: (CGSLookupServerRootPort) Untrusted apps are not allowed to connect to or launch Window Server before login.
        LaunchSynergy[52] Tue Jul 14 12:41:44 testmacpro.local synergyc[52] <Error>: kCGErrorRangeCheck : On-demand launch of the Window Server is allowed for root user only.
        LaunchSynergy[52] Tue Jul 14 12:41:44 testmacpro.local synergyc[52] <Error>: kCGErrorRangeCheck : Set a breakpoint at CGErrorBreakpoint() to catch errors as they are returned
        LaunchSynergy[52] _RegisterApplication(), FAILED TO establish the default connection to the WindowServer, _CGSDefaultConnection() is NULL.

    Is there a way to get this to work? It looks like the Mac's security doesn't want to allow anything to take control of the window while at the login screen. I can understand that, but I'd like a way to override it, as it would make my life a lot easier.

    Read the article

  • Why is squid breaking kerberos/NTLM auth?

    - by DonEstefan
    I'm using squid 2.6.22 (the CentOS 5 default) as a proxy. Squid seems to break the authentication process for web pages when they require NTLM or Kerberos auth. I tested with SharePoint 2007 and tried all 3 authentication methods (NTLM, Kerberos, Basic). Accessing the site without squid works in all cases. When I access the same page with squid, only basic auth works. Using IE or Firefox doesn't make any difference. Squid itself can be used by anybody (no auth_param configured). It's a bit tricky to find solutions online, since most of the topics whirl around auth_param for authenticating users to squid rather than authenticating users to a webpage behind squid. Could anyone help? Edit: Sorry, but my first test was totally screwed up. I tested against the wrong webservers (memo to myself: always check assumptions before testing). Now I realize that the problem scenario is completely different:

        - Kerberos works for IE
        - Kerberos works for Firefox (after changing "network.negotiate-auth.trusted-uris" in about:config)
        - NTLM works for IE
        - NTLM does NOT work in Firefox (even after changing "network.automatic-ntlm-auth.trusted-uris" in about:config)

    By the way: the feature that provides NTLM passthrough in squid is called "connection pinning", and the corresponding HTTP header is "Proxy-support: Session-based-authentication".

    Read the article

  • Squid Authentication & streaming

    - by Steve Butler
    I've got squid set up using Kerberos authentication. I'm also using squidGuard as a URL redirector to block out the usual nastiness of the web. There are some sites, though, that we allow certain users to reach and others not. This all works well, assuming I'm not using any streaming. From what I can determine from the squid logs and the wireshark traces I've done, when the initial request to stream is sent, everything is good: the authenticated username is sent with the request to squidGuard. The problem is that on subsequent traffic the username is not sent to squidGuard, causing it to be blocked based on the default policy. I've tried using the squid built-in allow/deny stuff, but it's relatively clunky, and so far squidGuard has been pretty easy and fast. Here come the questions:

        1. How do I get squid to pass the username on all requests? (Something tells me this isn't the best way.)
        2. How do I get squidGuard to see that traffic is authenticated to a specific user even when a username isn't passed?
        3. Is there any other way of accomplishing this?

    A few details that may be of importance: I'm using a list of users stored in a text file for squidGuard to compare against, and I'm using full Kerberos auth with squid. CentOS 6.0, Squid 3.1.4, squidGuard 1.3.

    Read the article

  • Windows won't sleep after booting from grub

    - by mkasberg
    I recently added a second hard drive to my computer and I am using it do dual-boot Linux (Ubuntu 12.04) with Windows 7. Both hard drives are SATA. I am using the default grub bootloader on my second hard drive. The windows drive is unmodified. To get to grub, I changed the hard disk boot priority in my BIOS (P35-DS3L) to boot from the second drive. The problem I'm having is that when I boot to Windows 7 (on sda) from grub (on sdb), Windows 7 will not go to sleep (from the start menu). The display shuts off momentarily as if its going to sleep, then comes back on and displays the switch-user screen. Powercfg -lastwake does not show anything. I am sure that this is related to booting from grub on sdb because when I change the hard disk boot priority in the BIOS to boot from my (unmodified) Windows hard disk, the computer goes to sleep fine. It occurred to me that installing grub on sda might solve the problem, but I'd rather not since I like to have my windows hard disk unmodified so that booting to it from the BIOS boots directly to windows. A possible work around is to use the BIOS as a bootloader, by pressing F8 to select the boot device. Still, I'd like to know why the problem is happening in the first place.

    Read the article

  • How to make 7-Zip faster

    - by Matt
    I normally use WinRAR over 7-Zip simply because it's faster and only a little less efficient at compression. I did a few tests on different file types and sizes comparing the 7-Zip and WinRAR default settings at their normal and best compression levels; in a lot of cases WinRAR was 50% faster, and in some it was actually 100% faster. But I do like FOSS more. So here are my questions:

        1. Is there a way to speed 7-Zip up? I'd like it to be at least on par with WinRAR's speed.
        2. Is there a way to make recovery segments in 7-Zip like you can in WinRAR? I didn't see any, but I guess it could be a command-line thing.
        3. I tested WinRAR and 7-Zip using the latest stable version of each (4-dot-something with 7-Zip). Is the 9.x beta release noticeably faster at compression? I'm talking about faster at a comparable setting in WinRAR, not just lowering to bare-minimum compression.

    If it matters, I use a quad-core Intel i7 720 (1.6 GHz)/(2.8 GHz) with 4 GB DDR3 RAM and the 64-bit version of 7-Zip, and I dual-boot Debian x64 5.0.4 and Windows 7 Home.
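
    A hedged sketch touching questions 1 and 2, assuming p7zip on the Linux side (the same switches apply to 7z.exe on Windows, and the file names are placeholders): multithreading is controlled with -mmt, the 9.x line adds LZMA2 which spreads work across cores better than LZMA, and since the 7z format has no built-in recovery record, par2 alongside the archive is the usual substitute:

        # compress with multithreading and LZMA2 at the "normal" level
        7z a -t7z -m0=lzma2 -mx=5 -mmt=on archive.7z data/

        # create ~10% recovery data next to the archive (if par2 is installed)
        par2 create -r10 archive.7z.par2 archive.7z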

    Read the article

  • Network config for KVM on physical machine with single NIC and single public IP

    - by neo0
    I have a physical machine running CentOS 6.4 that I will colocate in a data center. I want to install KVM on that machine to run some virtual machines. The problem is that the machine has only one NIC, and the data center gives me one public IP for that interface. So how should I configure networking on the physical machine so that each VM gets a private IP that can still reach the Internet? If I create a br0 bridged with the eth0 interface and create a VM with the option --bridge=br0, then KVM cannot assign an IP to the VM, so setup cannot be completed. Should I use NAT mode? Does KVM have any host-only network like VirtualBox? The VMs still have to connect to the outside. Thank you! Update: I set up the guest network using NAT (--network network:default), and then I only have to port-forward from the host. When I configured br0 bridged with the physical eth0, the guest could not get an IP at boot, so I removed br0 and it worked.
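
    A minimal sketch of the port-forwarding half of that NAT setup, assuming libvirt's default network (192.168.122.0/24) and a guest at 192.168.122.10 (a placeholder), publishing port 80 of the guest on the host's public interface eth0:

        # on the host: forward inbound port 80 to the guest...
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
            -j DNAT --to-destination 192.168.122.10:80
        # ...and let it through FORWARD ahead of libvirt's own reject rules
        iptables -I FORWARD -d 192.168.122.10 -p tcp --dport 80 \
            -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT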

    Read the article

  • Changing dual-monitor settings without closing the laptop lid on OS X

    - by hekevintran
    I have a Unibody MacBook hooked up to an external display. By default when I boot up, the system will go to dual-monitor mode. I want to use only the external display. The Apple supplied solution to this problem is to close the lid of the laptop which puts the machine into sleep mode and then move the mouse around to wake it up again. Because the machine is being woken up with the lid closed, when the displays are detected the system finds only the external. After the system is functional again, you can open the lid if you want and the laptop screen will be non-functional until you either tell the system to detect displays from the system preferences or you turn off the external display. Every time I want to use only the external display, I must reach my hand over to close the lid, wait for the machine to sleep, jigger the mouse, wait for the machine to wake up, and finally open the lid again because I don't want the machine to overheat. I feel that this is very stupid to have to do. Why is there no button or menu option that says "don't use this screen"? Is there any third-party software way to change the screen setup that does not involve physically closing the lid and playing a game of "are you sleeping" in order to switch such a simple software setting? We are in the 21st Century and honestly this is childish.

    Read the article

  • Patching Synaptics Driver In Ubuntu

    - by danben
    I've got an HP Envy with the new Synaptics ClickPad; the device is not supported out of the box by Ubuntu, so I downloaded this patch (https://patchwork.kernel.org/patch/67335/), applied it to synaptics.c in the Linux kernel source, rebuilt the kernel and booted from it. Nothing changed. I'm new to Linux and don't know much about driver configuration; how can I check that the newly produced synaptics.o file is actually being used somewhere? Or, if it is obvious, what step did I leave out? Separately, I tried following the instructions in this thread: http://georgia.ubuntuforums.org/showpost.php?p=8989185&postcount=11 There, someone posted the patch along with a Makefile and instructions for installation. When I followed those, my mouse stopped working altogether. Not desirable, but at least I know it had some effect. The Makefile is posted below; I get the gist of what it's doing, but don't have enough knowledge to figure out why it would cause the device to stop responding entirely.

        obj-m := psmouse.o
        psmouse-objs := psmouse-base.o synaptics.o

        psmouse-$(CONFIG_MOUSE_PS2_ALPS)       += alps.o
        psmouse-$(CONFIG_MOUSE_PS2_ELANTECH)   += elantech.o
        psmouse-$(CONFIG_MOUSE_PS2_OLPC)       += hgpk.o
        psmouse-$(CONFIG_MOUSE_PS2_LOGIPS2PP)  += logips2pp.o
        psmouse-$(CONFIG_MOUSE_PS2_LIFEBOOK)   += lifebook.o
        psmouse-$(CONFIG_MOUSE_PS2_TRACKPOINT) += trackpoint.o
        psmouse-$(CONFIG_MOUSE_PS2_TOUCHKIT)   += touchkit_ps2.o

        KDIR := /lib/modules/$(shell uname -r)/build
        PWD  := $(shell pwd)

        default:
        	$(MAKE) -C $(KDIR) SUBDIRS=$(PWD) modules

        install:
        	@cp psmouse.ko /lib/modules/$(shell uname -r)/kernel/drivers/input/mouse/
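
    A hedged way to answer the "is my build actually in use" question (paths are the usual ones, adjust as needed): the Synaptics code is linked into the psmouse module, as the Makefile above shows, so checking which psmouse the kernel loaded and then swapping in the freshly built one reveals whether your build is being picked up:

        # confirm which psmouse module the kernel actually loaded
        lsmod | grep psmouse
        modinfo psmouse | grep -i filename

        # swap in the freshly built module without rebooting (do this from a console
        # or over SSH, since the pointer dies while psmouse is unloaded)
        sudo rmmod psmouse
        sudo insmod ./psmouse.ko
        dmesg | tail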

    Read the article

  • How to prioritize openvpn traffic?

    - by aditsu
    I have an OpenVPN server with one network interface. VPN traffic is extremely slow. I tried to do traffic control with this configuration (currently):

        qdisc del dev eth0 root
        qdisc add dev eth0 root handle 1: htb default 12
        class add dev eth0 parent 1: classid 1:1 htb rate 900mbit
        #vpn
        class add dev eth0 parent 1:1 classid 1:10 htb rate 1500kbit ceil 3000kbit prio 1
        #local net
        class add dev eth0 parent 1:1 classid 1:11 htb rate 10mbit ceil 900mbit prio 2
        #other
        class add dev eth0 parent 1:1 classid 1:12 htb rate 500kbit ceil 1000kbit prio 2
        filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 1194 0xffff flowid 1:10
        filter add dev eth0 protocol ip parent 1:0 prio 2 u32 match ip dst 192.168.10.0/24 flowid 1:11
        qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
        qdisc add dev eth0 parent 1:11 handle 11: sfq perturb 10
        qdisc add dev eth0 parent 1:12 handle 12: sfq perturb 10

    But it's still extremely slow. I have an IMAPS connection that keeps transferring data continuously (I successfully limited its rate), but with OpenVPN I can't seem to get more than about 100 kbit/s. The internet connection speed is about 3 Mbit/s (symmetric). What could be the problem? Does the sport filter work for UDP?
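
    On the last question, a hedged note with a sketch: the u32 "match ip sport" test looks at a fixed offset past the IP header rather than at a TCP- or UDP-specific field, so it will match OpenVPN's UDP packets too, but making the protocol explicit avoids accidental matches on other traffic; assuming OpenVPN is on its default UDP port 1194, the filter could be written as:

        tc filter add dev eth0 parent 1:0 protocol ip prio 1 u32 \
            match ip protocol 17 0xff \
            match ip sport 1194 0xffff \
            flowid 1:10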

    Read the article

  • Permissions on mac for itunes library with multiple users - idea

    - by John
    I currently have a lot of music on an external drive, with my iTunes library set up from there. However, periodically, when the external drive isn't connected, iTunes will default back to the library location in my home directory. I don't want to mess with an external drive, as my Mac's HD is large enough to house the music collection. However, I have 4 family members - all with their own logins - using this same gob of music. I don't want 4 copies of the library, only one that all libraries reference. So, what I want to do is:

        1. Move all music files to a shared directory at /Macintosh HD/users/music. I created this directory and adjusted permissions so all four users can read and write to it.
        2. Get all four accounts to reference this library instead of the external or local home locations.

    I am hoping I can just check the box to keep the library organized in my account, which is the admin, and let iTunes move it all; then delete the current libraries for each account and re-add from the new shared location. Will the iTunes organization process cause permissions issues, either by restricting access to the files to my account only, or removing write permissions, or any other gotcha? I am having a hard time coming up with a smooth solution that won't break everything and cause me to have mega duplicates or access issues. I would prefer not to do any XML library file editing if possible. Am I dreaming? Thanks for help.
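
    One hedged approach to the permissions part, assuming the shared folder is the /users/music directory described above (shown here by its POSIX path): an inheritable ACL keeps files readable and writable by every account regardless of which user iTunes copies or reorganizes them as:

        # grant every local account inheritable read/write on the shared library
        sudo chmod -R +a "everyone allow read,write,delete,add_file,add_subdirectory,file_inherit,directory_inherit" /Users/music

        # confirm the ACL is attached
        ls -le /Users/music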

    Read the article

  • Subversion all or nothing access to repo tree

    - by Glader
    I'm having some problems setting up access to my Subversion repositories on a Linux server. The problem is that I can only seem to get an all-or-nothing structure going: either everyone gets read access to everything, or no one gets read or write access to anything. The setup: SVN repos are located in /www/svn/repoA,repoB,repoC... Repositories are served by Apache, with Locations defined in /etc/httpd/conf.d/subversion.conf as:

        <Location /svn/repoA>
            DAV svn
            SVNPath /var/www/svn/repoA
            AuthType Basic
            AuthName "svn repo"
            AuthUserFile /var/www/svn/svn-auth.conf
            AuthzSVNAccessFile /var/www/svn/svn-access.conf
            Require valid-user
        </Location>
        <Location /svn/repoB>
            DAV svn
            SVNPath /var/www/svn/repoB
            AuthType Basic
            AuthName "svn repo"
            AuthUserFile /var/www/svn/svn-auth.conf
            AuthzSVNAccessFile /var/www/svn/svn-access.conf
            Require valid-user
        </Location>
        ...

    svn-access.conf is set up as:

        [/]
        * =
        [/repoA]
        * =
        userA = rw
        [/repoB]
        * =
        userB = rw

    But checking out URL/svn/repoA as userA results in Access Forbidden. Changing it to

        [/]
        * =
        userA = r
        [/repoA]
        * =
        userA = rw
        [/repoB]
        * =
        userB = rw

    gives userA read access to ALL repositories (including repoB) but only read access to repoA! So in order for userA to get read-write access to repoA I need to add

        [/]
        userA = rw

    which is mental. I also tried changing Require valid-user to Require user userA for repoA in subversion.conf, but that only gave me read access to it. I need a way to default-deny everyone access to every repository, giving read/write access only when explicitly defined. Can anyone tell me what I'm doing wrong here? I have spent a couple of hours testing and googling but have come up empty, so now I'm doing the post of shame.
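
    A hedged sketch of an access file giving the default-deny behaviour asked for, assuming mod_authz_svn's usual rule that an unqualified [/] section applies to every repository while a repository-qualified section such as [repoA:/] applies only to the named one:

        # svn-access.conf: default deny everywhere, then per-repository grants
        [/]
        * =

        [repoA:/]
        userA = rw

        [repoB:/]
        userB = rw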

    Read the article

  • Why is the link between my switch and my router always negotiating half-duplex mode?

    - by Massimo
    I have a Cisco 2950 switch which has one of its ports connected to an Internet router provided by my ISP; I have no access to the router configuration, but I manage the switch. If I leave all switch ports with their default setup (auto-negotiation of speed and duplex mode), this link always connects at 100 MBit/s, but in half-duplex mode. I've tried replacing the cable, and also moving the link to another switch port: the result is always the same. A different device connected to the same port (or to any switch port, really) shows no problem at all. It could be guesed that someone configured the router to only connect in half-duplex mode... BUT, here's the catch: if I manually force the switch port to full-duplex mode (duplex full in the interface configuration), the link goes up, stays up and is completely stable. So: The connection is not forced to half-duplex mode by the router, otherwise it would not connect at all if I force the switch end to full-duplex. There is no actual link problem, otherwise the full-duplex connection would not go up or would at least show some errors. But if I leave the port free to auto-negotiate, it always connects in half-duplex mode. Why?

    Read the article

  • Port forwarding to put my web server on the Internet

    - by Chadworthington
    I went to http://canyouseeme.org/ to check my external IP address. Regardless of what port I enter, it tells me that the port is blocked. I have a Linksys router that basically has the default settings, with the exception that I have WEP encryption set up and I have forwarded a few ports, including 80 and 69. I forwarded them to the 192.x.x.103 IP address of the PC which is running IIS. That PC runs Symantec Endpoint Protection, which I right-clicked in the tray to Disable. These steps used to make my PC visible so I could host my own web site in IIS on port 80, or some other port, like 69. Yet the Open Port tool cannot see my IP when it checks either port, and when I navigate to http://<my external ip>/ I get "page can't be displayed". At first I was thinking that maybe Comcast is blocking port 80, but 69 doesn't work either. I do not see any other blocking set up in my router and, as I mentioned, I went with the defaults except where discussed. This is a corporate PC and Symantec Endpoint Protection is new to it (this previously worked on the same PC with Symantec Protection Agent), but I thought that disabling it from the tray would effectively neutralize it. I do not have the rights to kill the program itself. Any suggestions on what else to try to make my PC externally visible?

    Read the article

  • running automated fsck on remote server

    - by GriffinHeart
    I had another question about df, and now I've come to the conclusion that I need to run fsck on my partition. I've been reading about it and would like some advice, if possible. The situation is this: I have no physical access to the server and I want to run fsck. From what I've read, I just need to touch /forcefsck and when I reboot it will run fsck. My question is: at its most basic, with what arguments will fsck run? Will it need user input to correct errors, etc.? And after running, will it save a log of what happened? If it ran like the following it would be perfect - is there any way of enforcing that on reboot?

        fsck -v -p /machine/disk/p1 2>&1 > fscklog.txt

    Also, here they describe this: it's also a good idea on Debian and Debian derivatives like Ubuntu to edit /etc/default/rcS on remote servers and set "FSCKFIX=yes"; that adds "-y" to the boot-time fsck, so it doesn't risk the remote server being stuck waiting for someone to log in at the console and run fsck. But on CentOS that doesn't seem to exist. I only have SSH access at the moment, which is why I'm being so picky about this. Here's some info about disks and mounted volumes on the server: http://pastebin.centos.org/33314 Thanks.
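
    A hedged sketch for CentOS, assuming the stock initscripts: /etc/rc.d/rc.sysinit honors /forcefsck (it adds -f) and, if present, passes the contents of /fsckoptions to the boot-time fsck, so a non-interactive check on the next boot can look roughly like this (verify against the rc.sysinit on the box before relying on it):

        # force a full check on next boot and answer "yes" to repairs
        touch /forcefsck
        echo "-y" > /fsckoptions
        reboot

        # after boot, the fsck output normally ends up in the boot/system logs
        grep -i fsck /var/log/messages | tail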

    Read the article

  • Getting a TTY in a Connectback Shell

    - by Asad R.
    I'm often asked by friends to help with small Linux problems, and more often than not I'm required to login to the remote system. Usually there are a lot of issues with making an account and logging in (sometimes the box is behind a NAT device, sometimes SSHD isn't installed, etc.) so I usually just ask them to make a connect-back shell using netcat (nc -e /bin/bash ). If they don't have netcat I can just ask them to grab a copy of a statically compiled binary which isn't that hard or time consuming to download and run. Though this works well enough for me to enter simple commands, I can't run any apps that require a tty (vi, for example) and can't use any job control functions. I managed to bypass this issue by running in.telnetd with a few arguments within the connect-back shell that would assign me a terminal and drop me to a shell. Unfortunately in.telnetd isn't usually installed by default on most systems. What's the easiest way to get a fully functional connect-back terminal shell without requiring any non-standard packages? (A small C program that does the job would be fine as well, I just can't seem to find much documentation on how a TTY is assigned/allocated. A solution that doesn't require me to plough through the source code for SSHD and TELNETD would be nice :))
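
    A commonly used trick that fits this situation (a hedged sketch rather than the only answer): Python's pty module or the script utility can allocate a pseudo-terminal from inside the dumb connect-back shell, which is usually enough for job control and curses programs, and both ship with most stock Linux installs:

        # inside the connect-back shell
        python -c 'import pty; pty.spawn("/bin/bash")'

        # or, if python is missing
        script -qc /bin/bash /dev/null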

    Read the article

  • Allow access from outside network with dmz and iptables

    - by Ivan
    I'm having a problem with my home network. My setup is like this: on my router (running Ubuntu Desktop 11.04) I installed squid as my transparent proxy. I would like to use dyndns for my home network so that my server can be reached from the Internet, and I have also installed a CCTV camera that I would like to be able to watch from the Internet. The problem is that I cannot access either from outside the network. I have already set the DMZ in my modem to my router's IP. My first guess is that this is because I'm using iptables to redirect all inside traffic to squid, and nothing allows traffic from outside into my inside network. Here is my iptables script:

        #!/bin/sh
        # squid server IP
        SQUID_SERVER="192.168.5.1"
        # Interface connected to Internet
        INTERNET="eth0"
        # Interface connected to LAN
        LAN_IN="eth1"
        # Squid port
        SQUID_PORT="3128"

        # Clean old firewall
        iptables -F
        iptables -X
        iptables -t nat -F
        iptables -t nat -X
        iptables -t mangle -F
        iptables -t mangle -X

        # Load IPTABLES modules for NAT and IP conntrack support
        modprobe ip_conntrack
        modprobe ip_conntrack_ftp
        # For win xp ftp client
        #modprobe ip_nat_ftp
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # Setting default filter policy
        iptables -P INPUT DROP
        iptables -P OUTPUT ACCEPT

        # Unlimited access to loop back
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT

        # Allow UDP, DNS and Passive FTP
        iptables -A INPUT -i $INTERNET -m state --state ESTABLISHED,RELATED -j ACCEPT

        # set this system as a router for Rest of LAN
        iptables --table nat --append POSTROUTING --out-interface $INTERNET -j MASQUERADE
        iptables --append FORWARD --in-interface $LAN_IN -j ACCEPT

        # unlimited access to LAN
        iptables -A INPUT -i $LAN_IN -j ACCEPT
        iptables -A OUTPUT -o $LAN_IN -j ACCEPT

        # DNAT port 80 request comming from LAN systems to squid 3128 ($SQUID_PORT) aka transparent proxy
        iptables -t nat -A PREROUTING -i $LAN_IN -p tcp --dport 80 -j DNAT --to $SQUID_SERVER:$SQUID_PORT

        # if it is same system
        iptables -t nat -A PREROUTING -i $INTERNET -p tcp --dport 80 -j REDIRECT --to-port $SQUID_PORT

        # DROP everything and Log it
        iptables -A INPUT -j LOG
        iptables -A INPUT -j DROP

    If you know where I've gone wrong, please advise me. Thanks for all your help; I really appreciate it.
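
    As a hedged sketch of the missing piece (the CCTV host address 192.168.0.50 and the outside port 8080 are placeholders): with the INPUT policy set to DROP, something has to explicitly accept traffic arriving on $INTERNET for services on the router itself, and a DNAT plus FORWARD rule is needed to publish a device that lives on the LAN; these would go in the script before the final LOG/DROP pair:

        # allow a service running on the router itself (e.g. a web server) from outside
        iptables -A INPUT -i $INTERNET -p tcp --dport 80 -m state --state NEW -j ACCEPT

        # or publish the CCTV device that lives on the LAN
        iptables -t nat -A PREROUTING -i $INTERNET -p tcp --dport 8080 \
            -j DNAT --to-destination 192.168.0.50:80
        iptables -A FORWARD -i $INTERNET -p tcp -d 192.168.0.50 --dport 80 \
            -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT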

    Read the article

  • Cannot login to zabbix web portal

    - by hlx98007
    I've managed to install the zabbix22-server package on CentOS 6.x along with php-fpm and nginx. I can view the page at 127.0.0.1, but I only see the login form [screenshot]. After clicking the "Login" button, the page is unchanged [screenshot]. What can I do to make it work as expected, so that I can log in as admin? Here are some confs. nginx_zabbix.conf:

        server {
            listen 80;
            add_header X-Frame-Options "SAMEORIGIN";
            access_log /var/log/nginx/zabbix.log;
            error_log /var/log/nginx/zabbix.err.log;
            client_max_body_size 500M;

            # This folder is a soft link to /usr/share/zabbix
            # the permission has been set to nginx:nginx recursively.
            root /var/www/zabbix;

            location / {
                index index.html index.htm index.php;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi.conf;
                fastcgi_param PATH_INFO $path_info;
            }
        }

    php-fpm is using its default values, with the permission user/group set to nginx (rather than apache). The folder /var/lib/php/session has been set to nginx:nginx with permission 770. SELinux is set to disabled. I've restarted everything up to this point.

    Read the article

  • Cisco Catalyst 3750 connected to Cisco ASA 5505 and dropping packets

    - by Bo102010
    (Cross-posted from Super User per a suggestion there.) At the office, I have inherited a network that I am still trying to fully comprehend. I have a problem today with a new connection between:

        - A port on a Cisco Catalyst 3750 [WS-C3750G-48TS-S running C3750-IPSERVICESK9-M version 12.2(53)SE1]
        - A port on a Cisco ASA 5505 [ASA Software version 8.3(2)]

    The 3750 is home to a VLAN that has a few ports assigned to it:

        interface Vlan3
         description Internal network (172.18.160.0/24)
         ip address 172.18.160.1 255.255.255.0

    I have a host (outside of my control) that needs to be in this VLAN (i.e. it must have an address 172.18.160.something/24) and that also needs to access the Internet. To accomplish this, I ran a link from the Catalyst (Gi1/0/13) to the ASA (Ethernet 0/5). I configured the Catalyst port like so:

        interface GigabitEthernet1/0/13
         description To ASA, 172.18.160.69
         switchport access vlan 3
         switchport mode access
         speed 100
         duplex full

    I configured the ASA like so:

        interface Vlan1
         nameif inside
         security-level 100
         ip address 172.18.160.69 255.255.255.0

        interface Ethernet0/5
         speed 100
         duplex full

    Then I plugged the host into Ethernet 0/4 on the ASA and instructed its owner to make its default gateway 172.18.160.69. I made a NAT rule in the ASA and set up some rules, and it's able to access the Internet without issue. However, I noticed that the Catalyst reports a ton of packets being dropped toward the ASA:

        Catalyst3750#show interfaces GigabitEthernet 1/0/13 | include counters|drops
          Last clearing of "show interface" counters 00:28:13
          Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 136909347

    This is a huge number of drops, since there's not much traffic on this VLAN at all. I tried these things:

        - Made sure speed and duplex agree on both sides (100 Mbps / full)
        - Set no cdp enable on the Catalyst Gi1/0/13
        - Set no keepalive on the Catalyst Gi1/0/13
        - Checked for excessive CPU usage on both
        - Checked for excessive traffic on both

    Am I missing something? Any help would be appreciated.

    Read the article
