Search Results

Search found 27229 results on 1090 pages for 'default aspx'.


  • Secure iptables config for Samba

    - by Eric
    I'm trying to set up an iptables config such that outbound connections from my CentOS 6.2 server are allowed ONLY if they are of state ESTABLISHED. Currently, the following setup is working great for sshd, but all the Samba rules get totally ignored for a reason I cannot figure out. iptables Bash script to set up ALL rules:

        # Remove all existing rules
        iptables -F
        # Set default chain policies
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT DROP
        # Allow incoming SSH
        iptables -A INPUT -i eth0 -p tcp --dport 22222 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 22222 -m state --state ESTABLISHED -j ACCEPT
        # Allow incoming Samba
        iptables -A INPUT -i eth0 -s 10.1.1.0/24 -p udp --dport 137:138 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -d 10.1.1.0/24 -p udp --sport 137:138 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -s 10.1.1.0/24 -p tcp --dport 139 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -d 10.1.1.0/24 -p tcp --sport 139 -m state --state ESTABLISHED -j ACCEPT
        # Enable these rules
        service iptables restart

    iptables rule list after running the above script:

        [root@repoman ~]# iptables -L
        Chain INPUT (policy DROP)
        target     prot opt source     destination
        ACCEPT     tcp  --  anywhere   anywhere     tcp dpt:22222 state NEW,ESTABLISHED

        Chain FORWARD (policy DROP)
        target     prot opt source     destination

        Chain OUTPUT (policy DROP)
        target     prot opt source     destination
        ACCEPT     tcp  --  anywhere   anywhere     tcp spt:22222 state ESTABLISHED

    Ultimately, I'm trying to restrict Samba the same way I have done for sshd. In addition, I'm trying to restrict connections to the IP address range 10.1.1.12 - 10.1.1.19. Can you guys offer some pointers or possibly even a full-blown solution? I've read man iptables quite extensively, so I'm not sure why the Samba rules are getting thrown out. Additionally, removing the -s 10.1.1.0/24 flags doesn't change the fact that the rules get ignored.
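
    One possible explanation worth ruling out (an assumption, not a confirmed diagnosis): the script appends its rules and then calls service iptables restart, which on CentOS flushes the running rules and reloads whatever was last saved to /etc/sysconfig/iptables, so rules that were never saved with service iptables save disappear. Below is a minimal, hypothetical sketch of Samba rules limited to the 10.1.1.12 - 10.1.1.19 range using the iprange match; port 445 is an addition for modern SMB clients, everything else mirrors the question's existing style:

        # Hypothetical sketch: allow Samba only for hosts 10.1.1.12-10.1.1.19
        iptables -A INPUT  -i eth0 -p udp --dport 137:138 -m iprange --src-range 10.1.1.12-10.1.1.19 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p udp --sport 137:138 -m iprange --dst-range 10.1.1.12-10.1.1.19 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT  -i eth0 -p tcp -m multiport --dports 139,445 -m iprange --src-range 10.1.1.12-10.1.1.19 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp -m multiport --sports 139,445 -m iprange --dst-range 10.1.1.12-10.1.1.19 -m state --state ESTABLISHED -j ACCEPT
        # Persist the rules instead of restarting, so a reload does not wipe them
        service iptables save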

    Read the article

  • Google Chrome mouse wheel takes keyboard focus

    - by Steve Crane
    I recently switched the default browser on all my machines from Firefox to Google Chrome. In general I’m loving Chrome but there is one behaviour that is driving me nuts. Scrolling with the mouse wheel takes keyboard focus away from the document. Here’s what happens. I’ve just opened a page or switched to a new tab and the page has keyboard focus as I can use the keyboard up/down and PgUp/PgDn keys to scroll it; no problem there. But if I then use the wheel on my mouse to scroll the page, it loses the keyboard focus and no longer responds to up/down, PgUp/PgDn, or in fact any other keyboard keys. I have to click on the page background to restore the keyboard focus. This is a minor inconvenience for scrolling but where it really drives me nuts is in Google Reader and Gmail where I use keyboard shortcuts a lot. Here I find that I scroll the article or e-mail I’m reading with the mouse wheel then get no response when I press j/k to move to the next or previous article or e-mail. I am using Windows 7 and the Chrome dev channel (version 4.0.249.43).

    Read the article

  • Routing multiple static IPs from ISP at the cable modem?

    - by Jakobud
    I'm taking over IT responsibilities for a previous IT guy. We have a 50mb cable modem connection from Comcast along with 5 static IP addresses:

        XXX.XXX.XXX.180
        XXX.XXX.XXX.181
        XXX.XXX.XXX.182
        XXX.XXX.XXX.183
        XXX.XXX.XXX.184

    We are in the process of replacing our firewall machine. Currently the firewall box is the only thing connected to the cable modem. However, the cable modem has multiple ethernet ports on it, similar to a router. I have assembled a new firewall machine and it's time to start testing and configuring it. So that means that I also need it plugged into the cable modem (remember it has multiple ethernet ports on it). So now, with multiple computers plugged into the cable modem, how does the cable modem know where to route the traffic? If some request on the internet is made to XXX.XXX.XXX.181, which goes to our cable modem, how does the cable modem know which connected computer that traffic is supposed to be sent to? Looking at the web interface for the cable modem, there doesn't seem to be anything special set up on it with regards to routing or NATing IP addresses. Is that because when there is only one computer connected to the modem, all traffic is sent to it by default? Now that I am going to (temporarily) have multiple computers plugged into the cable modem, do I need to specify routing or NAT rules on the modem itself? I am going to speak to Comcast about this next, but I figured I'd ask here first just so I can get a better grasp on how this type of thing generally plays out.

    Read the article

  • Authentication required by wireless network.

    - by Roman
    I would like to use a wireless network from Ubuntu. In the network drop-down menu I select a network (this is a University network; I have an account there). Then I get a window with the following fields:

        Wireless Security:    [WPA & WPA2 Enterprise]
        Authentication:       [Tunneled TLS]
        Anonymous Identity:   []
        CA Certificate:       [(None)]
        Inner Authentication: [some letters]
        User Name:            []
        Password:             []

    I put in my user name and password, do not change the default values, and leave "Anonymous Identity" blank. As a result of that I get "Authentication required by wireless network". How can I solve this problem? I think it is important to note that our system administrator tried to find some files (which probably need to be used as the "CA Certificate"). He said that he does not know where this file is located on Ubuntu (he supports only Windows). So, probably this is the direction I need to go: I need to find this file. But maybe I am wrong. Maybe something else needs to be done. Could you please help me with that?

    Read the article

  • Persistent static route stops working after VPN drops and reconnects

    - by user76157
    I've got a VPN between two networks, one home and one office (A and B). Their subnets are: (A) 192.168.1.0 and (B) 192.168.0.0 The two networks have identical ADSL routers. Unfortunately these can only do dial-out VPN. So I've got a Windows 2008 server on Network B acting as a VPN server (ServerB). Network A's router (RouterA) passes through Network B's router and connects via PPTP to ServerB. RouterA is assigned the static IP 192.168.0.40 on Network B. There's a persistent static route on ServerB telling it to use 0.40 for all requests to Network A's subnet, 192.168.1.0. (route -p add 192.168.1.0 mask 255.255.255.0 192.168.0.40). This enables ServerB to ping all machines on A (and those machines to ping ServerB). The VPN connection occasionally drops (I'm not sure why - it's set to remain always on and seems to drop randomly). This wouldn't be too much of a problem, as it reconnects automatically and quickly, except that when it does reconnect, the static route on ServerB no longer works. Route print (on ServerB) shows that the persistent static route still exists. However a tracert to a machine on Network A doesn't use the static route; it tries instead to use ServerB's default gateway (which is RouterB), and fails to find the machine. Deleting and re-adding the static route fixes the problem - a tracert uses the static route. At the moment, a batch file to delete and re-add the static route is scheduled to run every day. But this is clearly far from an ideal solution! I hope that's not too confusing. Any help would be very much appreciated.

    Read the article

  • switchless Infiniband between two servers on RHEL 6.3

    - by exfizik
    I have 2 servers running RHEL 6.3 which have 2-port InfiniBand cards:

        >lspci | grep -i infini
        07:00.0 InfiniBand: QLogic Corp. IBA7322 QDR InfiniBand HCA (rev 02)

    I'm interested in connecting them directly to each other, bypassing an InfiniBand switch (which I don't have). Quick googling showed that at least in some configurations it's possible. I installed all RedHat InfiniBand packages with yum groupinstall "Infiniband Support". However, ibv_devinfo shows that both ports in each card are down, which indicates that cables are not connected. But the cable is connected, although the LEDs are off on the cards (not a good sign). Another source of confusion for me is that according to this, RedHat doesn't come with OFED packages and I'm slightly hesitant to install them from source due to the lack of RedHat support for them... So where am I going with this? The questions I have are:

      1. Is it possible to have a switchless/direct InfiniBand connection between two servers the way I described above?
      2. If it's possible, do I have to use the OFED packages, or can I configure everything with just the packages coming with RHEL?
      3. Why are the LEDs off on my servers even though the cable is connected?

    Any additional input/advice/pointers would be appreciated. P.S. I followed this guide for installation instructions. The InfiniBand cards are clearly recognized by my OS and the rdma service is running. Update: I have opensm installed. When I run it, it says:

        OpenSM 3.3.13
        Command Line Arguments:
        Log File: /var/log/opensm.log
        -------------------------------------------------
        OpenSM 3.3.13
        Entering DISCOVERING state
        Using default GUID 0x1175000076e4c8
        SM port is down

    and stays at that point.
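
    A hedged diagnostic sketch for a back-to-back link: direct host-to-host InfiniBand is possible, but the ports only go ACTIVE once a subnet manager is running on one of the two hosts, and the tools below come from the stock RHEL 6 infiniband-diags and opensm packages rather than the external OFED bundle. Package and service names are the usual RHEL ones; nothing here is specific to the QLogic card:

        # Make sure the low-level stack and a subnet manager are running
        yum install -y infiniband-diags opensm        # if not already present
        service rdma start && chkconfig rdma on
        service opensm start && chkconfig opensm on   # on ONE of the two servers

        # Physical/link state: "LinkUp" means cable and ports are fine,
        # "Polling"/"Disabled" points at cabling or port problems
        ibstat
        ibv_devinfo | grep -E 'state|phys'

        # Once both ports show ACTIVE, the fabric should be visible
        ibhosts
        ibping -S                 # on one host
        ibping -G <port_GUID>     # on the other (GUID taken from ibstat)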

    Read the article

  • Failed to open the Parallels networking module

    - by user49204
    I'm running Parallels Desktop 7 on OS X Lion and encounter the issue in the subject every time I try to launch a Parallels VM. The error message contains a hint to restore the network configuration to defaults, which does not help. As advised by some forums, I ran the following script:

        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_hypervisor.kext"
        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_hid_hook.kext"
        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_usb_connect.kext"
        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext"
        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_vnic.kext"

    The output of:

        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext"

    is:

        Diagnostics for /Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext:
        Warnings:
        The booter does not recognize symbolic links; confirm these files/directories aren't needed for startup:
            /Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext/Contents/CodeDirectory
            /Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext/Contents/CodeRequirements
            /Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext/Contents/CodeResources
            /Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext/Contents/CodeSignature

        Dependency Resolution Failures:
        No kexts found for these libraries:
            com.parallels.kext.prl_hypervisor

    I've noticed that prl_netbridge is not being loaded (when I try to unload it, I'm notified it is not loaded). Am I doing something wrong? What can be the reason for such behaviour?
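
    The "No kexts found for these libraries: com.parallels.kext.prl_hypervisor" diagnostic suggests the dependency simply is not in scope when prl_netbridge is resolved. A minimal sketch, assuming the stock Parallels 7 kext paths quoted above: load prl_hypervisor first, then point kextutil at the whole Kexts directory so the dependency can be found:

        KEXTS="/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6"

        # Load the dependency first...
        sudo kextload "$KEXTS/prl_hypervisor.kext"

        # ...then the bridge, with the Kexts directory as a dependency repository
        sudo kextutil -r "$KEXTS" "$KEXTS/prl_netbridge.kext"

        # Confirm what actually got loaded
        kextstat | grep -i parallels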

    Read the article

  • IIS 7.0 404 Custom Error Page and web.config

    - by Colin
    I am having trouble with a custom 404 error page. I have a domain running a .NET project with its own error handling. I have a web.config running for the domain which contains:

        <customErrors mode="RemoteOnly">
          <error statusCode="500" redirect="/Error"/>
          <error statusCode="404" redirect="/404"/>
        </customErrors>

    On a sub dir of that domain I am ignoring all routes (by doing routes.IgnoreRoute("Assets/{*pathInfo}"); in the .NET project) and I want to put a custom 404 error page on that and any sub dirs of Assets. The sub dir contains static content like images, css, js etc. So in the Error Pages section of IIS I put a redirect to an absolute URL. The web.config for that dir looks like the following:

        <system.webServer>
          <httpErrors>
            <remove statusCode="404" subStatusCode="-1" />
            <error statusCode="404" prefixLanguageFilePath="" path="http://mydomain.com/404" responseMode="Redirect" />
          </httpErrors>
        </system.webServer>

    But when I navigate to an unknown URL under that dir I still see the default IIS 404 page. I am also seeing an alert in IIS that reads: "You have configured detailed error messages to be returned for both local and remote requests. When this option is selected, custom error configuration is not used." Does this have anything to do with the customErrors mode="RemoteOnly" in the site web.config? I have tried to override the customErrors in the sub dir web.config but nothing changes. Any help would be appreciated. Thanks.

    Read the article

  • redmine gives 404 error after installation

    - by Sankaranand
    I am using Debian squeeze with nginx and mysql. After raking the db and loading default data into Redmine, when I try to visit Redmine in a browser at http://ipaddress:8080/redmine, I get a 404 error: "Page not found - The page you were trying to access doesn't exist or has been removed." My nginx configuration file is below:

        server {
            listen 8080;
            server_name localhost;
            server_name_in_redirect off;
            #charset koi8-r;
            #access_log logs/host.access.log main;

            location / {
                root html;
                index index.html index.htm index.php;
            }

            location ^~/phpmyadmin/ {
                root /usr/share/phpmyadmin;
                index index.php;
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /usr/share/$fastcgi_script_name;
            }

            location /redmine/ {
                root /usr/local/lib/redmine-1.2/public;
                access_log /usr/local/lib/redmine-1.2/log/access.log;
                error_log /usr/local/lib/redmine-1.2/log/error.log;
                passenger_enabled on;
                allow all;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name;
                include fastcgi_params;
            }

            location /phpMyadmin {
                rewrite ^/* /phpmyadmin last;
            }
        }

    I don't know what the problem is - this is my second attempt to install Redmine on Debian with nginx.
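
    With Passenger, a Rails app served under a sub-URI is usually exposed by symlinking its public/ directory into the web root and setting passenger_base_uri, rather than by a root inside the location block. A hedged sketch of that layout (paths taken from the question; the directives assume a Passenger-enabled nginx build, which Redmine setups on nginx typically rely on):

        # Expose Redmine's public/ directory under the web root as /redmine
        sudo mkdir -p /var/www
        sudo ln -s /usr/local/lib/redmine-1.2/public /var/www/redmine

        # Then, in the nginx server block (sketch, not a drop-in config):
        #   server {
        #       listen 8080;
        #       root /var/www;                 # parent of the "redmine" symlink
        #       passenger_enabled on;
        #       passenger_base_uri /redmine;   # serve the app under /redmine
        #       ...
        #   }
        sudo service nginx restart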

    Read the article

  • Rsync Push files from Linux to Windows. ssh issue - connection refused

    - by piyush c
    For some reason I want to run a script to move files from a Linux machine to Windows. I have installed cwRsync on my Windows machine and am able to connect to the Linux machine. When I execute the following command:

        rsync -e "ssh -l "piyush"" -Wgovz --timeout 120 --delay-updates --remove-sent-files /usr/local/src/piyush/sync/* "[email protected]:/cygdrive/d/temp"

    where 10.0.0.60 is my Windows machine and I am running the above command on Linux - CentOS 5.5, I get the following error message:

        ssh: connect to host 10.0.0.60 port 22: Connection refused
        rsync: connection unexpectedly closed (0 bytes received so far) [sender]
        rsync error: error in rsync protocol data stream (code 12) at io.c(463) [sender=2.6.8]
        [root@localhost sync]# ssh [email protected]
        ssh: connect to host 10.0.0.60 port 22: Connection refused

    I have modified my firewall settings on Windows to allow all ports. I think this issue is due to an SSH daemon not being present on my Windows machine. So I tried installing OpenSSH on my machine and running ssh-agent, but that didn't help. I tried a similar command on my Windows machine to pull files from Linux and it's working fine. For some reason I want the command for the Linux machine so that I can embed it in a shell script. Can you suggest if I am missing anything? I already have cwRsync installed on Windows and am running it in daemon mode using the --daemon option. And I am able to log in using ssh from the Windows machine to the Linux machine. When I issue the below command, it just blocks for 120 seconds (the timeout I specified in the command) and exits saying there is a timeout.

        rsync -e "ssh -l piyush" -Wgovz --timeout 120 --delay-updates --remove-sent-files /usr/local/src/piyush/sync/* "[email protected]:/cygdrive/d/temp"

    After starting rsync on Windows, I checked that rsync is running. And the Windows firewall settings are set to minimal, and on the Linux machine I stopped the iptables service so that port 873 (the default rsync port) is not blocked. What can be the possible reason that the Linux machine is not able to connect to the rsync daemon on the Windows machine?
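
    Worth noting (hedged, since only the asker can see the Windows side): -e ssh and a daemon-mode cwRsync are two different transports. The ssh form needs an SSH server listening on port 22 on the Windows box, while a running rsync daemon is addressed with the rsync:// or double-colon syntax on port 873 plus a module name from rsyncd.conf. A sketch of the daemon-style push; the module name "temp" is a placeholder that must match the Windows rsyncd.conf:

        # Hypothetical daemon-mode push (no ssh involved, talks to port 873)
        rsync -Wgovz --timeout 120 --delay-updates --remove-sent-files \
              /usr/local/src/piyush/sync/ \
              rsync://piyush@10.0.0.60/temp

        # Equivalent double-colon form
        rsync -Wgovz /usr/local/src/piyush/sync/ piyush@10.0.0.60::temp

        # Quick connectivity check from the Linux side: list the daemon's modules
        rsync rsync://10.0.0.60/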

    Read the article

  • IIS 7 rewriting subdomain to point at a specific port

    - by Tommy Jakobsen
    Having installed Team Foundation Server 2010 on Windows Server 2008, I need an easy URL for our developers to access their repositories. The default URL for the TFS repositories is http://localhost:8080/tfs. Now I want the subdomain tfs.server.domain.com to point at http://localhost:8080/tfs, and when you access tfs.server.domain.com/repos_name it should redirect to http://localhost:8080/tfs/repos_name. How can I do this in IIS 7? I already tried using the following rule, but it does not work; I get a 404.

        <rewrite>
          <globalRules>
            <rule name="TFS" stopProcessing="true">
              <match url="^(?:tfs/)?(.*)" />
              <conditions>
                <add input="{HTTP_HOST}" pattern="^tfs.server.domain.com$" />
              </conditions>
              <action type="Rewrite" url="http://localhost:8080/tfs/{R:1}" />
            </rule>
          </globalRules>
        </rewrite>

    EDIT: I actually got this working by adding a binding for the site on port 80 with host name tfs.server.domain.com. But using tfs.server.domain.com, I can't authenticate using Windows Authentication. Is there something that I need to configure for Windows Authentication? You can see a trace here: http://pastebin.com/k0QrnL0m

    Read the article

  • Qmail Patching Makes me Nervous

    - by JM4
    We have a system running CentOS 5 with Plesk 8.6 and Qmail running. Our primary domain is hosted through Media Temple. When Plesk and Qmail are hosted on a single Dedicated Virtual server, it reads the primary server IP and domain and reports that when sending emails from the system. Our pages are written in PHP so we are using the mail() function. While our email goes out to everybody, several enterprise email domains reject our email because it shows a different originating IP (our primary server IP and domain) than the domain we list in the 'from' address. This is not modifiable. Every domain we own of course does have its own IP as well underneath our primary server IP. I have seen several places online that provide a patch, specifically one which allows Domain Binding:

        "DomainBindings -- For servers that host multiple domains or have multiple IP addresses assigned to them, it is sometimes useful (or important) to have qmail use a specific IP address for its outgoing mail. By default, qmail uses whatever address the OS chooses for all outbound connections. With this patch, you can specify which address to use. It uses a control file similar to smtproutes to specify the outbound IP address to use, based on the sender's domain (local copy) (pyropus.ca)" Qmail Link

    First off, I do not have netqmail installed so I'll need to find another source, but also I am completely unfamiliar with applying patches to qmail. Will I lose email services if I patch? Is it a simple apply and use process? Will my existing email accounts and data be restored after the patch? I am very, very new to unix/linux so this does make me a bit nervous, but I am the only person who can make the change and it is one our company "HAS" to have. Any ideas?

    Read the article

  • Ubuntu Postfix Gmail SMTP Relay Not Working

    - by Nick DeMayo
    I currently have postfix set up to relay messages from my websites through gmail, and up until recently it was working perfectly. However, within the last week or so (not really sure when) I started getting the below error whenever attempting to send an email:

        Jul 20 07:40:46 localhost postfix/smtp[11958]: connect to smtp.gmail.com[2001:4860:800a::6c]:587: Network is unreachable
        Jul 20 07:40:46 localhost postfix/smtp[11958]: connect to smtp.gmail.com[173.194.76.109]:587: Connection refused
        Jul 20 07:40:46 localhost postfix/smtp[11958]: connect to smtp.gmail.com[173.194.76.108]:587: Connection refused

    Here is my configuration file:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version

        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no

        # appending .domain is the MUA's job.
        append_dot_mydomain = no

        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h

        #readme_directory = no

        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.

        myhostname = [my domain name]
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        #myorigin = /etc/mailname
        mydestination = [my host name], localhost.localdomain, localhost
        relayhost = [smtp.gmail.com]:587
        mynetworks = 127.0.0.0/8
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = loopback-only
        inet_protocols = all

        ##########################################
        ##### non debconf entries start here #####

        ##### client TLS parameters #####
        smtp_tls_loglevel=1
        smtp_tls_security_level=encrypt
        smtp_sasl_auth_enable=yes
        smtp_sasl_password_maps=hash:/etc/postfix/sasl/passwd
        smtp_sasl_security_options = noanonymous

        ##### map username@localhost to [email protected] #####
        smtp_generic_maps=hash:/etc/postfix/generic

    Nothing changed on my server, as far as I know... any ideas what could have caused it to stop working?
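
    The first log line shows Postfix preferring Gmail's IPv6 address and getting "Network is unreachable"; a commonly suggested workaround (an assumption here, not a confirmed diagnosis of this box) is to pin Postfix to IPv4. The IPv4 "Connection refused" lines could instead point at outbound filtering on port 587, which the connectivity check below would reveal:

        # Tell Postfix to use IPv4 only, then reload
        sudo postconf -e 'inet_protocols = ipv4'
        sudo service postfix restart

        # Verify the relevant settings and that the relay is reachable on IPv4
        postconf inet_protocols relayhost
        nc -vz smtp.gmail.com 587      # or: telnet smtp.gmail.com 587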

    Read the article

  • Birt on Tomcat unable to find JARs

    - by LostInTheWoods
    First, my setup:

        BiRT Runtime: 3.7.2
        Ubuntu 10.04
        Tomcat 6
        Sun Java 1.6.0

    I have a jar file I want to deploy onto the Tomcat server so it is usable by the runtime, so I placed the jar file in /var/lib/tomcat6/webapps/birt/WEB-INF/lib. As I understand it, this is the default location for JAR files that are going to be used by a BiRT report. But the jar file is not accessible by the report that is trying to call it. In the BiRT logs I see:

        Error evaluating Javascript expression. Script engine error: ReferenceError: "DynDSinfo" is not defined. (/report/data-sources/oda-data-source[@id="54"]/method[@name="beforeOpen"]#20)
        Script source: /report/data-sources/oda-data-source[@id="54"]/method[@name="beforeOpen"], line: 0, text: __bm_beforeOpen()
        org.eclipse.birt.data.engine.core.DataException: Fail to execute script in function __bm_beforeOpen(). Source:

    "DynDSinfo" is the class I am trying to reference... and now for the kicker: this works fine with Tomcat 6 on Windows 7. The same files in the same places. So is there some additional configuration or some environment variable that needs to be set, or something different on the Linux (Ubuntu) platform? All help or ideas gratefully received, Stephen
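
    A hedged checklist-style sketch for this kind of "class not visible to report scripts" problem on Linux; the jar name below is a placeholder, and whether WEB-INF/lib or the viewer's scriptlib directory is the right drop point depends on how the report references the class:

        JAR=DynDS.jar   # placeholder name for the jar that contains DynDSinfo

        # 1. Case sensitivity bites on Linux but not on Windows:
        #    confirm the class name inside the jar matches the script exactly.
        unzip -l "/var/lib/tomcat6/webapps/birt/WEB-INF/lib/$JAR" | grep -i dyndsinfo

        # 2. Make sure the tomcat6 user can actually read the file.
        ls -l "/var/lib/tomcat6/webapps/birt/WEB-INF/lib/$JAR"
        sudo chown root:tomcat6 "/var/lib/tomcat6/webapps/birt/WEB-INF/lib/$JAR"
        sudo chmod 644 "/var/lib/tomcat6/webapps/birt/WEB-INF/lib/$JAR"

        # 3. Try the viewer's scriptlib directory as well (if present), then restart Tomcat.
        sudo cp "/var/lib/tomcat6/webapps/birt/WEB-INF/lib/$JAR" \
                /var/lib/tomcat6/webapps/birt/scriptlib/ 2>/dev/null || true
        sudo service tomcat6 restart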

    Read the article

  • Xen dom0 reports incorrect amount of RAM with dom0_mem set

    - by xen_amnesiac
    I've done a fair bit of searching about this, but have found nothing that answers my question. I have a system with 6GB of RAM which acts as a Xen server. For reference, it runs Ubuntu 12.04. I've set the kernel parameter dom0_mem=min:512M,max:512M in /etc/default/grub as follows:

        GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=min:512M,max:512M"

    I've tried variations of that, with the same result. My question is this: with the above set, the dom0 reports in all applications a RAM amount of 422M. cat /proc/meminfo gives the following:

        $ cat /proc/meminfo
        MemTotal:         432472 kB
        MemFree:           54144 kB
        Buffers:           17640 kB
        Cached:           220104 kB
        SwapCached:        30172 kB
        Active:           136500 kB
        Inactive:         167780 kB
        Active(anon):       6156 kB
        Inactive(anon):    60516 kB
        Active(file):     130344 kB
        Inactive(file):   107264 kB
        Unevictable:          52 kB
        Mlocked:              52 kB
        SwapTotal:       1794044 kB
        SwapFree:        1682012 kB
        Dirty:                 0 kB
        Writeback:             0 kB
        AnonPages:         39572 kB
        Mapped:             8048 kB
        Shmem:               136 kB
        Slab:              44324 kB
        SReclaimable:      22012 kB
        SUnreclaim:        22312 kB
        KernelStack:        1280 kB
        PageTables:         3840 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:     2010280 kB
        Committed_AS:     329192 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:      313988 kB
        VmallocChunk:   34359417340 kB
        HardwareCorrupted:     0 kB
        AnonHugePages:         0 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       2048 kB
        DirectMap4k:      524696 kB
        DirectMap2M:           0 kB

    top, htop, free -m, and byobu's RAM monitor all report the same amount. At first I thought this was because of the onboard graphics borrowing some memory, but I have now switched to a dedicated GPU and it persists. Is this normal behavior, or has something gone amiss? It's just about 100MB of RAM that's "gone", and I have no idea where it went. I understand that it's normal that not all RAM is available for allocation, but does the system really take an amount this large relative to the RAM available?
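
    Part of the gap is expected: the dom0 kernel subtracts its own image, struct page arrays and other boot-time reservations before populating /proc/meminfo, so MemTotal is always below the allocation Xen grants. A hedged way to compare the hypervisor's view with the kernel's view (the toolstack command depends on the Xen version, xl on 4.x and xm on older releases):

        # What Xen thinks dom0 has been given
        sudo xl list                                    # "Mem" column for Domain-0, in MiB
        sudo xl info | grep -E 'total_memory|free_memory'

        # What the dom0 kernel reserved at boot (printed once, early in dmesg)
        dmesg | grep -iE 'memory.*reserved|^Memory:'

        # What the kernel exposes to userspace
        grep MemTotal /proc/meminfo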

    Read the article

  • Bash Script - Traffic Shaping

    - by Craig-Aaron
    Hey all, I was wondering if you could have a look at my script and help me add a few things to it. How do I get it to find how many active ethernet ports I have, and how do I filter more than one ethernet port? How do I get this to do a range of IP addresses? Once I have a few ethernet ports I need to add traffic control to each one. A sketch of both pieces follows below the script.

        #!/bin/bash

        # Name of the traffic control command.
        TC=/sbin/tc

        # The network interface we're planning on limiting bandwidth.
        IF=eth0             # Network card interface

        # Download limit (in mega bits)
        DNLD=10mbit         # DOWNLOAD Limit

        # Upload limit (in mega bits)
        UPLD=1mbit          # UPLOAD Limit

        # IP address range of the machine we are controlling
        IP=192.168.0.1      # Host IP

        # Filter options for limiting the intended interface.
        U32="$TC filter add dev $IF protocol ip parent 1:0 prio 1 u32"

        start() {
            # Hierarchical Token Bucket (HTB) to shape bandwidth
            $TC qdisc add dev $IF root handle 1: htb default 30    # Creates the root scheduler
            $TC class add dev $IF parent 1: classid 1:1 htb rate $DNLD    # Child scheduler to shape download
            $TC class add dev $IF parent 1: classid 1:2 htb rate $UPLD    # Child scheduler to shape upload
            $U32 match ip dst $IP/24 flowid 1:1    # Filter to match the interface, limit download speed
            $U32 match ip src $IP/24 flowid 1:2    # Filter to match the interface, limit upload speed
        }

        stop() {
            # Stop the bandwidth shaping.
            $TC qdisc del dev $IF root
        }

        restart() {
            # Self-explanatory.
            stop
            sleep 1
            start
        }

        show() {
            # Display traffic control status.
            $TC -s qdisc ls dev $IF
        }

        case "$1" in
        start)
            echo -n "Starting bandwidth shaping: "
            start
            echo "done"
            ;;
        stop)
            echo -n "Stopping bandwidth shaping: "
            stop
            echo "done"
            ;;
        restart)
            echo -n "Restarting bandwidth shaping: "
            restart
            echo "done"
            ;;
        show)
            echo "Bandwidth shaping status for $IF:"
            show
            echo ""
            ;;
        *)
            pwd=$(pwd)
            echo "Usage: tc.bash {start|stop|restart|show}"
            ;;
        esac

        exit 0

    Thanks
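
    A hedged sketch of the two pieces being asked about: enumerating interfaces that are actually up, and covering a range of addresses rather than a whole /24. The "eth" name pattern and the 192.168.0.100-199 range are placeholders chosen for illustration:

        # List ethernet-style interfaces that are currently up (sketch, not part of the script above)
        ACTIVE_IFS=$(ip -o link show up | awk -F': ' '$2 ~ /^eth/ {print $2}')
        echo "Active interfaces: $ACTIVE_IFS"

        # Apply the same shaping to each active interface
        for IF in $ACTIVE_IFS; do
            tc qdisc add dev "$IF" root handle 1: htb default 30
            tc class add dev "$IF" parent 1: classid 1:1 htb rate 10mbit
            # Cover a contiguous range such as 192.168.0.100-192.168.0.199 with one filter per host
            for HOST in $(seq 100 199); do
                tc filter add dev "$IF" protocol ip parent 1:0 prio 1 u32 \
                    match ip dst 192.168.0.$HOST flowid 1:1
            done
        done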

    Read the article

  • PHP Startup: Unable to load dynamic library 'C:"\php\php_mysql.dll' - The specified module could not be loaded

    - by Tiny
    I'm trying to upgrade PHP from 5.4.3 to 5.4.14 in WampServer 2.2e. I have downloaded php-5.4.14-Win32-VC9-x86 (thread safe) and extracted it under C:\wamp\bin\php. I copied wampserver.conf from C:\wamp\bin\php\php5.4.3 to C:\wamp\bin\php\php5.4.14 and renamed php.ini-development to phpForApache.ini. The port number of the wamp server has been changed in the httpd.conf file from its default 80 to 8087. This is mentioned here, though that is about upgrading from PHP 5.3.5 to 5.4.0. After this, restarting the wamp server and its services has been done all over again, and both versions appear in the php-versions menu (which is opened when the icon of the server is clicked). But when I attempt to enable a library like php_mysql or php_mysqli, a warning message box appears:

        PHP Startup: Unable to load dynamic library 'C:"\php\php_mysql.dll' - The specified module could not be loaded.

    I have also tried removing the semicolon before them in the php.ini file, but to no avail. I'm running Microsoft Windows XP Professional Version 2002, Service Pack 3. Where might the problem be?

    EDIT: I have changed extension_dir from C:\php to c:\wamp\bin\php\php5.4.14\ext\ in php.ini as the answer below indicates, and the library is now loaded correctly, but it says

        1045 - Access denied for user 'root'@'localhost' (using password: YES)

    though the user name and the password are the same as they are in MySQL in the config.inc.php file under phpmyadmin. I have also tried to restart the MySQL56 service from Control Panel - Services (Local), but it keeps giving the same error. Does someone know why this happens?

    Read the article

  • Limiting bandwidth on internal interface on Linux gateway

    - by Jack Scott
    I am responsible for a Linux-based (it runs Debian) branch office router that takes a single high-speed Internet connection (eth2) and turns it into about 20 internal networks, each with a separate subnet (192.168.1.0/24 to 192.168.20.0/24) and a separate VLAN (eth0.101 to eth0.120). I am trying to restrict bandwidth on one of the internal subnets that is consistently chewing up more bandwidth than it should. What is the best way to do this? My first try at this was with wondershaper, which I heard about on SuperUser here. Unfortunately, this is useful for exactly the opposite situation that I have... it's useful on the client side, not on the Internet side. My second attempt was using the script found at http://www.topwebhosts.org/tools/traffic-control.php, which I modified so the active part is:

        tc qdisc add dev eth0.113 root handle 13: htb default 100
        tc class add dev eth0.113 parent 13: classid 13:1 htb rate 3mbps
        tc class add dev eth0.113 parent 13: classid 13:2 htb rate 3mbps
        tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 match ip dst 192.168.13.0/24 flowid 13:1
        tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 match ip src 192.168.13.0/24 flowid 13:2

    What I want this to do is restrict the bandwidth on VLAN 113 (subnet 192.168.13.0/24) to 3mbit up and 3mbit down. Unfortunately, it seems to have no effect at all! I'm very inexperienced with the tc command, so any help getting this working would be appreciated.
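
    Two hedged observations rather than a definitive fix: on this router, traffic leaving eth0.113 toward the clients is that subnet's download, so only the dst-match class can ever fire there (the subnet's upload egresses on eth2), and tc reads "3mbps" as 3 megabytes per second while "3mbit" is 3 megabits per second. A sketch of how to check whether the classes see any traffic, plus shaping the upload on the WAN side:

        # Do the classes/filters actually see traffic? Non-zero "Sent" counters mean they match.
        tc -s class show dev eth0.113
        tc -s filter show dev eth0.113

        # Sketch: shape the subnet's DOWNLOAD on the LAN-facing VLAN (egress toward clients)
        tc qdisc del dev eth0.113 root 2>/dev/null    # clear any previous setup first
        tc qdisc  add dev eth0.113 root handle 13: htb default 20
        tc class  add dev eth0.113 parent 13: classid 13:1 htb rate 3mbit ceil 3mbit
        tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 \
            match ip dst 192.168.13.0/24 flowid 13:1

        # Sketch: shape the subnet's UPLOAD where it leaves the router (egress on the WAN side)
        tc qdisc  add dev eth2 root handle 1: htb
        tc class  add dev eth2 parent 1: classid 1:13 htb rate 3mbit ceil 3mbit
        tc filter add dev eth2 protocol ip parent 1:0 prio 1 u32 \
            match ip src 192.168.13.0/24 flowid 1:13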

    Read the article

  • Black screen appears when booting new install of Ubuntu 11.10 on my desktop, cannot access Grub menu to fix

    - by izn
    I installed 11.10 on my desktop PC but get a black screen after the BIOS screen when I try to boot it. I was able to run 10.04.04 on my hard drive before installing 11.10, and I am also able to use 11.10 from my USB pendrive and CD-ROM. I've tried unplugging all USB devices before booting and also upgrading from 11.10 to 11.10. Holding the Shift key from the BIOS screen doesn't allow me to access the GRUB menu to try:

      1. Highlight the first entry, press "e" to edit it.
      2. Navigate to the words "quiet splash", delete them and type "nomodeset" in their place (without quotes).
      3. Press Ctrl + X to continue boot.
      4. Once on the desktop, go to System > Administration > Additional Drivers and activate the recommended drivers.

    So, running 11.10 from my pendrive, I tried editing /etc/default/grub, commenting out the GRUB_HIDDEN_TIMEOUT setting by putting a '#' in front of it to display the grub menu, and setting GRUB_TIMEOUT to a value greater than or equal to 1, e.g. GRUB_TIMEOUT=10. However, when I run sudo update-grub, I get:

        /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?)

    I get the same error with update-grub after:

        sudo mount /dev/sda1 /mnt

    and after:

        sudo grub-install --root-directory=/mnt /dev/sda
        reboot
        sudo update-grub

    Other suggestions to fix the update-grub problem: open Synaptic, purge all the related grub installed packages, reinstall grub-pc, and finally run sudo update-grub. Or use Grub Customizer (http://ubuntuforums.org/showthread.php?t=1195275). What would be the best way to approach this? I'm concerned about purging "all the related grub installed packages", but if it's true some files are corrupted this would seem necessary. Also, was I executing the correct commands, i.e. with mount and grub-install, before running update-grub?
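
    update-grub run from the live USB only acts on the installed system if that system's filesystems are mounted and chrooted into first; a hedged sketch of the usual sequence, assuming the installation lives on /dev/sda1 as in the question (adjust if /boot or the root partition differ):

        # From the live 11.10 session
        sudo mount /dev/sda1 /mnt
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt

        # Now inside the installed system: edit /etc/default/grub, then
        update-grub
        grub-install /dev/sda
        exit

        sudo umount /mnt/sys /mnt/proc /mnt/dev /mnt
        sudo reboot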

    Read the article

  • Configuring vsftpd with nginx on Ubuntu 12.04 LTS

    - by arby
    I've attempted to configure an nginx / vsftpd server on Ubuntu 12.04 LTS (via Amazon EC2) a couple of times now, but I seem to keep making a mistake along the way. Currently, when I try to connect to my FTP server it takes a minute or so before it connects. Then when I issue a command, they all time out with an "operation failed" error. Aside from these issues, I'm not completely confident about the file ownership & permissions or the configuration / settings. So, I think it's best if I just re-install and re-configure correctly. I believe the nginx installation comes with a default user of www-data:www-data and web root directory ownership by root:root. vsftpd, however, needs to have a user created with the same group as the nginx user (www-data), and the same home directory as the nginx server (/usr/share/nginx/www), with g+w chmod permissions granted on that directory. The vsftpd.conf file should disable anonymous logins and enable local logins, file writing, and chroot local users. In my previous config, I had /bin/false set for the ftp user's shell and pam_shells.so disabled. I also had local_umask set to 0027. So, starting with a fresh EC2 instance, I've got:

        sudo apt-get install vsftpd
        sudo apt-get install nginx

    For the firewall I issued the command (not sure if necessary):

        sudo ufw allow ftp

    Which commands / config are recommended from here? I only need one ftp user that I can use to log in with my ftp client to modify the single nginx web domain, which will need php & sql for WordPress.
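
    The "connects, then every command times out" pattern is typical of passive-mode FTP when only port 21 is reachable; on EC2 both the instance firewall and the security group need the passive data ports open, and vsftpd has to advertise the instance's public address. A hedged sketch - the 40000-40100 range and the address are placeholders:

        # Pin vsftpd's passive data ports and advertise the public IP (placeholder values)
        echo "pasv_enable=YES"           | sudo tee -a /etc/vsftpd.conf
        echo "pasv_min_port=40000"       | sudo tee -a /etc/vsftpd.conf
        echo "pasv_max_port=40100"       | sudo tee -a /etc/vsftpd.conf
        echo "pasv_address=203.0.113.10" | sudo tee -a /etc/vsftpd.conf
        sudo service vsftpd restart

        # Open the control and data ports locally; mirror the same rules
        # in the EC2 security group via the AWS console.
        sudo ufw allow 21/tcp
        sudo ufw allow 40000:40100/tcp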

    Read the article

  • mcelog fails to start on PUIAS 6.4 AMD hardware

    - by Predrag Punosevac
    Folks, I am a total Linux n00b. I am trying to deploy mcelog on one of my computing nodes running PUIAS 6.4 (i86_64), a free clone of Red Hat 6.4, on AMD hardware:

        [root@lov3 edac]# uname -a
        Linux lov3.mylab.org 2.6.32-358.18.1.el6.x86_64 #1 SMP Tue Aug 27 22:40:32 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux

        [root@lov3 mcelog]# lscpu
        Architecture:          x86_64
        CPU op-mode(s):        32-bit, 64-bit
        Byte Order:            Little Endian
        CPU(s):                64
        On-line CPU(s) list:   0-63
        Thread(s) per core:    2
        Core(s) per socket:    8
        Socket(s):             4
        NUMA node(s):          8
        Vendor ID:             AuthenticAMD
        CPU family:            21
        Model:                 2
        Stepping:              0
        CPU MHz:               1400.000
        BogoMIPS:              4999.30
        Virtualization:        AMD-V
        L1d cache:             16K
        L1i cache:             64K
        L2 cache:              2048K
        L3 cache:              6144K
        NUMA node0 CPU(s):     0-7
        NUMA node1 CPU(s):     8-15
        NUMA node2 CPU(s):     16-23
        NUMA node3 CPU(s):     24-31
        NUMA node4 CPU(s):     32-39
        NUMA node5 CPU(s):     40-47
        NUMA node6 CPU(s):     48-55
        NUMA node7 CPU(s):     56-63

    My mcelog.conf file is more or less default, apart from the fact that I would like to run mcelog as a daemon and to log errors. When I start mcelog:

        [root@lov3 mcelog]# mcelog --config-file mcelog.conf
        AMD Processor family 21: Please load edac_mce_amd module.

    However, the module is present:

        [root@lov3 mcelog]# locate edac_mce_amd.ko
        /lib/modules/2.6.32-358.18.1.el6.x86_64/kernel/drivers/edac/edac_mce_amd.ko
        /lib/modules/2.6.32-358.el6.x86_64/kernel/drivers/edac/edac_mce_amd.ko

    and loaded:

        [root@lov3 edac]# lsmod | grep mce
        edac_mce_amd           14705  1 amd64_edac_mod

    Is there anything that I can do to get mcelog working? The only reference I found is this thread: http://lists.centos.org/pipermail/centos/2012-November/130226.html
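
    That message is mcelog declining to decode machine checks for this CPU generation itself; on family 21 (Bulldozer-class) Opterons the kernel EDAC drivers do the decoding. A hedged check that the EDAC stack is active and reporting, instead of relying on the mcelog daemon:

        # Is EDAC loaded and are memory controllers registered?
        lsmod | grep edac
        ls /sys/devices/system/edac/mc/

        # Corrected / uncorrected error counters per memory controller
        grep . /sys/devices/system/edac/mc/mc*/ce_count /sys/devices/system/edac/mc/mc*/ue_count

        # Decoded machine-check events end up in the kernel log on this hardware
        dmesg | grep -iE 'mce|\[Hardware Error\]'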

    Read the article

  • Weird permission issue with POSIX ACLs, NFS v3 on Linux

    - by jon
    I have two Linux systems, both running Debian Squeeze. Versions of (I think) the stuff involved are:

        kernel: 2.6.32-5-xen-amd64
        ii  nfs-kernel-server   1:1.2.2-4squeeze2   support for NFS kernel server
        ii  libnfsidmap2        0.23-2              An nfs idmapping library
        ii  nfs-common          1:1.2.2-4squeeze2   NFS support files common to client and server
        ii  portmap             6.0.0-2             RPC port mapper

    (The client doesn't have nfs-kernel-server involved.) I have a directory with ACLs:

        # file: dirname
        # owner: jon
        # group: foogroup
        # flags: -s-
        user::rwx
        user:www-data:rwx
        group::r-x
        group:foogroup:rwx
        mask::rwx
        other::r-x
        default:...

    There are two users, neither one of which owns the directory:

        uid=3001(jake) gid=3001(jake) groups=3001(jake),104(wheel),3999(foogroup)
        uid=3005(nic) gid=3005(nic) groups=3005(nic),3999(foogroup)

    The jake user can create files in the directory without issues. The nic user can't. All UIDs/GIDs are the same on the client and server. I've verified (by packet sniffing) that the uids/gids sent via AUTH_UNIX are correct -- uid=gid=3005, auxiliary gids=3005,3999 -- and that the server replies with NFS3ERR_ACCESS, which the kernel on the client maps to EACCES (Permission denied). Can anyone help me here?
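
    Not an answer, just a hedged way to narrow down whether the denial comes from local ACL evaluation on the server or from the NFS layer; the export path and the device below are placeholders:

        # On the server, test as the affected user directly against the exported path
        sudo -u nic touch /srv/export/dirname/testfile     # placeholder path

        # Confirm the effective entries and mask the server actually sees
        getfacl /srv/export/dirname

        # Check that the underlying filesystem is mounted with ACL support on the server
        mount | grep -w acl
        sudo tune2fs -l /dev/sdXN | grep 'Default mount options'   # placeholder device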

    Read the article

  • dnsmasq (as DHCP server) isn't working in KVM+libvirt environment

    - by user2681054
    I'm using dnsmasq as the DHCP server in a VM environment, but it isn't working. I disabled the basic DHCP feature in libvirt:

        <network>
          <name>default</name>
          <uuid>84da0678-e56d-8fc2-6f8b-e8eba784849a</uuid>
          <forward mode='nat'/>
          <bridge name='virbr0' stp='on' delay='0' />
          <mac address='52:54:00:7B:64:0B'/>
          <ip address='192.168.122.1' netmask='255.255.255.0'>
          </ip>
        </network>

    As you can see, I removed this tag:

        <dhcp>
          <range start='192.168.122.2' end='192.168.122.254' />
        </dhcp>

    And I installed dnsmasq on the host machine. During installation of dnsmasq, there was an error message about 127.0.0.1 (dnsmasq: failed to create listening socket for 127.0.0.1). So I commented out the listen-address option, and added dhcp-range/dhcp-option options, like this:

        listen-address=127.0.0.1
        dhcp-range=192.168.122.100,192.168.122.200,24h
        dhcp-option=option:router,192.168.122.1

    That's all I've done with dnsmasq. But the guest VM couldn't get an IP address from the host which is running dnsmasq. After that, I installed isc-dhcp-server instead of dnsmasq... and it works! But I still want to use dnsmasq instead of isc-dhcp-server. Are there any helping hands? I disabled the host machine's firewall. I've heard that libvirt basically uses dnsmasq. Is this the reason why I couldn't use dnsmasq in the libvirt environment?
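
    libvirt spawns its own dnsmasq bound to virbr0 (for DNS even after the dhcp element is removed), so a second, system-wide dnsmasq collides with it unless each instance is pinned to its own interfaces. A hedged sketch of the usual coexistence settings for the system instance; whether the libvirt-spawned instance needs adjusting too depends on the libvirt version:

        # /etc/dnsmasq.conf additions for the system instance (sketch)
        echo "interface=virbr0" | sudo tee -a /etc/dnsmasq.conf
        echo "bind-interfaces"  | sudo tee -a /etc/dnsmasq.conf
        echo "dhcp-range=192.168.122.100,192.168.122.200,24h" | sudo tee -a /etc/dnsmasq.conf
        echo "dhcp-option=option:router,192.168.122.1"        | sudo tee -a /etc/dnsmasq.conf
        sudo service dnsmasq restart

        # Check what already owns the DHCP/DNS ports on that bridge
        sudo netstat -lnup | grep ':67 '
        ps aux | grep [d]nsmasq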

    Read the article

  • D-LINK 2450U DSL router: Port forwarding forwards to the modem itself, not the specified IP

    - by axk
    I found a similar question but it has no satisfactory answers. I have a D-LINK 2540U DSL router. It has a basic port forwarding configuration (under DNS - Virtual Servers) in the administration panel where you specify: external port range, protocol, internal port range, and server IP address, and it is supposed to forward that port to that IP address. When I first set it up for a RealVNC connection it worked fine, just as I expected. Then I added a DynDNS configuration entry in the router's 'Dynamic DNS' section and added an additional SSH (22) forwarding rule. The SSH forwarding also worked fine (now with the dynamic hostname, but I suppose it doesn't make any difference as far as SSH is concerned). Then I removed the SSH rule, and after that the VNC forwarding stopped working, with the VNC client failing to connect (I tried to connect with telnet and it also failed, so it wasn't a VNC problem). After adding a rule for port 80 it turned out it would forward on port 80, though not to the specified server IP but to the modem itself. At least that is what it looks like, because it gives me the administration panel when I connect to my external IP (both using a browser and plain telnet, in which case I can see that it is mini_httpd sitting on the port, which is obviously the modem's administration panel). Has anybody encountered a similar problem with port forwarding? I have tried to do a reset through the administration panel and to restore a backup of the settings made before I started playing with port forwarding, but it didn't help. Should I do a 'hard' reset with the button on the modem? Is it any different from the administration panel's reset (Restore default)?

    Read the article

  • Cannot upload files bigger than 8GB to Amazon S3 by multi-part upload due to broken pipe

    - by spencerho
    I implemented S3 multi-part upload, both the high-level and low-level versions, based on the sample code from http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?HLuploadFileJava.html and http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?llJavaUploadFile.html. When I uploaded files of size less than 4 GB, the upload processes completed without any problem. When I uploaded a file of size 13 GB, the code started to show IO exceptions, broken pipes. After retries, it still failed. Here is the way to repeat the scenario:

      1. Take the 1.1.7.1 release.
      2. Create a new bucket in the US standard region.
      3. Create a large EC2 instance as the client to upload the file.
      4. Create a file of 13 GB in size on the EC2 instance.
      5. Run the sample code from either the high-level or low-level API S3 documentation page from the EC2 instance.
      6. Test any of the three part sizes: the default part size (5 MB), or set the part size to 100,000,000 or 200,000,000 bytes.

    So far the problem shows up consistently. I attached here a tcpdump file for you to compare. In there, the host on the S3 side kept resetting the socket.

    Read the article
