Search Results

Search found 15089 results on 604 pages for 'jeff home'.

Page 100/604

  • Tuning Linux IP routing parameters -- secret_interval and tcp_mem

    - by Jeff Atwood
    We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this: Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory Which, per this link, apparently has to do with low default settings for net.ipv4.tcp_mem. So we increased them by 4x from their defaults (this is Ubuntu Server, not sure if the Linux flavor matters): current values are: 45984 61312 91968 new values are: 183936 245248 367872 After that, we started seeing a bizarre error message: Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long! Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval! Shh.. it's a secret!! This apparently has to do with /proc/sys/net/ipv4/route/secret_interval which defaults to 600 and controls periodic flushing of the route cache The secret_interval instructs the kernel how often to blow away ALL route hash entries regardless of how new/old they are. In our environment this is generally bad. The CPU will be busy rebuilding thousands of entries per second every time the cache is cleared. However we set this to run once a day to keep memory leaks at bay (though we've never had one). While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals, rather than simply pushing old values out of the route cache faster. After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity which seems to be a better option for keeping the route table size in check: gc_elasticity can best be described as the average bucket depth the kernel will accept before it starts expiring route hash entries. This will help maintain the upper limit of active routes. We adjusted elasticity from 8 to 4, in the hopes of the route cache pruning itself more aggressively. The secret_interval does not feel correct to us. But there are a bunch of settings and it's unclear which are really the right way to go here. /proc/sys/net/ipv4/route/gc_elasticity (8) /proc/sys/net/ipv4/route/gc_interval (60) /proc/sys/net/ipv4/route/gc_min_interval (0) /proc/sys/net/ipv4/route/gc_timeout (300) /proc/sys/net/ipv4/route/secret_interval (600) /proc/sys/net/ipv4/route/gc_thresh (?) rhash_entries (kernel parameter, default unknown?) We don't want to make the Linux routing worse, so we're kind of afraid to mess with some of these settings. Can anyone advise which routing parameters are best to tune, for a high traffic HAProxy instance?
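
    A hedged sketch of how the gc_elasticity change discussed above could be applied on the fly and its effect watched before making anything permanent; the value 4 is simply the one from the question, not a recommendation, and suitable numbers depend on traffic:

        # apply the elasticity change at runtime only (as root)
        sysctl -w net.ipv4.route.gc_elasticity=4
        # watch how large the route cache grows and how it behaves under load
        watch -n1 'ip route show cache | wc -l'
        cat /proc/net/stat/rt_cache
        # persist the setting once a value has been settled on
        echo "net.ipv4.route.gc_elasticity = 4" >> /etc/sysctl.conf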

    Read the article

  • /etc/environment not being read into PATH variable

    - by Dan
    In Ubuntu the path variable is stored in /etc/environment. This is mine (I've made no changes to it, this is the system default): $ cat /etc/environment PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games" but when I examine my PATH variable: $ echo $PATH /home/dan/bin:/home/dan/bin:/bin:/usr/bin:/usr/local/bin:/usr/bin/X11 You'll notice /usr/games is missing (it was there up until a few days ago). My /etc/profile makes no mention of PATH. My ~/.profile is the default and only has: if [ -d "$HOME/bin" ] ; then PATH="$HOME/bin:$PATH" fi This only happens in gnome, not in tty1-6. This is missing from both gnome terminal and when I try to call applications from the applications dropdown. Anyone know what could be causing this? Thanks.
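
    A hedged way to narrow down which startup file is overriding PATH in the GNOME session (the duplicate /home/dan/bin entry above suggests something is rebuilding PATH after /etc/environment is read); the file list is a best guess of the usual suspects on Ubuntu:

        # look for anything that sets or rebuilds PATH during login/session startup
        grep -n "PATH" /etc/environment /etc/profile /etc/profile.d/*.sh \
            ~/.profile ~/.bashrc ~/.gnomerc ~/.xsessionrc 2>/dev/null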

    Read the article

  • Default Document not posting for IIS 7

    - by Nikshep
    I am using URL routing in ASP.NET 4.0, and the default page for the site is /home, where users can log in to the app. So when users type my site's URL, i.e. www.domain.com, the default page configuration I have redirects them to my home.aspx page, which is mapped in my global.asax as /home. Now all the login requests, i.e. POST requests coming from www.domain.com, are failing; no events are fired on the server. Whereas if I try www.domain.com/home, things start working and I am able to log on. I read about a similar issue but am still confused about the solution: http://forums.iis.net/t/1164877.aspx. This used to work fine on IIS 6, but on IIS 7 this scenario started happening. Am I missing some configuration? Please help.
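
    One commonly suggested fix for routed, extensionless URLs that fail at the site root on IIS 7 in integrated pipeline mode is to let the managed modules (including routing) run for every request. A hedged sketch of that web.config change, which may or may not be the asker's actual problem:

        <!-- web.config: run all managed modules for every request so routing sees "/" -->
        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>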

    Read the article

  • Silverlight Cream for June 17, 2010 -- #885

    - by Dave Campbell
    In this Issue: Zoltan Arvai, Antoni Dol, Jeff Prosise, David Anson, and John Papa. Shoutouts: Rob Davis has a World Cup Football Stadium tour in Silverlight, Azure, and Bing Maps up: The World Cup Map... cruise around this... tons of features. The Silverlight Team Blog reports that NBC sports is streaming the US Open in Silverlight Adam Kinney announced Expression Studio 4 Launch keynote videos are available From SilverlightCream.com: Data Driven Applications with MVVM Part III: Validation, Bringing the UI Closer Zoltan Arvai's 3rd (and final) part of the Data-Driven MVVM apps is up at SilverlightShow. In this final section he is focusing on validation, and discussion of closer integration to the view. Focus on FocusVisualElement in Silverlight buttons Antoni Dol has a cool post up about the FocusVisualElement, and uses a button to demonstrate how it can be used. Dynamic XAP Discovery with Silverlight MEF Jeff Prosise is discussing Silverlight and MEF ... but better than the normal loading XAP files ... he's doing dynamic discovery of XAP files ... and makes it look easy! Updated analysis of two ways to create a full-size Popup in Silverlight David Anson revisits a prior post with an eye toward Silverlight 4. The feature he's discussing is that you can now hook the Resized event without having browser zoom disabled... and he demonstrates it's use in the code from the old post. Silverlight as a Transmedia Platform (Silverlight TV #33) Jesse Liberty joins John Papa this week with Silverlight TV #33, discussing Transmedia and Silverlight as a Transmedia Platform. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Can't upload project to PPA using Quickly

    - by RobinJ
    I can't get Quickly to upload my project into my PPA. I've set up my PGP key and used it so sign the code of conduct, and the PPA exists. I don't know what other usefull information I can supply. robin@RobinJ:~/Ubuntu One/Python/gtkreddit$ quickly share --ppa robinj/gtkredditGet Launchpad Settings Launchpad connection is ok gpg: WARNING: unsafe permissions on configuration file `/home/robin/.gnupg/gpg.conf' gpg: WARNING: unsafe enclosing directory permissions on configuration file `/home/robin/.gnupg/gpg.conf' gpg: WARNING: unsafe permissions on configuration file `/home/robin/.gnupg/gpg.conf' gpg: WARNING: unsafe enclosing directory permissions on configuration file `/home/robin/.gnupg/gpg.conf' Traceback (most recent call last): File "/usr/share/quickly/templates/ubuntu-application/share.py", line 138, in <module> license.licensing() File "/usr/share/quickly/templates/ubuntu-application/license.py", line 284, in licensing {'translatable': 'yes'}) File "/usr/share/quickly/templates/ubuntu-application/internal/quicklyutils.py", line 166, in change_xml_elem xml_tree.find(parent_node).insert(0, new_node) AttributeError: 'NoneType' object has no attribute 'insert' ERROR: share command failed Aborting I reported this as a bug on Launchpad, because I assume that it is a bug. If you know a quick workaround, please let me know. https://bugs.launchpad.net/ubuntu/+source/quickly/+bug/1018138
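
    The gpg warnings, at least, are easy to silence: ~/.gnupg and the files in it should be readable only by their owner. A small sketch using the paths from the warnings (this won't necessarily cure the later licensing traceback, which does look like a separate Quickly bug):

        chmod 700 /home/robin/.gnupg
        chmod 600 /home/robin/.gnupg/gpg.conf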

    Read the article

  • sudo ./starling start works well but sudo service starling start fails

    - by Keating Wang
    sudo ./starling start works well but sudo service starling start fails $ sudo ./starling start * Starting Starling Server... [ OK ] $ sudo ./starling stop * Stop Starling Server... [ OK ] $ sudo service starling stop * Starting Starling Server... /home/keating/.rvm/rubies/ruby-1.9.2-p290/lib/ruby/site_ruby/1.9.1/rubygems/dependency.rb:247:in `to_specs': Could not find starling (>= 0) amongst [minitest-1.6.0, rake-0.8.7, rdoc-2.5.8] (Gem::LoadError) from /home/keating/.rvm/rubies/ruby-1.9.2-p290/lib/ruby/site_ruby/1.9.1/rubygems/dependency.rb:256:in `to_spec' from /home/keating/.rvm/rubies/ruby-1.9.2-p290/lib/ruby/site_ruby/1.9.1/rubygems.rb:1229:in `gem' from /home/keating/.rvm/gems/ruby-1.9.2-p290/bin/starling:18:in `<main>' The error above is 'cannot find gem starling' Following the starling file(located in /etc/init.d, rwxrwxrwx): set -e LOGFILE=/var/log/starling/starling.log SPOOLDIR=/var/spool/starling PORT=22122 LISTEN=127.0.0.1 PIDFILE=/var/run/starling.pid NAME=starling DESC="Starling" INSTALL_DIR=/home/keating/.rvm/gems/ruby-1.9.2-p290/bin/ DAEMON=$INSTALL_DIR/$NAME SCRIPTNAME=/etc/init.d/$NAME OPTS="-d" . /lib/lsb/init-functions d_start() { log_begin_msg "Starting Starling Server..." start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- $OPTS || log_end_msg 1 log_end_msg 0 } d_stop() { log_begin_msg "Stopping Starling Server..." start-stop-daemon --stop --quiet --pidfile $PIDFILE || log_end_msg 1 log_end_msg 0 } case "$1" in start) d_start ;; stop) d_stop ;; restart|force-reload|reload) d_stop sleep 2 d_start ;; *) echo "Usage: $SCRIPTNAME {start|stop|restart|force-reload}" exit 3 ;; esac exit 0
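
    A plausible explanation is the environment: "sudo service" runs the init script with a mostly empty environment, so the RVM-installed Ruby never sees GEM_HOME/GEM_PATH and rubygems cannot find the starling gem, whereas running ./starling from a shell where RVM is loaded works. One hedged sketch is to hard-code the RVM paths near the top of the init script (paths taken from the question; the @global gemset entry is an assumption); pointing DAEMON at an rvm-generated wrapper script is another option:

        # add near the top of /etc/init.d/starling, before start-stop-daemon is called
        GEM_HOME=/home/keating/.rvm/gems/ruby-1.9.2-p290
        GEM_PATH=$GEM_HOME:/home/keating/.rvm/gems/ruby-1.9.2-p290@global
        PATH=/home/keating/.rvm/rubies/ruby-1.9.2-p290/bin:$GEM_HOME/bin:$PATH
        export GEM_HOME GEM_PATH PATH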

    Read the article

  • Resolving DNS queries for two disconnected, private, networks

    - by Mikeage
    I'm trying to setup two PCs (one Windows, one Linux, but my understanding is that this problem is more DNS and less OS) as follows: Home network: 192.168.1.0/24 VPN (via OpenVPN server not within the home network): 192.168.2.0/24 . I would like a PC on both networks to be able to access three different types of site: Internet addresses Addresses on the home network Addresses on the vpn However, I'm not sure how/which DNS servers to use. If I prioritize my home DNS server, I can resolve (1) and (2), but not (3). If I prioritize my VPN DNS server, I can't resolve addresses of type (2). Of course, looking up addresses via nslookup and explicitly setting the correct server works, so I know my local DNS servers are OK. Is there any way I can set up my PCs to fallback on the second DNS server if there is no response? Alternatively, is there any way I can tell different queries to go to different servers [maybe by setting up different subdomains; foo.local.something vs. bar.vpn.something]? Thanks
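
    One common pattern for this is a small local forwarder that routes queries per domain: dnsmasq, for example, can send the home zone to the home DNS server, the VPN zone to the VPN DNS server, and everything else upstream, with the PC's resolver pointed at 127.0.0.1. A hedged sketch with the zone names and addresses invented for illustration:

        # /etc/dnsmasq.conf
        server=/home.lan/192.168.1.1     # home-network names -> home DNS server
        server=/corp.vpn/192.168.2.1     # VPN names -> DNS server reachable over the VPN
        server=8.8.8.8                   # everything else -> an upstream resolver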

    Read the article

  • Force database read to master if slave data is stale

    - by Jeff Storey
    I previously asked a specific question about this database replication for new user signup to which I got an answer, but I want to ask this in the more general sense. I have a database setup in which I am using a master/slave combination. I am using the slaves for load balancing (the data itself is partitioned/sharded across multiple databases, but each database has X slaves for load balancing). Let's say I write some data to the master. Now I do a subsequent read which hits a slave, but the slave has not yet caught up to the master. Is there a way (which can be done quickly since it will happen frequently) to determine if the data is stale in the slave so I can then route to the master? In my previous question, it was suggested to do simultaneous writes to the cache and the database. This solution seems practical, but there is still a chance that the data may have been removed from the cache but not yet updated in the slave. A possible solution is to ensure the cache is big enough (based on the typical application load) so the data will not be evicted within the time frame it takes to replicate the data. This seems like it may be feasible. Can anyone provide additional insight into this question? Thanks!
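
    MySQL itself exposes two cheap checks that a routing layer could use, sketched below with placeholder host names and binlog coordinates: Seconds_Behind_Master gives a rough staleness estimate, and MASTER_POS_WAIT() blocks (up to a timeout) until the slave has applied a known master position, returning -1 on timeout so the read can be sent to the master instead.

        # rough replication-lag estimate on a slave
        mysql -h slave1 -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master
        # wait at most 1 second for the slave to reach a known master binlog position
        mysql -h slave1 -e "SELECT MASTER_POS_WAIT('mysql-bin.000042', 107, 1)"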

    Read the article

  • VMWare Fusion - Cannot communicate between Host Mac and Virtual Mac running on same machine [migrated]

    - by Jeff Gold
    I'm running a "virtual" Mac OS machine on a Mac running VMWare Fusion. The Virtual Mac is setup with Bridged Networking, and has its own separate IP address. The outside world can connect to either the Mac itself or the virtual Mac via their respective IP addresses just fine, this works great! The problem... the Mac itself cannot connect to the virtual Mac's IP address, nor can the virtual Mac connect to the real Mac's IP address. Some things I've read mention something about enabling VMCI, but I have no idea how to do this, or if this is even the correct solution. Any suggestions?

    Read the article

  • How can MySQL be in GDAL's dependencies when it's already installed?

    - by Julien Fouilhé
    I'm trying to install GDAL on my CentOS 64 bits server to be able to make some GIS operations. I tried a simple: # yum install gdal First, the GDAL version is 1.4 (the last released one is 1.9) Then, I see in the dependencies list mysql. But I have mysql already installed, from another repository (remi), with a newer version than the one suggested by yum... Is it a problem of architecture (yum suggests i386)? I risked a yes, but still impossible to install it! Here's the error I have. Transaction Check Error: package mysql-5.5.28-1.el5.remi.x86_64 (which is newer than mysql-5.0.95-1.el5_7.1.i386) is already installed Then, I tried to install it from sources with last version available (1.9.2). I downloaded the GDAL tar.gz, extracted the files and installed it like following: # tar -xzf gdal-1.9.2.tar.gz # ./configure --with-static-proj4=/usr/local/lib --with-threads --with-libtiff=internal --with-geotiff=internal --with-jpeg=internal --with-gif=internal --with-png=internal --with-libz=internal # make # make install But during the make, I have some strange errors displaying, about RegisterOGRMySQL, that I can't understand: chmod a+x gdal-config /bin/sh /home/benjamin/gdal-1.9.2/libtool --mode=link g++ gdalinfo.lo /home/benjamin/gdal-1.9.2/libgdal.la -o gdalinfo libtool: link: g++ .libs/gdalinfo.o -o .libs/gdalinfo /home/benjamin/gdal-1.9.2/.libs/libgdal.so -L/usr/local/lib/lib -L/usr/kerberos/lib64 -lproj -lsqlite3 /usr/lib64/libexpat.so -lpthread -lrt -lcurl -ldl -lgssapi_krb5 -lkrb5 -lk5crypto -lcom_err -lidn -lssl -lcrypto -lz -Wl,-rpath -Wl,/usr/local/lib -Wl,-rpath -Wl,/usr/lib64 /home/benjamin/gdal-1.9.2/.libs/libgdal.so: undefined reference to `RegisterOGRMySQL' collect2: ld returned 1 exit status make[1]: *** [gdalinfo] Error 1 make[1]: Leaving directory `/home/benjamin/gdal-1.9.2/apps' make: *** [apps-target] Error 2 Has anyone a solution? Thanks a lot!
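
    The undefined RegisterOGRMySQL reference usually means configure half-detected MySQL (headers found, client library never linked). Two hedged ways around it, with the mysql_config path assumed: build GDAL without the MySQL driver, or point configure at the right mysql_config so the 64-bit client library is linked:

        # option 1: skip the MySQL/OGR driver entirely
        ./configure --without-mysql --with-static-proj4=/usr/local/lib --with-threads
        # option 2: tell configure exactly which mysql_config to use
        ./configure --with-mysql=/usr/bin/mysql_config --with-static-proj4=/usr/local/lib --with-threads
        make clean && make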

    Read the article

  • CentOS 5.4 NFS v4 client file permissions differ from original files & NFS Share file contents

    - by p4guru
    Having a strange problem with NFS share and file permissions on the 1 out of the 2 NFS clients, web1 has file permissions issues but web2 is fine. web1 and web2 are load balanced web servers. So questions are: how do I ensure NFS share file contents retain the same permissions for user/group as the original files on web1 server like they do on web2 server ? how do I reverse what I did on web1, i tried unmount command and said command not found ? Information: I'm using 3 dedicated server setup. All 3 servers CentOS 5.4 64bit based. servers are as follows: web1 - nfs client with file permissions issues web2 - nfs client file permissions are OKAY db1 - nfs share at /nfsroot web2 nfs client was setup by my web host, while web1 was setup by me. I did the following commands on web1 and it worked with updating db1 nfsroot share at /nfsroot/site_css with latest files on web1 but the file permissions don't stick even if i use tar with -p command to perserve file permissions ? cd /home/username/public_html/forums/script/ tar -zcp site_css/ > site_css.tar.gz mount -t nfs4 nfsshareipaddress:/site_css /home/username/public_html/forums/scripts/site_css/ -o rw,soft cd /home/username/public_html/forums/script/ tar -zxf site_css.tar.gz But checking on web1 file permissions no longer username user/group but owned by nobody ? but web2 file permissions correct ? This is only a problem for web1 while web2 is correct ? Looks like numeric ids aren't the same ? Not sure how to correct this ? web1 with incorrect user/group of nobody ls -alh /home/username/public_html/forums/scripts/site_css total 48K drwxrwxrwx 2 nobody nobody 4.0K Feb 22 02:37 ./ drwxr-xr-x 3 username username 4.0K Feb 22 02:43 ../ -rw-r--r-- 1 nobody nobody 1 Nov 30 2006 index.html -rw-r--r-- 1 nobody nobody 5.8K Feb 22 02:37 style-057c3df0-00011.css -rw-r--r-- 1 nobody nobody 5.8K Feb 22 02:37 style-95001864-00002.css -rw-r--r-- 1 nobody nobody 5.8K Feb 18 05:37 style-b1879ba7-00002.css -rw-r--r-- 1 nobody nobody 5.8K Feb 18 05:37 style-cc2f96c9-00011.css web1 numeric ids ls -n /home/username/public_html/forums/scripts/site_css total 48 drwxrwxrwx 2 99 99 4096 Feb 22 02:37 ./ drwxr-xr-x 3 503 500 4096 Feb 22 02:43 ../ -rw-r--r-- 1 99 99 1 Nov 30 2006 index.html -rw-r--r-- 1 99 99 5876 Feb 22 02:37 style-057c3df0-00011.css -rw-r--r-- 1 99 99 5877 Feb 22 02:37 style-95001864-00002.css -rw-r--r-- 1 99 99 5877 Feb 18 05:37 style-b1879ba7-00002.css -rw-r--r-- 1 99 99 5876 Feb 18 05:37 style-cc2f96c9-00011.css web2 correct username user/group permissions ls -alh /home/username/public_html/forums/scripts/site_css total 48K drwxrwxrwx 2 root root 4.0K Feb 22 02:37 ./ drwxr-xr-x 3 username username 4.0K Dec 2 14:51 ../ -rw-r--r-- 1 username username 1 Nov 30 2006 index.html -rw-r--r-- 1 username username 5.8K Feb 22 02:37 style-057c3df0-00011.css -rw-r--r-- 1 username username 5.8K Feb 22 02:37 style-95001864-00002.css -rw-r--r-- 1 username username 5.8K Feb 18 05:37 style-b1879ba7-00002.css -rw-r--r-- 1 username username 5.8K Feb 18 05:37 style-cc2f96c9-00011.css web2 numeric ids ls -n /home/username/public_html/forums/scripts/site_css total 48 drwxrwxrwx 2 503 500 4096 Feb 22 02:37 ./ drwxr-xr-x 3 503 500 4096 Dec 2 14:51 ../ -rw-r--r-- 1 503 500 1 Nov 30 2006 index.html -rw-r--r-- 1 503 500 5876 Feb 22 02:37 style-057c3df0-00011.css -rw-r--r-- 1 503 500 5877 Feb 22 02:37 style-95001864-00002.css -rw-r--r-- 1 503 500 5877 Feb 18 05:37 style-b1879ba7-00002.css -rw-r--r-- 1 503 500 5876 Feb 18 05:37 style-cc2f96c9-00011.css I checked db1 
/nfsroot/site_css and user/group ownership was incorrect for newer files dated feb22 owned by root and not username ? on db1 originally incorrect root assigned user/group for new feb22 dated files ls -alh /nfsroot/site_css total 44K drwxrwxrwx 2 root root 4.0K Feb 22 02:37 . drwxr-xr-x 17 root root 4.0K Feb 17 12:06 .. -rw-r--r-- 1 root root 1 Nov 30 2006 index.html -rw-r--r-- 1 root root 5.8K Feb 22 02:37 style-057c3df0-00011.css -rw-r--r-- 1 root root 5.8K Feb 22 02:37 style-95001864-00002.css -rw------- 1 username nfs 5.8K Feb 18 05:37 style-b1879ba7-00002.css -rw------- 1 username nfs 5.8K Feb 18 05:37 style-cc2f96c9-00011.css Then I chmod them all on db1 and chown to set to right ownership on db1 so it looks like below on db1 once corrected the newer feb22 dated files ls -alh /nfsroot/site_css total 44K drwxrwxrwx 2 root root 4.0K Feb 22 02:37 . drwxr-xr-x 17 root root 4.0K Feb 17 12:06 .. -rw-r--r-- 1 username username 1 Nov 30 2006 index.html -rw-r--r-- 1 username username 5.8K Feb 22 02:37 style-057c3df0-00011.css -rw-r--r-- 1 username username 5.8K Feb 22 02:37 style-95001864-00002.css -rw-r--r-- 1 username username 5.8K Feb 18 05:37 style-b1879ba7-00002.css -rw-r--r-- 1 username username 5.8K Feb 18 05:37 style-cc2f96c9-00011.css but still web1 shows owned by nobody ? while web2 shows correct permissions ? web1 still with incorrect user/group of nobody not matching what web2 and db1 are set to ? ls -alh /home/username/public_html/forums/scripts/site_css total 48K drwxrwxrwx 2 nobody nobody 4.0K Feb 22 02:37 ./ drwxr-xr-x 3 username username 4.0K Feb 22 02:43 ../ -rw-r--r-- 1 nobody nobody 1 Nov 30 2006 index.html -rw-r--r-- 1 nobody nobody 5.8K Feb 22 02:37 style-057c3df0-00011.css -rw-r--r-- 1 nobody nobody 5.8K Feb 22 02:37 style-95001864-00002.css -rw-r--r-- 1 nobody nobody 5.8K Feb 18 05:37 style-b1879ba7-00002.css -rw-r--r-- 1 nobody nobody 5.8K Feb 18 05:37 style-cc2f96c9-00011.css Just so confusing so any help is very very much appreciated! thanks
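
    Files appearing as nobody:nobody over NFSv4 is the classic symptom of an ID-mapping mismatch rather than a file-permission problem: idmapd on the clients and the server must agree on the NFSv4 domain, and the usernames must exist on both ends. A hedged sketch for CentOS 5, with the domain value assumed; and for question 2, the command to detach a mount is umount, not unmount:

        # on web1, web2 and db1: make the Domain line in /etc/idmapd.conf identical, e.g.
        #   [General]
        #   Domain = example.com
        service rpcidmapd restart
        # to undo the manual mount made on web1
        umount /home/username/public_html/forums/scripts/site_css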

    Read the article

  • Java Champion Jim Weaver on JavaFX

    - by Janice J. Heiss
    Hardly anyone knows more about JavaFX than Java Champion and Oracle’s JavaFX Evangelist, Jim Weaver, who will be leading two Hands on Labs on aspects of JavaFX at this year’s JavaOne: HOL11265 – “Playing to the Strengths of JavaFX and HTML5” (With Jeff Klamer - App Designer, Jeff Klamer Design) Wednesday, Oct 3, 3:00 PM - 5:00 PM - Hilton San Francisco - Franciscan A/B/C/D HOL3058 – “Custom JavaFX Controls” (With Gerrit Grunwald, Senior Software Engineer, Canoo Engineering AG; Bob Larsen, Consultant, Larsen Consulting; and Peter Vašenda, Software Engineer, Oracle) Tuesday, Oct 2, 12:30 PM - 2:30 PM - Hilton San Francisco - Franciscan A/B/C/D I caught up with Jim at JavaOne to ask him for a current snapshot of JavaFX. “In my opinion,” observed Weaver, “the most important thing happening with JavaFX is the ongoing improvement to rich-client Java application deployment. For example, JavaFX packaging tools now provide built-in support for self-contained application packages. A package may optionally contain the Java Runtime, and be distributed with a native installer (e.g., a DMG or EXE). This makes it easy for users to install JavaFX apps on their client machines, perhaps obtaining the apps from the Mac App Store, for example. Igor Nekrestyanov and Nancy Hildebrandt have written a comprehensive guide to JavaFX application deployment, the following section of which covers Self-Contained Application Packaging: http://docs.oracle.com/javafx/2/deployment/self-contained-packaging.htm#BCGIBBCI.“Igor also wrote a blog post titled, "7u10: JavaFX Packaging Tools Update," that covers improvements introduced so far in Java SE 7 update 10. Here's the URL to the blog post:https://blogs.oracle.com/talkingjavadeployment/entry/packaging_improvements_in_jdk_7”I asked about how the strengths of JavaFX and HTML5 interact and reinforce each other. “They interact and reinforce each other very well. I was about to be amazed at your insight in asking that question, but then recalled that one of my JavaOne sessions is a Hands-on Lab titled ‘Playing to the Strengths of JavaFX and HTML5.’ In that session, we'll cover the JavaFX and HTML5 WebView control, the strengths of each technology, and the various ways that Java and contents of the WebView can interact.”And what is he looking forward to at JavaOne? “I'm personally looking forward to some excellent sessions, and connecting with colleagues and friends that I haven't seen in a while!” Jim Weaver is another good reason to feel good about JavaOne.

    Read the article

  • How much network latency is "typical" for east - west coast USA?

    - by Jeff Atwood
    At the moment we're trying to decide whether to move our datacenter from the west coast to the east coast. However, I am seeing some disturbing latency numbers from my west coast location to the east coast. Here's a sample result, retrieving a small .png logo file in Google Chrome and using the dev tools to see how long the request takes: West coast to east coast: 215 ms latency, 46 ms transfer time, 261 ms total West coast to west coast: 114 ms latency, 41 ms transfer time, 155 ms total It makes sense that Corvallis, OR is geographically closer to my location in Berkeley, CA so I expect the connection to be a bit faster.. but I'm seeing an increase in latency of +100ms when I perform the same test to the NYC server. That seems .. excessive to me. Particularly since the time spent transferring the actual data only increased 10%, yet the latency increased 100%! That feels... wrong... to me. I found a few links here that were helpful (through Google no less!) ... Does routing distance affect performance significantly? How does geography affect network latency? Latency in Internet connections from Europe to USA ... but nothing authoritative. So, is this normal? It doesn't feel normal. What is the "typical" latency I should expect when moving network packets from the east coast <--> west coast of the USA?
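
    A back-of-envelope estimate, using round figures, puts the physics floor well below the observed numbers: light in fiber travels at roughly 200,000 km/s, and Berkeley to NYC is roughly 4,100 km great-circle, so the propagation-only round trip is about 41 ms; real paths are longer and add per-hop equipment delay, which is why coast-to-coast ping RTTs of roughly 70-90 ms are commonly observed.

        one-way propagation   ~ 4,100 km / 200,000 km/s ~ 20.5 ms
        theoretical RTT floor ~ 2 x 20.5 ms             ~ 41 ms
        commonly observed SF<->NYC ping RTT             ~ 70-90 ms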

    Read the article

  • Puppet configuration file on Windows

    - by Jeff Storey
    I'm running puppet on windows as an admin (testing on windows 7, even though it is not officially supported). When I install puppet following the windows installation instructions, no puppet.conf file is generated in C:/ProgramData/PuppetLabs/puppet/etc. I can run puppet agent --genconfig to create one, but regardless of what values I put in there, it doesn't seem to respect them. Is this just a puppet/windows issue? Or am I doing something wrong?
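
    A hedged way to see which configuration file and directory that Puppet build actually consults; run it from the same (elevated) prompt used to run the agent, since on Windows the confdir can differ between elevated and non-elevated users:

        puppet agent --configprint config
        puppet agent --configprint confdir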

    Read the article

  • scanning only works under "sudo" (Ubuntu)

    - by JoelFan
    When I try to scan, using simple-scan, the UI says Failed to scan -- Unable to connect to scanner. When I run it from the command line I get: joel@home:/usr/bin$ simple-scan -d ** (simple-scan:6554): DEBUG: Starting Simple Scan 2.32.0.1, PID=6554 ** (simple-scan:6554): DEBUG: Restoring window to 600x400 pixels ** (simple-scan:6554): DEBUG: sane_init () -> SANE_STATUS_GOOD ** (simple-scan:6554): DEBUG: SANE version 1.0.22 ** (simple-scan:6554): DEBUG: Requesting redetection of scan devices ** (simple-scan:6554): DEBUG: Processing request ** (simple-scan:6554): DEBUG: Requesting scan at 300 dpi from device '(null)' ** (simple-scan:6554): DEBUG: scanner_scan ("(null)", 300, SCAN_SINGLE) ** (simple-scan:6554): DEBUG: sane_get_devices () -> SANE_STATUS_GOOD ** (simple-scan:6554): DEBUG: Device: name="brother2:bus4;dev1" vendor="Brother" model="MFC-210C" type="USB scanner" ** (simple-scan:6554): DEBUG: Processing request ** (simple-scan:6554): DEBUG: sane_open ("brother2:bus4;dev1") -> SANE_STATUS_IO_ERROR ** (simple-scan:6554): WARNING **: Unable to get open device: Error during device I/O FYI, I have already done: joel@home:~$ sudo chmod a+rwx /dev/bus/usb joel@home:~$ sudo chmod a+rwx /dev/bus/usb/* If I run under sudo: joel@home:~$ sudo simple-scan it works. How can I get simple-scan to work without sudo?
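
    The chmod on /dev/bus/usb is lost whenever the scanner is re-plugged or the machine reboots, so a more durable route is group membership: check which group owns the device node (bus 4, device 1 according to the debug output above) and add the user to it. The group name varies by distribution (scanner, lp, saned), so this is only a sketch:

        ls -l /dev/bus/usb/004/001   # note the group that owns the node
        sudo adduser joel scanner    # assumed group name; log out and back in afterwards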

    Read the article

  • database replication for new user signup

    - by Jeff Storey
    I have a database that stores the users of my application. When a new user signs up, a record is inserted into the database for that user. I have a replicated version (slave) of this database (using mysql for now). What I'm concerned about is this scenario: step 1: user signs up and user record is inserted into the database step 2: user then tries to login, and the login process queries the database for the user. however, this query hits the slave database, but the user record has not yet been replicated in the slave and it returns an error that the user does not exist. This is a pretty trivial example, but I can see how it can apply to a lot of cases. Is there a strategy for configuring replicated databases to help prevent this situation?

    Read the article

  • XenServer: Editing clone configuration before boot

    - by Jeff Ferland
    Upon cloning a base image, I need to reconfigure basic settings. Regenerating the ssh host key, changing static IP assignments, setting the host name, etc. Because of the network setup, DHCP is not an option. That more or less rules out SSHing in with a predefined key or running a startup script since I can't provide the IP externally. I'd most like to mount the filesystem of the new machine on Dom0, but the lvm volumes are exported and it appears to be Bad Form to import them so the Dom0 machine can see them. What's your best suggestion for altering files in a cloned VM before boot? Must be non-interactive, and I'm going to guess out the gate that scripting access via xe console is not going to work well.
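
    One hedged approach that avoids importing the LVM volumes by hand: attach the clone's VDI to the control domain with a temporary VBD, mount it, edit the files, then unplug and destroy the VBD. The UUIDs are placeholders and the device name inside Dom0 is assumed:

        DOM0=$(xe vm-list is-control-domain=true --minimal)
        xe vbd-create vm-uuid=$DOM0 vdi-uuid=<clone-vdi-uuid> device=autodetect mode=RW
        xe vbd-plug uuid=<new-vbd-uuid>
        mount /dev/xvdb1 /mnt        # device letter/partition assumed
        # edit hostname, static IPs, remove ssh host keys, etc. under /mnt
        umount /mnt
        xe vbd-unplug uuid=<new-vbd-uuid>
        xe vbd-destroy uuid=<new-vbd-uuid>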

    Read the article

  • NX Client running on OS X 10.6.3 => NX Server Ubuntu 10.04: weird keymapping issue

    - by Mike D
    I have been using Ubuntu 9.10 at work after switching from Vista. After being (expectedly) disappointed with performance over VNC (via VPN) when logging in from home, I came across the NoMachine suite. Last week, I upgraded from OS X 10.6.2 to 10.6.3 at home. After that, I also updated my NX Client at home to the latest version, as there were issues with recent changes in the OS X X11 setup that rendered the NX connection useless. At that point, everything worked fine. Fast forward: I upgraded from 9.10 to 10.04 on my work machine the next day, and after coming home and trying to log in remotely, I noticed that the "s" and "m" keys, when pressed locally, acted as if the meta key was being pressed on the remote machine. That is, the "s" key opens the Ubuntu login menu (the power icon), and the "m" key opens the messaging menu. I found some info on using xmodmap to remap keys; however, I can't even begin to fathom what keys I could remap to solve this issue. Any ideas?
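
    One workaround often suggested for NX keymap mismatches after an Ubuntu upgrade is to reapply the whole keyboard layout inside the remote session rather than remapping individual keys with xmodmap; a hedged example with the model and layout assumed:

        setxkbmap -model pc105 -layout us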

    Read the article

  • nginx start failing, says error.log doesn't exist

    - by sososo
    I structured my sites like: /home/www/domain.com/public, private, log, backup. In the log folder, I created a blank error.log and access.log. My nginx file in sites-available for the domain looks like: server { access_log /home/www/domain1.com/log/access.log; error_log /home/www/domain1.com/log/error.log; } Trying to start nginx, it says: starting nginx: the config file /etc/nginx/nginx.conf syntax is ok [emerg] open() ".../access.log" failed (2: no such file or directory) Is this a permission issue?
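
    More likely than permissions is a path mismatch: the directory layout described uses domain.com while the vhost logs point at domain1.com, and nginx refuses to start if it cannot open the configured log files. A hedged check using the paths from the question:

        ls -ld /home/www/domain1.com/log     # does the directory the vhost references exist?
        sudo mkdir -p /home/www/domain1.com/log
        sudo touch /home/www/domain1.com/log/access.log /home/www/domain1.com/log/error.log
        sudo nginx -t                        # re-test the configuration before starting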

    Read the article

  • How To Setup Email Alerts on Linux Using Gmail or SMTP

    - by Sysadmin Geek
    Linux machines may require administrative intervention in countless ways, but without manually logging into them how would you know about it? Here's how to setup emails to get notified when your machines want some tender love and attention. Of course, this technique is meant for real servers, but if you've got a Linux box sitting in your house acting as a home server, you can use it there as well. In fact, since many home ISPs block regular outbound email, you might find this technique a great way to ensure you still get administration emails, even from your home servers.
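
    The article's own tooling isn't visible in this excerpt, but one minimal way to let a Linux box relay alert mail through Gmail is an ssmtp configuration along these lines; the addresses and password are placeholders:

        # /etc/ssmtp/ssmtp.conf
        root=you@gmail.com
        mailhub=smtp.gmail.com:587
        AuthUser=you@gmail.com
        AuthPass=your-password
        UseSTARTTLS=YES
        # quick test once configured:
        #   echo "disk almost full" | mail -s "alert from $(hostname)" you@gmail.com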

    Read the article

  • Unable to login to Amazon EC2 compute server

    - by MasterGaurav
    I am unable to login to the EC2 server. Here's the log of the connection-attempt: $ ssh -v -i ec2-key-incoleg-x002.pem [email protected] OpenSSH_5.6p1, OpenSSL 0.9.8p 16 Nov 2010 debug1: Reading configuration data /home/gvaish/.ssh/config debug1: Applying options for * debug1: Connecting to ec2-50-16-0-207.compute-1.amazonaws.com [50.16.0.207] port 22. debug1: Connection established. debug1: identity file ec2-key-incoleg-x002.pem type -1 debug1: identity file ec2-key-incoleg-x002.pem-cert type -1 debug1: identity file /home/gvaish/.ssh/id_rsa type -1 debug1: identity file /home/gvaish/.ssh/id_rsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3 debug1: match: OpenSSH_5.3 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.6 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host 'ec2-50-16-0-207.compute-1.amazonaws.com' is known and matches the RSA host key. debug1: Found key in /home/gvaish/.ssh/known_hosts:8 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Trying private key: ec2-key-incoleg-x002.pem debug1: read PEM private key done: type RSA debug1: Authentications that can continue: publickey debug1: Trying private key: /home/gvaish/.ssh/id_rsa debug1: No more authentication methods to try. Permission denied (publickey). What can be the possible reason? How do I fix the issue?
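
    "Permission denied (publickey)" while the key is clearly being offered usually comes down to key-file permissions or the login name: many AMIs refuse direct root logins and expect the image's default user instead. A hedged sketch; which user applies depends on the AMI the instance was launched from, and the key pair must be the one assigned at launch:

        chmod 400 ec2-key-incoleg-x002.pem
        ssh -i ec2-key-incoleg-x002.pem ec2-user@ec2-50-16-0-207.compute-1.amazonaws.com   # Amazon Linux images
        ssh -i ec2-key-incoleg-x002.pem ubuntu@ec2-50-16-0-207.compute-1.amazonaws.com     # Ubuntu images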

    Read the article

  • Windows 2003 Active Directory Integrated DNS zone not registering non-domain computers

    - by Jeff Willener
    I'm not a networking guy by all means, I'm just a developer who dabbles enough to get into trouble and I'm there. So bear with me... :) At my office I have a Windows 2003 Domain Controller which also services DNS. On the domain I have a handful of computers and other misc. equipment/toys. For the DNS I only created a Forward Lookup Zone for my domain (mydomain.com). I run a lot of VM's so generally I have everything on the domain, however some of those VM's are not and only in a 'Workgroup'. I also have another laptop which belongs to another domain (otherdomain.com) which is here 100% but I use it for other purposes and has to belong to the otherdomain.com. With all that said, I have two questions: I have found any computer not on mydomain.com does not register it's IP address even though 'Register this connections address in DNS' is set to in the 'Advanced TCP/IP Settings' for the nic. Where have I messed this up? On the laptop which is registered on otherdomain.com, when I do a nslookup for a computer on mydomain.com (e.g. nslookup devbox1) it appends otherdomain.com as the suffix (e.g. queries devbox1.otherdomain.com). Same thing occurs if I use the fully qualified name. In the 'Advanced TCP/IP Settings' for that nic, I can 'Append these DNS suffixes' of mydomain.com but I fear that will hose my DNS lookups when I VPN to otherdomain.com. So what is the correct approach to resolve this issue? Do I add both mydomain.com and otherdomain.com in that order?
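
    For question 1, an AD-integrated zone only accepts registrations from non-domain machines if the zone allows nonsecure dynamic updates (or the records are created manually), and the workgroup machine needs a connection-specific DNS suffix of mydomain.com with registration enabled. For question 2, adding mydomain.com to the laptop's suffix search list should not break fully qualified lookups against otherdomain.com over the VPN; it only changes the order in which unqualified names are tried, and a trailing dot bypasses suffix appending entirely. A hedged sketch of the client-side commands:

        rem on the workgroup machine, after setting the connection-specific suffix to mydomain.com
        rem and ticking "Use this connection's DNS suffix in DNS registration":
        ipconfig /registerdns
        rem on the otherdomain.com laptop, force a fully qualified lookup:
        nslookup devbox1.mydomain.com.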

    Read the article

  • apache virtual host concept and dns

    - by Subhransu
    I want to have around 60 project repositories and I want to serve them from a dedicated remote server (Ubuntu) with the help of a Mercurial server, so that all my developers will be able to share their changes. I followed this article in order to do that, but got stuck at the Apache configuration step (sections 2.5 and 2.5.4). I have the following questions: What steps do I need to follow to make Apache serve /home/hg/repositories/private/hgweb.cgi when I enter dev.example.com/private? Is my virtual host file correct, or do I need to change anything? I bought example.com; how do I make it serve dev.example.com/private? Do I need to add an A record (like subdomain.example.com pointing to the IP of my server) in the cPanel of the hosting company? ServerAdmin webmaster@localhost ServerName dev.example.com ScriptAlias /private /home/hg/repositories/private/hgweb.cgi <Directory /home/hg/repositories/private/> Options ExecCGI FollowSymlinks AddHandler cgi-script .cgi DirectoryIndex hgweb.cgi AuthType Basic AuthName "Mercurial repositories" AuthUserFile /home/hg/tools/hgusers Require valid-user </Directory> ErrorLog ${APACHE_LOG_DIR}/dev.example.com_error.log # Possible values include: debug, info, notice, warn, error, cr$ # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/dev.example.com_ssl_access.lo$ SSLEngine on SSLCertificateFile "/etc/apache2/ssl/dev.example.com.crt" SSLCertificateKeyFile "/etc/apache2/ssl/dev.example.com.k$ NOTE: The above is my virtual host file. I have not enabled the site yet, and I have not changed any host, hostname, or httpd.conf file.
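
    Two hedged notes, since the full setup isn't visible here. For question 3: yes, an A record for dev.example.com pointing at the server's public IP is what makes the name reach the box, added wherever DNS for example.com is managed (e.g. the registrar's or host's control panel). For questions 1 and 2, the vhost also has to be enabled and Apache reloaded before the ScriptAlias can serve hgweb.cgi; the site file name under sites-available is assumed:

        sudo a2enmod ssl cgi
        sudo a2ensite dev.example.com
        sudo apache2ctl configtest    # catch syntax errors before reloading
        sudo service apache2 reload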

    Read the article
