Search Results

Search found 25440 results on 1018 pages for 'agent based modeling'.

Page 792/1018

  • Connecting to IPv6 hosts when mobile and on a Surface?

    - by Cerebrate
    Specifically, at my usual location I have an IPv6 network which connects to the Internet via a static tunnel set up through Hurricane Electric's tunnel broker ( http://www.tunnelbroker.net/ ). This works essentially perfectly, allowing inbound and outbound connectivity.

    Now, however, I need to connect back to host(s) on that network over IPv6 from mobile tablet(s); meaning there is no guarantee, or even likelihood, of native IPv6 support wherever the tablet happens to be at any given time, and the IPv4 address of the tablet will change on a fairly regular basis. The native Teredo support, as configured by default, functions well enough to let me ping my target hosts, but appears to have neither the reliability nor the throughput to support anything else; I have been unable to make any actual connections (trying a number of TCP-based protocols) using it.

    I had considered setting up an independent tunnel for the tablet(s), and using scripts to update the client endpoint IP address when it changes, but since (a) many of the locations will be behind NAT devices over which I have no control, and (b) the option over which I do have control is an AT&T Unite hotspot which does not offer protocol 41 forwarding or respond to ICMP on its public address, this approach does not seem viable. I am additionally constrained in that the mobile tablet(s) in question are Surface RTs, and as such are incapable of running, for example, the AICCU client software.

    What is my best option for obtaining IPv6 connectivity in this scenario?
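
    If the independent-tunnel route does get pursued (with some device on the far side of the NAT keeping the endpoint current), Hurricane Electric's tunnel broker exposes, as far as I recall, a dyn-style update URL for refreshing the client endpoint address. A minimal sketch, assuming that endpoint exists in this form; the credentials and tunnel ID are placeholders, and the URL itself should be verified against the tunnel's advanced settings page:

        # Assumed HE tunnelbroker endpoint-update URL; USERNAME, UPDATE_KEY and
        # TUNNEL_ID are placeholders -- confirm against the tunnel's settings.
        curl -4 "https://USERNAME:[email protected]/nic/update?hostname=TUNNEL_ID"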

    Read the article

  • What service do you use for music on hold?

    - by Russ Warren
    This may not be a sysadmin question for some, but it is definitely a hurdle I have to jump as the sysadmin for my company. We recently rolled out a company-wide VoIP system (Switchvox, to be exact) that came preloaded with some royalty-free music on hold. Our customers have been complaining that the music on hold sounds like "funeral music." This may be the case (although I wouldn't want it played at my funeral), but it is all we have and we aren't willing to be sued over using music that isn't properly licensed.

    So, that brings me to the question asked in the title -- what and/or how do you provide decent music on hold? I'm assuming many people here use a PBX that allows customized music, so this has to apply to many of you. We've been looking at some sites that allow you to download royalty-free music for a one-time fee, but the music seems... lame. Something like a one-year subscription from ibaudio.com seems to be the best bet so far. Have you been able to discover something a little more mainstream for a decent licensing fee? Thank you.

    EDIT: Our PBX allows the playback of MP3 and OGG files, but does not allow streaming of a live audio source, Internet-based or otherwise. It also does not allow the use of a "line-in" source such as a CD player or radio. Don't let this stop you from sharing your setup, though. I'm interested in hearing what everyone uses!
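
    Whatever source ends up being licensed, telephony systems tend to be happiest with low-rate mono audio for hold music, and since the PBX above accepts MP3/OGG uploads, downsampling first avoids artifacts from the PBX doing the conversion itself. A minimal sketch with ffmpeg; the file names are placeholders:

        # Downsample a licensed track to 8 kHz mono for use as music on hold
        ffmpeg -i licensed-track.mp3 -ar 8000 -ac 1 -q:a 5 moh.mp3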

    Read the article

  • Using ZFS or XFS on a Xen guest running Linux

    - by zoot
    Background: I'm investigating the viability of using a filesystem other than ext3/4, with the ability to run snapshots for backup and rollback purposes. The servers under consideration are mailbox server nodes running on Linode's Xen-based VPS platform. I'm particularly drawn to the various published benefits which ZFS offers in terms of data integrity, and to this year's stable release of native ZFS support in Linux - http://zfsonlinux.org . ZFS appears to be the more thorough option in terms of benefits and simplicity (instead of LVM+XFS). Please note that I have little experience with ZFS (which I use on a local FreeNAS installation) and none with XFS, hence the post. To date, my servers are using ext3 filesystems, not managed under LVM.

    Question in detail: So, I have two questions.

    (1) Which of the two filesystems would be the better choice for the best of all of the following 3 aspects, running on a Xen Linux guest?

    - Snapshots
    - Data integrity
    - Performance

    (2) If ZFS is a viable option, is it practical to use ZRAID across Xen disk images to further enhance the solution for data integrity?

    Note: I'm reluctant to consider btrfs, given the many warnings I've read about using it on production systems.
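
    For what it's worth, the snapshot/rollback workflow that makes ZFS attractive here is only a handful of commands. A minimal sketch, assuming the zfsonlinux packages are installed and using made-up pool, dataset and device names:

        # Pool on a spare Xen block device, dataset for the mailboxes
        zpool create tank /dev/xvdc
        zfs create tank/mail
        # Point-in-time snapshot before risky changes; roll back if needed
        zfs snapshot tank/mail@pre-upgrade
        zfs rollback tank/mail@pre-upgrade
        zfs list -t snapshot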

    Read the article

  • Varnish 3.0.2 to Apache2 sometimes return error 503

    - by Ronnie Jespersen
    Hey guys, I hope you can help me out here. I have an Nginx parsing http and https to a Varnish cache (3.0.2). From Varnish it is sent to Apache2. Now I have for some time been tracking some strange 503 errors, but I can't seem to find the silver bullet. Currently I am logging the 503 errors through Varnish this way:

        sudo varnishlog -c -m TxStatus:503 >> /home/rj/varnishlog503.log

    and then referring to the Apache access log to see if any 503 requests have been handled. Today I had a health check from the firewall that failed:

        20 SessionOpen  c 127.0.0.1 34319 :8081
        20 ReqStart     c 127.0.0.1 34319 607335635
        20 RxRequest    c HEAD
        20 RxURL        c /health-check
        20 RxProtocol   c HTTP/1.0
        20 RxHeader     c X-Real-IP: 192.168.3.254
        20 RxHeader     c Host: 192.168.3.189
        20 RxHeader     c X-Forwarded-For: 192.168.3.254
        20 RxHeader     c Connection: close
        20 RxHeader     c User-Agent: Astaro Service Monitor 0.9
        20 RxHeader     c Accept: */*
        20 VCL_call     c recv lookup
        20 VCL_call     c hash
        20 Hash         c /health-check
        20 VCL_return   c hash
        20 VCL_call     c miss fetch
        20 Backend      c 33 aurum aurum
        20 FetchError   c http first read error: -1 11 (No error recorded)
        20 VCL_call     c error deliver
        20 VCL_call     c deliver deliver
        20 TxProtocol   c HTTP/1.1
        20 TxStatus     c 503
        20 TxResponse   c Service Unavailable
        20 TxHeader     c Server: Varnish
        20 TxHeader     c Content-Type: text/html; charset=utf-8
        20 TxHeader     c Retry-After: 5
        20 TxHeader     c Content-Length: 879
        20 TxHeader     c Accept-Ranges: bytes
        20 TxHeader     c Date: Wed, 06 Jun 2012 12:35:12 GMT
        20 TxHeader     c X-Varnish: 607335635
        20 TxHeader     c Age: 60
        20 TxHeader     c Via: 1.1 varnish
        20 TxHeader     c Connection: close
        20 Length       c 879
        20 ReqEnd       c 607335635 1338986052.649786949 1338986112.648169994 0.000160217 59.997980356 0.000402689

    Now the backend server (Apache) does not have any 503 error in the access log at this point, so I am confused. Is Varnish throwing a 503 because it thinks Apache is too slow? There is a lot of traffic coming through at this point, so I know the server is up and running. I do have other 503 error codes with POSTs and GETs, so there is really no pattern. It seems to be at random times and random requests, even in the morning when the server doesn't seem to be doing anything. I do see another pattern in the log:

        4 VCL_call     c recv pass
        4 VCL_call     c hash
        4 Hash         c /?id=412
        4 VCL_return   c hash
        4 VCL_call     c pass pass
        4 FetchError   c no backend connection
        4 VCL_call     c error deliver
        4 VCL_call     c deliver deliver

    Here FetchError says "no backend connection".

    A summary of the FetchErrors in today's log:

        16 FetchError   c http first read error: -1 11 (No error recorded)
         5 FetchError   c http first read error: -1 11 (No error recorded)
         4 FetchError   c http first read error: -1 11 (No error recorded)
        19 FetchError   c http first read error: -1 11 (No error recorded)
         5 FetchError   c http first read error: -1 11 (No error recorded)
        23 FetchError   c http first read error: -1 11 (No error recorded)
        24 FetchError   c http first read error: -1 11 (No error recorded)
        16 FetchError   c http first read error: -1 11 (No error recorded)
         6 FetchError   c http first read error: -1 11 (No error recorded)
         4 FetchError   c http first read error: -1 11 (No error recorded)
         5 FetchError   c http first read error: -1 11 (No error recorded)
         4 FetchError   c http first read error: -1 11 (No error recorded)
         4 FetchError   c http first read error: -1 11 (No error recorded)
        22 FetchError   c http first read error: -1 11 (No error recorded)
         6 FetchError   c http first read error: -1 11 (No error recorded)
        21 FetchError   c http first read error: -1 11 (No error recorded)
        26 FetchError   c no backend connection
         4 FetchError   c no backend connection
        20 FetchError   c http first read error: -1 11 (No error recorded)
        39 FetchError   c http first read error: -1 11 (No error recorded)

    I haven't changed the default timeout values for Varnish. This is my configuration for one of the backend servers:

        backend xenon {
            .host = "192.168.3.187";
            .port = "80";
            .probe = {
                .url = "/health-check/";
                .interval = 3s;
                .window = 5;
                .threshold = 2;
            }
        }

    I'm running the prefork module on Apache2 with this configuration:

        <IfModule mpm_prefork_module>
            StartServers          1
            MinSpareServers       2
            MaxSpareServers       5
            MaxClients          200
            MaxRequestsPerChild  75
        </IfModule>

    and only PHP files are sent to the server; every other static file is handled by Nginx. Any ideas?

    ------- EDIT --------------

    Some more debugging information. I have run varnishadm debug.health:

        Backend radon is Healthy
        Current states  good:  5 threshold:  2 window:  5
        Average responsetime of good probes: 0.002560
        Oldest                                                    Newest
        ================================================================
        4444444444444444444444444444444444444444444444444444444444444444 Good IPv4
        XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit
        RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv
        HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy
        Backend xenon is Healthy
        Current states  good:  5 threshold:  2 window:  5
        Average responsetime of good probes: 0.002760
        Oldest                                                    Newest
        ================================================================
        4444444444444444444444444444444444444444444444444444444444444444 Good IPv4
        XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit
        RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv
        HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy
        Backend iridium is Healthy
        Current states  good:  5 threshold:  2 window:  5
        Average responsetime of good probes: 0.000849
        Oldest                                                    Newest
        ================================================================
        4444444444444444444444444444444444444444444444444444444444444444 Good IPv4
        XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit
        RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv
        HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy
        Backend aurum is Healthy
        Current states  good:  5 threshold:  2 window:  5
        Average responsetime of good probes: 0.002100
        Oldest                                                    Newest
        ================================================================
        4444444444444444444444444444444444444444444444444444444444444444 Good IPv4
        XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit
        RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv
        HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy

    And I have been monitoring varnishstat from the two load balancers:

        3224774         3.99         2.61 backend_conn - Backend conn. success
             27         0.00         0.00 backend_unhealthy - Backend conn. not attempted
             63         0.00         0.00 backend_fail - Backend conn. failures
         358798         0.00         0.29 backend_reuse - Backend conn. reuses
          21035         0.00         0.02 backend_toolate - Backend conn. was closed
         379834         0.00         0.31 backend_recycle - Backend conn. recycles
             26         0.00         0.00 backend_retry - Backend conn. retry

        3217751         5.99         2.61 backend_conn - Backend conn. success
             32         0.00         0.00 backend_fail - Backend conn. failures
         364185         0.00         0.30 backend_reuse - Backend conn. reuses
          27077         0.00         0.02 backend_toolate - Backend conn. was closed
         391263         0.00         0.32 backend_recycle - Backend conn. recycles
             36         0.00         0.00 backend_retry - Backend conn. retry

    Notice that none of them have reported backend_fail. /Ronnie
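
    One thing that may be worth ruling out: the ReqEnd line in the failing health check shows roughly 60 seconds elapsed, which is suspiciously close to Varnish 3's default first_byte_timeout of 60 s, so the "http first read error" entries could simply be backend fetches that exceeded that limit (prefork starting from StartServers 1 can be slow to spin up under a burst). A hedged sketch of inspecting and raising the defaults at runtime; the values below are examples, not recommendations:

        # Show / raise the global backend timeouts (Varnish 3.x runtime parameters)
        varnishadm param.show first_byte_timeout
        varnishadm param.set first_byte_timeout 120
        varnishadm param.set between_bytes_timeout 60
        # They can also be set per backend in the VCL (.connect_timeout,
        # .first_byte_timeout, .between_bytes_timeout) if only some backends are slow.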

    Read the article

  • Karmic iptables missing kernel modules on OpenVZ container

    - by luison
    After an unsuccessful P2V migration of my Ubuntu server to an OpenVZ container, which I am stuck with, I thought I would give a try to a reinstall based on a clean OpenVZ template for Ubuntu 9.10 (from the OpenVZ wiki). When I try to load my iptables rules on the VM I've been getting errors which I believe are related to kernel modules not being loaded on the VM from the /vz/XXX.conf template model. I've been testing with a few posts I've found, but I was stuck with the error:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Could not load /lib/modules/2.6.24-10-pve/modules.dep: No such file or directory
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I read about the template not loading all iptables modules, so I added modules to the XXX.conf of the VZ virtual machine like this:

        IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc"

    As the error remained, I read that I should build dependencies again on the virtual machine:

        depmod -a

    but this returned an error:

        WARNING: Couldn't open directory /lib/modules/2.6.24-10-pve: No such file or directory
        FATAL: Could not open /lib/modules/2.6.24-10-pve/modules.dep.temp for writing: No such file or directory

    So I read again about creating the directory empty and redoing "depmod -a". I now don't get the dependencies error, but get this instead, and I don't have a clue how to proceed:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Module ip_tables not found.
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I understand that iptables rules have to be different on the VM, and perhaps some of the rules we are trying to apply (from our physical server) are not compatible, but these are just source IP and destination port checks that I would like to have available. I've heard that on the CentOS template there are no issues with this, so I understand it is to do with the VM config. Any help would be greatly appreciated.
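
    A minimal sketch of the angle I would check first: an OpenVZ container cannot load netfilter modules itself (which is also why /lib/modules/<kernel> is empty inside it and depmod has nothing to work with); the modules have to be loaded on the hardware node and granted to the container with vzctl. The container ID and module list below are examples only:

        # On the OpenVZ hardware node (not inside the container):
        vzctl set 101 --iptables "ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_REJECT ipt_LOG ipt_state iptable_nat ip_nat_ftp" --save
        vzctl restart 101
        # Then, inside the container, re-test:
        iptables-restore < /etc/iptables.rules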

    Read the article

  • Least CPU intensive way of streaming your screen on windows?

    - by sinni800
    Hello, sometimes I like capturing my screen for others to see. Only thing: I am playing games while I do it. I have tried a few streaming solutions, where Windows Media Encoder coupled with my own Windows server appealed to me most, because I can change resolutions, etc. I also tried ustream coupled with the Flash applet and the Adobe Flash Encoder recording a Camtasia source. Camtasia has the disadvantage, though, that it shows the green-and-black alternating borders and cannot be targeted fullscreen. I like how Xfire does it, but it doesn't work with every game; many are simply not supported.

    A few thoughts about this:

    - Is there a program which captures like Fraps or Xfire (based on Direct3D and OpenGL outputs) and exposes the output to a DirectShow source filter? Which brings me to:
    - Is there hardware-accelerated capturing directly from the graphics card? Maybe including direct encoding with help from OpenCL? Modern graphics cards decode BluRay content directly, for example. I should have a modern enough graphics processor for this to be possible (see below).
    - If using Windows Media Encoder: which are the least CPU-intensive settings? Which codec? Is there a newer codec than Windows Media 9? Is it less CPU intensive? I only have 7, 8 and 9 inside the Encoder.
    - Could the performance be massively increased by having a quad-core CPU (see below)?

    Bandwidth is no problem up to 1000 to 1500 kbit/s (I have 2048).

    My computer specs:

    - Intel Core 2 Duo E8400
    - 4 GB DDR2-800 RAM
    - ATI Radeon HD5770
    - Windows 7 Professional
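
    If Windows Media Encoder turns out to be too heavy, one low-overhead alternative worth testing is grabbing the Windows desktop with ffmpeg's GDI capture and encoding with x264's fastest presets. A rough sketch; the frame rate, bitrate and RTMP ingest URL are placeholders to adapt to whatever streaming target is in use:

        ffmpeg -f gdigrab -framerate 30 -i desktop \
            -c:v libx264 -preset ultrafast -tune zerolatency \
            -pix_fmt yuv420p -b:v 1200k -f flv rtmp://ingest.example.invalid/live/streamkey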

    Read the article

  • Mod_rewrite (CakePHP routing functionality) forbidden after Snow Leopard upgrade

    - by Ryan Ballantyne
    Hello ServerFault, I am using the standard Apple-provided installations of PHP 5.3 and Apache 2 to do web development on a Mac Pro that I just upgraded to Mac OS X 10.6 (Snow Leopard). The upgrade went well enough, if I ignore the fact that it destroyed my ability to get work done. ;)

    After the update, the CakePHP application I was developing started giving me 403 Forbidden errors when accessed. Based on the errors in the log file, I've determined that Apache is choking on the mod_rewrite rules in Cake's .htaccess file. Here's the file, in its entirety:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteRule    ^$    app/webroot/     [L]
            RewriteRule    (.*)  app/webroot/$1   [L]
        </IfModule>

    It's not that the rules themselves are wrong, but that Apache is forbidding the use of mod_rewrite altogether. All other pages on the machine work fine, and the 403 errors go away if I comment out the .htaccess file (but nothing works, of course). In my httpd.conf file, I've tried changing this:

        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
        </Directory>

    To this:

        <Directory />
            Options FollowSymLinks
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>

    ...which has no effect. I don't know much about Apache configuration files, and I'm quite stuck on this. In fact, I know little enough that I'm not sure which information about my setup is needed to enable people to provide useful answers. I'm just using the vanilla OS X setup, nothing fancy. Googling has yielded no fruits for me this time, so I'm turning to you. Any ideas?
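
    A few hedged checks that usually narrow this down before more config editing: confirm mod_rewrite is actually loaded, find every AllowOverride that applies to the CakePHP document root (a more specific <Directory> block for the site's directory can silently win over the <Directory /> change), and validate the config. Paths assume the stock Snow Leopard Apache layout:

        # Is mod_rewrite loaded at all?
        httpd -M 2>/dev/null | grep rewrite
        # Every AllowOverride that might take precedence over <Directory />
        grep -rn "AllowOverride" /etc/apache2/
        # Validate and reload
        apachectl configtest && sudo apachectl graceful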

    Read the article

  • Why is Mac supposedly better than Windows for graphics?

    - by Svish
    Ok, people just keep telling me that if you're going to be working with graphics and design and stuff, you should get a Mac. And I just don't get the logic, because most of these people would be working with Adobe software, which is available for both Windows and Mac.

    To me it seems like their whole argument is based on "everyone else does it". Like, Mac had some graphics software that Windows didn't earlier in history, so most people were using Mac. And since most people were using Mac, new people also started using Mac. And since most people were using Mac, schools and universities used Mac. Which taught new people to use Mac. So they were using Mac. And told everyone they met that everyone they knew were using Mac. And so on.

    Anyways... What is the deal really? Is there actually any advantage in using Mac for graphics and design and such things? My take is that you pretty much have the same software, and both Mac and Windows are powerful enough, support enough RAM, are stable (as long as you don't install lots of junk or faulty drivers), et cetera. So, can anyone give me a good explanation on this? Is there a real difference or are people just brainwashed?

    Read the article

  • If spambots are filling out forms on my website, should I mark the mail as spam?

    - by rob
    Apparently spambots have discovered the contact forms on my website. I'm considering adding a CAPTCHA, but in the meantime I've been getting dozens of completed website forms in my Gmail account each day. I know I can configure a filter to always allow e-mails from my website to be delivered (i.e., "never send to spam"), but I'm wondering whether there are also larger implications.

    At first, I was marking the spam forms as spam because I figured it would block the sender's e-mail address from getting through again. But since then, I've given it some more thought. I know many spam filters use Bayesian filters and other techniques to identify spam based on its content. If I mark all these e-mails as spam, will legitimate e-mails from my website be marked as spam? Even if I stop marking the messages as spam, I can see another potential problem. My form is pretty standard (in fact, it's probably very similar to other online forms). If I mark my spambot-submitted website forms as spam, will Gmail begin to mark other people's similar-looking messages as spam?

    Read the article

  • Gratuitous CRLF in Subject: line - why is it there, and is it legal?

    - by MadHatter
    I'm running into a problem with a NAGIOS system sending emails to a popular email-to-SMS service. The email-to-SMS service takes emails with text in the Subject: line, and sends them on to the mobile number encoded in the To: field. So far so good. Sadly, sendmail (and postfix before it) seem to be inserting a gratuitous CRLF into the (necessarily long) Subject: line, and that's causing my SMS messages to be truncated at the CRLF if and only if the Subject: line contains one or more colons past the gratuitous CRLF.

    I am confident that the messages are being created correctly, but just to be sure, here's me creating a completely noddy test message to myself, with a long Subject: line:

        echo "foo" | mail -s "1234567 101234567 201234567 301234567 401234567 501234567 601234567 701234567 801234567 90123456789" [email protected]

    Note there's no extra colon in this Subject: line; all I'm doing here is showing that an extra CRLF is inserted on the wire. Here's the result of sudo ngrep -x port 25:

        44 61 74 65 3a 20 46 72    69 2c 20 33 31 20 4d 61    Date: Fri, 31 Ma
        79 20 32 30 31 33 20 31    30 3a 34 33 3a 35 35 20    y 2013 10:43:55
        2b 30 31 30 30 0d 0a 54    6f 3a 20 72 65 61 70 65    +0100..To: reape
        72 40 74 65 61 70 61 72    74 79 2e 6e 65 74 0d 0a    [email protected]..
        53 75 62 6a 65 63 74 3a    20 31 32 33 34 35 36 37    Subject: 1234567
        20 31 30 31 32 33 34 35    36 37 20 32 30 31 32 33     101234567 20123
        34 35 36 37 20 33 30 31    32 33 34 35 36 37 20 34    4567 301234567 4
        30 31 32 33 34 35 36 37    20 35 30 31 32 33 34 35    01234567 5012345
        36 37 0d 0a 20 36 30 31    32 33 34 35 36 37 20 37    67.. 601234567 7
        30 31 32 33 34 35 36 37    20 38 30 31 32 33 34 35    01234567 8012345
        36 37 20 39 30 31 32 33    34 35 36 37 38 39 0d 0a    67 90123456789..
        55 73 65 72 2d 41 67 65    6e 74 3a 20 48 65 69 72    User-Agent: Heir
        6c 6f 6f 6d 20 6d 61 69    6c 78 20 31 32 2e 34 20    loom mailx 12.4
        37 2f 32 39 2f 30 38 0d    0a 4d 49 4d 45 2d 56 65    7/29/08..MIME-Ve
        72 73 69 6f 6e 3a 20 31    2e 30 0d 0a 43 6f 6e 74    rsion: 1.0..Cont
        65 6e 74 2d 54 79 70 65    3a 20 74 65 78 74 2f 70    ent-Type: text/p
        6c 61 69 6e 3b 20 63 68    61 72 73 65 74 3d 75 73    lain; charset=us

    About half way down, between the 501234567 and the 601234567 in the original Subject: header, you can see a CRLF being inserted (0x0d 0x0a in the hex columns on the left, .. in the plain-text column on the right). The receiving MTA seems happy to post-process this, and when I look at the on-disc stored mail at the receiving end, I see only a LF (0x0a) in the Subject: line, and the line is parsed correctly and in its entirety by, e.g., alpine. Nevertheless, the CRLF is there on the wire, and between me and the (excellent) email-to-SMS support people, we've established that these are the cause of the problem.

    So my question is: is it lawful for an MTA to insert a gratuitous CRLF on the wire? If it is, and I can prove it, then it's the email-to-SMS house's problem, because they are being intolerant. If it isn't, or it is but I can't prove it, then it becomes my problem, so an answer with references would be most useful.

    Edit: I can now come clean that the email-to-SMS service in question is kapow. Once this problem was explained to them, they got it, worked with me to develop and test a fix, and have deployed the fix. My long subject lines with colons in now get relayed correctly into SMSes.
I don't normally trumpet individual companies, especially not on SF, but I thought it worthy of note that kapow Did The Right Thing. (Disclaimer: I have no connection with kapow except as a paying customer who's happy about the way they dealt with his problem.)

    Read the article

  • How to install port versions of perl modules for perl5.14 in freebsd 9.0

    - by jm666
    Trying to use perl5.14 on FreeBSD with port-based p5- modules.

        uname -impr
        9.0-RELEASE amd64 amd64 ALTQ

    Delete all installed ports, start with a clean system:

        # pkg_delete -a
        # rm -rf /var/db/pkg /var/db/ports /usr/local

    Installing portmaster, checking /etc/make.conf (here is only WITHOUT_X11=YES). Now installing perl:

        # portmaster -g --force-config lang/perl5.14
        # perl -v
        This is perl 5, version 14, subversion 2 (v5.14.2) built for amd64-freebsd-multi

    Now perl modules from the ports:

        # portmaster -g devel/p5-Moose    # install Moose and its deps

    Check with pkg_info, and got a zillion errors like:

        # pkg_info
        pkg_info: corrupted record (pkgdep line without argument), ignoring

    Dependency check with portmaster - showing dependencies on perl5.12:

        # portmaster --check-depends
        Checking p5-Class-C3-0.24
        ===>>> lang/perl5.12 is listed as a dependency
        ===>>> but there is no installed version
        ===>>> Delete this dependency data? y/n [n]

    When I tried

        # perl-after-upgrade -f

    I got:

        Fixed 0 packages (0 files moved, 0 files modified)

    In short: I got Moose installed into /usr/local/lib/perl5/site_perl/5.14.2/ but all its dependencies into /usr/local/lib/perl5/site_perl/5.12.4/. Yes, it is possible to fix this with:

        # portmaster p5-

    which reinstalls all installed p5- packages once again, now correctly for 5.14, but it is terrible installing them twice...

    Questions:

    - What is the correct way to install p5- modules from ports with perl5.14 installed on a clean system?
    - How to fix the wrong dependency data on perl5.12 without the need to install and reinstall them again?
    - What am I doing wrong?

    PS: I know about perlbrew and/or local::lib - but for this case I want the port versions.
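
    For what it's worth, before rebuilding anything it may help to check which perl the ports framework actually resolves for a given port; on ports trees of that era the knob was, if I remember correctly, PERL_VERSION in /etc/make.conf - treat that variable name as an assumption and confirm it against /usr/ports/Mk/bsd.perl.mk before relying on it:

        # What perl does the framework think this port should build against?
        make -C /usr/ports/devel/p5-Moose -V PERL_VERSION
        # Pin it explicitly (variable name is an assumption, see above), then
        # rebuild the p5- ports once against the right interpreter
        echo 'PERL_VERSION=5.14.2' >> /etc/make.conf
        portmaster p5-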

    Read the article

  • Can't resolve offline file conflicts

    - by Bryan
    We use roaming profiles on our Server 2008 R2 domain, with folder redirection for 'desktop', 'my documents' and 'application data'. But as our network is split across two sites, we have one file server at each site, which are configured to use domain-based DFS namespaces and DFS replication to keep things in sync. The DFS path for the replication folder is as follows:

        \\domain\folderredirection$\<username>\<redirected-folder-name>

    The real paths are

        \\site-1-server\folderredirection$\<username>\<redirected-folder-name>
        \\site-2-server\folderredirection$\<username>\<redirected-folder-name>

    As our users all switch between sites (sometimes several times per day), our folder redirection policy has to redirect to the DFS roots rather than being hardcoded to a specific server. Both DFS and DFS-R have been proven to be working perfectly.

    On our laptops, we use offline files for the redirected folders, and this also works fine. However, the problem is as follows: when conflicts occur in offline files, it is impossible to resolve the conflicts. I'm given the usual conflict resolution options (i.e. 'Ignore', 'Keep Both', 'Keep network' and 'Keep local'), however, not one of these options will resolve any conflict, yet no error is produced.

    We only use offline files on laptops, which have either Windows XP Professional or Windows 7 Professional installed. The problem is not specific to any one laptop; it affects every laptop and every conflicting file in exactly the same way. I would have thought the setup we have is common for companies that have multiple sites, so I'm hoping someone will have seen this before?

    Read the article

  • Apache Config: RSA server certificate CommonName (CN) ... NOT match server name?

    - by mmattax
    I'm getting this in error_log when I start Apache:

        [Tue Mar 09 14:57:02 2010] [notice] mod_python: Creating 4 session mutexes based on 300 max processes and 0 max threads.
        [Tue Mar 09 14:57:02 2010] [warn] RSA server certificate CommonName (CN) `*.foo.com' does NOT match server name!?
        [Tue Mar 09 14:57:02 2010] [warn] RSA server certificate CommonName (CN) `www.bar.com' does NOT match server name!?
        [Tue Mar 09 14:57:02 2010] [notice] Apache configured -- resuming normal operations

    Child processes then seem to seg fault:

        [Tue Mar 09 14:57:32 2010] [notice] child pid 3425 exit signal Segmentation fault (11)
        [Tue Mar 09 14:57:35 2010] [notice] child pid 3433 exit signal Segmentation fault (11)
        [Tue Mar 09 14:57:36 2010] [notice] child pid 3437 exit signal Segmentation fault (11)

    Server is RHEL, what's going on and what do I need to do to fix this?

    EDIT: As requested, the dump from httpd -M:

        Loaded Modules:
         core_module (static)
         mpm_prefork_module (static)
         http_module (static)
         so_module (static)
         auth_basic_module (shared)
         auth_digest_module (shared)
         authn_file_module (shared)
         authn_alias_module (shared)
         authn_anon_module (shared)
         authn_default_module (shared)
         authz_host_module (shared)
         authz_user_module (shared)
         authz_owner_module (shared)
         authz_groupfile_module (shared)
         authz_default_module (shared)
         include_module (shared)
         log_config_module (shared)
         logio_module (shared)
         env_module (shared)
         ext_filter_module (shared)
         mime_magic_module (shared)
         expires_module (shared)
         deflate_module (shared)
         headers_module (shared)
         usertrack_module (shared)
         setenvif_module (shared)
         mime_module (shared)
         status_module (shared)
         autoindex_module (shared)
         info_module (shared)
         vhost_alias_module (shared)
         negotiation_module (shared)
         dir_module (shared)
         actions_module (shared)
         speling_module (shared)
         userdir_module (shared)
         alias_module (shared)
         rewrite_module (shared)
         cache_module (shared)
         disk_cache_module (shared)
         file_cache_module (shared)
         mem_cache_module (shared)
         cgi_module (shared)
         perl_module (shared)
         php5_module (shared)
         python_module (shared)
         ssl_module (shared)
        Syntax OK
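
    The CN warning itself only means that the ServerName of each SSL vhost doesn't match the certificate's CommonName (the segfaults may well be a separate issue worth chasing on their own, e.g. a module conflict). A quick, hedged way to compare the two sides; the certificate path is a guess, take the real one from the SSLCertificateFile directive:

        # CN (and any SubjectAltNames) the certificate actually carries
        openssl x509 -in /etc/pki/tls/certs/server.crt -noout -subject
        # ServerName vs. certificate file in each SSL vhost
        grep -rn -E "ServerName|SSLCertificateFile" /etc/httpd/conf /etc/httpd/conf.d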

    Read the article

  • File download speed issue over a dedicated fibre link

    - by nixnotwin
    My ISP has installed a fibre-based dedicated internet connection at the place where I work. In the beginning the connection terminated at one of the ISP's core routers. It resulted in a strange issue: even though the assigned speed was 5 mbps, when tests were done by downloading large files over HTTP and FTP from multiple locations, the speed never went above 2 mbps - yet BitTorrent downloads reached 5 mbps, and even file downloads from the ISP's own servers were fine. So, at the ISP, our link was attached directly to their edge router instead.

    After this, file downloads from high-bandwidth servers, like Google and MS, reached the 5 mbps limit. Sometimes the speed falls below 2 mbps and then suddenly goes up to the 5 mbps limit (it keeps on happening during any single file download). But other downloads, like the Ubuntu apt repositories, still struggle to go above 2 mbps. The engineers at the ISP have not been able to sort out the issue.

    After they moved us to their edge router, instead of giving us 8 public IPs they gave us only 4. When we enquired about it, they told us that giving more IPs would result in ARP overload at their edge router. But somehow I was able to convince them to give us the 8 IPs which we wanted. The file download issue has remained, though.

    What might be the reason for files from different locations getting downloaded at different speeds, and with heavy fluctuation in speeds? I have downloaded files from the same URLs on a connection belonging to another, smaller ISP, and the speeds were fine and reached the full 5 mbps limit.
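
    A hedged diagnostic rather than a fix: BitTorrent reaching the full rate while single HTTP/FTP streams do not is the classic signature of a per-flow limit or a TCP window problem somewhere on the path, and comparing one stream against several parallel streams from the same server makes that visible. The URL below is a placeholder for any large test file:

        URL=http://mirror.example.invalid/100MB.bin
        # One stream
        curl -o /dev/null -w 'single stream: %{speed_download} bytes/s\n' "$URL"
        # Four streams in parallel from the same server
        for i in 1 2 3 4; do
            curl -s -o /dev/null -w "stream $i: %{speed_download} bytes/s\n" "$URL" &
        done
        wait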

    Read the article

  • New AD-DC in a new Site is refusing cross-site IPv4 connections

    - by sysadmin1138
    We just added a new Server 2008 (sp2) Domain Controller in a new Site, our first such config. It's over a VPN gateway WAN (10Mbit). Unfortunately it is displaying a strange network symptom. Connections to the SMB ports (TCP/139 and TCP/445) are being actively refused... if the connection is coming in on pure IPv4. If the incoming connection is coming by way of the 6to4 tunnel those connections establish and work just fine.

    It isn't the Firewall, since this behavior can be replicated with the firewall turned off. Also, it's actually issuing RST packets to connection attempts; something that only happens with a Windows Firewall if there is a service behind a port and the service itself denies access. I doubt it's some firewall device on the wire, since the server this one replaced was running Samba and access to it from our main network functioned just fine.

    I'm thinking it might have something to do with the Subnet lists in AD Sites & Services, but I'm not sure. We haven't put any IPv6 addresses in there, just v4, and it's the v4 connections that are being denied. Unfortunately, I can't figure this out. We need to be able to talk to this DC from the main campus. Is there some kind of site-based SMB-level filtering going on? I can talk to the DC's on campus just fine, but that's over that v6 tunnel. I don't have access to a regular machine on that remote subnet, which limits my ability to test.
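
    A hedged suggestion for testing from the campus side while a machine on the remote subnet is unavailable: a bare TCP probe of the SMB ports over v4 at least distinguishes an active refusal (immediate RST) from filtering (timeout), and running it from a couple of different campus subnets can show whether the Sites & Services subnet mapping is part of the pattern. The address below is a placeholder for the new DC's v4 address:

        # From a Linux/Mac admin box on the main campus, over IPv4
        nc -vz 192.0.2.10 445
        nc -vz 192.0.2.10 139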

    Read the article

  • Setup.exe called from a batch file crashes with error 0x0000006

    - by Alex
    We're going to be installing some new software on pretty much all of our computers and I'm trying to set up a GPO to do it. We're running a Windows Server 2008 R2 domain controller and all of our machines are Windows 7. The GPO calls the following script, which sits on a network share on our file server. The script itself calls an executable that sits on another network share on another server. The executable immediately crashes with an error 0x0000006. The event log just says this:

        Windows cannot access the file for one of the following reasons: there is a problem with the network connection, the disk that the file is stored on, or the storage drivers installed on this computer; or the disk is missing. Windows closed the program Setup.exe because of this error.

    Here's the script (which is stored on \\WIN2K8R2-F-01\Remote Applications):

        @ECHO OFF
        IF DEFINED ProgramFiles(x86) (
            ECHO DEBUG: 64-bit platform
            SET _path="C:\Program Files (x86)\Canam"
        ) ELSE (
            ECHO DEBUG: 32-bit platform
            SET _path="C:\Program Files\Canam"
        )
        IF NOT EXIST %_path% (
            ECHO DEBUG: Folder does not exist
            PUSHD \\WIN2K8R2-PSA-01\PSA Data\Client
            START "" "Setup.exe" "/q"
            POPD
        ) ELSE (
            ECHO DEBUG: Folder exists
        )

    Running the script manually as administrator also results in the same error. Setting up a shortcut with the same target and parameters works perfectly. Manually calling the executable also works. Not sure if it matters, but the installer is based on dotNETInstaller. I don't know what version though. I'd appreciate any suggestions on fixing this. Thanks in advance!

    UPDATE: I highly doubt this matters, but the network share that the script is hosted on is a shared drive, while the network share the script references for the executable is a shared folder. Also, both shares have Domain Computers listed with full access for the sharing and security tabs. And PUSHD works without wrapping the path in quotes.

    Read the article

  • VNC server failed to start CentOS

    - by Shaun
    I followed a tutorial on how to install and get VNC server to run on CentOS 6 (since FreeNX isn't supported yet) and I keep getting:

        Starting VNC server: 1:user [FAILED]

    How do I figure out what's going on here? I'm new to Linux/CentOS and I'm trying to get RDP going so I can step away from SSH as much as possible (you know us Windows users love our pretty GUIs). So, where is the error log at and how do I find it? Or maybe someone else has experienced this and knows the solution based on the simple error given? After running in debug mode, here is my error:

        + . /etc/init.d/functions
        ++ TEXTDOMAIN=initscripts
        ++ umask 022
        ++ PATH=/sbin:/usr/sbin:/bin:/usr/bin
        ++ export PATH
        ++ '[' -z '' ']'
        ++ COLUMNS=80
        ++ '[' -z '' ']'
        +++ /sbin/consoletype
        ++ CONSOLETYPE=pty
        ++ '[' -f /etc/sysconfig/i18n -a -z '' -a -z '' ']'
        ++ . /etc/profile.d/lang.sh
        ++ unset LANGSH_SOURCED
        ++ '[' -z '' ']'
        ++ '[' -f /etc/sysconfig/init ']'
        ++ . /etc/sysconfig/init
        +++ BOOTUP=color
        +++ RES_COL=60
        +++ MOVE_TO_COL='echo -en \033[60G'
        +++ SETCOLOR_SUCCESS='echo -en \033[0;32m'
        +++ SETCOLOR_FAILURE='echo -en \033[0;31m'
        +++ SETCOLOR_WARNING='echo -en \033[0;33m'
        +++ SETCOLOR_NORMAL='echo -en \033[0;39m'
        +++ PROMPT=yes
        +++ AUTOSWAP=no
        +++ ACTIVE_CONSOLES='/dev/tty[1-6]'
        +++ SINGLE=/sbin/sushell
        ++ '[' pty = serial ']'
        ++ __sed_discard_ignored_files='/\(~\|\.bak\|\.orig\|\.rpmnew\|\.rpmorig\|\.rpmsave\)$/d'
        + '[' -r /etc/sysconfig/vncservers ']'
        + . /etc/sysconfig/vncservers
        ++ VNCSERVERS='1:larry 2:moe 3:curly'
        ++ VNCSERVERARGS[1]='-geometry 800x600'
        ++ VNCSERVERARGS[2]='-geometry 640x480'
        ++ VNCSERVERARGS[3]='-geometry 640x480'
        + prog='VNC server'
        + RETVAL=0
        + case "$1" in
        + start
        + '[' 0 '!=' 0 ']'
        + . /etc/sysconfig/network
        ++ NETWORKING=yes
        ++ HOSTNAME=vps.binaryvisionaries.com
        ++ DOMAINNAME=server.name
        ++ GATEWAYDEV=venet0
        ++ NETWORKING_IPV6=yes
        ++ IPV6_DEFAULTDEV=venet0
        + '[' yes = no ']'
        + '[' -x /usr/bin/vncserver ']'
        + '[' -x /usr/bin/Xvnc ']'
        + echo -n 'Starting VNC server: '
        Starting VNC server: + RETVAL=0
        + '[' '!' -d /tmp/.X11-unix ']'
        + for display in '${VNCSERVERS}'
        + SERVS=1
        + echo -n '1:larry '
        1:larry + DISP=1
        + USER=larry
        + VNCUSERARGS='-geometry 800x600'
        + runuser -l larry -c 'cd ~larry && [ -r .vnc/passwd ] && vncserver :1 -geometry 800x600'
        + RETVAL=1
        + '[' 1 -eq 0 ']'
        + break
        + '[' -z 1 ']'
        + '[' 1 -eq 0 ']'
        + failure 'vncserver start'
        + local rc=1
        + '[' color '!=' verbose -a -z '' ']'
        + echo_failure
        + '[' color = color ']'
        + echo -en '\033[60G'
        + echo -n '['
        [+ '[' color = color ']'
        + echo -en '\033[0;31m'
        + echo -n FAILED
        FAILED+ '[' color = color ']'
        + echo -en '\033[0;39m'
        + echo -n ']'
        ]+ echo -ne '\r'
        + return 1
        + '[' -x /usr/bin/plymouth ']'
        + /usr/bin/plymouth --details
        + return 1
        + echo
        + '[' 1 -eq 98 ']'
        + return 1
        + exit 1
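
    One thing visible in the trace itself, offered as a hedged pointer rather than a definitive diagnosis: the runuser line only launches the server if ~larry/.vnc/passwd exists and is readable ([ -r .vnc/passwd ] && vncserver ...), so a user who has never run vncpasswd fails with exactly this [FAILED] and nothing in any log. Something like the following, for each user listed in /etc/sysconfig/vncservers, would rule that out:

        # Create the VNC password file for the user, then retry the service
        su - larry -c vncpasswd
        ls -l ~larry/.vnc/passwd
        service vncserver start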

    Read the article

  • Configuration for Router Behind Uverse 2Wire

    - by Nori
    I have a 2Wire Uverse router (RG). I'm not particularly fond of it and want to use it as a modem only. I have a Linksys router with Tomato firmware on it and want that to be configured as the router. Most of the "guides" I've seen in my searches have been to enable DMZ Plus mode, but I don't see a way to make that work while my router has a static IP address. I played with it for quite some time yesterday and didn't see a way to get it to work. Then I ran across a setting in broadband for configuring another network. I played with that for a little bit but ran out of time and couldn't get it to work.

    So my question is for anyone out there who has Uverse and has successfully set up a Tomato-based firmware router behind the RG: how did you get it configured? I'm sure if I continued playing with it I could get it to work, but if someone out there already has it working then that would make my life easier. Thoughts? Thanks.

    Read the article

  • Disabled FRS replication on a DFS link, but the targets still list the replica set in their FRS conf

    - by Graeme Donaldson
    It's been a while since I've had to deal with the wonders of FRS, so I'm doing some testing to refresh my memory. This is what I've done so far. I am stuck with FRS rather than DFS-R for the moment, since not all of my link targets are running R2.

    - Created a domain-based DFS root, hosted on 4 servers.
    - Created a DFS link under the root, targeted at 2 servers. The shares on both servers were empty.
    - Dropped about 500MB of data into the target folder on one server and waited for replication to complete.
    - Added/removed/modified files on both targets and confirmed that changes are replicated within a few seconds.
    - Deleted the contents of the target folder on 1 server and waited for the other server to replicate the deletion.

    All of this worked perfectly, so now I want to remove my DFS link since I only created it for testing purposes. This is where it gets weird. I'm pretty sure that in the past I've disabled replication on the DFS link and after a short amount of time each target would log an info event in the FRS event log, something along the lines of "this server is no longer a member of replica set X". I have waited about 3 hours and I haven't seen this happen.

    ntfrsutl ds tells me that the server is not a member of any set, which is expected because when I disable replication on the link, the AD attributes on the computer object are removed. The weird part is... ntfrsutl sets still shows me the replica set, with all the properties, etc. So it seems like the FRS-related attributes of the target server's AD object are gone, but the FRS service for some reason hasn't removed the replica set. Can anyone see what I have done wrong?

    Read the article

  • Cisco IOS policy route for router originated VPN traffic

    - by Paul
    We have a Cisco IOS router with two DSL connections. One of them is intended for general traffic (ADSL), the other for VPN links (BDSL) and various other traffic. So the default route is the ADSL link, and we have a combination of static routes for the VPN traffic, and policy routes for other traffic types that should go out the BDSL link. For site-to-site traffic this is fine; we just static-route the public IPs and remote networks out of the BDSL line. The policy-based routing works fine for any internal traffic that matches an ACL.

    The problem is that there are now remote VPN sites originating from dynamic addresses, so we cannot use static routes. The replies to incoming ISAKMP requests are following the default route out of the ADSL (despite there being no crypto map on that interface). I want to route the outgoing VPN traffic out of the BDSL. I have tried adding udp/500 and esp to and from the route-map ACL that pushes traffic out of the BDSL line, but it doesn't match, presumably because the route-map happens earlier than the IPSec stuff. Any ideas how I can do this? IOS ver: 12.4.13T.

    Read the article

  • Dropped connections between Linux Servers in Data Center

    - by Emil H
    I have a number of Linux servers at a US-based datacenter. The servers were installed by the hosting company and are running Fedora Core. We're experiencing problems with dropped connections. The issue seems to be that when we attempt to connect to one of the other servers after a period of inactivity, the first connection attempt will fail, and sometimes the second. However, after that the connection succeeds and it works for a while. This happens for both MySQL connections and raw socket connections, but only seems to occur when connecting to some of our servers.

    The confusing part is that some of the servers for which we see different behaviors have identical hardware configuration and software. For example, it happens when connecting to a server called mysql2, but not for a server called mysql3. These servers were installed at the same time, with the same specifications. The problem can be reproduced somewhat reliably, but only after waiting fifteen minutes to half an hour. This makes it hard to diagnose, and even harder since I'm not really sure what to look for.

    I realize that connections sometimes fail, and that we should write our applications to compensate for this, but these servers are all in the same data center. Why would it matter if two servers haven't communicated for a while? Does anybody have an idea what might be causing this? Is it a server configuration problem or a network problem that I should contact the hosting company about? What do I tell them to look for? Unfortunately our experience has been that the support staff doesn't investigate problems in depth unless we give them detailed directions.
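
    The "first attempt after 15-30 minutes of idle fails, then it works" pattern often points at a stateful device (or a conntrack table somewhere) expiring its entry between the hosts; a couple of hedged checks that would help confirm or rule that out before escalating to the hosting company. Hostnames and the port are placeholders based on the MySQL example above:

        # On both ends, watch the failing first attempt to see where it dies
        tcpdump -ni eth0 host mysql2 and port 3306
        # If an idle timeout is to blame, application- or OS-level keepalives
        # shorter than that timeout usually mask it; current OS setting:
        sysctl net.ipv4.tcp_keepalive_time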

    Read the article

  • Does fast typing influence fast programming?

    - by Lukasz Lew
    Many young programmers think that their bottleneck is typing speed. After some experience one realizes that this is not the case; you have to think much more than you type. At some point my room-mate forced me to turn off the light (he sleeps during the night). I had to learn to touch type, and I experienced an actual improvement in programming skill. The most surprising part was that the improvement was not due to sheer typing speed, but to a change in mindset. I'm less afraid now to try new things and refactor them later if they work well. It's like having a new tool in the bag.

    Has anyone of you had a similar experience? I've now trained touch typing a little with KTouch. I find the auto-generated lessons the best. I can use this program to create new lessons out of text files, but that's only verbatim training, not auto-generated based on a language model. Do you know any touch typing program that allows creation of custom, but randomized, lessons?

    Read the article

  • Can't include Javascript variable in PHP mysql_query call? [on hold]

    - by user198895
    I want the PHP mysql_query call to retrieve user values based on the Agency drop-down value but I can't get this to work. Am I unable to include the Javascript variable agency.value in PHP?

        <script type="text/javascript">
        var agency = document.getElementById("agency");
        var user = document.getElementById("user");
        agency.onchange = onchange;  // change options when agency is changed

        function onchange() {
            <?php include 'dbConnect.php'; ?>
            <?php $q = mysql_query("select id as UserID, CONCAT(LastName, ', ' , FirstName) as UserName from users where Agency = " . ?>agency.value<?php . " order by UserName");?>

            option_html = "<option value=0 selected>- All Users -</option>";

            <?php while ($row1 = mysql_fetch_array($q)) {?>
                if (agency.value == 0 || agency.value == '<?php echo $row1[AgencyID]; ?>') {
                    option_html += "<option value=<?php echo $row1[UserID]; ?>><?php echo $row1[UserName]; ?></option>";
                }
            <?php } ?>

            user.innerHTML = option_html;
        }
        </script>

    Read the article

  • Varnish configuration, NamevirtualHosts, and IP Forwarding

    - by Brent
    I currently have a bunch of NameVirtualHost-based websites, load balanced between 3 apache2 servers using ldirectord. I would like to insert Varnish as a reverse web proxy between ldirectord and Apache in the following way:

    - a request comes in to ldirectord
    - it is then load balanced between the 3 apache2 servers and varnish, with a weight of 1 for the webservers and 99 for varnish (so if varnish is rebooted, the webservers will take over seamlessly)
    - varnish will then load balance its requests between my apache2 servers.

    However, the Varnish part is not working. I wonder whether this has to do with the fact that my Apache servers use x.x.x.x:80 for their NameVirtualHosts, instead of *:80? (They have to do this, since each server hosts multiple IP addresses.) Or perhaps it has to do with the need for IP forwarding to be set up on the Varnish server? (I did echo 1 > /proc/sys/net/ipv4/ip_forward on this server; is that sufficient?)

    How can I debug this problem? ldirectord doesn't produce logs of what it does with each request (and if it did, I would be overwhelmed with information since I'm serving hundreds of requests per second). The varnish log shows the ldirectord server connecting to it every 5 seconds, but nothing else. I have set up a test site using this configuration, but it fails - no Apache access logs, no applicable Varnish logs.
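
    A couple of hedged checks that usually separate "varnish can't reach the backends" from "ldirectord never hands varnish any traffic". The IPs, port and hostname below are placeholders; use whatever address and port varnish actually listens on:

        # From the varnish box: can it fetch a site straight from one apache backend
        # that only listens on a specific IP:80 NameVirtualHost?
        curl -v -o /dev/null -H "Host: www.example.com" http://192.0.2.11/
        # Same request through varnish itself, bypassing ldirectord entirely
        curl -v -o /dev/null -H "Host: www.example.com" http://127.0.0.1:6081/
        # Watch client transactions while the above runs
        varnishlog -c

    As far as I can tell, ip_forward shouldn't come into it for varnish itself - it's a userspace proxy that terminates and re-opens TCP connections rather than routing packets - though the ldirectord/LVS forwarding method is a separate matter.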

    Read the article

  • High latency issue for web service call from amazon aws ec2 to local server

    - by SibzTer
    We have a legacy web application running in our on-premises data center in Houston. We have developed a new .NET 4 based web application in order to provide new features to customers. The new web application is hosted in the Amazon AWS EC2 environment (N. Virginia region, us-east-1b zone). In order to integrate seamlessly with the legacy application, the new web application makes web service calls to retrieve data.

    We are seeing an unusually high latency, on the order of 5+ seconds, for these web service calls. The exact same web service call returns in less than a second on our local PCs (which makes sense given physical proximity to the actual server). The weird part is that we have developers in California who also see the same millisecond response times. We are testing the web service response using third-party tools such as SoapUI and Google Chrome extensions such as Advanced REST Client, Postman REST Client, etc.

    As if this wasn't weird enough, we have noticed the same low latency from certain other EC2 instances while testing, which are in the same region and availability zone as well. If we experienced the high latency consistently from all the EC2 instances I could understand, but there is something else going on. Comparing the various stats and results between the low-latency and high-latency EC2 servers does not show any significant differences: ping (constant 40ms), tracert, WinMTR, etc.

    We have instances that are in the VPC as well, so I tried both the public and private IP address of the web service host server, and that didn't make a difference either for the above results. We need to resolve this latency issue, as it is causing the resulting web pages to load very slowly (almost 15+ seconds, which is simply unacceptable). The EC2 instances run Windows Server Datacenter 64-bit. Let me know if there is any other info I can provide to help diagnose this.
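
    When comparing a "good" and a "bad" instance, curl's timing breakdown is a quick, hedged way to see whether the extra ~5 seconds is spent before the connection is up (DNS, TCP handshake, i.e. the network/VPN path) or after it (server think time on the legacy side). The URL is a placeholder for the legacy web service endpoint; run the same command from both instances (curl builds for Windows exist, or use any equivalent timing tool):

        curl -s -o /dev/null \
            -w 'dns: %{time_namelookup}s\nconnect: %{time_connect}s\nfirst byte: %{time_starttransfer}s\ntotal: %{time_total}s\n' \
            http://legacy.example.invalid/Service.asmx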

    Read the article
