Search Results

Search found 19928 results on 798 pages for 'multiple constructors'.


  • Vagrant-aws not provisioning

    - by SuperCabbage
    I'm trying to spin up and provision an EC2 instance with Vagrant; it successfully creates the instance and I can then use vagrant ssh to SSH into it, but Puppet doesn't seem to carry out any provisioning. Upon running vagrant up --provider=aws --provision I get the following output: Bringing machine 'default' up with 'aws' provider... WARNING: Nokogiri was built against LibXML version 2.8.0, but has dynamically loaded 2.9.1 [default] Warning! The AWS provider doesn't support any of the Vagrant high-level network configurations (`config.vm.network`). They will be silently ignored. [default] Launching an instance with the following settings... [default] -- Type: m1.small [default] -- AMI: ami-a73264ce [default] -- Region: us-east-1 [default] -- Keypair: banderton [default] -- Block Device Mapping: [] [default] -- Terminate On Shutdown: false [default] Waiting for SSH to become available... [default] Machine is booted and ready for use! [default] Rsyncing folder: /Users/benanderton/development/projects/my-project/aws/ => /vagrant [default] Rsyncing folder: /Users/benanderton/development/projects/my-project/aws/manifests/ => /tmp/vagrant-puppet/manifests [default] Rsyncing folder: /Users/benanderton/development/projects/my-project/aws/modules/ => /tmp/vagrant-puppet/modules-0 [default] Running provisioner: puppet... An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'default' machine. Please handle this error then try again: No error message I can then SSH into the instance by using vagrant ssh, but none of my provisioning has taken place, so I'm assuming that errors have occurred but I'm not being given any useful information relating to them. My Vagrantfile is as follows: Vagrant.configure("2") do |config| config.vm.box = "ubuntu_aws" config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box" config.vm.provider :aws do |aws, override| aws.access_key_id = "REDACTED" aws.secret_access_key = "REDACTED" aws.keypair_name = "banderton" override.ssh.private_key_path = "~/.ssh/banderton.pem" override.ssh.username = "ubuntu" aws.ami = "ami-a73264ce" end config.vm.provision :puppet do |puppet| puppet.manifests_path = "manifests" puppet.module_path = "modules" puppet.options = ['--verbose'] end end My Puppet manifest is as follows: package { [ 'build-essential', 'vim', 'curl', 'git-core', 'nano', 'freetds-bin' ]: ensure => 'installed', } None of the packages are installed.
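
    A minimal debugging sketch, not a fix: running Puppet by hand inside the instance usually surfaces the error Vagrant is swallowing. The paths come from the rsync output above; the manifest name default.pp is an assumption, so use whatever actually sits in manifests/.

        vagrant ssh
        sudo puppet apply --verbose --debug \
            --modulepath=/tmp/vagrant-puppet/modules-0 \
            /tmp/vagrant-puppet/manifests/default.pp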


  • System Requirements of a write-heavy application serving hundreds of requests per second

    - by Rolando Cruz
    NOTE: I am a self-taught PHP developer who has little to no experience managing web and database servers. I am about to write a web-based attendance system for a very large userbase. I expect around 1000 to 1500 users logged in at the same time making at least 1 request every 10 seconds or so for a span of 30 minutes a day, 3 times a week. So it's more or less 100 requests per second, or at the very worst 1000 requests in a second (average of 16 concurrent requests? But it could be higher given the short timeframe that users will make these requests. Crosses fingers to avoid 100 concurrent requests). I expect two types of transactions, a local (not referring to a local network) and a foreign transaction. Local transactions basically download userdata in their locality and cache it for 1 - 2 weeks. Attendance requests will probably be two numeric strings only: userid and eventid. Foreign transactions are for attendance of those who do not belong to the current locality. These will pass in the following data instead: (numeric) locality_id, (string) full_name. Both requests are done in Ajax so no HTML data is included, only JSON. Both types of requests expect at the very least a single numeric response from the server. I think there will be a 50-50 split in the frequency of local and foreign transactions, but there's only a few bytes of difference anyway in the sizes of these transactions. As of this moment the userid may only reach 6 digits and eventids are 4 to 5-digit integers too. I expect my users table to have at least 400k rows, the event table to have as many as 10k rows, a locality table with at least 1500 rows, and my main attendance table to increase by 400k rows (based on the number of users in the users table) a day for 3 days a week (1.2M rows a week). For me, this sounds big. But is this really that big? Or can this be handled by a single server (not sure about the server specs yet since I'll probably avail of a VPS from ServInt or others)? I tried to read up on multiple-server setups: Heartbeat, DRBD, master-slave setups. But I wonder if they're really necessary. The users table will add around 500-1k rows a week. If this can't be handled by a single server, then if I am to choose a MySQL replication topology, what would be the best setup for this case? Sorry if I sound vague or the question is too wide. I just don't know what to ask or what you want to know at this point.
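
    A rough back-of-envelope sketch of the storage side, based only on the figures above; the table and column names are made up for illustration.

        -- Hypothetical attendance table, kept deliberately narrow so the math is easy.
        CREATE TABLE attendance (
          user_id  INT UNSIGNED NOT NULL,        -- 4 bytes, covers 6-digit user ids
          event_id MEDIUMINT UNSIGNED NOT NULL,  -- 3 bytes, covers 4-5 digit event ids
          ts       TIMESTAMP NOT NULL,           -- 4 bytes
          PRIMARY KEY (user_id, event_id)
        ) ENGINE=InnoDB;
        -- ~11 bytes of payload per row; with InnoDB row and index overhead call it
        -- 40-60 bytes. 1.2M rows/week * 52 weeks * ~50 bytes is roughly 3 GB/year,
        -- so disk space is not the constraint; sustained write throughput
        -- (100-1000 inserts/s in bursts) is what needs testing.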


  • "Hostile" network in the company - please comment on a security setup

    - by TomTom
    I have a little specific problem here that I want (need) to solve in a satisfactory way. My company has multiple (IPv4) networks that are controlled by our router sitting in the middle. Typical smaller shop setup. There is now one additional network that has an IP range OUTSIDE of our control, connected to the internet with another router OUTSIDE of our control. Call it a project network that is part of another company's network, combined via a VPN they set up. This means: they control the router that is used for this network, and they can reconfigure things so that they can access the machines in this network. The network is physically split on our end through some VLAN-capable switches, as it covers three locations. At one end there is the router the other company controls. I need / want to give the machines used in this network access to my company network. In fact, it may be good to make them part of my Active Directory domain. The people working on those machines are part of my company. BUT - I need to do so without compromising the security of my company network from outside influence. Any sort of router integration using the externally controlled router is ruled out by this. So, my idea is this: We accept that the IPv4 address space and network topology in this network is not under our control. We seek alternatives to integrate those machines into our company network. The 2 concepts I came up with are: Use some sort of VPN - have the machines log into a VPN. Thanks to them using modern Windows, this could be transparent DirectAccess. This essentially treats the other IP space no differently than any restaurant network a company laptop connects from. Alternatively - establish IPv6 routing to this ethernet segment. But - and this is the trick - block all IPv6 packets in the switch before they hit the third-party controlled router, so that even IF they turn on IPv6 on that thing (not used now, but they could do it) they would not get a single packet. The switch can nicely do that by pulling all IPv6 traffic coming to that port into a separate VLAN (based on ethernet protocol type). Does anyone see a problem with using the switch to isolate the outside from IPv6? Any security hole? It is sad we have to treat this network as hostile - it would be a lot easier - but the support personnel there is of "known dubious quality" and the legal side is clear - we cannot fulfill our obligations when we integrate them into our company while they are under a jurisdiction we don't have a say in.


  • Exchange 2007 Standard Edition

    - by Phrontiste
    We Have : Exchange 2007 Standard Edition IBM System X3650 2 x Intel Xeon 5430 2.66 GHz Version 8.1 Build 240.6 Mailbox, Hub Transport, Client Access Role Installed on One Box Total Number of Mailboxes : 110 - 130 6 Physical Disks Disk 0,1 (68 GB) = Raid-1, OS Partition ( C: Partition) Disk 2,3 (279GB) = Raid-1, Exchange Database (First and Second Storage Groups) ( D: Partition ) Disk 4,5 (68 GB) = Raid-1, Exchange Transaction Logs ( E: Partition ) Setup: Storage Groups : D:\First Storage group\Mailbox database.edb Storage Groups : D:\Second Storage Group\Public Folder Database.edb Transaction Logs : E Partition Problem 1: On our D Partition (Mailbox Database Partition), total size is 279 GB, free space remaining is 64.7 GB, when I select the first storage group and second storage group folders and right click properties they report a size of 165 GB. Mailbox database reports a size of 157GB when right clicked Properties. where as the size displayed in the folder is 164,893,456 KB So, we are missing around 50-54 GB, there is nothing else on these drives, no page file, nothing at all. The partition housing the Transaction logs is reporting the sizes accurately. Any suggestions / fixes on the above ? Problem 2: As you may have already read in Problem 1, the size of the mailbox database is 157GB or 164GB reported; which is not recommended, a) What would you suggest we should do to divide mailboxes in storage groups on this same server ? b) How would we move mailboxes into different storage groups ? c) This is the information store size ? (Am I right in thinking that this is not recommended) d) Having multiple storage groups with one Mailbox DB in each, would that reduce the size of the Information Store? e) Any suggestions / how-to reduce the size of information store ? We didn't install this, we have inherited this - what other recommendations you can make in order to keep ourselves better prepared for any server disaster? We are backing up with Yosemite Backup on RD1000 (320GB) at the moment, which is backing up successfully, flushing the logs daily. We haven't done a test restore YET. I have tried to provide as much info as possible, please let me know if you need further info. Also, we haven't yet faced any problems in mailflow, access speeds, everything is working fine, we have two to five people accessing OWA or Outlook via vpn only. Thanks for your time to read the above - will look forward to your expert suggestions.
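
    For questions (a) and (b), a hedged sketch of the Exchange 2007 cmdlets typically involved; server, storage group, database and mailbox names below are placeholders and paths should be adjusted to your partitions.

        # Create an additional storage group and database, then mount it:
        New-StorageGroup -Server SERVER -Name "Third Storage Group" -LogFolderPath "E:\Third Storage Group"
        New-MailboxDatabase -StorageGroup "SERVER\Third Storage Group" -Name "Mailbox Database 2" -EdbFilePath "D:\Third Storage Group\Mailbox Database 2.edb"
        Mount-Database "SERVER\Third Storage Group\Mailbox Database 2"

        # Find the largest mailboxes, then move some of them to the new database
        # (Exchange 2007 uses Move-Mailbox for this):
        Get-MailboxStatistics -Database "First Storage Group\Mailbox database" |
            Sort-Object TotalItemSize -Descending | Select-Object -First 20
        Move-Mailbox -Identity "some.user" -TargetDatabase "SERVER\Third Storage Group\Mailbox Database 2"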


  • Looking for software to harden Windows machines

    - by MosheH
    I'm a network administrator of a small/medium network. I'm looking for software (free or not) which can harden Windows computers (XP and Win7), for the purpose of hardening standalone desktop computers (not in a domain network). Note: The computers are completely isolated (standalone), so I can't use Active Directory group policy. Moreover, there are too many restrictions that I need to apply, so it is not practical to set them up manually (one by one). Basically what I'm looking for is software that can restrict and disable access for specific user accounts on the system. For example: User John can only open one application and nothing else: he sees no icons on the desktop or Start menu except for the one or two applications I want to allow, he can't right-click on the desktop, the taskbar icons are not shown, there are no folder options, etc. User Mary can open a specific application and copy data to one folder on the D drive. User Dan has access to all drives but cannot install software, and so on... So far, I've found only the following desktop restriction software, but they all seem to miss one or more features: 1. Faronics WINSelect - seems to answer most of our needs except one feature which is very important to us but seems to be missing from WINSelect, which is "restriction per profile". WINSelect only allows setting up restrictions which are applied system-wide. If I have multiple user accounts on the system and want to apply different restrictions for each user, I can't. 2. Deskman - same thing, no restriction per profile. 3. Desktop Security Rx - not relevant, no Win7 support. The only software that I've found which offers restriction per profile is "1st Security Agent", but its GUI is very complicated and not very intuitive. It's worth mentioning that I'm not looking for "Internet kiosk software", although it shares some features with what I need. All I need is software (like http://www.faronics.com/standard/winselect/) that offers a way to restrict the Windows user interface. So if anybody knows a hardening software which allows setting up user restrictions on Windows systems, it will be a big, big, big help for me! Thanks to you all
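
    For reference, the built-in Explorer policy values can already do a coarse per-user lockdown without extra software; this is only a sketch (apply while logged on as the user to restrict, or edit that user's loaded hive under HKEY_USERS), and RestrictRun is an Explorer-level restriction, not a hard security boundary. The application names are examples.

        Windows Registry Editor Version 5.00

        ; Hide desktop icons, disable desktop right-click, allow only listed apps.
        [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
        "NoDesktop"=dword:00000001
        "NoViewContextMenu"=dword:00000001
        "RestrictRun"=dword:00000001

        ; Only these executables may be launched from Explorer:
        [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\RestrictRun]
        "1"="winword.exe"
        "2"="excel.exe"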


  • Can't connect to shared folders anymore?

    - by HuskyHuskie
    My home server is running Windows Server 2008 R2. I've had it running for almost a year now without any issues with shared folders. This past week I had an issue with my modem which required it to be power cycled and with that I power cycled my router. After that I haven't been able to connect to my shared network folders. I have no idea why that would even cause an issue as I've power cycled my networking equipment in the past without issues and none of my settings appear to have been lost. I am mapping these drives on my Windows 7 Ultimate machine using "Map Network Drive", from there I enter \\SERVER\Storage as I'm trying to connect to my shared folder named Storage. I receive the following error every time I try mapping the drive: Windows cannot access \\Server\Storage Check the spelling of the name. Otherwise there might be a problem with your network. To try to identify and resolve network problems, click Diagnose. Details: Error code: 0x80070035 The network path was not found. When I click Diagnose I get the following: Problems found file and print sharing resource (SERVER) is online but isn't responding to connection attempts. The remote computer isn't responding to connection on port 445, possibly due to firewall or security policy settings, or because it might be temporarily unavailable. Windows couldn't find any problems with the firewall on your computer. I've tried this from multiple computers with the same issue too. To resolve the problems so far I've tried: Disabling the firewall on SERVER Reinstalling File Services Modifying NetBT\Parameters registry values Adding a custom inbound rule for port 445 Adding port forwarding on my router for port 445 Recreating the shared folders Checking and rechecking the shared folder permissions. Resetting my user account password on the server used to access the shared folder. I'm pulling my hair out with this problem mainly because it came out of nowhere. It was working fine the night before and the next day it just stopped working. Any ideas of what I could try next are much appreciated. It should also be noted that this server is used as a web server too and that functionality still works correctly.
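
    Since the diagnostics point at port 445, a quick way to narrow down where the connection dies, using stock Windows tools (the Telnet client may first need enabling under "Turn Windows features on or off"):

        :: From an affected Windows 7 client:
        ping SERVER
        nbtstat -a SERVER
        telnet SERVER 445

        :: On the server itself, confirm something is listening on 445 and the
        :: Server service is running:
        netstat -ano | findstr :445
        sc query lanmanserver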


  • After compiling PHP, I get mod_fcgid: error reading data from FastCGI server

    - by user34295
    I'm trying to add multiple PHP version in Plesk 12. Switching my domain to the new version PHP 5.4.29 result in this error: (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server Here is phpinfo() of the complied PHP version, obtained running php54-cgi index.php from the terminal. The same script placed under document root doesn't work in FastCGI. How can I debug/try to figure out what's the error? Currently running CentOS 6.5 x64, Plesk v12.0.18_build1200140529.2, PHP 5.5.13. I've downloaded PHP 5.4.29: cd /usr/local/src curl -O http://it1.php.net/distributions/php-5.4.29.tar.gz cd php-5.4.29 And configured with: ./configure \ --prefix=/usr/local/php54 \ --with-bz2 \ --with-config-file-path=/usr/local/php54/etc \ --with-config-file-scan-dir=/usr/local/php54/etc/php.d \ --with-curl \ --with-gd \ --with-gettext \ --with-iconv \ --with-layout=PHP \ --with-libxml-dir=/usr/local/php54 \ --with-mhash \ --with-mysql=mysqlnd \ --with-mysqli=mysqlnd \ --with-openssl \ --with-pdo-mysql=mysqlnd \ --with-readline \ --with-xsl \ --with-zlib \ --enable-calendar \ --enable-cgi \ --enable-exif \ --enable-ftp \ --enable-intl \ --enable-mbstring \ --enable-pcntl \ --enable-shmop \ --enable-sockets \ --enable-sockets \ --enable-sysvmsg \ --enable-sysvsem \ --enable-sysvshm \ --enable-wddx \ --enable-zip Then: make && make install Installing PHP CLI binary: /usr/local/php54/bin/ Installing PHP CLI man page: /usr/local/php54/php/man/man1/ Installing PHP CGI binary: /usr/local/php54/bin/ Installing PHP CGI man page: /usr/local/php54/php/man/man1/ Installing build environment: /usr/local/php54/lib/php/build/ Installing header files: /usr/local/php54/include/php/ Installing helper programs: /usr/local/php54/bin/ program: phpize program: php-config Installing man pages: /usr/local/php54/php/man/man1/ page: phpize.1 page: php-config.1 Installing PEAR environment: /usr/local/php54/lib/php/ [PEAR] Archive_Tar - installed: 1.3.11 [PEAR] Console_Getopt - installed: 1.3.1 warning: pear/PEAR requires package "pear/Structures_Graph" (recommended version 1.0.4) warning: pear/PEAR requires package "pear/XML_Util" (recommended version 1.2.1) [PEAR] PEAR - installed: 1.9.4 Wrote PEAR system config file at: /usr/local/php54/etc/pear.conf You may want to add: /usr/local/php54/lib/php to your php.ini include_path [PEAR] Structures_Graph- installed: 1.0.4 [PEAR] XML_Util - installed: 1.2.1 /usr/local/src/php-5.4.29/build/shtool install -c ext/phar/phar.phar /usr/local/php54/bin ln -s -f /usr/local/php54/bin/phar.phar /usr/local/php54/bin/phar Installing PDO headers: /usr/local/php54/include/php/ext/pdo/ Copied php.ini-production to /usr/local/php54/etc/php.ini and added a new handler in Plesk: /usr/local/psa/bin/php_handler --add -displayname 5.4.29 -path /usr/local/php54/bin/php-cgi -phpini /usr/local/php54/etc/php.ini -type fastcgi -id php54 Symbolic linking: ln -s /usr/local/php54/bin/php /usr/local/bin/php54 ln -s /usr/local/php54/bin/php-cgi /usr/local/bin/php54-cgi New installed version: php54-cgi -m [PHP Modules] bz2 calendar cgi-fcgi Core ctype curl date dom ereg exif fileinfo filter ftp gd gettext hash iconv intl json libxml mbstring mhash mysql mysqli mysqlnd openssl pcntl pcre PDO pdo_mysql pdo_sqlite Phar posix readline Reflection session shmop SimpleXML sockets SPL sqlite3 standard sysvmsg sysvsem sysvshm tokenizer wddx xml xmlreader xmlwriter xsl zip zlib [Zend Modules]
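
    A sanity-check sketch for the new binary outside of Apache/Plesk; the paths follow the build above, but the document-root path is only an example.

        /usr/local/php54/bin/php-cgi -v                      # should report "(cgi-fcgi)"
        ldd /usr/local/php54/bin/php-cgi | grep "not found"  # any missing shared libraries?

        # Emulate a single CGI request against the same script the vhost serves:
        cd /var/www/vhosts/example.com/httpdocs
        REDIRECT_STATUS=1 REQUEST_METHOD=GET SCRIPT_FILENAME=$PWD/index.php \
            /usr/local/php54/bin/php-cgi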


  • X:\ is not accessible. Insufficient system resources exist to complete the requested service

    - by Katherine
    I keep getting the error message above on multiple computers that I administer. I wasn't sure if I should be posting this on SuperUser or ServerFault, so my apologies if it should go there... Basically, I have at least 5 computers of varying ages (some fresh out of the box!) throwing the above error. X:\ is one of our network drives that is mapped for users. Most of the time if you shut down the biggest application it will fix the problem, but it's becoming an increasing issue, and I can't keep running around fixing it manually. I have tried to do some research, but most of it just states the obvious without supplying a permanent fix. The machines are all running Win XP SP3, with at least 2 GB of RAM. Sorry for the delay in getting back to people... a lot of good questions. To respond: It is a Windows 2003 server that houses the file share. We have about 175 users, however I cannot state how many are actually accessing the information at a single moment. Considering that this is our largest file share, I would say probably at least 100+. The files we work with are large, but not that big considering that we do a lot of graphical and video work: ~50 MB. That being said, this error occurs simply when trying to gain access to the server itself, not actual files. When I say close a program, I mean that it can be any program. It doesn't matter which program. It varies from machine to machine, and from day to day. Some days it is Firefox, some days it is Outlook, some days it is Excel. There doesn't seem to be a common bond behind which application could be causing the problem. Thank you for the articles, and the recommendation on paging files. I will have to look into that. None of our computers are set to hibernate, so I am going to rule that out.
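
    This particular error on XP clients is commonly attributed to kernel paged-pool exhaustion, and the usually cited registry tweak is the one below; it is offered only as a sketch of that approach (the values are the commonly quoted ones, not something specific to this environment), so back up the key first and reboot after applying.

        Windows Registry Editor Version 5.00

        ; Paged-pool tuning often suggested for "Insufficient system resources
        ; exist to complete the requested service" on network drives (XP client).
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
        "PagedPoolSize"=dword:ffffffff
        "PoolUsageMaximum"=dword:00000028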


  • How could I let the SkyDrive desktop app sync to a MicroSD card in a Windows 8 tablet?

    - by peSHIr
    I have a Samsung Slate 7 tablet with (now) Windows 8 on it. This machine has a 64 GB SSD and I have a 64 GB MicroSD card in it. I also have a SkyDrive on my main Microsoft ID that contains about 45 GB of content. With Windows and some development stuff installed, my SkyDrive will not fit on the main drive of the tablet. (Besides, my idea was to keep data on the memory card anyway, to make it easier to repave the machine without data loss if need be.) My problem should now be clear: I want to install the SkyDrive desktop app to sync my SkyDrive to the MicroSD card. This is not possible, as SkyDrive does not allow syncing files to removable drives. I have tried a number of things already, but none of them worked: Use the mklink command line tool to create a directory link/junction from a folder name on the SSD to a folder on the MicroSD and then try to install SkyDrive sync to the SSD link folder. SkyDrive however still recognizes this as something it does not want to sync onto. The various filter drivers mentioned on Agnipulse (including the Hitachi one) that should make Windows see some or all of the removable drives in the system as fixed drives do not seem to work on (64-bit) Windows 8: they either can't be installed, do nothing and/or cause Windows 8 to go into Automatic Repair mode when rebooting. The Lexar BootIt app seems to be meant to flip the relevant bit in the on-board drive controller of supported USB pen drives, but I tried it anyway. Of course it did nothing to how the MicroSD card was seen. I have now run out of ideas, it seems, and I was wondering if anyone here has a solution to let Windows 8 see the MicroSD memory card in my tablet as a fixed drive instead of a removable drive, or some other way of getting the SkyDrive desktop app to sync my SkyDrive data to that MicroSD card. And to be complete: this is not a duplicate question of this or this, as those ask about getting multiple partitions on USB drives to work on Windows XP. This question is specifically about getting the desktop SkyDrive app to sync to a MicroSD card in Windows 8, which seems to be a question I have not seen on Super User so far.
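
    One workaround idea, sketched with diskpart: a VHD file stored on the MicroSD card is presented as a fixed disk while attached, which may be enough to satisfy the sync client. This is an untested assumption (drive letters and sizes are examples), and the VHD has to be re-attached after every reboot, e.g. via a scheduled task.

        rem Run "diskpart" from an elevated prompt, then:
        create vdisk file="D:\skydrive.vhd" maximum=46080 type=expandable
        attach vdisk
        create partition primary
        format fs=ntfs quick label=SkyDrive
        assign letter=S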


  • APC File Cache not working but user cache is fine

    - by danishgoel
    I have just got a VPS (with cPanel/WHM) to test what gains i could get in my application with using apc file cache AND user cache. So firstly I got the PHP 5.3 compiled in as a DSO (apache module). Then installed APC via PECL through SSH. (First I tried with WHM Module installer, it also had the same problem, so I tried it via ssh) All seemed fine and phpinfo showed apc loaded and enabled. Then I checked with apc.php. All seemed OK But as I started testing my php application, the stats in apc for File Cache Information state: Cached Files 0 ( 0.0 Bytes) Hits 1 Misses 0 Request Rate (hits, misses) 0.00 cache requests/second Hit Rate 0.00 cache requests/second Miss Rate 0.00 cache requests/second Insert Rate 0.00 cache requests/second Cache full count 0 Which meant no PHP files were being cached, even though I had browsed through over 10 PHP files having multiple includes. So there must have been some Cached Files. But the user cache is functioning fine. User Cache Information Cached Variables 0 ( 0.0 Bytes) Hits 1000 Misses 1000 Request Rate (hits, misses) 0.84 cache requests/second Hit Rate 0.42 cache requests/second Miss Rate 0.42 cache requests/second Insert Rate 0.84 cache requests/second Cache full count 0 Its actually from an APC caching test script which tries to retrieve and store 1000 entries and gives me the times. A sort of simple benchmark. Can anyone help me here. Even though apc.cache_by_default = 1, no php files are being cached. This is my apc config Runtime Settings apc.cache_by_default 1 apc.canonicalize 1 apc.coredump_unmap 0 apc.enable_cli 0 apc.enabled 1 apc.file_md5 0 apc.file_update_protection 2 apc.filters apc.gc_ttl 3600 apc.include_once_override 0 apc.lazy_classes 0 apc.lazy_functions 0 apc.max_file_size 1M apc.mmap_file_mask apc.num_files_hint 1000 apc.preload_path apc.report_autofilter 0 apc.rfc1867 0 apc.rfc1867_freq 0 apc.rfc1867_name APC_UPLOAD_PROGRESS apc.rfc1867_prefix upload_ apc.rfc1867_ttl 3600 apc.serializer default apc.shm_segments 1 apc.shm_size 32M apc.slam_defense 1 apc.stat 1 apc.stat_ctime 0 apc.ttl 0 apc.use_request_time 1 apc.user_entries_hint 4096 apc.user_ttl 0 apc.write_lock 1 Also most php files are under 20KB, thus, apc.max_file_size = 1M is not the cause. I have also tried using 'apc_compile_file ' to force some files into opcode cache with no luck. I have also re-installed APC with Debugging enabled, but nothing shows in the error_log I have also tried setting mmap_file_mask to /dev/zero and /tmp/apc.xxxxxx, i have also set /tmp permissions to 777 to no avail Any clue anyone. Update: I have tried following things and none cause APC file cache to populate 1. set apc.enable_cli = 1 AND run a script from cli 2. Set apc.max_file_size = 5M (just in case) 3. switched php handler from dso to FastCGI in WHM (then switched it back to dso as it did not solve the problem) 4. Even tried restarting the container
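
    A small test sketch to confirm from inside PHP whether the opcode cache is receiving entries at all; drop it in the web root and request it through Apache (not the CLI, since apc.enable_cli is off). Function names are standard APC functions, the file name is arbitrary.

        <?php
        // apc_check.php: dump opcode/file cache, user cache and shared memory state.
        $op  = apc_cache_info();        // opcode/file cache
        $usr = apc_cache_info('user');  // user cache
        printf("file cache entries: %d\n", count($op['cache_list']));
        printf("user cache entries: %d\n", count($usr['cache_list']));
        var_dump(apc_sma_info(true));   // shared memory segments / free space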


  • Fix/Bypass "Cannot connect to the real website-blocked" error in Google Chrome with OpenDNS blocking

    - by George H
    I have a large problem with Chrome in my organisation. I use DNS to manage web site blocking for sites which are not appropriate and are potentially a risk to the organisation. I only want to use Chrome over the network, as Internet Explorer has compatibility problems with some sites that we use (we cannot change this or use different sites), therefore using Internet Explorer is not a solution. I do not want to install a different browser, for multiple reasons, mainly because of the difficulty of rewriting the customised add-ons that we use. However, recently I have had lots of problems with Chrome SSL errors. I cannot use my custom OpenDNS block pages, which use the contact form to request an unblocking. Chrome often blocks OpenDNS for sites (a good example is Facebook) that request HTTPS, for example https://internetbadguys.com (an OpenDNS example). This means that Chrome refuses to load the block page explaining that the site is blocked. Instead users often call IT support, but they want a solution, as they are sick of getting lots of SSL errors. I have tried looking into ways to turn this off. I have tried: Typing "proceed". That didn't work. Typing "proceed", pressing enter. Didn't work. I cannot find the phishing and anti-malware setting in Chrome any more, which the internet guides point to. Not using HTTPS. However there is an automatic redirect to HTTPS on most sites, therefore the error keeps coming up. Checking my clocks. They were correct. Does anyone have an idea on how to disable, bypass or work around this "feature"? EDIT: This is an example of what I am talking about - I found that on Google Images. I do not block Google. EDIT 2: My clocks are correct. I cannot stop using OpenDNS either. EDIT 3: My question is: How do I stop Chrome from refusing to load pages that are blocked by OpenDNS, where the server has explicitly requested HTTPS?


  • networking tunnel adapter connections?

    - by Karthik Balaguru
    I understand that a tunnel adapter LAN connection is for encapsulating IPv6 packets with an IPv4 header so that they can be sent across an IPv4 network. A few queries popped up in my mind based on this: If I do 'ipconfig', apart from the Ethernet adapter LAN details, I get a series of statements as below - Tunnel adapter Local Area Connection* 6 Tunnel adapter Local Area Connection* 7 Tunnel adapter Local Area Connection* 12 Tunnel adapter Local Area Connection* 13 Tunnel adapter Local Area Connection* 14 Tunnel adapter Local Area Connection* 15 Tunnel adapter Local Area Connection* 16 Except for *16, all the other tunnel adapter local area connections show Media Disconnected. Why is the numbering for the tunnel adapter LAN connections not sequential? It is like 6, 7, 12, 13, 14, 15, 16. A strange numbering scheme! I tried to figure it out by thinking of some arithmetic series, but it does not seem to fit. There is a huge gap between 7 and 12. Any ideas? What is the need for so many tunnel adapter LAN connections? Can you tell me a scenario that requires all of those? I did ipconfig /all to get more information. From the listing, I understand that 16, 15, 14 and 12 are Microsoft 6to4 adapters; 13 and 6 are ISATAP adapters; 7 is the Teredo Tunneling Pseudo-Interface. I understand that the above are for automatic tunneling so that the tunnel endpoints are determined automatically by the routing infrastructure. 6to4 is recommended by RFC3056 for automatic tunneling that uses protocol 41 for encapsulation. It is typically used when an end-user wants to connect to the IPv6 Internet using their existing IPv4 connection. Teredo is an automatic tunneling technique that uses UDP encapsulation across multiple NATs; that is, it grants IPv6 connectivity to nodes that are located behind IPv6-unaware NAT devices. ISATAP treats the IPv4 network as a virtual IPv6 local link, with mappings from each IPv4 address to a link-local IPv6 address; that is, it transmits IPv6 packets between dual-stack nodes on top of an IPv4 network. To put it in simple words, ISATAP is an intra-site mechanism, while 6to4 and Teredo are inter-site tunnelling mechanisms. It seems that Teredo alone should be enabled by default in Vista, but my system does not show it to be enabled by default. Interestingly, it shows a 6to4 tunnel adapter (Tunnel adapter LAN connection 16) to be enabled by default. Any specific reason for that? If I do ipconfig /all, why is only one Teredo adapter present while four 6to4 adapters are present? I searched the internet for answers to the above queries, but I was unable to find clear answers.
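
    For inspection, each transition technology has its own netsh context, so their states can be checked (and, if unwanted, disabled) individually from an elevated command prompt:

        netsh interface teredo show state
        netsh interface 6to4 show state
        netsh interface isatap show state

        :: To turn a technology off if it is not needed:
        netsh interface teredo set state disabled
        netsh interface 6to4 set state disabled
        netsh interface isatap set state disabled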


  • Intermittently, IIS7 requests get stuck in WindowsAuthenticationModule

    - by Richard Beier
    We're running an IIS7 server hosting several dozen websites. Several of these websites are all part of the same legacy app we've developed. These sites all run the same code and run in the same app pool. Roughly once a month over the past few months, we've found that all requests for this app pool start hanging indefinitely. When this happens, we receive an alert and we recycle the app pool. After that, the sites start working again. This only ever affects this one app pool - never any others on the same server. A couple times, before recycling the pool, I've looked at the currently-executing requests in the worker process. They all show up as executing inside the WindowsAuthenticationModule. Which is strange, because the vast majority of the application does not require authentication. There is a small admin section which uses Windows auth... but all the other requests should be anonymous. Does anyone have any idea as to what might be causing this? There are several unusual things about the way these sites are set up. As I mentioned, they all run the same code - multiple sites point at the same physical directory. The only difference is the host header bindings. I'm not sure why there isn't just one site with all the host headers, but that's how it works. In several of these sites, the same physical directory is mapped at two levels - as the root of the site and again as an application within the site. So if a user goes to http://oursite.com/index.aspx, that maps to c:\files\oursite\index.aspx. If a user goes to http://oursite.com/foo/index.aspx, that also maps to c:\files\oursite\index.aspx. I think there is code which looks at the request URL and handles the two requests differently. This is strange because the same web.config ends up being interpreted as a site config file, and also as an application config file within the site. I don't know if this might be related to the authentication problem. If we can't find the cause, we're thinking of a few workarounds we could try: Move the admin section into a separate site, and give the client a new admin URL. Run that separate site in its own app pool. Then in the web.config shared by all the other sites, remove the WindowsAuthenticationModule. That way there should be no possibility of a hang within the WindowsAuthenticationModule. Try running all these sites in the classic pipeline instead of the integrated pipeline. They were working fine on our old IIS6 server... (If we get desperate) Set up a watchdog script which monitors the sites and auto-recycles the app pool when it detects that requests are getting stuck. What do you think? Thanks for your help, Richard
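
    A sketch of the web.config change behind workaround 1: once the admin area lives in its own site, the shared sites can drop the module entirely (this is the module name IIS7 registers in the integrated pipeline).

        <configuration>
          <system.webServer>
            <modules>
              <remove name="WindowsAuthentication" />
            </modules>
          </system.webServer>
        </configuration>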


  • Passenger 2.2.4, nginx 0.7.61 and SSL

    - by boompa
    Has anyone had any luck configuring Passenger and nginx with SSL? I've spent hours trying to get this configuration working as I'd like, using what few resources there are out there on the net, and I can't get any of the supposedly forwarded headers to show up in the Rails controller. For example, with a conf file of the following (and multiple variations thereof): server { listen 3000; server_name .example.com; root /Users/website/public; passenger_enabled on; rails_env development; } server { listen 3443; root /Users/website/public; rails_env development; passenger_enabled on; ssl on; #ssl_verify_client on; ssl_certificate /Users/website/ssl/server.crt; ssl_certificate_key /Users/website/ssl/server.key; #ssl_client_certificate /Users/website/ssl/CA.crt; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1; ssl_ciphers ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X_FORWARDED_PROTO https; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; #proxy_set_header X-SSL-Subject $ssl_client_s_dn; #proxy_set_header X-SSL-Issuer $ssl_client_i_dn; proxy_redirect off; proxy_max_temp_file_size 0; } and Rails code in the controller like this: request.headers.each { |k, v| RAILS_DEFAULT_LOGGER.error "Header #{k} Val #{v}" } other headers appear, but not those set in nginx, e.g.: Header rack.multithread Val false Header REQUEST_URI Val /login/new Header REMOTE_PORT Val 64021 Header rack.multiprocess Val true Header PASSENGER_USE_GLOBAL_QUEUE Val false Header PASSENGER_APP_TYPE Val rails Header SCGI Val 1 Header SERVER_PORT Val 3443 Header HTTP_ACCEPT_CHARSET Val ISO-8859-1,utf-8;q=0.7,*;q=0.7 Header rack.request.query_hash Val Header DOCUMENT_ROOT Val /Users/website/public I've even gone so far as to modify Passenger's abstract_request_handler's main_loop method, i.e., headers, input = parse_request(client) if headers if headers[REQUEST_METHOD] == PING process_ping(headers, input, client) else headers.each { |h,v| log.unknown "abstract_request_handler: #{h} = #{v}" } process_request(headers, input, client) end end only to find that the supposedly added headers do not exist there either: abstract_request_handler: HTTP_KEEP_ALIVE = 300 abstract_request_handler: HTTP_USER_AGENT = Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1) Gecko/20090624 Firefox/3.5 abstract_request_handler: PASSENGER_SPAWN_METHOD = smart-lv2 abstract_request_handler: CONTENT_LENGTH = 0 abstract_request_handler: HTTP_IF_NONE_MATCH = "b6e8b9afbc1110ee3bf0c87e119252ad" abstract_request_handler: HTTP_ACCEPT_LANGUAGE = en-us,en;q=0.5 abstract_request_handler: SERVER_PROTOCOL = HTTP/1.1 abstract_request_handler: HTTPS = on abstract_request_handler: REMOTE_ADDR = 127.0.0.1 abstract_request_handler: SERVER_SOFTWARE = nginx/0.7.61 abstract_request_handler: SERVER_ADDR = 127.0.0.1 abstract_request_handler: SCRIPT_NAME = abstract_request_handler: PASSENGER_ENVIRONMENT = development abstract_request_handler: REMOTE_PORT = 64021 abstract_request_handler: REQUEST_URI = /login/new abstract_request_handler: HTTP_ACCEPT_CHARSET = ISO-8859-1,utf-8;q=0.7,*;q=0.7 abstract_request_handler: SERVER_PORT = 3443 abstract_request_handler: SCGI = 1 abstract_request_handler: PASSENGER_APP_TYPE = rails abstract_request_handler: PASSENGER_USE_GLOBAL_QUEUE = false I'm tired of banging my head against the wall, so I'd truly appreciate any help I can get!
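
    One likely explanation worth sketching: proxy_set_header only applies when nginx proxies to an upstream, but Passenger runs the app in-process, so those directives are never consulted. Older Passenger nginx modules expose passenger_set_cgi_param for this purpose; whether the 2.2.4 module supports it should be checked against that version's documentation (newer releases renamed the directive).

        server {
            listen 3443;
            ssl on;
            # ... certificate settings as above ...
            root /Users/website/public;
            rails_env development;
            passenger_enabled on;

            # Passed to the Rails app as CGI params instead of proxy headers:
            passenger_set_cgi_param HTTP_X_FORWARDED_PROTO https;
            passenger_set_cgi_param HTTP_X_REAL_IP $remote_addr;
        }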


  • Postfix: How to apply header_checks only for specific Domains?

    - by Lukas
    Basically what I want to do is rewriting the From: Header, using header_checks, but only if the mail goes to a certain domain. The problem with header_check is, that I can't check for a combination of To: and From: Headers. Now I was wondering if it was possible to use the header_checks in combination with smtpd_restriction_classes or something similar. I've found a lot information about header_checks and multiple header fields, when searching the net. All of them basically telling me, that one can't combine two header for checking. But I didn't find any information if it was possible to only do a header check if a condition (eg. mail goes to example.com) was met. Edit: While doing some more Research I've found the following article which suggests to add a Service in postfix master.cf, use a transportmap to pass mails for the Domain to that service and have a separate header_check defined with -o. The thing is that I can't get it to work... What I did so far is adding the Service to the master.cf: example unix - - n - - smtpd -o header_checks=regexp:/etc/postfix/check_headers_example Adding the followin Line to the transportmap: example.com example: Last but not least I have two regexp-files for header checks, one for the newly added service, and one to redirect answers to the rewritten domain. check_headers_example: /From:(.*)@mydomain.ain>(.*)/ REPLACE From:[email protected]>$2 Obviously if someone answers, the mail would go to nirvana, so I have the following check_headers defined in the main postfix process: /To:(.*)<(.*)@mydomain.example.com>(.*)/ REDIRECT [email protected]$2 Somehow the Transport is ignored. Any help is appreciated. Edit 2: I'm still stuck... I did try the following: smtpd_restriction_classes = header_rewrite header_rewrite = regexp:/etc/postfix/rewrite_headers_domain smtpd_recipient_restrictions = (some checks) check_recipient_access hash:/etc/postfix/rewrite_table, (more checks) In the rewrite_table the following entries exist: /From:(.*)@mydomain.ain>(.*)/ REPLACE From:[email protected]>$2 All it gets me is a NOQUEUE: reject: 451 4.3.5 Server configuration error. I couldn't find any resources on how you would do that but some people saying it wasn't possible. Edit 3: The reason I asked this question was, that we have a customer (lets say customer.com) who uses some aliases that will forward mail to a domain, let's say example.com. The mailserver at example.com does not accept any mail from an external server that come from a sender @example.com. So all mails that are written from example.com to [email protected] will be rejected in the end. An exception on example.com's mailserver is not possible. We didn't really solve this problem, but will try to work around it by using lists (mailman) instead of aliases. This is not really nice though, nor a real solution. I'd appreciate all suggestions how this could be done in a proper way.
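
    One approach that sidesteps the cleanup-time limitation, sketched below with placeholder domains and file names: route mail for the target domain to a dedicated outbound smtp transport and attach the header check to that transport only via smtp_header_checks (available in Postfix 2.5 and later), so the rewrite happens at delivery time and only for that destination.

        # master.cf: a dedicated outbound transport
        rewrite_smtp unix - - n - - - smtp
            -o smtp_header_checks=regexp:/etc/postfix/rewrite_headers_domain

        # main.cf
        transport_maps = hash:/etc/postfix/transport

        # /etc/postfix/transport  (run "postmap /etc/postfix/transport" afterwards)
        example.com    rewrite_smtp:

        # /etc/postfix/rewrite_headers_domain  (pattern is a placeholder)
        /^From:(.*)@mydomain\.example>(.*)/ REPLACE From:$1@otherdomain.example>$2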


  • FreeBSD jail with IPFW with loopback - unable to connect loopback interface

    - by khinester
    I am trying to configure a one-IP jail with a loopback interface, but I am unsure how to configure the IPFW rules to allow traffic to pass between the jail and the network card on the server. I have followed http://blog.burghardt.pl/2009/01/multiple-freebsd-jails-sharing-one-ip-address/ and https://forums.freebsd.org/viewtopic.php?&t=30063 but without success. Here is what I have in my ipfw.rules: # vim /usr/local/etc/ipfw.rules ext_if="igb0" jail_if="lo666" IP_PUB="192.168.0.2" IP_JAIL_WWW="10.6.6.6" NET_JAIL="10.6.6.0/24" IPF="ipfw -q add" ipfw -q -f flush #loopback $IPF 10 allow all from any to any via lo0 $IPF 20 deny all from any to 127.0.0.0/8 $IPF 30 deny all from 127.0.0.0/8 to any $IPF 40 deny tcp from any to any frag # statefull $IPF 50 check-state $IPF 60 allow tcp from any to any established $IPF 70 allow all from any to any out keep-state $IPF 80 allow icmp from any to any # open port ftp (20,21), ssh (22), mail (25) # ssh (22), , dns (53) etc $IPF 120 allow tcp from any to any 21 out $IPF 130 allow tcp from any to any 22 in $IPF 140 allow tcp from any to any 22 out $IPF 150 allow tcp from any to any 25 in $IPF 160 allow tcp from any to any 25 out $IPF 170 allow udp from any to any 53 in $IPF 175 allow tcp from any to any 53 in $IPF 180 allow udp from any to any 53 out $IPF 185 allow tcp from any to any 53 out # HTTP $IPF 300 skipto 63000 tcp from any to me http,https setup keep-state $IPF 300 skipto 63000 tcp from any to me http,https setup keep-state # deny and log everything $IPF 500 deny log all from any to any # NAT $IPF 63000 divert natd ip from any to any via $jail_if out $IPF 63000 divert natd ip from any to any via $jail_if in But when I create a jail with: # ezjail-admin create -f continental -c zfs node 10.6.6.7 /usr/jails/node/. /usr/jails/node/./etc /usr/jails/node/./etc/resolv.conf /usr/jails/node/./etc/ezjail.flavour.continental /usr/jails/node/./etc/rc.d /usr/jails/node/./etc/rc.conf 4 blocks find: /usr/jails/node/pkg/: No such file or directory Warning: IP 10.6.6.7 not configured on a local interface. Warning: Some services already seem to be listening on all IP, (including 10.6.6.7) This may cause some confusion, here they are: root syslogd 1203 6 udp6 *:514 *:* root syslogd 1203 7 udp4 *:514 *:* I get these warnings, and when I then go into the jail environment I am unable to install any ports. Any advice is much appreciated.
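
    The "IP 10.6.6.7 not configured on a local interface" warning suggests the cloned loopback referenced in the rules (lo666) never gets created and addressed before ezjail starts; a hedged rc.conf sketch of that part, with addresses adjusted to the jail network above:

        # /etc/rc.conf (sketch)
        cloned_interfaces="lo666"
        ifconfig_lo666="inet 10.6.6.7 netmask 255.255.255.0"
        gateway_enable="YES"
        firewall_enable="YES"
        firewall_script="/usr/local/etc/ipfw.rules"
        natd_enable="YES"
        natd_interface="igb0"

        # divert rules also require IPDIVERT; if it is not compiled into the
        # kernel, load the module via /boot/loader.conf:
        ipdivert_load="YES"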


  • Building a Mac/PC Network in a Dorm with Network Restrictions

    - by user70340
    I have been a Windows XP user for the last few years, but I recently bought a 15'' MacBook Pro for research purposes. I would like to set up a no-hassle Mac/PC Network at home so that I can access the internet on both computers and hardware between computers (i.e. a harddrive, or a mouse/keyboard with Synergy). Unfortunately, I live in a dorm with silly network restrictions so a solution is not straightforward. In particular: The dorm has a wired and wireless network, both which provide an internet connection. The wired network provides way faster internet (download speeds of 15 MB/s vs. 2 MB/s on wireless), so I would like to somehow exploit this, at least on my PC for Bittorrent :) Multiple devices can connect to the wireless network, but cannot "see" each other on the network (so software like Synergy would not work). Only 1 MAC address can connect to the wired network at a time. Ideally I would just connect a wireless router to the wired network and then have both the Mac and the PC on that, but the 1 MAC address restriction will not allow the both computer to access the internet simultaneously. I cannot think of a way to bypass this restriction (though I'm not network savvy), so I am planning to create a private no-internet network to allow the devices to see each other and share hardware. Here are some thoughts. I would appreciate any feedback at all! If I build a private wireless network: (first choice) I will use a wireless router that is not connected to the internet. My PC and Mac will be connected to each other wirelessly. I can then connect the PC to the internet via a wired network, but then the Mac will not have internet access as its wireless card is already in use. In this case, could I stream internet access from the PC to the Mac via the wireless network? Or could I buy a USB wireless card for the Mac so that it can connect to both my private network and the dorm network? If I build a private wired network: (second choice) Then both the PC and the Mac will connect to the internet wirelessly, which means I cannot take advantage of the faster download speeds.


  • Adobe Reader not loading form content

    - by wullxz
    We have an FDF file which is used to offer an online application possibility. The FDF is filled out and sent to a mailbox. When I open the received file, Adobe Reader starts, loads the document in Internet Explorer (I had to change my default browser because it doesn't work in Chrome - the customer uses IE as default) and displays a warning that Adobe Reader has blocked the connection to the server where the initial document is saved: I can then click on "Trust this document once" (translated by me!) or "Add this host to trusted hosts" (also translated by me!). The second option doesn't work at all. The first option works but is a little bit annoying. I looked into Adobe Reader's options (Edit - "Voreinstellungen" in German / the last option - Security (advanced)) and found the possibility to add hosts, files and directories, or to allow Adobe Reader to use the "Trusted Websites" list from the Internet options. When I add the website either to Trusted Websites or to the trusted list in Adobe Reader's options, the warning doesn't pop up, but the content of the input boxes prefilled (by the applicant) doesn't show up on Windows 7 - it does show up on Windows XP. This screenshot shows the settings window described in the last paragraph. The big input box at the bottom normally holds the trusted files/directories/hosts list. System information: Windows 7 Enterprise x64, Adobe Reader X, multiple IE versions (mine is the latest but there's also IE 7 or 8). How do I get Adobe Reader to load the content of the form? This behaviour can be reproduced on a PC. When opening an FDF from a command line the form fields are blank even though there is data in the FDF and the PDF is located in a manually entered trusted folder. Steps to reproduce: Clean install a Windows 7 PC (or use a virtual box) Map a network drive to a shared folder with a subfolder, e.g. c:\test\docs becomes m:\docs Set security permissions to allow full control to everyone Add an FDF and a matching PDF file in the subfolder Manually add m:\docs to each of the trusted folders in the trust manager registry settings Ensure that Enhanced Security is on Run a command line to open the FDF file Expected result: PDF is opened in Adobe Reader with form fields filled out with data Actual result: PDF is opened with blank fields 'Yellow bar' appears asking to add the document to trusted locations It appears that Adobe Reader XI is ignoring the privileged locations entries in the registry. Adding the document via the 'yellow bar' adds the individual document, within the same folder, to the privileged locations, but means that the process has to be repeated for every document that needs to be opened from the folder.


  • usb_modeswitch not switching

    - by deniz
    After I upgraded from kernel 2.6.18 to 3.5.3 modeswitch started not to work for me. Although lsusb shows my usb modem, usb_modeswitch does not switch it. My system information is like below. I ran lsusb, dmesg, usb-devices and usb_modeswitch their output is like below. usb_modeswitch instead of switching my modem it says "No devices in default mode found. Nothing to do. Bye.". Can you offer a solution? Kernel: Linux 3.5.3 usb_modeswitch: 1.2.3-1 usb_modeswitch-data: 20120120-1 usbutils: 006-1 libusb: 1.0.8-0.1 root@localhost$ lsusb Bus 002 Device 029: ID 12d1:1446 Huawei Technologies Co., Ltd. root@localhost$ dmesg [70112.477080] usb 2-1.4: new high-speed USB device number 30 using ehci_hcd [70112.567757] scsi49 : usb-storage 2-1.4:1.0 [70112.567842] scsi50 : usb-storage 2-1.4:1.1 [70113.571433] scsi 49:0:0:0: CD-ROM HUAWEI Mass Storage 2.31 PQ: 0 ANSI: 2 [70113.572304] scsi 50:0:0:0: Direct-Access HUAWEI TF CARD Storage PQ: 0 ANSI: 2 [70113.574169] sr0: scsi-1 drive [70113.574223] sr 49:0:0:0: Attached scsi CD-ROM sr0 [70113.574250] sr 49:0:0:0: Attached scsi generic sg1 type 5 [70113.574350] sd 50:0:0:0: Attached scsi generic sg2 type 0 [70113.577173] sd 50:0:0:0: [sdb] Attached SCSI removable disk root@localhost$ usb-devices T: Bus=02 Lev=02 Prnt=02 Port=03 Cnt=01 Dev#= 30 Spd=480 MxCh= 0 D: Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1 P: Vendor=12d1 ProdID=1446 Rev=00.00 S: Manufacturer=Huawei Technologies S: Product=HUAWEI Mobile C: #Ifs= 2 Cfg#= 1 Atr=c0 MxPwr=500mA I: If#= 0 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage I: If#= 1 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage root@localhost$ cat /etc/usb_modeswitch.d/12d1\:1446 # Huawei, newer modems TargetVendor= 0x12d1 TargetProductList="1001,1406,140b,140c,1412,141b,1433,1436,14ac,1506" MessageContent="55534243123456780000000000000011062000000100000000000000000000" root@localhost$ usb_modeswitch -c /etc/usb_modeswitch.d/12d1:1446 -v 12d1 -p 1446 -W * usb_modeswitch: handle USB devices with multiple modes * Version 1.2.3 (C) Josua Dietze 2012 * Based on libusb0 (0.1.12 and above) ! PLEASE REPORT NEW CONFIGURATIONS ! DefaultVendor= 0x12d1 DefaultProduct= 0x1446 TargetVendor= 0x12d1 TargetProduct= not set TargetClass= not set TargetProductList="1001,1406,140b,140c,1412,141b,1433,1436,14ac,1506" DetachStorageOnly=0 HuaweiMode=0 SierraMode=0 SonyMode=0 QisdaMode=0 GCTMode=0 KobilMode=0 SequansMode=0 MobileActionMode=0 CiscoMode=0 MessageEndpoint= not set MessageContent="55534243123456780000000000000011062000000100000000000000000000" NeedResponse=0 ResponseEndpoint= not set InquireDevice enabled (default) Success check disabled System integration mode disabled usb_set_debug: Setting debugging level to 15 (on) usb_os_find_busses: Skipping non bus directory devices usb_os_find_busses: Skipping non bus directory drivers usb_os_find_busses: Skipping non bus directory uevent usb_os_find_busses: Skipping non bus directory drivers_probe usb_os_find_busses: Skipping non bus directory drivers_autoprobe Looking for target devices ... No devices in target mode or class found Looking for default devices ... No devices in default mode found. Nothing to do. Bye. Thanks in advance.
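
    To take the config-file lookup out of the picture, the switch message can be sent by hand with explicit source and target IDs; a sketch, where target product 1506 is just one of the IDs from the TargetProductList above and may differ for this particular modem.

        usb_modeswitch -v 12d1 -p 1446 -V 12d1 -P 1506 \
            -M "55534243123456780000000000000011062000000100000000000000000000"

        # Then check whether the device re-enumerated with a new product id:
        lsusb | grep 12d1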


  • Exchange 2010: emails forwarded by external servers being blocked

    - by MadBoy
    Our users were getting spam messages from their own accounts (same domain/login, for example [email protected] to [email protected]). This is a pretty standard trick, so I decided to block it so that anonymous users can't send emails as @company.com. This brought some problems on us, like our printers not being able to send emails etc., but I solved that with a secondary SMTP receive connector on a different port with IP restrictions. However it seems to affect forwarding by some e-mail servers as well: Hi. This is the qmail-send program at home.pl. I'm afraid I wasn't able to deliver your message to the following addresses. This is a permanent error; I've given up. Sorry it didn't work out. : 89.14.1.26 failed after I sent the message. Remote host said: 550 5.7.1 Client does not have permissions to send as this sender --- Below this line is a copy of the message. Return-Path: Return-Path: Received: from mail.company.com [89.14.1.26] (HELO mail.company.com) by company.ho.pl [79.93.31.43] with SMTP (IdeaSmtpServer v0.70) id 488fcb01c2f069d9; Tue, 3 Jan 2012 09:46:55 +0100 Received: from EXCHANGE1.COMPANY ([fe80::d425:135f:b655:1223]) by EXCHANGE2.COMPANY ([fe80::193f:51ac:9316:cb27%14]) with mapi id 14.01.0355.002; Tue, 3 Jan 2012 09:46:55 +0100 From: =?iso-8859-2?Q?MadBoy?= So basically the server forwards it without changing the email address it was sent with, and our server treats it like spam. I used this command to block things: Get-ReceiveConnector "DEFAULT Exchange2" | Get-ADPermission -user "NT AUTHORITY\Anonymous Logon" | where {$_.ExtendedRights -like "ms-exch-smtp-accept-authoritative-domain-sender"} | Remove-ADPermission Is there any way I can keep receiving things like forwards but still be able to block spoofed senders (short of a dedicated antispam solution - that will be added later)? Also, how do I reassign the permissions that were removed? EDIT to clarify: I have a domain domain.com configured as authoritative. A couple of our users are on a project for differentcompany.com, which is not on our servers or anywhere close. Now when they send an email from their accounts, let's say [email protected], to [email protected], that special alias is configured to forward any email it receives to multiple people, including a group alias at our domain, [email protected], and that group alias puts the email in users' mailboxes. After the email is forwarded by [email protected] and reaches our server, it is denied, because the forwarding done by the "external" server doesn't change the sender information, so to our server it looks like [email protected] was actually the sender, and it treats it as spam and denies it. The server at differentcompany.com just adds itself to the headers as it passes the message through and doesn't modify the sender in any way (this seems to be how forwarding works). I could probably allow this particular server to relay, but that would seem to affect more servers/users, as anyone can set up forwarding on their email back to our domain...
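
    For the "reassign the permissions" part, the removal above can be reversed by granting the same extended right back on the same connector; a sketch mirroring the original command:

        Get-ReceiveConnector "DEFAULT Exchange2" | Add-ADPermission `
            -User "NT AUTHORITY\Anonymous Logon" `
            -ExtendedRights "ms-Exch-SMTP-Accept-Authoritative-Domain-Sender"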


  • can't use periods in ServerName [Lion Apache installation]

    - by punchfacechamp
    I can access my host like this… http://keggyshop but can't use periods… http://keggyshop.dev here's my virtual host directive… <VirtualHost *:80> ServerName keggyshop ServerAlias keggyshop.dev DocumentRoot "~/sites/2012/keggy/web/pages/keggy/120528/sandbox/public" <Directory "~/sites/2012/keggy/web/pages/keggy/120528/sandbox/public"> Options Includes FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> </VirtualHost> host file 127.0.0.1 keggyshop 127.0.0.1 keggyshop.dev traceroute for keggyshop… user$ traceroute keggyshop traceroute to keggyshop (192.168.1.184), 64 hops max, 52 byte packets 1 keggyshop (192.168.1.184) 1.188 ms 0.683 ms 0.747 ms traceroute for keggyshop.dev… user$ traceroute keggyshop.dev traceroute: Warning: keggyshop.dev has multiple addresses; using 184.106.15.239 traceroute to keggyshop.dev (184.106.15.239), 64 hops max, 52 byte packets 1 * 192.168.1.1 (192.168.1.1) 0.856 ms 0.568 ms 2 10.81.192.1 (10.81.192.1) 15.232 ms 7.002 ms 7.936 ms 3 gig-0-3-0-6-nycmnya-rtr2.nyc.rr.com (24.29.97.122) 7.962 ms 7.813 ms 7.712 ms 4 bun101.nycmnytg-rtr001.nyc.rr.com (184.152.112.107) 10.999 ms 14.001 ms 15.466 ms 5 bun6-nycmnytg-rtr002.nyc.rr.com (24.29.148.250) 11.231 ms 17.321 ms 12.745 ms 6 107.14.19.24 (107.14.19.24) 13.972 ms 11.704 ms 16.477 ms 7 ae-1-0.pr0.nyc30.tbone.rr.com (66.109.6.161) 9.237 ms 11.896 ms 107.14.19.153 (107.14.19.153) 7.481 ms 8 xe-5-0-6.ar2.ewr1.us.nlayer.net (69.31.94.57) 16.682 ms 11.791 ms 11.981 ms 9 ae3-90g.cr1.ewr1.us.nlayer.net (69.31.94.117) 12.977 ms 15.706 ms 9.709 ms 10 xe-5-0-0.cr1.ord1.us.nlayer.net (69.22.142.74) 30.473 ms 30.497 ms 31.750 ms 11 ae1-20g.ar1.ord6.us.nlayer.net (69.31.110.250) 36.699 ms 50.785 ms 35.957 ms 12 as19994.xe-1-0-7.ar1.ord6.us.nlayer.net (69.31.110.242) 34.723 ms 31.118 ms 29.967 ms 13 coreb.ord1.rackspace.net (184.106.126.138) 30.471 ms corea.ord1.rackspace.net (184.106.126.136) 33.392 ms 35.210 ms 14 core1-coreb.ord1.rackspace.net (184.106.126.129) 32.453 ms core1-corea.ord1.rackspace.net (184.106.126.125) 32.020 ms core1-coreb.ord1.rackspace.net (184.106.126.129) 32.417 ms 15 core1-aggr401a-3.ord1.rackspace.net (173.203.0.157) 31.274 ms 34.854 ms 30.194 ms
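
    Given that the traceroute for keggyshop.dev goes out to a public address despite the hosts entry, it is worth checking what the OS resolver (as opposed to the shell tools) actually returns, and flushing its cache after editing /etc/hosts; a sketch using the stock Lion tools:

        # What does Directory Services resolve the name to?
        dscacheutil -q host -a name keggyshop.dev

        # Flush the resolver cache (one or both, depending on the 10.7 point release):
        sudo dscacheutil -flushcache
        sudo killall -HUP mDNSResponder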


  • Could I centralize batch files more efficiently?

    - by PeanutsMonkey
    I am new to the world of batch scripting, so please forgive what may appear as basic questions. I am learning as I get assigned different jobs and I am a huge proponent of automation where possible. I have several batch files that perform several tasks. Each of these files had their paths hard-coded, e.g. c:\temp, d:\data, etc., in the batch file. Initially I moved these to a text file I could call from a batch file, e.g. for /f "tokens=1,2 delims==" %%R in (config.txt) do ( if %%R==bdata set bdata=%%S if %%R==cdata set cdata=%%S ) The config.txt file contains these values bdata=c:\temp cdata=d:\data I realized that each time I needed to create a new variable, I would have to update the config.txt file as well as the config.bat file. I decided I would move all the values to just the config.bat file as follows set bdata=c:\temp set cdata=d:\data I then updated each of the existing batch files to use the variables rather than the hard-coded paths. I also added the following lines of code to each batch file except config.bat. The only additional line added to the config.bat file is @echo off. @echo off setlocal enableextensions enabledelayedexpansion call config.bat I then have another batch file that centralizes calling all the batch files in sequence. The name of this batch file is start.bat. The reason I am using start /wait is because there have been instances where delete.bat runs before compress.bat has had an opportunity to finish. start /wait compress.bat start /wait validate.bat start /wait delete.bat Questions Is this the best way to centralize values and if not, what is a better way? Do I need to specify setlocal enableextensions enabledelayedexpansion in all the existing batch files? Do all the batch files have to have @echo off or is it sufficient for just the config.bat file? Is start /wait the best way to call multiple files? Can I pass values from one batch file to another using the said command? All the batch files have different functions, e.g. move, delete, etc., however they all use %%a or %%b. Is this okay? For example the validate.bat file has the code for %%a in (%bdata%\*.*) do if "%%~xa" == "" move /Y "%bdata%\%%~xa" "%bdata%\%done%" and the delete.bat file has the code for %%a in (%bdata%\*.*) do if "%%~xa" == ".txt" del "%%a"
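
    On the start /wait question, a sketch of an alternative start.bat using call, which runs each script synchronously in the same console and also lets you pass values as arguments (read inside the child script as %1, %2, ...); script names follow the ones above.

        @echo off
        setlocal enableextensions
        call config.bat

        rem call waits for each script; pass the shared paths as arguments.
        call compress.bat "%bdata%" "%cdata%"
        if errorlevel 1 goto :eof
        call validate.bat "%bdata%"
        if errorlevel 1 goto :eof
        call delete.bat "%bdata%"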

    Read the article

  • Do RAID controllers commonly have SATA drive brand compatibility issues?

    - by Jeff Atwood
    We've struggled with the RAID controller in our database server, a Lenovo ThinkServer RD120. It is a rebranded Adaptec that Lenovo / IBM dubs the ServeRAID 8k. We have patched this ServeRAID 8k up to the very latest and greatest:

    RAID BIOS version
    RAID backplane BIOS version
    Windows Server 2008 driver

    This RAID controller has had multiple critical BIOS updates even in the short 4 months we've owned it, and the change history is just.. well, scary. We've tried both write-back and write-through strategies on the logical RAID drives. We still get intermittent I/O errors under heavy disk activity. They are not common, but they are serious when they happen, as they cause SQL Server 2008 I/O timeouts and sometimes failure of SQL connection pools. We were at the end of our rope troubleshooting this problem. Short of hardcore stuff like replacing the entire server, or replacing the RAID hardware, we were getting desperate.

    When I first got the server, I had a problem where drive bay #6 wasn't recognized. Switching out hard drives to a different brand, strangely, fixed this -- and updating the RAID BIOS (for the first of many times) fixed it permanently, so I was able to use the original "incompatible" drive in bay 6. On a hunch, I began to suspect that the Western Digital SATA hard drives I chose were somehow incompatible with the ServeRAID 8k controller. Buying 6 new hard drives was one of the cheaper options on the table, so I went for 6 Hitachi (aka IBM, aka Lenovo) hard drives on the theory that an IBM/Lenovo RAID controller is more likely to work with the drives it's typically sold with. That hunch appears to have paid off -- we've been through three of our heaviest-load days (Mon, Tue, Wed) without a single I/O error of any kind. Prior to this we regularly had at least one I/O "event" in that time frame. It sure looks like switching brands of hard drive has fixed our intermittent RAID I/O problems!

    While I understand that IBM/Lenovo probably tests their RAID controller exclusively with their own brand of hard drives, I'm disturbed that a RAID controller would have such subtle I/O problems with particular brands of hard drives. So my question is: is this sort of SATA drive incompatibility common with RAID controllers? Are there some brands of drives that work better than others, or that are "validated" against particular RAID controllers? I had sort of assumed that all commodity SATA hard drives were alike and would work reasonably well in any given RAID controller (of sufficient quality).

    Read the article

  • kvm and qemu host: Is there a limit for max CPUs (Ubuntu 10.04)?

    - by Valentin
    Today we encountered some really strange behaviour on two identical KVM/QEMU hosts. Each host system has 4 x 10 cores, which means that 40 physical cores are displayed as 80 within the operating system (Ubuntu Linux 10.04, 64-bit). We started a Windows 2003 32-bit VM (1 CPU, 1 GB RAM; we changed those values multiple times) on one of the nodes and noticed that it took 15 minutes until the boot process began. During those 15 minutes a black screen is shown and nothing happens. libvirt and the host system show that the qemu-kvm process for the guest is almost idling. stracing this process only shows some FUTEX entries, but nothing special. After those 15 minutes the Windows VM suddenly starts booting and the Windows logo appears. After a few seconds, the VM is ready to be used. The VM itself is very performant, so this is not a performance issue.

    We tried to pin the CPUs with the virsh and taskset tools, but this only made things worse. When we boot the Windows VM with a Linux live CD there is also a black screen for several minutes, but not as long as 15. When booting another VM on this host (Ubuntu 10.04) it also shows the black-screen problem, but there the black screen is only shown for 2-3 minutes (instead of 15).

    So, summarizing: each guest on each of those identical nodes idles for a few minutes after being started, and then the boot process suddenly begins. We have observed that the idle time happens right after the BIOS of the guest has initialized. One of our employees had the idea to limit the number of CPUs with maxcpus=40 (because 40 physical cores exist) as a kernel parameter in GRUB, and suddenly the "black-screen-idling" behaviour disappeared (a sketch of that GRUB change is shown below, after the host details).

    Searching the KVM and QEMU mailing lists, the internet, forums, Server Fault and various other sites for known bugs etc. showed no useful results. Even asking in the dev IRC channels brought no new ideas. The people there recommended CPU pinning, but as stated before it didn't help.

    My question is now: is there some kind of limit on the number of CPUs for a QEMU or KVM host system? Browsing the source code of those two tools showed that KVM would print a warning if your host has more than 255 CPUs, but we are not even scratching that limit.

    Some details about the host system:

    Kernel 3.0.0-20-server
    kvm 1:84+dfsg-0ubuntu16+0.14.0+noroms+0ubuntu4
    kvm-pxe 5.4.4-7ubuntu2
    qemu-kvm 0.14.0+noroms-0ubuntu4
    qemu-common 0.14.0+noroms-0ubuntu4
    libvirt 0.8.8-1ubuntu6
    4 x Intel(R) Xeon(R) CPU E7-4870 @ 2.40GHz, 10 cores each
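
    For reference, a hedged sketch of the maxcpus workaround described above, assuming the host uses GRUB 2 and its /etc/default/grub file (with legacy GRUB the option would go on the kernel line in /boot/grub/menu.lst instead). This is only the mitigation the question already mentions, not a fix for the underlying behaviour, and "quiet" stands in for whatever options the line already carries:

    # /etc/default/grub -- append maxcpus=40 to the existing default options
    GRUB_CMDLINE_LINUX_DEFAULT="quiet maxcpus=40"

    # regenerate the GRUB configuration and reboot the host
    sudo update-grub
    sudo reboot

    # afterwards, confirm how many logical CPUs the host actually exposes
    grep -c ^processor /proc/cpuinfo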

    Read the article

  • Network use of Gaming PC

    - by Matthew Patrick Cashatt
    Background

    After YEARS of waiting, I built the custom gaming PC of my dreams:

    Intel i7-975 Extreme Edition 3.3 GHz (overclocked to 4.0)
    ATI Radeon 5970 2 GB
    Corsair 256 GB SSD drive
    2 TB SATA II 3.0 7200 rpm data drive
    12 GB Kingston Hyper-X (1600 MHz) DDR3
    Windows 7 Ultimate 64-bit
    And so on...

    Problem

    I hooked this beast up to our home theater and settled in for a great gaming season, only to realize a couple of drawbacks:

    It's hard to accurately wax bad guys using a keyboard in your lap whilst reclined on your couch (and using a wireless keyboard).
    It's hard to read the text on the screen (i.e. menus, etc). I find that a 1:1 ratio (screen diagonal inch to inch away from the screen) is optimal, but in the home theater it's more like 1:3, which has me squinting unless I sit on the coffee table.
    The wife always seems to want the TV at the same time I do and, unfortunately, "Real Housewives of Beverly Hills" and Battlefield BC don't mix.

    I am losing the battle in the home theater room, but the PC has to stay there (long story). So this leaves me with the option of playing in my home office, which is about 30 feet away from the home theater. I am a software developer, so I have a pretty decent setup in my office -- multiple 1080p monitors, and an HP Envy 17 which can run games like Crysis in 720p without stammering too much. Also, I can game very comfortably at my desk in the office. Still, even though the setup in my office can run games well enough, I don't want to regress to that when I have worked YEARS for an awesome gaming PC that can run everything on ultra-high settings.

    My Question

    What are my options for running my games on the beastly desktop in the home theater room while physically playing in my office about 30 feet away? A really long HDMI cable? LAN/RDC?

    Details that May Help

    We have an open crawlspace, so running cable from the home theater room to the office is no problem. I have already networked the house with a LAN. Any help is GREATLY appreciated. Thanks, Matt

    Read the article
