Search Results

Search found 37101 results on 1485 pages for 'array based'.


  • Multidimensional data table?

    - by ShreevatsaR
    [Apologies if this sort of question is off-topic for SuperUser. Please redirect to the right place if so.]

    There is a 3-dimensional array of values. (That is, instead of a table/2-dimensional array with values in a grid, the values can be thought of as filling a cube.) Is there a way to display this "cube" interactively, ideally on a webpage?

    Specifically, given the data, it would work something like this: the user selects two of the 3 variables. He then sees a "stack" of tables, one for each value of the third variable (cross-sections, in other words). By selecting the appropriate table from the stack, he can see the (i,j,k) value he wants.

    The "technology" for displaying such a thing (stacked tables, rotation, etc.) already exists, so this seems the sort of thing that someone ought to have written already. To be clear: I don't necessarily need sophisticated graphics, just the ability to select cross-sections of variables. But I have no experience with what web gadgets exist (say, for displaying on a webpage), so I'm clueless how to even search for one. (Google searches like "multidimensional data visualization" didn't turn up anything useful. Google Spreadsheets can do a few kinds of charts which can be embedded in a webpage, but I cannot tell if this is one of them.)

    [I can imagine how it ought to work for higher dimensions. For four dimensions, instead of selecting just a stack, you'd first select an (i,j) from an "outer table", which would show all (k,l) values for that (i,j). For higher dimensions, inductively: you select (i,j), and then repeat what you'd do with 2 fewer dimensions.]

    So has this been written? Is it easy to write? Where ought one to look for such a thing?

    Read the article

  • AWS: Multi-region setup using single RDS instance

    - by Ion
    I'm trying to scale our web application (PHP, MySQL, memcache) into a multi-region scheme. Currently we use a setup with two EC2 instances behind an ELB and an RDS instance, all of them in the US-EAST (Virginia) region. We would like to have a presence in the EU (Ireland) region as well, which means at least a new EC2 instance there (identical to the others, serving the same application).

    I have copied the desired AMI, set up the new instance, set up an identical ELB configuration (required for SSL termination) and configured latency-based routing in Route53. It works as intended. But clients from the EU have speed problems, because the EU EC2 instance connects to the US-based RDS instance. As far as I know, Amazon has not yet enabled multi-region replication for RDS.

    Do you have any suggestions on how to properly speed up the whole setup while using the single RDS instance? Also, any ideas in general on how to scale things up? Ideally we would like to continue using RDS for various reasons. Nevertheless, I am open to suggestions (I guess the next idea would be to host our own MySQL servers).
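    One stopgap worth mentioning: if the RDS engine version allows external replicas to read its binary log (an assumption that needs verifying for the version in use), a self-managed MySQL read slave on the EU instance can serve local reads while writes still go to the US RDS endpoint. A rough sketch, where every hostname, credential and binlog coordinate is a placeholder:

        # On the EU EC2 instance, point a local MySQL slave at the RDS master
        # (assumes binlog access is available - verify before relying on this):
        mysql -u repl_admin -p -e "
          CHANGE MASTER TO
            MASTER_HOST='mydb.xxxx.us-east-1.rds.amazonaws.com',
            MASTER_USER='repl_user',
            MASTER_PASSWORD='repl_pass',
            MASTER_LOG_FILE='mysql-bin-changelog.000001',
            MASTER_LOG_POS=4;
          START SLAVE;"

    Writes would still cross the Atlantic, but read-heavy pages should speed up considerably.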

    Read the article

  • Enterprise class storage best practices

    - by churnd
    One thing that has always perplexed me is storage best practices. Filesystems brag about how they can be petabytes or exabytes in size. Yet I do not know many sysadmins who are willing to let a single volume grow beyond several terabytes. I do know the primary reason behind this is how long it would take to rebuild the array should a drive fail: the more drives in a single LUN, the longer the rebuild takes and the greater the risk of losing another drive while the rebuild is in progress.

    Then there are usage reasons. Admins will carve out a LUN based on how much space they think needs to be allocated to the project. It seems more practical to me for the LUN to be one large array and to use quotas. I understand this wouldn't satisfy every requirement (iSCSI, for example), but I see a lot of NAS systems (NFS) managed this way. I also understand that the underlying volumes can be grown/shrunk as needed quite easily, but wouldn't it be less "risky" to use quotas rather than manipulating volumes and bringing possible data loss into the equation?

    There may be some other reasons I'm missing, so please enlighten me. Can we not expect filesystems to ever be so large? Are we waiting for the hardware to get faster to cut down on rebuild times?
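    For reference, the one-big-pool-with-quotas model described here is straightforward on ZFS-backed NAS boxes; a minimal sketch, with hypothetical pool and dataset names:

        # One large pool, with per-project datasets capped by quota
        # instead of per-project LUNs:
        zfs create tank/projects/alpha
        zfs set quota=2T tank/projects/alpha
        zfs set reservation=500G tank/projects/alpha   # optional guaranteed space
        zfs get quota,used,available tank/projects/alpha

    Quotas can be raised or lowered at any time without touching the underlying vdevs, which is exactly the lower-risk behaviour the poster is after.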

    Read the article

  • Best way to build / implement a corporate developer Linux distro with multiple kernels?

    - by Garen
    At work we have Linux users who understandably prefer using Ubuntu. Problem is, we also have developer tools that only work with 'officially' supported Linux distributions that use much older 2.6.18-based kernels. (And even if they worked with newer ones, the vendors could always say they won't "support" the software unless it's on one of their 'officially' supported platforms.) We could of course just tell them to use CentOS or something else 2.6.18-based, and I'm sure their response would be something like: "you can take Ubuntu from our cold, dead hands." :)

    Which brings me to some questions: is there any good/easy/recommended way to run something like Ubuntu as a host and CentOS 5.x as a guest OS (and with which system: Xen, KVM, VMware, ...?), and then roll that into our own custom internal distribution that could be easily installed? KVM looks like a good high-performance option recently included in RHEL 5.4, but if hardware support for virtualization like Intel VT or AMD-V is necessary, then I'd guess only those folks with fairly new PCs will be able to do it. I would be very interested to hear how anyone else has addressed this kind of issue.

    EDIT: The target audience / users of this kind of system would be developers, each of whom needs to run locally licensed commercial software, so building out some separate beefy central machines isn't an option, unfortunately, due to license restrictions. Even if that weren't the case, a couple of developers could quickly eat up the resources with parallel builds. :) Ideally, I was hoping there was some step-by-step guide out there for building your own pre-built distribution that had e.g. CentOS 5.x and Ubuntu Desktop as a guest.
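    On the hardware-virtualization question, whether a given developer PC can run KVM guests is easy to check from Linux; a quick sketch:

        # Non-zero output means the CPU advertises Intel VT-x (vmx) or AMD-V (svm):
        egrep -c '(vmx|svm)' /proc/cpuinfo

        # On RHEL 5.4+, confirm the KVM modules actually loaded:
        lsmod | grep kvm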

    Read the article

  • mod_rewrite not working for subdomain in Apache2

    - by Matt
    Hi, I'm having some trouble with mod_rewrite. I'm implementing it through .htaccess, and I can get it working on my main vhost, domain.com: what I want it to do is rewrite http://domain.com to force it to https://domain.com, which it does well. I want to have name-based vhosts for the one IP with the following redirects:

    http://domain.com         -> https://domain.com
    http://staging.domain.com -> https://staging.domain.com
    http://test.domain.com    -> https://test.domain.com
    http://beta.domain.com    -> https://beta.domain.com

    domain.com redirects to https://domain.com, but staging.domain.com doesn't, although I can access https://staging.domain.com. The .htaccess is identical for both, just with the domain name different. It doesn't seem to do any rewriting at all for staging.domain.com; I've tested this by trying to get it to rewrite to www.google.com. I have a wildcard DNS record, *.domain.com, which points to the domain IP.

    Is there a particular way I should have the virtual hosts configured to allow this? I keep reading in the Apache documentation that it doesn't support multiple SSL name-based vhosts, but I can access both https://domain.com and https://staging.domain.com just fine. Any thoughts? Thanks to everyone for your help with this.
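    For reference, a typical force-HTTPS .htaccess block of the kind described (the poster's actual rules aren't shown; this sketch assumes mod_rewrite is enabled for each vhost, and the %{HTTP_HOST} form avoids hardcoding each subdomain):

        RewriteEngine On
        # Redirect any plain-HTTP request to the HTTPS equivalent of the same host.
        RewriteCond %{HTTPS} off
        RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

    Because this version is host-agnostic, one copy would behave identically on every subdomain, which also helps isolate whether the staging vhost is even reading its .htaccess.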

    Read the article

  • Route return traffic to correct gateway depending on service

    - by Marnix van Valen
    On my office network I have two internet connections and one CentOS server running a website (HTTPS on port 443). The website should be publicly accessible through the public IP of the first internet connection (ISP-1). The other internet connection, ISP-2, is the default gateway on the network. Both internet connections have routers (the household kind) with NAT, SPI firewalls, etc. The router on ISP-2 is a Netgear WNDR3700 (aka N600) with original firmware.

    The problem is that the website is unreachable. It looks like incoming traffic on ISP-1 reaches the server, but the return traffic is routed through ISP-2, effectively making the site unreachable. As far as I can tell I can't do port-based routing on the WNDR3700. What are my options to make this work? I've been looking at implementing an iptables / routing based solution on the server itself but haven't been able to make that work.

    Update: Note that the server has one network interface connecting it to both routers.
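    The usual server-side fix is policy routing: mark the packets the web server sends from port 443 and route them to the ISP-1 gateway via a dedicated routing table. A sketch, where the ISP-1 router address (192.168.1.254 here) and interface name are placeholders:

        # Separate routing table for website return traffic:
        echo "100 isp1" >> /etc/iproute2/rt_tables
        ip route add default via 192.168.1.254 dev eth0 table isp1

        # Mark packets the web server sends from port 443...
        iptables -t mangle -A OUTPUT -p tcp --sport 443 -j MARK --set-mark 1

        # ...and send marked packets through the ISP-1 table:
        ip rule add fwmark 1 table isp1
        ip route flush cache

    Everything else keeps using the ISP-2 default gateway, so only the website's return traffic changes path.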

    Read the article

  • How to diagnose storage system scaling problems?

    - by Unknown
    We are currently testing the maximum sequential read throughput of a storage system (48 disks total behind two HP P2000 arrays) connected to an HP DL580 G7 running RHEL 5 with 128 GB of memory. Initial testing has mainly been done by running dd commands like this, in parallel for each disk:

        dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=3000

    However, we have been unable to scale the results from one array (maximum throughput of 1.3 GB/s) to two (almost the same throughput). Each array is connected to a dedicated host bus adapter, so they should not be the bottleneck. The disks are currently in a JBOD configuration, so each disk can be addressed directly.

    I have two questions:

    1. Is running multiple dd commands in parallel really a good way to test maximum read throughput? We have noticed very high SWAPIN % numbers in iotop, which I find hard to explain because the target is /dev/null.
    2. How should we proceed in trying to find the reason for the scaling problem? Do you think the server itself is the bottleneck here, or could there be some Linux parameters that we have overlooked?
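    One thing worth noting on question 1: dd reads through the page cache unless told otherwise, which can muddy results on a 128 GB box. fio is built for exactly this kind of test; a sketch, with device path and job sizing as placeholders:

        # Direct (uncached) parallel sequential reads, several jobs per LUN:
        fio --name=seqread --rw=read --bs=1M --direct=1 \
            --ioengine=libaio --iodepth=16 --numjobs=4 \
            --runtime=60 --group_reporting \
            --filename=/dev/mapper/mpath1

        # Or simply add iflag=direct to the existing dd tests:
        dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=3000 iflag=direct

    If the direct-I/O numbers scale across both arrays while the cached ones don't, the bottleneck is on the host side rather than in the storage.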

    Read the article

  • Rebuilding LVM after RAID recovery

    - by Xiong Chiamiov
    I have 4 disks RAID-5ed to create md0, and another 4 disks RAID-5ed to create md1. These are then combined via LVM to create one partition.

    There was a power outage while I was gone, and when I got back, it looked like one of the disks in md1 was out of sync: mdadm kept claiming that it could only find 3 of the 4 drives. The only thing I could do to get anything to happen was to use mdadm --create on those four disks, then let it rebuild the array. This seemed like a bad idea to me, but none of the stuff I had was critical (although it'd take a while to get it all back), and a thread somewhere claimed that this would fix things. If this trashed all of my data, then I suppose you can stop reading and just tell me that.

    After waiting four hours for the array to rebuild, md1 looked fine (I guess), but the LVM was complaining about not being able to find a device with the correct UUID, presumably because md1 changed UUIDs. I used the pvcreate and vgcfgrestore commands as documented here. Attempting to run lvchange -a y on it, however, gives me a "resume ioctl failed" message.

    Is there any hope of recovering my data, or have I completely mucked it up?
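    For reference, the non-destructive sequence normally tried before mdadm --create; device names here are assumed:

        # Inspect what the superblocks say before doing anything destructive:
        mdadm --examine /dev/sde1

        # Try to assemble from the existing superblocks, forcing in a member
        # that is only slightly out of date:
        mdadm --assemble --force /dev/md1 /dev/sd[efgh]1

    --create writes fresh superblocks and, with default settings, can start a resync that overwrites parity, so whether the data survived depends heavily on whether the new array used the same disk order, chunk size and metadata version as the original.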

    Read the article

  • In Windows 7, why won't my display stay off despite the power settings saying it should?

    - by Jer
    I'm completely stumped by this. My simple use case is that when I'm in bed, I use a cordless mouse to browse the web, watch videos, etc.; the monitor is across the room. When I'm going to sleep, I want to shut the monitor off, and I also want to be able to turn it back on in the morning. I just want to turn the monitor off and on using only the mouse.

    I thought of creating a power setting that turns the monitor off as soon as possible (the shortest amount of time is one minute; that's fine). I have one that does this. It worked great for almost a year on my old XP machine, and for about four months on my new Windows 7 laptop (which I essentially use as a desktop). All of a sudden, a couple of weeks ago, it just stopped working: my monitor won't turn off on its own anymore. [Screenshot of the power settings not included.]

    I tried other options. Based on the advice here I tried nircmd, which seemed great. I created a shortcut with the command line:

        "C:\Program Files\nircmd\nircmd.exe" cmdwait 1000 monitor off

    I click this, and in one second the monitor goes off. However, about five seconds later it turns back on, and I've been extra careful to make sure the mouse isn't moving.

    I have no idea what's going on. Based on both of these things, my only guess is that something could be running in the background which somehow makes the computer think it's in use. I've tried killing as many programs as possible but I still get the same behavior. Any advice? I'm mainly curious about how to debug, but am open to other suggestions about turning the monitor off and on with just the mouse as well.
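    On the how-to-debug question: Windows can be asked directly what is blocking display sleep. A sketch, run from an elevated command prompt:

        REM List processes/drivers currently holding DISPLAY, SYSTEM or
        REM AWAYMODE power requests - anything under DISPLAY can keep
        REM the monitor on:
        powercfg /requests

        REM Review the active power scheme's display-timeout values:
        powercfg /query

    A background process holding a DISPLAY request would explain both the timeout being ignored and the monitor waking five seconds after nircmd turns it off.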

    Read the article

  • FreeNAS pool configuration - RAID1 + other drives

    - by trnelson
    Simple questions, really. I found this answer with a similar setup, but I'm not sure it answers my question, and if it does, I'm curious why, since the answer seems a bit unsure: ZFS Hard Drive Configuration in FreeNAS

    I'm building a server which will be used primarily for backup, plus some media streaming, possibly with Plex. I seem to understand most everything I need, but I'm still a bit confused about how pools work and how to configure them for my scenario. I will have 2x 2TB WD Red drives, which I plan on using in a mirrored setup (RAID1). This would be for backup, and I'd also like to do offsite backup to my CrashPlan account from this array. I also have a few other drives: 1.5TB, 320GB, 250GB. I'm not sure exactly what to do with them yet, but I'm looking for options. The FreeNAS OS will be running from a 16GB USB flash drive.

    My questions:

    1. Would it be wise to use the 1.5TB as a backup of the backup, essentially as a mirror or perhaps for snapshots of the 2TB RAID1? I'm still learning about snapshots.
    2. Should the 2TB mirrored drives be in their own pool? Should the other drives be set up in their own pools as well, or should they be JBOD in a single pool? They may or may not get much use, since the 2TB array is plenty for me.
    3. Does a dataset basically mimic the idea of a partition or a network share? In other words, would I map \\SERVER\Share to X: on my laptop?
    4. Let's say I wanted to use the 250GB drive as an encrypted drive to store all of my cat pictures. Would it have to be in its own pool?
    5. If I use jails apps, should they go in the backup RAID1 or in another place?

    Thank you!
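    On questions 1 and 2, a sketch of how the mirror-plus-snapshot idea looks at the ZFS level (pool, dataset and device names here are made up, and FreeNAS would normally do all of this through its GUI):

        # 2 TB mirror for backups; the 1.5 TB disk as its own one-disk pool:
        zpool create tank mirror ada1 ada2
        zpool create spare15 ada3
        zfs create tank/backup

        # Snapshot the backup dataset and replicate it to the second pool:
        zfs snapshot tank/backup@2014-01-01
        zfs send tank/backup@2014-01-01 | zfs recv spare15/backup

    Disks of different sizes generally end up in separate pools anyway, since mixing them in one vdev wastes the difference.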

    Read the article

  • Converting a Windows 2003 server

    - by Jim Bass
    We have a legacy database system based upon MS SQL running on Windows Server 2003. The client software will only run on Windows XP. We have recently had success converting a client into a virtual machine and running it under Fusion on Mac minis. So far, it is working incredibly well; so well, in fact, that we are now considering trying to convert the server to a virtual machine. This raises several questions, though:

    1. The server uses a RAID array. Does the VM virtualize the RAID array? I only ask because in my experience Windows products don't like it when you change core hardware.
    2. Is there any reason why running SQL Server on a virtual machine won't work? It will be up 24/7.
    3. Is there a different converter for servers?
    4. Will I have to track down the licensing for MS SQL and Server 2003, or will they come across OK?
    5. The company that designed the software is no longer in business, and there is some fear that the software is somehow tied to the hardware configuration. We bought the hardware, but their engineers came out and configured the system. Will the virtual machine be able to spoof particular chipsets?

    Thanks! Jim Bass

    Read the article

  • script to list user's mapped drive not giving results or error

    - by user223631
    We are in the process of migrating two file servers to a new server. We have mapped drives via user group in Group Policy, but many users have manually mapped drives and we need to find these mappings. I have created a PowerShell script to run that remotely gets the drive mappings. It works on most computers, but there are many that are not returning results, and I am not getting any error messages. Each workstation on the list creates a text file, and the ones that are not returning results have no text in their files. I can ping these machines. If a machine is not turned on, I do get an error message that the RPC server is unavailable. My domain user account is in a group that is in the local admin account. I have no idea why some are not working. Here is the script:

        # Ensure the working directories and input files exist.
        If( !(Test-Path C:\Scripts)) { New-Item C:\Scripts -ItemType directory }
        If( !(Test-Path C:\Scripts\Computers)) { New-Item C:\Scripts\Computers -ItemType directory }
        If( !(Test-Path C:\Scripts\Workstations.txt)) { "No Workstations found. Please enter a list of Workstations under Workstations.txt"; Return }
        If( !(Test-Path C:\Scripts\KnownMaps.txt)) { "No Mapping to check against. Please enter a list of Known Mappings under KnownMaps.txt"; Return }

        # Load the list of workstations into an array of strings.
        $computerlist = Get-Content C:\Scripts\Workstations.txt

        # Query each workstation's mapped drives via WMI and save them
        # to a per-machine text file.
        ForEach ($computer in $computerlist) {
            Get-WmiObject Win32_MappedLogicalDisk -ComputerName $computer |
                Select Name,ProviderName |
                Out-File C:\Scripts\Computers\$computer.txt -Width 200
        }

        # Collect all mappings, then split them into known and unknown
        # against KnownMaps.txt.
        Select-String -Path C:\Scripts\Computers\*.txt -Pattern cmsfiles | Out-File C:\Scripts\Drivemaps-all.txt
        $strings = Get-Content C:\Scripts\KnownMaps.txt
        Select-String -Path C:\Scripts\Drivemaps-all.txt -Pattern $strings -NotMatch -SimpleMatch | Out-File C:\Scripts\Drivemaps-nonmatch.txt -Width 200
        Select-String -Path C:\Scripts\Drivemaps-all.txt -Pattern $strings -SimpleMatch | Out-File C:\Scripts\Drivemaps-match.txt -Width 200

    Read the article

  • Can My Personal GMail Query A Remote LDAP Server?

    - by Maarx
    I have a personal GMail account, from which I frequently send e-mail to a great many users of a specific business. The corporation has been kind enough to provide me with the credentials to access their LDAP server, with which I would like my GMail web client to be able to auto-complete partial addresses or names for which that LDAP server has an entry. Is there any way I can get a personal GMail account (or its corresponding entire Google account) to incorporate an LDAP server into its Contacts?

    If I cannot get it to query dynamically and on demand, is there an idiot-proof way (assuming the client permits, which they may not) to query the LDAP server for its entire database, save it, and bulk-import it to GMail? Perhaps even something I could set to repeat periodically (weekly, perhaps), without human interaction? If I did the latter, I assume it would be trivial to import all of these contacts under a single category that could be easily manipulated from within the GMail web-based client.

    I have been a staunch user and supporter of the GMail web-based client since its inception, but this one is kind of a deal-breaker for me. If it's impossible, what do you suggest I do?
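    On the bulk-export idea: if the provided credentials allow generic searches, the periodic dump is scriptable. A sketch with placeholder host, bind DN and search base:

        # Pull names and mail addresses for every person entry; the LDIF
        # output can then be massaged into CSV for GMail's contact importer.
        ldapsearch -x -H ldap://ldap.example.com \
          -D "cn=me,ou=users,dc=example,dc=com" -W \
          -b "dc=example,dc=com" "(objectClass=person)" cn mail > contacts.ldif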

    Read the article

  • make file readable by other users

    - by Alaa Gamal
    I was trying to share one session across all my subdomains (one session across subdomains).

    Subdomain number one, auth.site.com/session_test.php:

        session_set_cookie_params(0, '/', '.site.com');
        session_start();
        echo session_id().'<br />';
        $_SESSION['stop']='stopsss this';
        print_r($_SESSION);

    Subdomain number two, anscript.site.com/session_test.php:

        session_set_cookie_params(0, '/', '.site.com');
        session_start();
        echo session_id().'<br />';
        print_r($_SESSION);

    Now when I visit auth.site.com/session_test.php I get this result:

        06pqdthgi49oq7jnlvuvsr95q1
        Array ( [stop] => stopsss this )

    And when I visit anscript.site.com/session_test.php I get this result:

        06pqdthgi49oq7jnlvuvsr95q1
        Array ()

    The session id is the same, but the session is empty! After two days of failed tries, I finally found the problem: file permissions. The session file is not readable by the other user. Here is the session file on my server:

        -rw------- 1 auth auth 25 Jul 11 11:07 sess_06pqdthgi49oq7jnlvuvsr95q1

    When I run this command on the server:

        chmod 777 sess_06pqdthgi49oq7jnlvuvsr95q1

    the problem is fixed! The file becomes readable by anscript.site.com. So, how do I fix this properly? How do I set the default permissions on session files? These are the permissions of the sessions directory:

        Access: (0777/drwxrwxrwx)  Uid: ( 0/ root)  Gid: ( 0/ root)
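    Worth noting: PHP recreates session files with mode 0600, owned by whichever vhost's user wrote them, so loosening the directory alone won't help. A common approach is to move both subdomains onto a shared session backend instead of files; a sketch of the memcache-backed variant, assuming the pecl memcache extension is installed (server address is a placeholder):

        ; php.ini (or per-vhost config) on both subdomains:
        session.save_handler = memcache
        session.save_path = "tcp://127.0.0.1:11211"

    Running both subdomains as the same system user is the other standard fix.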

    Read the article

  • Backup data from RAID 1 disk out of its server

    - by Doomsday
    I'm facing what should be a pretty easy problem, in my opinion. I've extracted a working disk from a RAID1 array and I'm looking to copy only the data (the FS and RAID configuration don't matter) to another location (another FS). My problem is that I'm not able to mount this disk properly on another Linux system.

    I first looked at the partition table:

        # fdisk -l /dev/sdc

        Disk /dev/sdc: 640.1 GB, 640135028736 bytes
        255 heads, 63 sectors/track, 77825 cylinders, total 1250263728 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1               63  1249535699   624767818+  fd  Linux raid autodetect
        /dev/sdc2       1249535700  1250017649      240975   fd  Linux raid autodetect
        /dev/sdc3       1250017650  1250258624      120487+  82  Linux swap / Solaris

    I understood I should use the mdadm tools. Once installed:

        # cat /proc/mdstat
        Personalities :
        md0 : inactive sdc1[1](S)
              624767744 blocks
        unused devices: <none>

    And some other information:

        # mdadm --examine /dev/sdc1
        /dev/sdc1:
                  Magic : a92b4efc
                Version : 0.90.00
                   UUID : 8f292f54:7e5aef72:7e5ab5fd:b348fd05
          Creation Time : Mon Jun  2 03:39:41 2008
             Raid Level : raid1
          Used Dev Size : 624767744 (595.82 GiB 639.76 GB)
             Array Size : 624767744 (595.82 GiB 639.76 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 0
            Update Time : Tue Feb  7 22:34:59 2012
                  State : clean
         Active Devices : 2
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 0
               Checksum : a505b324 - correct
                 Events : 15148

              Number   Major   Minor   RaidDevice State
        this     1       8        1        1      active sync   /dev/sda1
           0     0       8       17        0      active sync   /dev/sdb1
           1     1       8        1        1      active sync   /dev/sda1

    From here I've tried to mount, but I'm not comfortable with the md tools and how they work:

        # mount /dev/sdc1 /mnt/sdc1
        mount: unknown filesystem type 'linux_raid_member'
        # mount /dev/md0 /mnt/sdc1
        mount: /dev/md0: can't read superblock

    I've seen some options to alter the RAID array with mdadm, but I only want to copy the data on its filesystem before wiping it... Does anyone have a clue?
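    The reason mount refuses /dev/sdc1 is that the partition is typed as a RAID member; the array has to be assembled (even degraded, with a single disk) before the filesystem inside becomes mountable. A sketch:

        # Assemble a degraded one-disk RAID1 from the member and start it:
        mdadm --assemble --run /dev/md0 /dev/sdc1

        # Mount read-only to copy the data off safely:
        mount -o ro /dev/md0 /mnt/sdc1

    The --run flag tells mdadm to start the array even though its second member is missing.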

    Read the article

  • Auto Launching PHP-FPM

    - by Seth
    My plist file:

        <?xml version='1.0' encoding='UTF-8'?>
        <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN"
                "http://www.apple.com/DTDs/PropertyList-1.0.dtd" >
        <plist version='1.0'>
        <dict>
            <key>Label</key><string>org.macports.php-fpm</string>
            <key>ProgramArguments</key>
            <array>
                <string>/opt/local/bin/daemondo</string>
                <string>--label=php-fpm</string>
                <string>--start-cmd</string>
                <string>/opt/local/sbin/php-fpm</string>
                <string>;</string>
                <string>--pid=fileauto</string>
                <string>--pidfile</string>
                <string>/opt/local/var/run/php-fpm/php-fpm.pid</string>
            </array>
            <key>Debug</key><false/>
            <key>Disabled</key><true/>
            <key>OnDemand</key><false/>
        </dict>
        </plist>

    After rebooting, it's not loading up automatically; I still have to start php-fpm manually. I have tried unloading and adding RunAtLoad etc. with no luck, and tried both these launchctl commands:

        sudo launchctl load -F /Library/LaunchDaemons/org.macports.php-fpm.plist
        sudo launchctl load -w /Library/LaunchDaemons/org.macports.php-fpm.plist
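    One detail that stands out in the plist above: Disabled is set to true, and launchd skips disabled jobs at boot unless that has been overridden (which load -w is supposed to record persistently). If the override isn't sticking, explicitly enabling the job in the plist itself is worth a try; a sketch of the relevant keys:

        <key>Disabled</key><false/>
        <key>RunAtLoad</key><true/>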

    Read the article

  • How can I install git on RHEL 6?

    - by JR.Xyza
    I'm trying to install Git on a RHEL 6 development server. I have experience with Ubuntu, but this is my first time working with RHEL (I'm a developer trying to fill in for a recently departed Linux sysadmin). I've set up two additional repos (EPEL and IUS) for other packages needed for a Magento install. Output of yum repolist:

        [root@box]# yum repolist
        Loaded plugins: product-id, security, subscription-manager
        Updating certificate-based repositories.
        repo id   repo name                                        status
        epel      Extra Packages for Enterprise Linux 6 - x86_64   7,841
        ius       IUS for RHEL 6Server - x86_64                      135

    Most of what I've read indicates a simple 'yum install git' should work with EPEL enabled, but I get the dreaded:

        [root@box]# yum install git
        Loaded plugins: product-id, security, subscription-manager
        Updating certificate-based repositories.
        Setting up Install Process
        No package git available.
        Error: Nothing to do

    The same goes for git-daemon, etc. I've tracked down a number of git RPMs, such as this one at repoforge, but they require a train of dependencies that seems to never end. I've also toyed with compiling it manually, but the rabbit hole to get make working seems to go even deeper. I'm convinced there's a simple oversight somewhere keeping me from being able to install from the EPEL repo, but I'm a rookie at all this. Thanks in advance for help/pointers/additional resources.
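    If no packaged git will cooperate, building from source on RHEL 6 is usually painless once the build dependencies are in place; a sketch (the version number is only an example):

        # Build prerequisites from the stock RHEL repos:
        yum groupinstall "Development Tools"
        yum install curl-devel expat-devel gettext-devel \
            openssl-devel zlib-devel perl-ExtUtils-MakeMaker

        # Fetch, build and install git under /usr/local:
        wget https://www.kernel.org/pub/software/scm/git/git-1.8.5.tar.gz
        tar xzf git-1.8.5.tar.gz && cd git-1.8.5
        make prefix=/usr/local all
        make prefix=/usr/local install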

    Read the article

  • Choice of filesystem for GNU/Linux on an SD card

    - by gspr
    Hi. I have an embedded ARM-based system running on an SD card. It's currently Debian GNU/Linux using ext3 as the filesystem. As I'm about to reinstall the system, I started wondering about changing to a more flash-friendly filesystem. I've heard about JFFS2, YAFFS2 and LogFS, and they all seem suited to the job. Which one would you recommend? Also, I've heard there have been a lot of ext4 improvements to better suit SSDs; should I take that to mean that running ext4 would be just fine? What do I especially need to think about in that case?

    I guess the usage of the system is important. But for the sake of generality, imagine it'll do standard desktop stuff (even though it is in fact a small ARM-based system). Thanks for any replies.

    Edit: Wikipedia tells me (in a "citation needed" statement) that "Removable flash memory cards and USB flash drives have built-in controllers to perform wear leveling and error correction, so use of a specific flash file system does not add any benefit." Thus, I'm leaning towards sticking with an ext filesystem.

    Read the article

  • Why are my Windows 7 updates continuously failing?

    - by Chris C.
    I'm an advanced-level user here with an odd issue. I have two Windows Updates that are failing to install, every single time. I'm getting a mysterious "Code 1" error on both updates, an error for which I'm having difficulty finding a solution. The updates in question are:

    - Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243)
    - System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011]

    Because these updates are failing, the Shut Down button in my Start menu always has the shield icon next to it, indicating that "new" updates will be installed on shutdown. But, of course, they fail, and when the PC is restarted, the shield icon is still there. When checking the update history and viewing the details of the failed updates, I get the following:

    Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243)
        Installation date: 6/29/2011 3:00 AM
        Installation status: Failed
        Error details: Code 1
        Update type: Important
        A security issue has been identified leading to MFC application vulnerability in DLL planting due to MFC not specifying the full path to system/localization DLLs. You can protect your computer by installing this update from Microsoft. After you install this item, you may have to restart your computer.
        More information: http://go.microsoft.com/fwlink/?LinkId=216803

    System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011]
        Installation date: 6/28/2011 3:00 AM
        Installation status: Failed
        Error details: Code 1
        Update type: Important
        This tool is being offered because an inconsistency was found in the Windows servicing store which may prevent the successful installation of future updates, service packs, and software. This tool checks your computer for such inconsistencies and tries to resolve issues if found.
        More information: http://support.microsoft.com/kb/947821

    About my system: I'm running Windows 7 Home Premium 64-bit. This is a custom PC build and the OS was installed fresh, not an upgrade from a previous version. I've been running this system for about four months. Windows Updates aside, the system is usually quite stable.

    Read the article

  • SBS2011 Standard DNS suddenly not resolving some domains

    - by Matt
    Suddenly today I am unable to resolve common domains like serverfault.com and facebook.com, while other domains like google.com and cnn.com work fine. This is on a client machine (Win7 Pro) connected to an SBS 2011 Standard domain. The only DNS server is the SBS 2011 server. The behaviour is identical on every client PC I have tried: the same domains fail, and the same domains work. Using nslookup, I get 'no such domain' errors for facebook.com, and the correct DNS entries for the ones that do work.

    When I add Google's Public DNS to my client PC as a backup (primary = local SBS server, secondary = 8.8.8.8), everything works fine for my client PC, but queries from the SBS server directly or from other client PCs remain broken (so I don't believe it's a firewall issue).

    My main question is: how can I see which servers the SBS 2011 server queries when it doesn't know about a domain? There is nothing in our firewall logs saying that any DNS-based packets were blocked, but I also wanted to query based on the IP/FQDN of the servers that the SBS server was likely to contact to find out about facebook.com, for example.

    Update 23/05/2012: It appears DNS is working again this morning for the affected websites. Both the DC on its own and all client PCs can once again access the websites that were not loading last night, as well as the websites that were working. I haven't changed anything overnight, so it appears that there was some kind of temporary glitch, but I can't understand what would have caused it on the network.
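    For future incidents of this kind, it helps to compare the local DNS service against a public resolver from the server console, and to dump the server's forwarder configuration; a sketch:

        REM Ask the local SBS DNS service directly:
        nslookup facebook.com 127.0.0.1

        REM Ask a public resolver, bypassing the local service:
        nslookup facebook.com 8.8.8.8

        REM Show the DNS server's configuration, including forwarders:
        dnscmd /info

    If the local query fails while the public one succeeds, the problem sits between the SBS DNS service and whichever forwarders or root hints it uses upstream.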

    Read the article

  • SSH to remote host (edgemarc 4200 or 4500 series routers) and pull arp data

    - by MaQleod
    I've been trying to think of a method to do this for days, but have not come up with anything yet. Ideally, this is what I'm looking to do: from a Windows XP machine, I need to open an SSH connection to a remote host, send the arp command, and pull the text results of the command back for use on the client. I will need to parse this data and preferably produce a 2D array of IPs and MAC addresses.

    There will be no shared keys; this is all done with a username and password that will always be different. They will need to be fed into the command via variables pulled from a database by an AutoIt script, based on the WAN IP of the remote host. The actual parsing of the data and creation of the array will be easy if I can just get the text of the arp table.

    Is there any way to SSH to a remote host, run a command, and return the data from that command to the client in a batch script or Perl script? (It's OK if it writes the text to a file; I can read it out of the file later. I just need it to get to the client.)
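    On Windows XP the most common tool for this is PuTTY's command-line client, plink. A sketch of a batch wrapper, where host and credentials are placeholders passed in by the AutoIt layer, which would also do the parsing into an array:

        @echo off
        REM Usage: getarp.bat <host> <user> <password>
        REM Runs arp on the remote router and captures the output locally.
        plink -ssh -batch -pw %3 %2@%1 "arp" > arp_%1.txt

    The -batch flag stops plink from hanging on interactive prompts, which matters when the script runs unattended.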

    Read the article

  • Network structure --> Server 2k8r2 <--> Livebox <--> Router <--> Other PCs

    - by Yusuf
    I have a Livebox connection to the Internet and I have set up my network as follows:

    - Livebox <--> Win2k8R2 server
    - Livebox <--> Netgear N150 router
    - Router <--> other PCs

    Therefore, in my LAN:

    - the Livebox has IP address 192.168.1.1,
    - the router 192.168.1.12 (when accessed from the Livebox or the server),
    - the router 10.0.0.1 (when accessed from the PCs connected to the router),
    - the server 192.168.1.2,
    - the PCs 10.0.0.x

    I was previously using a different configuration:

    - Livebox <--> Netgear N150 router
    - Router <--> Win2k8R2 server
    - Router <--> other PCs

    Everything was simple: I just had to forward all ports for incoming connections on the Livebox to the router, and then forward the specific ports to the server as needed (it must be noted, however, that any server I run is on the Win2k8R2 machine itself). In that configuration, the IP addresses were as follows:

    - Livebox 192.168.1.1
    - Router 192.168.1.12 (when seen from the Livebox)
    - Router 10.0.0.1 (when seen from the server & PCs connected to it)
    - Server 10.0.0.2
    - PCs 10.0.0.x

    So now, of course, my port forwarding does not work anymore, since the server is no longer connected (directly) to the router. What I would like to know is how to configure the Livebox and router to have the same features as before. From what I understand of networks (which is very limited, btw), I see these options:

    1. Make the router assign IPs like 192.168.1.x (but then I want the forwarding to be done from the router itself; is that possible?)
    2. The forwarding on the router to the server uses IP address 10.0.0.2. I could change it to 192.168.1.2 (Is that even possible? Does it work?)
    3. Forward all ports from the Livebox itself to the server, and manage them there (Is software-based port forwarding as secure as hardware-based?)

    Read the article

  • In which order does Excel process its formulae?

    - by dwwilson66
    I've got a fairly large spreadsheet with major calculations going on, and it's starting to slow down every time a value that's part of a calculated field is modified. I'm in the process of optimizing the file, adding arrays where I can, and seeing where I can shave off a few milliseconds here and there.

    Let's say there's data in columns A-H. Column H is set based on relationships between values in columns A, B and C, which change dynamically from an outside program. Users enter the data in column F. Formulas in D & E calculate relationships between F & H and H & D, respectively.

    How does Excel manage formulae in the case, for instance, where they're dependent on data further into the sheet? Will my value in H be available the first time the formulae in D & E calculate? Or will D & E calculate based on an old value for H, because H's update hasn't happened yet? Are there any efficiencies to be gained by positioning dependencies in particular rows or columns in the spreadsheet? Do positions above and to the left of the current position get processed sooner than those below and to the right?

    Read the article

  • RAID-capable 3.5" SATA Drives

    - by nroam
    I recently purchased a pair of 1TB Western Digital WD1002FBYS RE3 drives for use in an external RAID enclosure. I have found that they tend to drop out of the array after a while. Thinking it was the enclosure, I tried them in another one but found the same issue. So, after a bit of googling, I found http://www.tomshardware.com/forum/251076-32-raid-issues-western-digital-hard-disk which suggests that:

        "WD's 'RE' (RAID Edition) HDDs support Time-Limited Error Recovery ('TLER'): http://www.wdc.com/en/products/productcatalog.asp?language=en -- As a non-TLER HDD fills up with data, the error detection firmware might take too long, and the RAID controller may drop that HDD from a RAID array."

    So now I wonder: which SATA drives have firmware that is compatible with RAID arrays (esp. RAID 1 and 5, but not 0)? I have not been able to come up with the magic set of keywords to elicit the answer from Google. However, various sites suggest that Seagate and Hitachi are in general OK. Does anyone have any generic (or even specific) guidance on how to work out whether a drive's firmware may harbour code that is potentially an issue in a RAID setting, other than stating that it must be 'enterprise' ready?
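    On recent smartmontools, a drive's error-recovery (TLER/ERC) behaviour can be queried, and often set, directly, which is a more reliable test than the marketing label; a sketch:

        # Report the current SCT error-recovery-control timeouts (if supported):
        smartctl -l scterc /dev/sda

        # Set read/write recovery timeouts to 7.0 seconds (units of 100 ms):
        smartctl -l scterc,70,70 /dev/sda

    Drives that refuse the scterc command altogether are the ones most likely to stall a RAID controller during error recovery.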

    Read the article

  • Looking for the easiest, simplest solution to run a customised DNS server for my local network on Windows 7

    - by Jamie G
    I need to forward some hostnames, such as http://testing.server/, to fixed IP addresses on my local network. I can do this easily on one computer using the hosts file, but I need it to work for all machines on my network. I think the best way to do this will be to set up my own DNS server and add the custom DNS settings there. However, I'm looking for the simplest way possible to do this; I really don't want to spend hours setting up Unix servers and running tricky terminal-based scripts just for this! My server is a standard Windows 7 machine.

    My dream would be a nice simple Windows program with a GUI where I could input my ISP's DNS server, and it would use those records unless I had specifically set up my own DNS for a domain to use instead. If it had a web-based admin system that was accessible from another computer on the network, that would be even better. Does anyone know of anything that can do this? Many thanks indeed.
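    For what it's worth, a tiny forwarding resolver such as dnsmasq matches this description almost exactly (Windows ports exist, or it can live in a small VM); a sketch of the entire configuration, where both addresses are placeholders:

        # dnsmasq.conf - answer testing.server locally,
        # forward everything else to the ISP's resolver:
        address=/testing.server/192.168.0.50
        server=203.0.113.1

    Every machine on the network would then just use this box as its DNS server, e.g. via the router's DHCP settings.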

    Read the article
