Search Results

Search found 9715 results on 389 pages for 'servers'.

  • Load balancing a Windows File Share using HAProxy

    - by NathanE
    After pulling my hair out over DFS, I had a weird and potentially dangerous idea: it might just be possible to use HAProxy to load balance a file share between servers. I've done some rudimentary packet traces and it appears that TCP port 445 is the only thing involved in Windows file sharing. For many years I'd thought that ports 139, 135 etc. were also involved, at least in establishing the connection - but apparently not! So I set up a basic test:

        listen SMBTest *:445
            mode tcp
            server Smb1 172.16.61.201:445
            server Smb2 172.16.61.202:445

    And you'll never guess what... it works! Now obviously there is the whole concern about synchronisation between the file servers (of course). That could easily be taken care of with a little bit of Robocopy script, and considering I only need an HA read-only file share, there wouldn't be any issues with regard to file locking etc. Can anyone tell me if what I'm playing with here is fire? I really didn't think it would work at all, and now I'm a little shocked. What would be the downsides? Could this be relied upon for a production environment?
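
    For the synchronisation piece, I'm assuming a simple one-way mirror would be enough - something along these lines (untested; the share paths are placeholders, and /MIR deletes files on the target that have gone from the source, so it should only ever run in one direction):

        robocopy \\Smb1\Share \\Smb2\Share /MIR /R:2 /W:5 /LOG:C:\Logs\share-sync.log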

    Read the article

  • "Server Unavailable" and removed permissions on .NET sites after Windows Update [closed]

    - by andrewcameron
    Our company has five almost identical Windows 2003 servers with the same host, and all but one performed an automatic Windows Update last night without issue. The one that had problems, of course, was the one which hosts the majority of our sites. What the update appeared to do was cause the NETWORK user to stop having access to the .NET Framework 2.0 files, as the event log was complaining about not being able to open System.Web. This resulted in every .NET site on the server returning "Server Unavailable" as the App Domains failed to initialise. I ran aspnet_regiis, which didn't appear to fix the problem, so I ran FileMon, which revealed that nobody but the Administrators group had access to any files in any of the website folders! After resetting the permissions, things appear to be fine. I was wondering if anyone had an idea of what could have caused this to go wrong? As I say, the four other servers updated without a problem. Are there any known issues involved with any of the following updates? My major suspect at the moment is the 3.5 update, as all of the sites on the server are running in 3.5.

        Windows Server 2003 Update Rollup for ActiveX Killbits for Windows Server 2003 (KB960715)
        Windows Server 2003 Security Update for Internet Explorer 7 for Windows Server 2003 (KB960714)
        Windows Server 2003 Microsoft .NET Framework 3.5 Family Update (KB959209) x86
        Windows Server 2003 Security Update for Windows Server 2003 (KB958687)

    Thanks for any light you can shed on this.
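
    For the record, this is roughly how I put the permissions back (from memory, so treat the exact commands as approximate; cacls ships with Server 2003, and aspnet_regiis -ga grants the standard service-account rights, the website path is a placeholder):

        aspnet_regiis -ga "NT AUTHORITY\NETWORK SERVICE"
        cacls D:\Websites /T /E /G "NT AUTHORITY\NETWORK SERVICE":R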

    Read the article

  • Recovering data from MongoDB raw files

    - by Jin Chen
    We use MongoDB for our database, set up as a replica set (two servers), but we mistakenly deleted some raw files under /path/to/dbdata on both servers. After we used a tool to get the deleted files back (we ran extundelete on both servers and merged the results together - database.1, database.2 etc.), we could not start mongod. It raises the following error when starting mongod or executing mongodump. Here is the console output:

        root@mongod:/opt/mongodb# mongodump --repair --dbpath /opt/mongodb -d database_production
        Thu Aug 21 16:22:43.258 [tools] warning: repair is a work in progress
        Thu Aug 21 16:22:43.258 [tools] going to try and recover data from: database_production
        Thu Aug 21 16:22:43.262 [tools] Assertion failure isOk() src/mongo/db/pdfile.h 392
        0xde1b01 0xda42fd 0x8ae325 0x8ac492 0x8bd8e0 0x8c1c51 0x80e345 0x80e607 0x80e6a4 0x6db92a 0x6dc1ff 0x6e0db9 0xd9e45e 0x6ccdc7 0x7f499d856ead 0x6ccc29
        mongodump(_ZN5mongo15printStackTraceERSo+0x21) [0xde1b01]
        mongodump(_ZN5mongo12verifyFailedEPKcS1_j+0xfd) [0xda42fd]
        mongodump(_ZNK5mongo7Forward4nextERKNS_7DiskLocE+0x1a5) [0x8ae325]
        mongodump(_ZN5mongo11BasicCursor7advanceEv+0x82) [0x8ac492]
        mongodump(_ZN5mongo8Database19clearTmpCollectionsEv+0x160) [0x8bd8e0]
        mongodump(_ZN5mongo14DatabaseHolder11getOrCreateERKSsS2_Rb+0x7b1) [0x8c1c51]
        mongodump(_ZN5mongo6Client7Context11_finishInitEv+0x65) [0x80e345]
        mongodump(_ZN5mongo6Client7ContextC1ERKSsS3_b+0x87) [0x80e607]
        mongodump(ZN5mongo6Client12WriteContextC1ERKSsS3+0x54) [0x80e6a4]
        mongodump(_ZN4Dump7_repairESs+0x3a) [0x6db92a]
        mongodump(_ZN4Dump6repairEv+0x2df) [0x6dc1ff]
        mongodump(_ZN4Dump3runEv+0x1b9) [0x6e0db9]
        mongodump(_ZN5mongo4Tool4mainEiPPc+0x13de) [0xd9e45e]
        mongodump(main+0x37) [0x6ccdc7]
        /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x7f499d856ead]
        mongodump(__gxx_personality_v0+0x471) [0x6ccc29]
        assertion: 0 assertion src/mongo/db/pdfile.h:392
        Thu Aug 21 16:22:43.271 dbexit:
        Thu Aug 21 16:22:43.271 [tools] shutdown: going to close listening sockets...
        Thu Aug 21 16:22:43.271 [tools] shutdown: going to flush diaglog...
        Thu Aug 21 16:22:43.271 [tools] shutdown: going to close sockets...
        Thu Aug 21 16:22:43.272 [tools] shutdown: waiting for fs preallocator...
        Thu Aug 21 16:22:43.272 [tools] shutdown: closing all files...
        Thu Aug 21 16:22:43.273 [tools] closeAllFiles() finished
        Thu Aug 21 16:22:43.273 [tools] shutdown: removing fs lock...
        Thu Aug 21 16:22:43.273 dbexit: really exiting now

    My environment:

        1) Debian 3.2.35-2 x86_64 (it's a Xen virtual machine)
        2) MongoDB 2.4.6

    We did not delete the .0 and .ns files. We tried to create a new database with the same name and copy the recovered db.ns, db.2 and db.3 files into the new db, but we still hit the same error. Is there any way to check the validity of the raw .ns and data files, and how can we recover the database?
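
    If the surviving .ns and .0 files are intact, I'm assuming mongod's own repair mode would be the first thing to try, with --repairpath pointed at an empty scratch directory so the original files are left untouched (untested on this data):

        mongod --dbpath /opt/mongodb --repair --repairpath /opt/mongodb-repair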

    Read the article

  • Weird routes automatically being added to Windows routing table

    - by simon
    On our Windows 2003 domain with XP clients, we have started seeing routes appearing in the routing tables on both the servers and the clients. The route is a /32 for another computer on the domain. The route gets added when one Windows computer connects to another computer and needs to authenticate. For example, if computer A with IP 10.0.1.5/24 browses the C: drive of computer B with IP 10.0.2.5/24, a static route will get added on computer B like so:

        dest        netmask           gateway     interface
        10.0.1.5    255.255.255.255   10.0.2.1    10.0.2.5

    This also happens on Windows-authenticated SQL Server connections. It does not happen when computers A and B are on the same subnet. None of the servers have RIP or any other routing protocols enabled, and there are no batch files etc. setting routes automatically. There is another Windows domain that we manage with a near-identical configuration that is not exhibiting this behaviour. The only difference with this domain is that it is not up to date with its patches. Is this meant to be happening? Has anyone else seen this? Why is it needed when I have perfectly good default gateways set on all the computers on the domain?!
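
    For anyone wanting to poke at this themselves, the entries can be inspected and removed with the standard tools (the address is the example one from above):

        route print
        route delete 10.0.1.5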

    Read the article

  • HAProxy with MySQL Master-Master Replication incredibly slow

    - by Yayap
    I have two MySQL servers in multi-master mode, with an HAProxy machine for simple load balancing/redundancy. When I am connected to one of the servers directly and try to update about 100,000 entries, it is completed, including replication, in about half a minute. When connecting through the proxy it usually takes over three whole minutes. Is it normal to have that type of latency? Is something amiss with my proxy configuration (included below)? This is getting really frustrating, as I assumed the proxy would do some sort of load balancing, or at least have little to no overhead.

        #---------------------------------------------------------------------
        # Example configuration for a possible web application.  See the
        # full configuration options online.
        #
        #   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
        #
        #---------------------------------------------------------------------

        #---------------------------------------------------------------------
        # Global settings
        #---------------------------------------------------------------------
        global
            # to have these messages end up in /var/log/haproxy.log you will
            # need to:
            #
            # 1) configure syslog to accept network log events. This is done
            #    by adding the '-r' option to the SYSLOGD_OPTIONS in
            #    /etc/sysconfig/syslog
            #
            # 2) configure local2 events to go to the /var/log/haproxy.log
            #    file. A line like the following can be added to
            #    /etc/sysconfig/syslog
            #
            #    local2.*    /var/log/haproxy.log
            #
            # log 127.0.0.1 local2
            # chroot /var/lib/haproxy
            # pidfile /var/run/haproxy.pid
            maxconn 4096
            user haproxy
            group haproxy
            daemon
            #debug
            #quiet

            # turn on stats unix socket
            stats socket /var/lib/haproxy/stats

        #---------------------------------------------------------------------
        # common defaults that all the 'listen' and 'backend' sections will
        # use if not designated in their block
        #---------------------------------------------------------------------
        defaults
            mode tcp
            log global
            #option tcplog
            option dontlognull
            option tcp-smart-accept
            option tcp-smart-connect
            #option http-server-close
            #option forwardfor except 127.0.0.0/8
            #option redispatch
            retries 3
            #timeout http-request 10s
            #timeout queue 1m
            timeout connect 400
            timeout client 500
            timeout server 300
            #timeout http-keep-alive 10s
            #timeout check 10s
            maxconn 2000

        listen mysql-cluster 0.0.0.0:3306
            mode tcp
            balance roundrobin
            option tcpka
            option httpchk
            server db01 192.168.15.118:3306 weight 1 inter 1s rise 1 fall 1
            server db02 192.168.15.119:3306 weight 1 inter 1s rise 1 fall 1
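
    One detail I noticed while re-reading the config: HAProxy timeout values given without a unit are in milliseconds, so the connect/client/server timeouts above are 400ms/500ms/300ms. To separate proxy overhead from replication overhead, the test I plan to run is timing the same batch directly against a backend and then through the proxy (a sketch; credentials and the batch file are placeholders):

        time mysql -h 192.168.15.118 -u user -p dbname < update-batch.sql
        time mysql -h haproxy-host   -u user -p dbname < update-batch.sql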

    Read the article

  • How can I set up a local nameserver and modify DNS zones on it?

    - by Joe Hopfgartner
    This is a follow-up to this question. I am having an issue with a router that doesn't support hairpinning properly; see the link above for details. Now I want to set up a local DNS server that hosts in our LAN can use to resolve public hostnames (usual web browsing and so on). Additionally, I want to modify certain zones. In our LAN we have some servers serving resources that are not available in our public DNS zone, and we always have to configure our local LMHOSTS files accordingly. For example, we have a staging installation with a new feature running on a local web server, and we cannot access it with the IP directly because the website runs in a named virtual host container, so we have to configure the LMHOSTS file to point some domain to the local IP address. And now we have the hairpinning issue as well. So my question is: what software can I use? Will BIND do the job? I just need to insert some A entries into the zone - as easy as possible. We have local Linux/Ubuntu servers.
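
    From what I've read, BIND on Ubuntu should cover this with very little configuration: declare the zone in /etc/bind/named.conf.local, put the overriding A records in a zone file, and let recursion handle everything not listed (a sketch; the domain and addresses are placeholders):

        // /etc/bind/named.conf.local
        zone "example.com" {
            type master;
            file "/etc/bind/db.example.com";
        };

        ; /etc/bind/db.example.com
        $TTL 3600
        @        IN  SOA  ns1.example.com. admin.example.com. ( 1 3600 600 86400 3600 )
        @        IN  NS   ns1.example.com.
        ns1      IN  A    192.168.0.10
        staging  IN  A    192.168.0.42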

    Read the article

  • Balancing internal services using a Cisco CSS 11501

    - by Ladadadada
    First, the background to the problem: I have a Cisco CSS 11501 that I am using to load balance a few web servers. These web servers have two network interfaces, one internal and one external, and we are sending the requests to the internal interface. We have the CSS configured to do NAT because our webservers need to see the client's IP address. Because the TCP packets hit the webservers with a source address on the Internet, the webserver tries to send the packet back to the client over the external interface and not through the load balancer. In order to stop these requests being sent back out to the Internet via the external interface, we added a routing rule on these boxes so that all traffic with a source address on the Internet will use the load balancer as the gateway. This part works fine. What I would also like to do is use the CSS as a load balancer for internal services such as our MySQL slaves. When I do this, I run into a similar problem; the TCP connection goes from the web server to the load balancer and then from the load balancer to the MySQL slave, but the CSS spoofs a source address of the original webserver. The MySQL slave then tries to send the response directly to the webserver via the internal network and not via the load balancer. The ideal solution would be to tell the CSS not to do source address spoofing on the internal network and only do it for requests originating on the Internet. Is this possible? Failing that, is there a way of directing the load-balanced traffic back through the load balancer while keeping the other traffic (say SSH) purely on the internal network? Is there another way of using the CSS11501 to load balance internal services?

    Read the article

  • Red Hat server minimal install

    - by chmeee
    In a farm of virtualized Red Hat servers, there's a need to install a minimal system for security reasons. Minimal installs have several advantages, not all of them security related:

        - Less exposure to vulnerabilities (if you don't need it, don't install it)
        - Better update process (fewer packages to update, less probability of breaking the system)
        - Better performance (no unneeded daemons or processes)
        - The less software you have, the easier it is to harden the system

    Unfortunately, this is not easy, because the "Minimal Installation" on Red Hat contains lots of unnecessary packages. There is an added challenge, as the farm is running Oracle iAS, and I've been told that iAS has dependencies on a local graphical environment - so at the moment every server in the farm has GNOME, X, etc. I've been searching the web, and one solution seems to be a kickstart script that will install only the necessary packages. But I find this difficult, and I have several doubts about how to maintain the system dependencies afterwards. How do you install minimal Red Hat servers? Is it OK to use kickstart, or will I have dependency problems during installation or updates? Is there any way to avoid installing the graphical environment for iAS?
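
    If I understand the kickstart approach correctly, the %packages section would look roughly like this (a sketch - group and package names differ between releases, so they would need to be verified against the actual install tree):

        %packages --nobase
        @core
        -bluez-utils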

    Read the article

  • How to find the real IP to which IPVS is routing a virtual IP

    - by Wayne Conrad
    I'm trying to find a problem server hiding behind a virtual IP (using LVS/ipvs). I've got a test program that sends requests to the virtual IP until it gets the bad response, but how can I tell which real IP a request to the virtual IP got routed to? On the box doing the virtual IP magic, here's the virtual IP configuration (for the service I care about):

        IP Virtual Server version 1.2.1 (size=4096)
        Prot LocalAddress:Port Scheduler Flags
          -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
        ...
        TCP  10.1.0.254:5025 nq
          -> 10.1.0.5:5025                Route   1      0          1
          -> 10.1.0.6:5025                Route   1      0          5
          -> 10.1.0.7:5025                Route   1      0          2
          -> 10.1.0.9:5025                Local   1      0          3
          -> 10.1.0.11:5025               Route   1      0          3
        ...

    My client program is sending TCP requests to 10.1.0.254:5025, usually getting a good response but sometimes a bad one. With this few servers, I could send my request to each server in turn until I discover the culprit, but I wonder if that technique will scale as we add servers. What means exist for me to find out where requests got routed?

        Kernel: Linux 2.6.32
        OS: Debian testing (whatever that's called these days)
        ipvsadm: version 1.25, compiled with ipvs v1.2.1
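
    One possibility I'm looking at (corrections welcome if this is the wrong tool): the kernel keeps a per-connection table mapping each client connection to the real server it was scheduled to, which ipvsadm can dump:

        ipvsadm -L -c -n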

    Read the article

  • Minimum permissions needed to create a user Home Folder in Windows Active Directory

    - by Jim
    We would like the Help Desk to have the responsibility of creating user home folders, instead of our 2nd-level support. The help desk global group is already an Account Operator, so in Active Directory they are able to edit all user attributes just fine. The problem is figuring out the minimum level of permissions needed on the file server to create the home share, without giving them access to everyone's home shares. If they open AD Users and Computers, open the properties for a user, enter \\home\users\%username% in the Profile tab and then click OK, they get the following error:

        The \\home\users\username home folder was not created because you do not have create access on the server.
        The user account has been updated with the new home folder value but you must create the directory manually
        after obtaining the required access right.

    Right now I have given the Helpdesk group Full Control on the root folder only (no files or subdirectories). The directory is actually created, but the permissions on the newly created folder only show Administrators with Full Control, and no permissions for the configured user account. It sure sounds like I'd have to make the helpdesk local admins on the file servers, which is what I'd like to avoid - especially since the file servers are a large cluster hosting much, much more than the entire org's home share structure.
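
    The direction I've been experimenting in (very much a sketch - the rights may not be the true minimum, the domain and path are placeholders, and icacls needs Server 2003 SP2 or later): grant the group create-folder rights on the root only, and let a CREATOR OWNER entry hand full control to whoever ends up owning each new folder:

        icacls D:\Home\Users /grant "OURDOM\HelpDesk:(AD,RD)"
        icacls D:\Home\Users /grant "CREATOR OWNER:(OI)(CI)(IO)F"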

    Read the article

  • How to configure a shortcut for an SSH connection through an SSH tunnel

    - by Simone Carletti
    My company's production servers (FOO, BAR...) are located behind two gateway servers (A, B). In order to connect to server FOO, I have to open an SSH connection to server A or B with my username JOHNDOE; then from A (or B) I can access any production server by opening an SSH connection with a standard username (let's call it WEBBY). So, each time I have to do something like:

        ssh johndoe@a
        ...
        ssh webby@foo
        ...
        # now I can work on the server

    As you can imagine, this is a hassle when I need to use scp or if I need to quickly open multiple connections. I have configured an SSH key, and I'm also using .ssh/config for some shortcuts. I was wondering if I can create some kind of SSH configuration so that I can type ssh foo and let SSH open/forward all the connections for me. Is it possible?

    Edit: womble's answer is exactly what I was looking for, but it seems I can't use netcat right now because it's not installed on the gateway server:

        weppos:~ weppos$ ssh foo -vv
        OpenSSH_5.1p1, OpenSSL 0.9.7l 28 Sep 2006
        debug1: Reading configuration data /Users/xyz/.ssh/config
        debug1: Applying options for foo
        debug1: Reading configuration data /etc/ssh_config
        debug2: ssh_connect: needpriv 0
        debug1: Executing proxy command: exec ssh a nc -w 3 foo 22
        debug1: permanently_drop_suid: 501
        debug1: identity file /Users/xyz/.ssh/identity type -1
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug2: key_type_from_name: unknown key type 'Proc-Type:'
        debug2: key_type_from_name: unknown key type 'DEK-Info:'
        debug2: key_type_from_name: unknown key type '-----END'
        debug1: identity file /Users/xyz/.ssh/id_rsa type 1
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug2: key_type_from_name: unknown key type 'Proc-Type:'
        debug2: key_type_from_name: unknown key type 'DEK-Info:'
        debug2: key_type_from_name: unknown key type '-----END'
        debug1: identity file /Users/xyz/.ssh/id_dsa type 2
        bash: nc: command not found
        ssh_exchange_identification: Connection closed by remote host
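
    For anyone on a newer client: since OpenSSH 5.4, ssh -W can replace netcat entirely, so nothing needs to be installed on the gateway. The 5.1 client above predates this, but the equivalent .ssh/config entry would be something like:

        Host foo
            User webby
            ProxyCommand ssh johndoe@a -W %h:%p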

    Read the article

  • My VPS host (rosehosting) sold me a domain name, but I can't get it to work

    - by Faisal Vali
    My VPS host (rosehosting) sold me a domain name, but I can’t get it to work. They sent me an email with the following (almost a month ago):

        DNS Servers (unless you ordered your own DNS servers):
            ns1.rosehosting.com (216.114.78.148)
            ns2.rosehosting.com (216.114.78.155)
        Operating System: Ubuntu 9.04
        Domain Name: mytestdomainfv.com
        Host Name: mytestdomainfv.com
        IP Address: ....
        Physical Host Name: Vs####.rosehosting.com

    When I type in the physical host name or the IP from a remote computer, I get connected to my VPS. But when I type 'mytestdomainfv.com' the name is never resolved, and it has been a month now. I thought that they would configure it so that it would resolve, but it seems that they haven't. Does anyone know how I can get 'mytestdomainfv.com' to point to my VPS? I looked at some of the other similar questions, but they talk about forwarding GoDaddy domain names - so I'm not sure if it applies, but then again, it might just be my naivete. Any direction will be greatly appreciated. Thanks!

    p.s. mytestdomainfv.com is not the real domain name
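
    A quick way to see where the chain breaks (a sketch using the names from the email): first check whether the registrar has delegated the domain to those name servers at all, then ask one of them directly:

        dig NS mytestdomainfv.com +short
        dig @ns1.rosehosting.com mytestdomainfv.com A

    If the first query returns nothing, the name servers were presumably never set at the registrar; if the second returns nothing, the host never created the zone.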

    Read the article

  • Getting file not found error with pdebuild

    - by user35042
    I am attempting to build a Debian package using pdebuild on my main development server (running Debian wheezy). Here is the command I run:

        pdebuild --pbuilder cowbuilder --buildresult .. \
            --debbuildopts -i -- \
            --basepath /var/cache/pbuilder/base-wheezy.cow \
            --distribution wheezy --configfile /etc/pbuilder/wheezy

    This works on other servers, but on one server I get this output:

        I: using cowbuilder as pbuilder
        dpkg-buildpackage: source package libexample-orange-util-perl
        dpkg-buildpackage: source version 0.08
        dpkg-buildpackage: source changed by John User <[email protected]>
        dpkg-source -i --before-build libexample-orange-util-perl
        fakeroot debian/rules clean
        dh clean
        dh_testdir
        dh_auto_clean
        dh_clean
        dpkg-source -i -b libexample-orange-util-perl
        dpkg-source: info: using source format `3.0 (native)'
        dpkg-source: info: building libexample-orange-util-perl in libexample-orange-util-perl_0.08.tar.gz
        dpkg-source: info: building libexample-orange-util-perl in libexample-orange-util-perl_0.08.dsc
        dpkg-genchanges -S >../libexample-orange-util-perl_0.08_source.changes
        dpkg-genchanges: including full source code in upload
        dpkg-source -i --after-build libexample-orange-util-perl
        dpkg-buildpackage: source only upload: Debian-native package
        File not found: ../libexample-orange-util-perl_0.08.dsc

    There is no file ../libexample-orange-util-perl_0.08.dsc, but on other build servers no such file is needed (it gets created by the package build). What is causing this "file not found" error?

    Read the article

  • Can't log in using second domain controller when first DC is unreachable

    - by rbeier
    Hi, we're a small web development company. Our domain has two DCs: a main one (BEEHIVE, 192.168.3.20) in the datacenter and a second one (SPHERE2, 10.0.66.19) in the office. The office is connected to the datacenter via a VPN. We recently had a brief network outage in the office, during which we weren't able to access the domain from our office machines. I had hoped that they would fail over to the DC in the office, but that didn't happen, so I'm trying to figure out why. I'm not an expert on Active Directory, so maybe I'm missing something obvious. Both domain controllers are running a DNS server. Each office workstation is configured to use the datacenter DC as its primary DNS server and the office DC as its secondary:

        DNS Servers . . . . . . . . . . . : 192.168.3.20
                                            10.0.66.19

    Both DNS servers are working, and both domain controllers are working (at least, I can connect to both using AD Users + Computers). Here are the SRV records that point to the domain controllers (I've changed the domain name but I've left the rest alone):

        C:\> nslookup
        Default Server:  beehive.ourcorp.com
        Address:  192.168.3.20

        > set type=srv
        > _ldap._tcp.ourcorp.com
        Server:  beehive.ourcorp.com
        Address:  192.168.3.20

        _ldap._tcp.ourcorp.com  SRV service location:
                priority       = 0
                weight         = 100
                port           = 389
                svr hostname   = beehive.ourcorp.com
        _ldap._tcp.ourcorp.com  SRV service location:
                priority       = 0
                weight         = 100
                port           = 389
                svr hostname   = sphere2.ourcorp.com
        beehive.ourcorp.com     internet address = 192.168.3.20
        sphere2.ourcorp.com     internet address = 10.0.66.19

    Does anyone have any ideas? Thanks, Richard
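
    Next time there's an outage I plan to check what the DC locator actually returns from an office machine while the VPN is down (a sketch; /force bypasses the cached DC, and on older clients nltest ships with the Windows support tools rather than being installed by default):

        nltest /dsgetdc:ourcorp.com /force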

    Read the article

  • Cannot browse remote networks even with WINS configured

    - by paradroid
    Because NetBIOS name resolution relies on broadcasts, which do not cross routers, WINS has been installed and configured on two domain controllers, on different networks, to enable browsing of remote networks. The WINS servers appear to be replicating with each other, and each has 127.0.0.1 set as the primary WINS server in its LAN interface properties, with nothing entered for the secondary WINS server. The DC which holds the PDC Emulator FSMO role has the Computer Browser service running and set to Auto start, and it has the WINS/NBT node type network setting at 0x8 (H-node - hybrid node). Remote network browsing does not work. Is the WINS/NBT node type correct for this scenario? The reason I think it may not be the right one is that I set the DHCP server's 046 WINS/NBT Node Type option to 0x8 as well, after which the DHCP clients started to disappear from the Network folders. When that option is not set, does it default to B-node (broadcast node)? Or could it be a problem with the WINS server setup?
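
    To see what the clients actually ended up with, I've been checking the negotiated node type and whether names are being resolved by broadcast or by the name server (standard tools, so hopefully no surprises here):

        ipconfig /all | findstr /i "Node"
        nbtstat -r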

    Read the article

  • PHP crashing during oAuth scripts

    - by FunkyChicken
    I just installed Nginx 1.2.4 and PHP 5.4.0 (php-fpm, built from svn) on CentOS 5.8 (64-bit). The problem I have is that PHP crashes the moment I run any social oAuth scripts. I have tried to log into Facebook, Twitter and Google with various scripts that I know work on my other servers. When I load the scripts I get a 502 error from Nginx, and I find these errors in the logs.

    In the php-fpm log:

        WARNING: [pool www] child 23821 exited on signal 11 (SIGSEGV) after 1132.862984 seconds from start

    In the nginx log:

        ERROR: recv() failed (104: Connection reset by peer) while reading response header from upstream

    From what I can see, it goes wrong when PHP tries to make a request to any of the oAuth servers. https://github.com/mahmudahsan/PHP-SDK-3.0---Graph-API-base-Facebook-Connect-Tutorial-Source for example is one of the scripts that works perfectly on my other machines, but causes PHP to crash here. I found http://stackoverflow.com/questions/3616191/nginx-php-fpm-502-bad-gateway which seems to be a similar problem, but I cannot find a way to solve it.

    UPDATE: I have been doing some debugging in one of the scripts that is playing up. If you go to line 808 (http://pastebin.com/gSnzRtXb) it runs the curl_exec() command. When that is run, it crashes. If I add echo 'test'; exit; just above that line, it echoes correctly; if I do it below that line, PHP crashes. Which means it's line 808 that causes the crash. So I made a very simple script to do some testing (http://pastebin.com/Rshnyhcm), which also uses curl_exec, but that runs just fine. So I started to dig deeper into that query from the Facebook script to see what values the $opts array contains from line 806. Output of that array is: http://pastebin.com/Cq9ffd3R What the problem is, I still have no clue :(
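
    One more data point I'm planning to gather: making the same HTTPS request with the curl binary, outside PHP entirely (the URL is just whichever endpoint the script was calling):

        curl -v https://graph.facebook.com/oauth/access_token

    If the CLI works, the segfault presumably lives in the PHP curl extension or the SSL libraries it links against, rather than in the network path.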

    Read the article

  • Debian - Secure system from current administrator

    - by netadmin
    Hello, I am the network and systems administrator in an organization of just under 500 users. We have a number of Windows servers, and that is certainly my area of expertise. We also have a very small handful of Debian servers. We are about to terminate the sysadmin of these Debian systems. Short of powering down the systems, I would like to know how I can ensure that the previous admin does not have control of these systems in the future, at least until we hire a replacement Linux sysadmin. I have physical/virtual-console access to each of the systems, so I can reboot them in various user modes. I just don't know what to do. Please assume that I do not currently have root access to all of these systems (an oversight on my part that I now recognize). I have some experience in Linux, and use it on my desktop on a daily basis, but I must admit that I am a competent user of Linux, not a systems admin. I have no fear of the command line, however. Is there a list of steps that one should take to "secure" a system from somebody else? Again, I assure you that this is legit: I am re-taking control of my employer's systems, at the request of my employer. I hope to not have to shut the systems down permanently and still be reasonably certain that they are secure. Thanks for your time.
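
    From the reading I've done so far, the first pass after regaining root (via single-user mode or a rescue boot) would look something like this - the username is a placeholder, and I'd appreciate corrections:

        passwd root                        # set a new root password
        passwd -l olduser                  # lock the old admin's password
        usermod --expiredate 1 olduser     # expire the account entirely
        cat /root/.ssh/authorized_keys /home/*/.ssh/authorized_keys   # look for his keys
        visudo                             # review sudoers entries
        crontab -l -u olduser              # check for scheduled back doors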

    Read the article

  • Ways to have audio output without wires

    - by viraptor
    I'm trying to find a way of using my home speakers/amp without actually having to connect them. There are two laptops that use them normally (so I don't like changing the connection all the time), and I'd rather move the speakers to a place that's away from the couch. I'm not sure how to do this though... The options I can think of are: some kind of wireless jack-to-jack connection, or finally getting a media server. Unfortunately I can't find any good product for the first solution. I've seen some headphones which have the receiver integrated and a separate transmitter, so in general the idea is already out there, just not in the form I need. I've also seen http://www.miccus.com/products/blubridge-mini-jack, but I'd have to have a compatible receiver, which I can't find on its own (maybe there's some application that the media server could use?). As far as a media server goes... many of the plug servers look really interesting, but I'm not sure how to create an audio output and how to redirect the input to it. None of the plug servers I've seen so far advertises an audio output jack, though I think that part could be fixed by getting one with a USB port and a separate cheap USB sound card. I hope the input can be sorted out in some rather simple way. I've got Linux running on both laptops, so I hope it would be possible to configure jack/pulse/whatever to use the remote endpoint, or even write a simple local-/dev/dsp:network:media-server-/dev/dsp forwarder. So the main question is... are there better ways? Are there any out-of-the-box solutions? Or maybe this was already done by someone and described somewhere?
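
    Since both laptops already run PulseAudio, I suspect its network tunnelling covers the "remote endpoint" idea (a sketch; the addresses are placeholders): on whatever box ends up wired to the amp, allow LAN clients, then each laptop loads a tunnel sink pointing at it:

        # on the box connected to the speakers
        pactl load-module module-native-protocol-tcp auth-ip-acl=192.168.1.0/24

        # on each laptop
        pactl load-module module-tunnel-sink server=192.168.1.50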

    Read the article

  • Domain Controller died, now get authentication boxes in IE for SDL Tridion 2009

    - by Rob Stevenson-Leggett
    We had a major network issue where our secondary domain controller (responsible for Win2k3 boxes) died and had to be rebuilt (I believe this is what happened; I am a developer, not a network admin). I am working remotely via VPN at the moment, and since this happened I get an authentication box when trying to access certain areas of SDL Tridion via IE (Tridion 2009 SP1 is IE only). It seems like my credentials are not being passed correctly somewhere, or the ones cached on my laptop do not match the ones the domain controller has. This only seems to affect Windows 2003 servers. Our IT support thinks that the only way to sort it out is to connect my laptop directly to the network, but I am not planning to go to the office for a few weeks at least, and this issue means I have to work with Tridion via Remote Desktop. We thought changing the password on my account might work, but that didn't help. So basically my question is: is there any way I can reset my credential cache without having to reconnect to the network? Or is it perhaps IE that is causing the problem, since I can RDP to servers and use Tridion 2011 instances in other browsers fine? I am on Windows 7 using the SonicWall VPN client.
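
    Things I've found to try so far on the Windows 7 side, without being wired in (standard built-in tools, so hopefully accurate):

        :: list cached Kerberos tickets, then flush them
        klist
        klist purge

        :: show and remove stale stored credentials (Credential Manager)
        cmdkey /list
        cmdkey /delete:TARGETNAME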

    Read the article

  • Probability of Blade Chassis Failure

    - by ChrisZZ
    In my organisation we are thinking about buying blade servers instead of rack servers. Of course, the technology vendors also make them sound very nice. A concern that I read very often in different forums is that there is a theoretical possibility of the server chassis going down, which would in consequence take all the blades down, due to the shared infrastructure. My reaction to this possibility would be to have redundancy and buy two chassis instead of one (very costly, of course). Some people (including e.g. HP vendors) try to convince us that the chassis is very, very unlikely to fail, due to its many redundancies (redundant power supply, etc.). Another concern on my side is that if something goes down, spare parts might be required, which is difficult in our location (Ethiopia). So I would ask experienced administrators who have managed blade servers: what is your experience? Do they go down as a whole, and what is the sensitive shared infrastructure that might fail? The question could be extended to shared storage: again, I would say that we need two storage units instead of only one, and again the vendors say these things are so rock solid that no failure is expected. Well, I can hardly believe that such critical infrastructure can be very reliable without redundancy - but maybe you can tell me whether you have successful blade-based projects that work without redundancy in their core parts (chassis, storage...). At the moment we are looking at HP, as IBM looks much too expensive. Thanks a lot, best regards, Christian

    Read the article

  • "Server Unavailable" and removed permissions on .NET sites after Windows Update

    - by tags2k
    Our company has five almost identical Windows 2003 servers with the same host, and all but one performed an automatic Windows Update last night without issue. The one that had problems, of course, was the one which hosts the majority of our sites. What the update appeared to do was cause the NETWORK user to stop having access to the .NET Framework 2.0 files, as the event log was complaining about not being able to open System.Web. This resulted in every .NET site on the server returning "Server Unavailable" as the App Domains failed to be initialise. I ran aspnet_regiis which didn't appear to fix the problem, so I ran FileMon which revealed that nobody but the Administrators group had access to any files in any of the website folders! After resetting the permissions, things appear to be fine. I was wondering if anyone had an idea of what could have caused this to go wrong? As I say, the four other servers updated without a problem. Are there any known issues involved with any of the following updates? My major suspect at the moment is the 3.5 update as all of the sites on the server are running in 3.5. Windows Server 2003 Update Rollup for ActiveX Killbits for Windows Server 2003 (KB960715) Windows Server 2003 Security Update for Internet Explorer 7 for Windows Server 2003 (KB960714) Windows Server 2003 Microsoft .NET Framework 3.5 Family Update (KB959209) x86 Windows Server 2003 Security Update for Windows Server 2003 (KB958687) Thanks for any light you can shed on this.

    Read the article

  • How to unblacklist an IP at Google?

    - by DJRayon
    I own a small business with two servers for webhosting. When setting up the primary server (CentOS 5.5 + WHM; the secondary is WHM DNS Only), I kinda messed up the firewall, so hackers could send stuff from my server. My primary IP is x.y.29.218. Anyway, I got blacklisted in several places, but those blacklistings have been gone for a week or so now - yet Google still has my IP blacklisted. I'm taking serious damage because of that: many clients want to switch away from my hosting, etc. I've fixed the hole with the CSF firewall SMTP_BLOCK option and also enabled the WHM SMTP Tweak. Currently, all I see in Main > Email > View Mail Statistics (Errors section) in WHM is rows and rows of the following message:

        removed-the-email-address-for-security R=lookuphost T=remote_smtp:
        SMTP error from remote mail server after end of data:
        host aspmx.l.google.com [a.b.39.27]: 550-5.7.1 [x.y.29.218 1] Our system has detected an unusual rate of
        550-5.7.1 unsolicited mail originating from your IP address. To protect our
        550-5.7.1 users from spam, mail sent from your IP address has been blocked.
        550-5.7.1 Please visit http://www.google.com/mail/help/bulk_mail.html to review
        550 5.7.1 our Bulk Email Senders Guidelines. h24si3868764fas.171

    What are my options? I have one IP free - how can I configure Exim to send mail from that IP? My brain is constantly blowing up because of this problem. Please, anyone who has any knowledge of how to deal with this situation, give me some kind of help - suggestions, anything. I've tried everything I know, and I still don't know much, because this is the first time I've dealt with real physical servers rather than some kind of pre-configured VPS solution (I just started webhosting). Many, many thanks to whoever has time to offer some help.
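
    On the Exim side, binding outgoing mail to a particular address is a transport-level option; in stock Exim the sketch below is the underlying idea (203.0.113.10 is a placeholder for the free IP - under WHM/cPanel this would go through its Exim configuration editor rather than a hand-edited file):

        remote_smtp:
          driver = smtp
          interface = 203.0.113.10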

    Read the article

  • Looking for a recommendation on measuring a high availability app that is using a CDN.

    - by T Reddy
    I work for a Fortune 500 company that struggles with accurately measuring performance and availability for high-availability applications (i.e., apps that are up 99.5% of the time with 5-second page-to-page navigation). We factor in both scheduled and unscheduled downtime to determine this availability number. However, we recently added a CDN into the mix, which complicates our metrics a bit. The CDN now handles about 75% of our traffic, while sending the remainder to our own servers. We attempt to measure what we call a "true user experience" (i.e., our testing scripts emulate a typical user clicking through the application). These monitoring scripts sit outside of our network, which means we're hitting the CDN about 75% of the time. Management has decided that we take the worst-case scenario to measure availability, so if our origin servers are having problems but the CDN is serving content just fine, we still take a hit on availability - and the same is true the other way around. My thought is that as long as the "user experience" is successful, we should not unnecessarily punish ourselves. After all, a CDN is there to improve performance and availability! I'm just wondering if anyone has any knowledge of how other Fortune 500 companies calculate their availability numbers. Look at apple.com, for instance: a storefront that uses a CDN and never seems to be down (unless there is about to be a major product announcement). It would be great to have some hard, factual data, because I don't believe we need to unnecessarily hurt ourselves on these metrics. We are making business decisions based on these numbers. I can say, however, that given these metrics are visible to management, issues get addressed and resolved pretty fast (read: we cut through the red tape pretty quickly). Unfortunately, as a developer, I don't want management to think that the application is up or down because some external factor (i.e., the CDN) is influencing the numbers. Thoughts? (I mistakenly posted this question on Stack Overflow; sorry in advance for the cross-post.)
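
    To make the difference concrete with made-up numbers: suppose over a month the CDN was available 99.9% of the time and the origin 99.5%. The worst-case rule scores the period as min(99.9, 99.5) = 99.5%, while a traffic-weighted "user experience" view would score it as 0.75 × 99.9 + 0.25 × 99.5 = 99.8%. Same month, same failures - the only thing that changed is the formula, which is exactly the argument I'm trying to gather data for.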

    Read the article

  • Domain Controller DNS Best Practice/Practical Considerations for Domain Controllers in Child Domains

    - by joeqwerty
    I'm setting up several child domains in an existing Active Directory forest, and I'm looking for some conventional wisdom/best practice guidance for configuring both the DNS client settings on the child domain controllers and the DNS zone replication scope.

    Assuming a single domain controller in each domain, and assuming that each DC is also the DNS server for its domain (for simplicity's sake), should the child domain controller point to itself for DNS only, or should it point to some combination (primary vs. secondary) of itself and the DNS server in the parent or root domain? If a parent > child > grandchild domain hierarchy exists (with a contiguous DNS namespace), how should DNS be configured on the grandchild DC?

    Regarding the DNS zone replication scope: if each domain's DNS zone is stored on all DNS servers in that domain, then I'm assuming a DNS delegation from the parent to the child needs to exist, and that a forwarder from the child to the parent needs to exist. With a parent > child > grandchild domain hierarchy, does each child forward to its direct parent for the direct parent's zone, or to the root zone? Does the delegation occur at the direct parent zone or from the root zone? If all DNS zones are stored on all DNS servers in the forest, does that make the above questions about the replication scope moot? Does the replication scope have some bearing on the DNS client settings on each DC?

    Read the article

  • Is the sysadmin/netadmin the defacto project planner at your organization?

    - by user31459
    At my company it has somehow, over the past few years, slowly become my job to come up with a project plan, milestones and timelines for deployment of developer applications. A typical scenario: my team receives a request for a new website/db combo and a date for deployment. I send back a questionnaire for the developer to fill out covering all the requirements for the site (SSL? DB? growth projections? etc.). After I get back all the information, the head of development wants a well-developed document covering:

        - what servers it will live on
        - why those servers
        - what the timeline for creating the resources is
        - a step-by-step SOP for getting the application onto the server and all related resources created (DNS, firewall, load balancer etc.)

    I may just be whining, but it feels like this is something better suited to our project management staff (which we have) or to the developer. I understand that I need to give them a timeline for creating the resources, but it still feels like overkill. We already produce documentation on where everything lives and track configuration changes to equipment. How do other sysadmin folks handle this?

    Read the article
