Search Results

Search found 12497 results on 500 pages for 'linked servers'.


  • Can you create ACLs with open vSwitch on XenServer 5.6FP1 without using the DVS appliance?

    - by bwizzy
    I have a pool of XenServer hosts running the free version of XenServer 5.6 FP1. If I change the network backend to Open vSwitch, can I specify ACLs on individual network VIFs without needing the DVS appliance (distributed virtual switch), which requires an Advanced license or higher? Basically I'm looking for a way to isolate VMs on my network so that a user with root access on the command line couldn't reach other servers they shouldn't be able to (without using a VLAN).
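
    For what it's worth, the Open vSwitch userspace tools on the host can install OpenFlow rules directly, with no DVS controller involved; whether the rules survive XAPI reworking the bridge is something to test. A sketch, with a hypothetical bridge name, MAC address and target IP:

        # find the bridge and the VIF's port
        ovs-vsctl list-br
        ovs-vsctl list-ports xenbr0
        # drop IP traffic from this VIF's MAC to a server it must not reach;
        # in standalone mode unmatched traffic still gets normal L2 switching
        ovs-ofctl add-flow xenbr0 "priority=200,ip,dl_src=aa:bb:cc:dd:ee:01,nw_dst=10.0.0.50,actions=drop"
        # verify the flow was installed
        ovs-ofctl dump-flows xenbr0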


  • OpenLdap 2.4 on centos 6 doesn't listen on port 636

    - by Oliver Henriot
    I have an openldap 2.4 server on centos 6 whose config I copied from the openldap 2.3 servers I run on centos 5 machines. On openldap 2.3, specifying TLSCACertificateFile, TLSCertificateFile and TLSCertificateKeyFile with correct values makes the server listen on port 636. This is not the case on the openldap 2.4 setup. I have configured it with loglevel -1 but have seen no clue as to what might be wrong, and the openldap 2.4 manual doesn't indicate that any of the other TLS-related parameters are now mandatory. I don't think they are, because if I run the service manually with "# /usr/sbin/slapd -u ldap -h "ldap:/// ldaps:/// ldapi:///"", the server does listen on port 636 and I can query it using "ldapsearch -H ldaps://myserver:636". Is there something I am missing to get the server to listen on port 636 without having to launch it manually every time? Is this linked to centos 6 or to openldap 2.4? Thank you.
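
    The manual run working is a strong hint: on CentOS 6 the slapd init script builds the -h listener list itself, from /etc/sysconfig/ldap, so correct TLS directives alone are not enough. A sketch, assuming the stock openldap-servers packaging:

        # /etc/sysconfig/ldap -- tell the init script to add ldaps:/// to slapd's -h list
        SLAPD_LDAPS=yes

        # then restart and verify the listener
        service slapd restart
        netstat -tlnp | grep :636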


  • Is there any way to set up a malware-blocking transparent proxy on an Airport Extreme?

    - by Chris R
    I'd like to add some kind of easily administered transparent HTTP proxy to my home network. Ideally it would let me, for example, drop web requests to blacklisted servers, block certain kinds of content, and so on. My home network at the moment consists of a mac mini media server that could -- if the load isn't huge -- fill this role as well, an Airport Extreme, and a mac laptop that is my main machine. I'm reasonably technically savvy, so don't spare the complicated answers.
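
    The Airport Extreme itself cannot intercept traffic, so the mini would have to sit in the path (as the LAN's gateway, or by grabbing port 80 with an ipfw rule) and run something like Squid. A sketch with hypothetical paths and network ranges; note the keyword is "transparent" on Squid 2.x but "intercept" on 3.1 and later:

        # /etc/squid/squid.conf (excerpt)
        http_port 3128 transparent
        acl blacklist dstdomain "/etc/squid/blacklist.txt"   # one domain per line
        acl lan src 10.0.1.0/24                              # hypothetical LAN range
        http_access deny blacklist
        http_access allow lan

        # on the mini (OS X of that era still ships ipfw): steer the LAN's
        # port-80 traffic into Squid; only works if the traffic passes the mini
        ipfw add 100 fwd 127.0.0.1,3128 tcp from any to any 80 in via en0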


  • Ubuntu server; Backup of server and MySql database, and Solr database

    - by Camran
    How is backup done on Ubuntu servers? I have a server (Ubuntu 9.10) with apache2, php5, mysql and so on installed. The website is a classifieds site where all classifieds are stored in MySQL and Solr. I need to back up this server with all its information so I can fully restore it if something goes wrong. How should I start? Is it an automated task, or will I do backups manually? (I prefer manually.) Thanks
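
    A minimal sketch of the usual shape of such a backup; the paths, database name and credentials here are assumptions for illustration. Run by hand it is a manual backup; dropped into /etc/cron.daily it becomes automatic:

        #!/bin/bash
        # nightly-backup.sh -- dump the database, archive the site and Solr, copy off-box
        STAMP=$(date +%F)
        DEST=/var/backups/site
        mkdir -p "$DEST"
        # dump the classifieds database
        mysqldump -u root -pSECRET classifieds | gzip > "$DEST/mysql-$STAMP.sql.gz"
        # archive the web root and the Solr index (stop Solr first, or use its
        # replication handler, to get a consistent snapshot of the index)
        tar czf "$DEST/www-$STAMP.tar.gz" /var/www
        tar czf "$DEST/solr-$STAMP.tar.gz" /opt/solr/data
        # mirror the backups to another machine
        rsync -a "$DEST/" backupuser@backuphost:/backups/site/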


  • bigbluebutton or openmeetings?

    - by Adam Monsen
    I want to set up a server for small group meetings. I'm looking for features like audio conferencing, multi-point video, screen sharing... stuff like that. I'm familiar and comfortable administering Ubuntu servers, so for this task I'd likely fire up a small EC2 instance running Ubuntu. I'm most interested in using FLOSS. I see there are at least a couple of options out there, for example BigBlueButton and OpenMeetings. Has anyone installed either (or a different one) and have recommendations or tips? If so, have you ever upgraded it? I notice BigBlueButton has deb packages, so that might be pretty straightforward. OpenMeetings appears to support logging in with a Facebook account; that might be a good way to avoid having to manage logins.


  • Automatically restarting Perfmon Data Collector Sets

    - by Scott Herbert
    I have a number of User Defined Data Collector Sets running on a Windows 2008 R2 server, collecting perfmon stats from various servers. Whenever there is a network interruption or a server is rebooted (at worst, when the server running all these DCSs is itself rebooted), I have to manually restart some or all of the Data Collector Sets. Is there a way to configure them to restart automatically, or otherwise be more resilient?
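
    There is no built-in auto-restart, but since logman can start a collector set from the command line, a scheduled task makes a reasonable watchdog. A sketch, with "ServerPerf" standing in for your DCS name:

        rem restart the collector set at boot...
        schtasks /create /tn "DCS watchdog (boot)" /tr "logman start ServerPerf" /sc onstart /ru SYSTEM
        rem ...and hourly, to catch network drops; "start" on an already-running
        rem set just returns an error, which is harmless here
        schtasks /create /tn "DCS watchdog (hourly)" /tr "logman start ServerPerf" /sc hourly /ru SYSTEM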


  • SSH: Configure ssh_config to use specific key file for a specific server fingerprint

    - by Penthi
    I have a key-based login for a server. The IP and DNS name of the server can change, because it is hosted on Amazon. Is there a way to configure the ssh client config to use a specific key file for this server only, when the fingerprint of the server matches? In other words: normally servers are matched by IP or DNS name in the ssh client config. I want to do this by fingerprint, because the IP and DNS name can change.
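
    ssh_config has no fingerprint matcher; key selection is by host pattern, and the fingerprint check lives in known_hosts. The closest supported arrangement pins everything to a stable alias, so the same key and the same recorded host key are used no matter where the box moves. A sketch with hypothetical names:

        # ~/.ssh/config
        Host mybox
            HostName ec2-203-0-113-10.compute-1.amazonaws.com  # update when Amazon moves it
            User ubuntu
            IdentityFile ~/.ssh/mybox_key
            IdentitiesOnly yes
            HostKeyAlias mybox   # known_hosts entry is stored under the alias, not the IP
            CheckHostIP no       # don't warn when the IP behind the alias changes

    After one successful "ssh mybox", the server's key is pinned under the alias, and any later fingerprint change triggers the usual man-in-the-middle warning regardless of IP or DNS changes.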


  • Cobbler 2.2.2 problems

    - by Peter
    I have set up a dedicated LAN for Cobbler tests. My setup is: Cobbler server: openSUSE 12.3, cobbler 2.2.2 (from the openSUSE repos). Imported distros: CentOS 6.5, Red Hat 6.5, Red Hat 7.0, openSUSE 13.1. Target machines: VMs in a VirtualBox on Windows 7. System provisioning works OK, but I have some problems. The first is that cobbler does not honor the "pxe_just_once: 1" setting: when the setup of the target OS is finished, after the reboot the target system continues to PXE boot! The second problem is that the target server is not correctly configured. See my setup:

        cobbler system report --name=test
        Name : test
        TFTP Boot Files : {}
        Comment :
        Fetchable Files : {}
        Gateway : 192.168.0.1
        Hostname : testcob1.example.com
        Image :
        IPv6 Autoconfiguration : False
        IPv6 Default Device :
        Kernel Options : {}
        Kernel Options (Post Install) : {}
        Kickstart : <<inherit>>
        Kickstart Metadata : {}
        LDAP Enabled : False
        LDAP Management Type : authconfig
        Management Classes : []
        Management Parameters : <<inherit>>
        Monit Enabled : False
        Name Servers : ['192.168.0.1', '8.8.8.8']
        Name Servers Search Path : []
        Netboot Enabled : False
        Owners : ['admin']
        Power Management Address :
        Power ID :
        Power Password :
        Power Management Type : ipmitool
        Power Username :
        Profile : RHEL-6.5-x86_64
        Proxy : <<inherit>>
        Red Hat Management Key : <<inherit>>
        Red Hat Management Server : <<inherit>>
        Repos Enabled : False
        Server Override : <<inherit>>
        Status : testing
        Template Files : {}
        Virt Auto Boot : <<inherit>>
        Virt CPUs : <<inherit>>
        Virt Disk Driver Type : <<inherit>>
        Virt File Size(GB) : <<inherit>>
        Virt Path : <<inherit>>
        Virt RAM (MB) : <<inherit>>
        Virt Type : <<inherit>>
        Interface ===== : eth0
        Bonding Opts :
        Bridge Opts :
        DHCP Tag :
        DNS Name :
        Master Interface :
        Interface Type :
        IP Address : 192.168.0.200
        IPv6 Address :
        IPv6 Default Gateway :
        IPv6 MTU :
        IPv6 Secondaries : []
        IPv6 Static Routes : []
        MAC Address :
        Management Interface : True
        MTU :
        Subnet Mask : 255.255.255.0
        Static : True
        Static Routes : []
        Virt Bridge :

    So, although I have set up the hostname and the network interface of the target system, after the setup the hostname is set to localhost.localdomain and eth0 is configured as DHCP, not static! How can I find the problem and fix it? Note that I have synced and restarted cobbler a couple of times, but the problems persist.
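
    Both symptoms usually trace back to the kickstart template rather than to cobbler itself: "pxe_just_once" only works if the installed system calls back to clear its netboot flag, and the hostname/static-IP settings are only written out if the template pulls in cobbler's network snippets (the stock templates do this via $SNIPPET('kickstart_done') and $SNIPPET('post_install_network_config')). A minimal %post sketch of the callback, assuming the stock cobbler 2.2 template variables are available:

        %post
        # tell cobbler to clear the netboot flag so the next boot is from disk
        wget -q -O /dev/null "http://$http_server/cblr/svc/op/nopxe/system/$system_name"
        %end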


  • Will running CSF & Bastille cause any conflicts?

    - by MatW
    I'm taking my first steps into the world of unmanaged servers, and have confused myself whilst reading through the 101 tutorials on server hardening that Google spews out! The most recent advice I was given is to install both CSF and Bastille on my server (used to serve a consumer-facing ecommerce site and act as the business's email server), but my understanding was that both of these tools are an abstraction layer above netfilter/iptables. Will installing both packages cause any conflicts, or do they play well together?


  • Methodologies for performance-testing a WAN link

    - by Chopper3
    We have a pair of new diversely-routed 1Gbps Ethernet links between locations about 200 miles apart. The 'client' is a new reasonably-powerful machine (HP DL380 G6, dual E56xx Xeons, 48GB DDR3, R1 pair of 300GB 10krpm SAS disks, W2K8R2-x64) and the 'server' is a decent enough machine too (HP BL460c G6, dual E55xx Xeons, 72GB, R1 pair of 146GB 10krpm SAS disks, dual-port Emulex 4Gbps FC HBA linked to dual Cisco MDS9509s then onto dedicated HP EVA 8400 with 128 x 450GB 15krpm FC disks, RHEL 5.3-x64). Using SFTP from the client we're only seeing about 40Kbps of throughput using large (2GB) files. We've performed server to 'other local server' tests and see around 500Mbps through the local switches (Cat 6509s), we're going to do the same on the client side but that's a day or so away. What other testing methods would you use to prove to the link providers that the problem is theirs?
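
    To take SFTP's encryption overhead and the disks out of the equation, a synthetic TCP/UDP generator such as iperf is the usual evidence-gatherer. A sketch (iperf 2 syntax; hostnames are placeholders):

        # far end
        iperf -s -w 512k
        # near end: single stream vs. 4 parallel streams, 60 s each
        iperf -c farend.example.com -t 60 -w 512k
        iperf -c farend.example.com -t 60 -w 512k -P 4
        # UDP at a forced rate separates packet loss from TCP windowing effects
        iperf -s -u
        iperf -c farend.example.com -u -b 500M -t 60

    If one stream crawls but four parallel streams fill the link, the issue is latency and window sizing rather than the circuit; if UDP shows loss at modest rates, that is a much stronger case to put to the provider.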


  • Testing the load factor in my lab

    - by Ami Winter
    I am a system admin in a lab with ~90 computers, and I want to check the load factor on them, meaning how many people are working on the computers each hour, to see whether I need to buy more computers or not. I am looking for a way to build a script that checks whether a computer is logged on or not (0 for logged off, 1 for logged on). Once I have this data, I know how to build a script to generate the graphs. All the computers are joined to a domain and most of them run Windows XP (a few Windows 7). I'll be happy to get some help. Amihay
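
    One way to get the 0/1 signal without installing anything on the clients is WMI: "wmic /node:HOST computersystem get username" prints DOMAIN\user when someone is logged on at the console. A batch sketch (untested; hosts.txt holds one computer name per line, and note an unreachable or powered-off machine also logs 0):

        @echo off
        rem poll-lab.cmd -- run hourly from a scheduled task with admin rights
        for /f %%H in (hosts.txt) do call :check %%H
        goto :eof
        :check
        wmic /node:%1 computersystem get username 2>nul | find "\" >nul
        if errorlevel 1 (>>lab-load.csv echo %date%,%time%,%1,0) else (>>lab-load.csv echo %date%,%time%,%1,1)
        goto :eof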


  • Deleting Temporary Internet Files through Group Policy

    - by Kami
    I have a domain controller running Windows Server 2008 R2, and users log in to application servers running Windows Server 2003 SP2. I have applied a Group Policy to clean temporary internet files on exit, i.e. to delete all temporary internet files when users close the browser. But the group policy doesn't seem to work: user profile sizes keep increasing, and most of the space is occupied by temporary internet files, driving up disk usage. How can I enforce automatic deletion of temporary internet files?
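
    One thing to bear in mind is that the "empty on exit" behaviour only fires on a clean browser close, so killed or disconnected sessions still leave the cache behind. A blunt fallback is a logoff script assigned through the same GPO; a sketch for the Server 2003 profile layout (the path differs on Vista and later, and this assumes non-roaming profiles):

        rem deltif.cmd -- assigned as a user logoff script
        del /q /s /f "%USERPROFILE%\Local Settings\Temporary Internet Files\*.*" 2>nul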


  • Is it possible to rsync your web site to another backup server and use the same .htaccess files?

    - by stephenmm
    I am trying to use rsync to replicate all the files from one web server to another server that could act as a backup if the first one went down. The problem I am having is that the .htaccess file requires AuthUserFile to contain the fully qualified path to the .htpasswd file, and I cannot make the paths the same on the two machines. Does anyone know how I might use the same .htaccess file on two different servers? Thanks for any help that can be provided.
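
    One way out is to make the path true on both machines: keep AuthUserFile pointing at one canonical path, and on the backup create that path as a symlink to wherever the files really live (Apache there must be allowed to follow it). A sketch with hypothetical paths:

        # .htaccess, identical on both servers:
        #   AuthUserFile /var/www/site/.htpasswd
        # on the backup host, fake the canonical path once:
        ssh backuphost 'mkdir -p /var/www && ln -sfn /srv/backup/site /var/www/site'
        # then mirror as usual
        rsync -az --delete /var/www/site/ backuphost:/srv/backup/site/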


  • Scaling a video processing application on EC2?

    - by Stpn
    I am approaching the need to scale a video-processing application that runs on EC2. So far the setup is one machine: Backbone.js frontend, Rails 3.2, PostgreSQL, Resque + S3 for storage. The flow of the app is as follows:

    1) Request from frontend. Upload a video.
    2) Storing video.
    3) Querying external APIs.
    4) Processing / encoding videos.
    5) Post to frontend.

    I can separate the backend and frontend without any problems, but when it comes to distributing the backend between several servers I am a bit puzzled. I can probably come up with a temporary solution (like just duplicating apps, making several instances), but since I don't really have expertise in backend system administration, there can be some fundamental mistakes. Also, I would rather have something that is scalable. I wonder if anyone can give some feedback on the following plan:

    A) Frontend machine. Just frontend, talks to backend via a REST API of sorts.
    B) Backend server (BS), main database. Gets requests from 1), posts to 2), saves uploads to 3).
    C) S3 storage.
    D) Server for querying APIs. Basically just Resque workers that post info back to 2).
    E) Server for video encoding. Processes videos uploaded on 3) and uploads them back.

    So I will have:

        A) frontend
            \
             \
              B) MAIN_APP/DB ----- C) S3 Storage (Files)
                 /        \                /
                /          \              /
        D) ExternalAPI_queries    E) Video_Processing
           (redundant DB)            (redundant DB)

    All this will supposedly talk to each other via HTTP requests. My reason for this is that the video processing part is really the most resource-intensive, and I would just run a barebones application that accepts requests and starts processing them. Questions:

    1) In this setup I will have the main database at B) and all other servers will communicate with it via HTTP requests (and store duplicates of the databases too, I guess, for safety reasons). Is that the right approach, or should I have one database that everyone connects to (how then?)
    2) Is it a good idea to separate API queries from video processing? Logically they are very close (processing is determined by the result of API queries), but resource-wise video processing is waaay more intensive.
    3) What should I use to distribute calls between backend apps based on load?


  • How to perform "tracert" like operation from browser?

    - by user329777
    I'm remotely investigating an issue with my friend's Windows XP computer. The browsers (Chrome and IE) won't show any pages, just an err_connection_reset message in the details. I tried "telnet google.com 80" and sent "HEAD / HTTP/1.0" from the console, and everything went fine, but the browsers refuse to load anything. I've checked the settings for firewalls and proxy servers, but nothing seems to be there. What else should I check?
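
    When raw telnet works but every browser gets connection resets, a damaged Winsock LSP chain (often debris from malware or a half-uninstalled antivirus) is a common culprit on XP. Resetting the stack is cheap to try:

        rem on the XP machine, then reboot
        netsh winsock reset
        netsh int ip reset c:\resetlog.txt
        ipconfig /flushdns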


  • Firewall GPO not applying despite being enumerated by gpresult

    - by jshin47
    I need to open up the admin$ share on all of my domain's client PCs, and I am trying to do so using group policy. I defined computer policy for Windows Firewall with Advanced Security in a policy object linked to the appropriate container and added the appropriate rules. However, they are not being applied! I feel like I have tried all of the obvious steps: I've checked gpresult and the resulting set of policy looks the way I would expect. I've run gpupdate /force and gpupdate /sync on a few client computers, but no matter what I do they don't seem to respond to my changes. I know that other computer policies in the GPO are being applied, so it is strange that these are not. I have also disabled firewall exceptions on clients in the GPO, but that doesn't seem to apply either. (A screenshot of firewall.cpl from a client went here.) Basically, although other options in the same GPO ARE applied as computer policy, the firewall settings seem to be ignored.
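
    One thing worth ruling out first: the "Windows Firewall with Advanced Security" policy node only applies to Vista/2008 and later; XP and 2003 clients silently ignore it and need the older Administrative Templates > Network > Network Connections > Windows Firewall policies instead, which would produce exactly this "everything else applies, firewall doesn't" pattern. To see what a client's firewall actually received, the legacy netsh context still works there:

        rem on an affected client
        netsh firewall show state
        netsh firewall show config
        rem and confirm which GPOs delivered computer policy
        gpresult /z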


  • Some domain names not resolving on local network

    - by Solignis
    I am not really sure where to start with this one... I have a small network with some Linux servers (Ubuntu 11.04 Server). Two servers run BIND 9 (NS01, NS02), configured as master and slave respectively. One server runs Zimbra ZCS 7.1.1 (MX01); it has a private BIND 9 server running to achieve a split-DNS configuration. This DNS server does not interact with the other two; it forwards queries it cannot resolve itself to them, and that is it. No zone transfers. Zimbra is hosting three domains at the moment: solignis.local, solignis.com, and campbellsurvey.net. The problem: from within my network I cannot connect to mail.campbellsurvey.net. By "cannot connect" I mean that if I open Firefox and type https://mail.campbellsurvey.net I go nowhere, although the address is supposed to bring up my Zimbra webmail. The odd thing is that if I try the same thing from outside the network, the website comes up like normal. If I try to create an account in Thunderbird to connect to the same server using IMAP4 or POP3, I get an error saying that Thunderbird cannot find the domain name; even the Zimbra client fails. It is as if, within my own walls, campbellsurvey.net does not exist, yet if I step outside it works with no problem at all. I had thought the problem might be with the DNS server (BIND 9), so just to eliminate it as a possibility I configured a Windows server I use for VMware vCenter as a DNS server to see what would happen. The result was the same; it is as though something is preventing connections to those domains, but I have checked the various firewalls, port forwards, and so on, and I am running out of ideas. I know this is not a lot of information to work from, and I can give more details about certain things as needed; I am just trying to figure out what could be going wrong. Any help you could offer would be much appreciated.
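
    Before touching Zimbra, it is worth pinning down which resolver gives the wrong answer: if the internal servers believe they are authoritative for campbellsurvey.net (or serve an empty stub of it) but lack the mail record, inside breaks while outside works. Hedged checks from a client, with placeholder addresses standing in for the real server IPs:

        # what does the client's configured resolver say?
        nslookup mail.campbellsurvey.net
        # ask each DNS server directly
        dig @192.168.1.2 mail.campbellsurvey.net     # NS01
        dig @192.168.1.3 mail.campbellsurvey.net     # NS02
        dig @192.168.1.4 mail.campbellsurvey.net     # MX01's private BIND
        # and check who claims authority for the zone internally
        dig @192.168.1.2 campbellsurvey.net SOA +short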


  • Ubuntu getting wrong hostname from DHCP

    - by sam
    When provisioning new Ubuntu Precise (12.04) servers, the hostname they're getting seems to be generated from the DNS search path, not a reverse lookup on the hostname. Take the following configuration. BIND is configured with the hostname and reverse name.

    Normal zone:

        $TTL 600
        $ORIGIN srv.local.net.
        @             IN SOA  ns0.local.net. hostmaster.local.net. (
                              2014082101 10800 3600 604800 600 )
        @             IN NS   ns0.local.net.
        @             IN MX   5 mail.local.net.
        my-new-server IN A    10.32.2.30

    And reverse:

        @    IN SOA  ns0.local.net. hostmaster.local.net. (
                     2014082101 10800 3600 604800 600 )
        @    IN NS   ns0.local.net.
        $ORIGIN 32.10.in-addr.arpa.
        30.2 IN PTR  my-new-server.srv.local.net.

    Then DHCPD is configured to hand out static leases based on mac addresses like so:

        subnet 10.32.2.0 netmask 255.255.254.0 {
          option subnet-mask 255.255.254.0;
          option routers 10.32.2.1;
          option domain-name-servers 10.32.2.1;
          option domain-name "util.of1.local.net of1.local.net srv.local.net";
          site-option-space "pxelinux";
          option pxelinux.magic f1:00:74:7e;
          if exists dhcp-parameter-request-list {
            option dhcp-parameter-request-list = concat(option dhcp-parameter-request-list,d0,d1,d2,d3);
          }
          group {
            option pxelinux.configfile "pxelinux.cfg/pxeboot";
            host my-new-server {
              fixed-address my-new-server.srv.local.net;
              hardware ethernet aa:aa:aa:bb:bb:bb;
            }
          }
        }

    So the hostname should be my-new-server.srv.local.net; however, when building a Ubuntu 12.04 node, the hostname ends up as my-new-server.util.of1.local.net. When building Lucid (10.04) hosts, the hostname is correct; it's only on Precise/12.04 nodes we have the problem. Doing a normal and reverse lookup on the host and IP returns the correct result:

        Sams-MacBook-Pro:~ sam$ host my-new-server
        my-new-server.srv.local.net has address 10.32.2.30
        Sams-MacBook-Pro:~ sam$ host my-new-server.srv.local.net
        my-new-server.srv.local.net has address 10.32.2.30
        Sams-MacBook-Pro:~ sam$ host 10.32.2.30
        30.2.32.10.in-addr.arpa domain name pointer my-new-server.srv.local.net.

    The contents of the hosts file are incorrect too:

        127.0.0.1 localhost
        127.0.1.1 my-new-server.util.of1.local.net of1.local.net srv.local.net my-new-server

    So it looks like when it creates the hosts file, it puts the entire contents of the DNS search path into the local address line, so the FQDN according to the server is the short hostname as defined, then the first domain in the search path. Is there a way to get around this behaviour, or fix it so the hostname is picked up correctly? It's picking up the first part of the hostname, then the rest is wrong.
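
    The suspicious line is option domain-name being handed a space-separated list: domain-name is defined as a single domain, and Precise's dhclient writes whatever it receives into the 127.0.1.1 hosts entry, which is where the bogus FQDN comes from. The search path has its own option (RFC 3397); a sketch of the corrected dhcpd.conf lines:

        # one real domain, and a proper search list
        option domain-name "srv.local.net";
        option domain-search "util.of1.local.net", "of1.local.net", "srv.local.net";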


  • Internet Explorer defaults to 64-bit version

    - by Tim Long
    My IE8 has suddenly started defaulting to the 64-bit version. I have no idea how or why this happened, but I suspect it might be linked to the Browser Choice Screen that Microsoft was recently forced to display by EU law. Many web sites will not display correctly in IE8 x64 (e.g. sites that use Adobe Flash or Microsoft Silverlight). I have the 32-bit version of IE pinned to my taskbar, and if I launch it manually everything is fine. But when I click a URL from another program and IE is not already running, the 64-bit version gets launched. This really messes with programs like BBC iPlayer, which rely heavily on Adobe AIR and Flash. So, how do I make the 32-bit version of IE8 the default again? I've tried the "default programs" control panel and that doesn't make any difference (in fact, it doesn't offer a choice between x86 and x64 versions; it just lists "Internet Explorer").
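
    One commonly suggested (if unsupported) fix is to point the http/https class handlers back at the 32-bit binary; treat the exact value below as an assumption for your machine, check what is there now, and export the keys before changing anything:

        rem see what the handler points at today
        reg query "HKCR\http\shell\open\command" /ve
        rem back it up, then point it at 32-bit IE
        reg export "HKCR\http\shell\open\command" http-handler-backup.reg
        reg add "HKCR\http\shell\open\command" /ve /f /d "\"C:\Program Files (x86)\Internet Explorer\iexplore.exe\" -nohome"
        reg add "HKCR\https\shell\open\command" /ve /f /d "\"C:\Program Files (x86)\Internet Explorer\iexplore.exe\" -nohome"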


  • Free or inexpensive compression proxy

    - by Maksee
    I'm looking for a way to minimize network traffic on my netbook's mobile internet connection. Recently I managed to install Opera Mini on XP, and Opera's approach of compressing the data helped a lot. But I would like to do the same with my favorite browser, using an HTTP proxy that compresses the data on the fly. Searching for "compression proxy servers" I could not find any working host/port listings. Is this a brand-new technology and therefore expensive, or just rarely available?
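
    Open compression proxies are rare because they spend the operator's CPU and bandwidth, but anything you can SSH to can serve as one: OpenSSH compresses the tunnel with -C and exposes a local SOCKS listener with -D. A sketch, assuming some shell account you control:

        # compressed SOCKS5 proxy on localhost:1080
        ssh -C -D 1080 user@some-host.example.com
        # then point the browser at SOCKS host 127.0.0.1, port 1080

    A self-hosted alternative is ziproxy, an HTTP proxy built specifically to recompress images and gzip pages before they cross the slow link.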

