Search Results

Search found 19878 results on 796 pages for 'multiple dispatch'.


  • Time Drift on VM servers, need a reliable solution

    - by zeroasterisk
    We have some Windows Server 2008 VMware instances on multiple physical servers (hosted), and an application which requires the time to be synced across the server instances. Obviously, VMware has problems with this and we have never really gotten it working any better; we have set up the servers to poll for an NTP update every minute, which mitigates the problem (in a fairly crude way). Except that every once in a while the update fails (because there is already too much drift), and then Windows never does an NTP update afterwards, which eventually allows the servers to drift far enough apart that our application breaks, and we notice. We are thinking about changing hosts to Xen servers on approximately the same setup, and I anticipate similar problems.
     Can anyone tell me if Xen has the same time-drift issues for guests that VMware does? Can anyone tell me what the best Windows Server settings are for syncing with an external NTP server to keep things in sync:
     - How frequently do you recommend syncing? (assuming every minute)
     - Do you recommend running our own NTP server, even if it has to be on a virtual instance? (assuming not)
     - Is there any way to tell Windows to sync with the NTP server no matter what the time difference is?
     - Any other suggestions for keeping Windows servers' time in sync?
     I have become familiar with http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1318 and it has helped, but it has not been totally effective (see above). Thanks much!
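     A minimal sketch of the Windows-side settings that usually address the "sync no matter what the difference is" question, assuming the built-in w32time service and an external time source (pool.ntp.org is a placeholder; substitute your own NTP servers). The MaxPos/MaxNegPhaseCorrection values are what let w32time accept a correction of any size instead of silently giving up:

         rem point w32time at an external NTP source and apply the change
         w32tm /config /manualpeerlist:"pool.ntp.org,0x8" /syncfromflags:manual /update
         rem allow corrections of any magnitude (0xFFFFFFFF = no limit)
         reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxPosPhaseCorrection /t REG_DWORD /d 0xFFFFFFFF /f
         reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxNegPhaseCorrection /t REG_DWORD /d 0xFFFFFFFF /f
         net stop w32time && net start w32time
         w32tm /resync /rediscover

     VMware's timekeeping guidance generally suggests picking one steering source, either VMware Tools periodic sync or w32time, not both (tools.syncTime = "FALSE" in the VMX disables the former); a similar consideration applies to Xen guest tools.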

  • OpenVZ vs KVM for Linux VMs

    - by Eliasdx
    Hardware: Intel® Core™ i7-920, 12 GB DDR3 RAM, 2 x 1500 GB SATA-II HDD (no soft RAID, because the Proxmox developers don't support it and are sure you'll run into problems). Software: Proxmox VE with KVM and OpenVZ support, and Debian everywhere. I want to run multiple Linux VMs on this server: one for a firewall (I want to try pfSense), one for MySQL, one VM for nginx (my stuff) and ~2 VMs with nginx for other people's web sites. I don't think that pfSense will run in an OpenVZ environment, but it should run in KVM. The question is whether I should set up the other VMs using KVM or OpenVZ. In OpenVZ they should have less overhead for the OS itself, but I don't know about the performance. I heard that KVM is more stable but needs more RAM and CPU. I found this diagram showing an OpenVZ setup on the same hardware I'm using. This guy uses its own VM for each and every website running on his server; I can't think of any advantage to using so many VMs.

  • Using NFS for scalable PHP/MySQL web application

    - by Jeroen Moons
    Here's the situation: I have a PHP/MySQL web application that accepts user uploads (pdf files). From these pdf files' pages a preview image is made on the fly and presented to the web app's users. Some pdfs might be on the large side, most will be under 50 MB but some extreme cases could be as large as a few hundred MB. A little waiting for the preview image for large pdf files is acceptable but no more than a minute let's say. Everything is running on one server for now, but soon the app will hit the server's limit on both storage and processing power. My idea to solve the problem: To deal with this situation I had the idea of having one or more pdf processing servers as needed, and one or more file storage servers. These two types of servers are mounted to the server on which the actual app runs using NFS. The app could then use GearMan to delegate pdf processing tasks to these processing servers. The processing server can mount the storage server and read the file stored there, process it and write its output to that server. The servers I'm talking about will be amazon ec2 instances. The web app returns a link to the resulting pdf preview image on the storage server that was used which can then be used on the front end to show the image to the user. My question: I have zero experience with apps that use multiple servers, is this idea viable or is there a better way to do it? Is an NFS setup fast and reliable enough for this situation?
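     NFS can work for this as long as the mounts are tuned for large sequential reads and the storage server stays the single source of truth. A rough sketch of what the mount might look like on the web and processing instances (hostname, export path and mount point are placeholders, not from the question):

         # /etc/fstab entry on the web and processing servers (hypothetical names)
         storage1.internal:/export/pdfs  /mnt/pdfs  nfs  rw,hard,noatime,rsize=1048576,wsize=1048576  0  0

         # or mounted ad hoc while testing throughput
         mount -t nfs -o rw,hard,noatime,rsize=1048576,wsize=1048576 storage1.internal:/export/pdfs /mnt/pdfs

     One thing worth benchmarking before committing: on EC2 it may be simpler to keep the PDFs and generated previews in S3 and have the Gearman workers fetch and store objects directly, which avoids running and scaling an NFS server at all.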

  • EngineX ignores Auth Basic?

    - by Miko
    I have configured nginx to password protect a directory using auth_basic. The password prompt comes up and the login works fine. However... if I refuse to type in my credentials, and instead hit Escape multiple times in a row, the page will eventually load w/o CSS and images. In other words, continuously telling the login prompt to go away will at some point allow the page to load anyway. Is this an issue with nginx, or my configuration? Here is my virtual host:

         server {
             server_name sub.domain.com;
             root /www/sub.domain.com/;

             location / {
                 index index.php index.html;
                 root /www/sub.domain.com;
                 auth_basic "Restricted";
                 auth_basic_user_file /www/auth/sub.domain.com;
                 error_page 404 = /www/404.php;
             }

             location ~ \.php$ {
                 include /usr/local/nginx/conf/fastcgi_params;
             }
         }

     My server runs CentOS + nginx + php-fpm + xcache + mysql.
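     A likely explanation for the symptom: only location / carries auth_basic, so requests matched by the location ~ \.php$ block are served without authentication, while the static assets under location / still require credentials -- which is exactly "the page loads without CSS and images" after cancelling the prompt. A sketch of the usual fix, moving the auth directives up to the server level so every location inherits them (paths copied from the config above):

         server {
             server_name sub.domain.com;
             root /www/sub.domain.com;

             # declared once at server level; inherited by all locations below
             auth_basic "Restricted";
             auth_basic_user_file /www/auth/sub.domain.com;

             location / {
                 index index.php index.html;
                 error_page 404 = /www/404.php;
             }

             location ~ \.php$ {
                 include /usr/local/nginx/conf/fastcgi_params;
             }
         }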

  • Change Windows 7 Explorer's Details Pane limits

    - by Paul
    For some reason, MS decided to completely kill the status bar's functionality in Win7 (and maybe Vista, but I don't know for sure). I have tried all possible options such as Classic Shell and so on. Basically, the one thing I miss most is seeing at a glance the total size of my selected files. I know I can press Alt+Enter or whatever, but that's not the point. The point is that the so-called 'details' pane stops providing details if more than 15 files are selected! WTH? Cannot understand the reason behind such a stupid arbitrary limit, that doesn't seem to be user-configurable at all. Anyway, what I'm looking for is a way to change that limit, either via the registry or otherwise. If changing the limit isn't possible, would it be possible for some programming genius to create a very small optimized light-weight non resource-hungry (you get the idea!) program whose sole purpose will be to automatically click the "Show more details..." link in the details pane after every file selection? For bonus points, the program should wait for a second or two after file selection is complete to do this, so that selecting multiple files in quick succession doesn't hit the HDD repeatedly. Any takers for the challenge? :)

  • Is it possible to download extremely large files intelligently or in parts via SSH from Linux to Windows?

    - by Andrew
    I have a ~35 GB file on a remote Linux Ubuntu server. Locally, I am running Windows XP, so I am connecting to the remote Linux server using SSH (specifically, I am using a Windows program called SSH Secure Shell Client version 3.3.2). Although my broadband internet connection is quite good, my download of the large file often fails with a Connection Lost error message. I am not sure, but I think that it fails because perhaps my internet connection goes out for a second or two every several hours. Since the file is so large, downloading it may take 4.5 to 5 hours, and perhaps the internet connection goes out for a second or two during that long time. I think this because I have successfully downloaded files of this size using the same internet connection and the same SSH software on the same computer. In other words, sometimes I get lucky and the download finishes before the internet connection drops for a second. Is there any way that I can download the file in an intelligent way -- whereby the operating system or software "knows" where it left off and can resume from the last point if a break in the internet connection occurs? Perhaps it is possible to download the file in sections? Although I do not know if I can conveniently split my file into multiple files -- I think this would be very difficult, since the file is binary and is not human-readable. As it is now, if the entire ~35 GB file download doesn't finish before the break in the connection, then I have to start the download over and overwrite the ~5-20 GB chunk that was downloaded locally so far. Do you have any advice? Thanks.
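     This is the problem resume-capable transfer tools solve; nothing has to change on the server beyond having rsync installed. A sketch assuming rsync exists on both ends (on Windows XP that typically means cwRsync or Cygwin -- an assumption, since the question only mentions the SSH Secure Shell client):

         # resumable copy over ssh; --partial keeps the partially downloaded file
         # so re-running the same command continues instead of starting over
         rsync --partial --progress -e ssh user@remote-server:/path/to/bigfile.bin /cygdrive/c/downloads/

         # if the file can be exposed over HTTP instead, wget can also resume:
         # wget -c http://remote-server/bigfile.bin

     After a dropped connection, simply re-run the rsync command; it reuses the partial file rather than re-transferring the gigabytes already downloaded.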

  • I want my logs sent to my mail with logrotate

    - by lericson
    Not strictly a question about programming as such, more of a log handling question. Anyway. My company has multiple clients, and each of these clients has a set of logs that I'd very much like to have sent to me by e-mail. Now, another prerequisite is that they're highlighted with simple HTML. All that is very well; I've managed to make a highlighter for the given log types. So what I do is use logrotate's prerotate stuff to send the logs as an e-mail message. Example:

         /var/log/a.log /var/log/b.log {
             daily
             missingok
             copytruncate
             prerotate
                 /usr/bin/python /home/foo/hilight_logs /var/log/{a,b}.log | /usr/sbin/sendmail -FLog\ mailer [email protected] [email protected]
             endscript
         }

     The problem with this approach is basically that logrotate sucks: it'll run the command for every log file specified in the specifier, and to my knowledge there's no way to know which of the log files is being handled. (Which wouldn't really help anyway.) Short of repeating the exact same logrotate stanza up to 10 times on different machines, the only thing I can do is get bogged down with log spam every night. And I grew tired of it today, so I ask.
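     If the goal is simply "run the mail command once per rotation instead of once per matching log", logrotate's sharedscripts directive does exactly that: the prerotate/postrotate script runs a single time for the whole pattern. A sketch based on the stanza above (untested; the highlighter invocation is copied verbatim):

         /var/log/a.log /var/log/b.log {
             daily
             missingok
             copytruncate
             sharedscripts
             prerotate
                 /usr/bin/python /home/foo/hilight_logs /var/log/{a,b}.log | \
                     /usr/sbin/sendmail -FLog\ mailer [email protected] [email protected]
             endscript
         }

     Newer logrotate releases also pass the matched log path(s) as an argument to the script, which would allow one stanza per client, but that behaviour is version-dependent and worth verifying against the logrotate shipped with Fedora 11.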

  • WinHttpCertCfg not importing certificate

    - by Ramon Zarazua
    I need to set up a deployment script that imports an SSL certificate that my service uses. I have tried importing with WinHttpCertCfg and with CertMgr, to no avail. Here are the command-line arguments I have tried to use with both:

         winhttpcertcfg.exe -i <certname>.pfx -c LOCAL_MACHINE\My -p <password> -a <user service runs as>

     and

         CertMgr.exe -add -all -s -r localMachine -c <cert name> My

     It seems from what I have investigated that CertMgr does not allow you to import certificates with a password, so I'd rather get winhttpcertcfg working. When I run them I get the following output:

         WinHttpCertCfg:
             Microsoft (R) WinHTTP Certificate Configuration Tool
             Copyright (C) Microsoft Corporation 2001.

         CertMgr:
             CertMgr Succeeded

     However, when I look in the local machine certificates in MMC, try to load the certificate from my service, list it out through winhttpcertcfg, or even look at the registry under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates\MY\Certificates, it is not found. I have tried all of the following:
     - Installing the cert manually (through the CertMgr.msc dialogs) -- that works.
     - The user installing is running as administrator.
     - The user installing has full access on the certificate.
     - The tools do print out an error when something is wrong (e.g. wrong password).
     - Tried it on multiple machines (all of them Server 2008 R2).
     At this point I am officially out of ideas. Thank you.
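     On Server 2008 R2, certutil can import a password-protected PFX directly into the Local Machine "Personal" (My) store, which sidesteps both tools above. A sketch, run from an elevated prompt (file name and password are placeholders):

         rem import the PFX into the local machine MY store
         certutil -f -p <password> -importPFX <certname>.pfx

         rem confirm it landed in the store
         certutil -store My

     certutil -importPFX targets the machine store when run elevated; if the service account then needs to read the private key, that permission still has to be granted separately (for example via the certificate's "Manage Private Keys..." dialog in the Certificates MMC, or winhttpcertcfg -g once the cert actually exists in the store).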

  • Apache httpd processes and PHP out of memory

    - by Ofri
    I have a VPS running Apache-PHP-MySQL on CentOS, with a single Drupal website installed. The VPS has 256 MB of RAM (which could be the root cause of all my problems... maybe I just need more). Whenever I try to open my website from multiple browser tabs (about 8... not 800) all at once, Apache crashes! I have this in the log:

         [Wed Oct 24 11:26:31 2012] [error] [client xxx] PHP Fatal error: Out of memory (allocated 28049408) (tried to allocate 201335 bytes) in xxx on line 2139, referer: xxx

     I have read many, many posts here, but I think there is something fundamental that I'm missing. If I understand correctly, some PHP script tried to allocate 200 KB after having allocated 28 MB, and failed to do so. First question: should this cause Apache to crash?
     Next, I looked at the 'top' command while I ran my little test. Indeed I see 7 httpd processes, each reserving about 30 MB, which explains why my RAM runs out. How do I prevent Apache from creating new processes until it's out of memory? I tried configuring /etc/httpd/conf/httpd.conf like this:

         <IfModule prefork.c>
             StartServers         1
             MinSpareServers      1
             MaxSpareServers      1
             ServerLimit          1
             MaxClients           1
             MaxRequestsPerChild  100
         </IfModule>

     But I got the exact same result! What am I missing? Thanks a lot!
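     For reference, a prefork block sized for a 256 MB VPS usually lands somewhere in this range rather than at 1 (with MaxClients 1, the CSS/image requests from a single page queue behind the HTML request). This is only a sketch: the numbers assume roughly 30 MB per httpd process as observed in top, they only take effect after a full "service httpd restart", and they are overridden if another <IfModule prefork.c> block appears later in the configuration:

         <IfModule prefork.c>
             StartServers          2
             MinSpareServers       1
             MaxSpareServers       3
             ServerLimit           5
             MaxClients            5
             MaxRequestsPerChild   200
         </IfModule>

         ; in php.ini, cap each request well below the VPS total
         memory_limit = 64M

     Note that the "Out of memory (allocated ...)" form of the PHP error means the operating system refused to hand out more memory (the box is out of RAM and swap), not that PHP's memory_limit was reached, so adding swap or RAM is usually part of the fix as well.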

  • VMware ESXi 4.1 snapshot of Server 2008 R2 machine generates 2 identical snapshots

    - by Peter
    I have 2 VMs that are failing to get Veeam backups, and it appears that the culprit is VMware snapshots. We are running vSphere ESXi 4.1 build 320092. We have multiple Server 2008 R2 machines that take snapshots fine, but with these two VMs, when I take a snapshot I get 2 identical snapshots a few seconds apart. The snapshot manager only shows 1 snapshot, but there are 2 files, 1 number off, that are identical sizes. There is only one disk on each VM, so that isn't the problem. Has anyone seen this behavior before, and do you know how to fix it? Here are the files after a bad snapshot:

         VM-XXX-000001-ctk.vmdk
         VM-XXX-000001-delta.vmdk
         VM-XXX-000001.vmdk
         VM-XXX-000002-ctk.vmdk
         VM-XXX-000002-delta.vmdk
         VM-XXX-000002.vmdk
         VM-XXX-2a659dbf.hlog
         VM-XXX-2a659dbf.vswp
         VM-XXX-Snapshot286.vmsn
         VM-XXX-aux.xml
         VM-XXX-ctk.vmdk
         VM-XXX-flat.vmdk
         VM-XXX-vss_manifests286.zip
         VM-XXX.nvram
         VM-XXX.vmdk
         VM-XXX.vmsd
         VM-XXX.vmx
         VM-XXX.vmxf
         vmware-20.log
         vmware-21.log
         vmware-22.log
         vmware-23.log
         vmware-24.log
         vmware-25.log
         vmware.log

     VM-XXX-000001.vmdk and VM-XXX-000002.vmdk are exactly the same size.

  • Dynamic DNS Updates with Wireless and Wired interfaces

    - by Phaedrus
    We have offices full of Windows and Mac users who obtain IP addresses from a Windows DHCP server, which in turn updates dynamic DNS entries. We are noticing major inconsistencies with the entries, and have found that the problem occurs more on Macs than on Windows, and even more when users frequently switch between the wired and wireless adapters, which makes sense, as this sequence occurs:
     1. The user enables the wired adapter and registers a proper DNS entry.
     2. The user enables the wireless adapter and registers a 2nd proper DNS entry.
     3. The user switches off wireless manually, and the 2nd entry improperly remains until scavenging.
     Our help desk folks rely heavily (maybe more than they should) on the dynamic entries as part of their business process. For example, the user submits a help desk ticket, and the staff member expects to be able to remote desktop to their machine by hostname, which is hyperlinked in the help desk ticketing app. We have implemented multiple solutions and band-aids for different symptoms of the problem, such as:
     - Using DNS reservations for Macintosh PCs
     - Using DNS scavenging to remove old records
     - Switching from a Cisco DHCP server to the Windows DHCP server
     But no matter what we do, it seems impossible to maintain perfect records. Has anyone encountered this problem before? What is industry best practice? Comments and suggestions are much appreciated. /P

  • Comprehensive solution for managing patches, event viewing, change management, inventory, etc

    - by Holocryptic
    I'm looking for a solution that incorporates most or all of the following: patch management, server event viewing/tracking, AD change management, ticketing with an internal/external KB, remote access (the ability to shadow user sessions or create new ones), imaging, and inventory. Our environment contains Windows servers and ESXi hosts (we're not completely virtual, but we're moving in that direction), plus various Cisco and Linksys switches and firewalls. This is a tall order, and I don't know if it can be done on a reasonable budget. I've looked and found some questions on SF that deal with some of this:
     http://serverfault.com/questions/72015/active-directory-management-tools-for-medium-sized-forest-less-than-1000-users
     http://serverfault.com/questions/4021/are-there-any-tools-to-do-change-management-with-active-directory-group-policy
     http://serverfault.com/questions/21752/what-is-a-good-patch-update-management-server
     What I'm ideally looking for is a reasonably cheap solution that integrates these features into a central interface. We're a non-profit, so money is a limiting factor (the cheaper, the better, but we have a max of $15k). What we are trying to avoid is having to deal with multiple vendors, while maintaining scalability (we're creating more sites that we'll have to manage). Is this possible, or will we have to cobble together something to make it work for us?

  • add_header directives in location overwriting add_header directives in server

    - by user64204
    Using nginx 1.2.1, I am able to add multiple headers using add_header as follows:

         server {
             listen 80;
             server_name localhost;
             root /var/www;

             add_header Name1 Value1;    <=== HERE
             add_header Name2 Value2;    <=== HERE

             location / {
                 echo "Nginx localhost site";
             }
         }

         GET / HTTP/1.1
         200 OK
         Name1: Value1
         Name2: Value2

     However, as soon as I use the add_header directive inside location, the other add_header directives under server are ignored:

         server {
             listen 80;
             server_name localhost;
             root /var/www;

             add_header Name1 Value1;    <=== HERE
             add_header Name2 Value2;    <=== HERE

             location / {
                 add_header Name3 Value3;    <=== HERE
                 add_header Name4 Value4;    <=== HERE
                 echo "Nginx localhost site";
             }
         }

         GET / HTTP/1.1
         200 OK
         Name3: Value3
         Name4: Value4

     The documentation says that both server and location are valid contexts and doesn't state that using add_header in one prevents using it in the other.
     Q1: Do you know if this is a bug or the intended behaviour, and why?
     Q2: Do you see other options to get this fixed than using the HttpHeadersMoreModule module?
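     This is the intended behaviour rather than a bug: add_header, like other array-type nginx directives, is inherited from the enclosing level only when the current level defines no add_header of its own. Short of more_set_headers from HttpHeadersMoreModule, the usual workaround is to repeat the common headers wherever any add_header appears, typically via an include file so they are written only once. A sketch (common_headers.conf is a hypothetical file name):

         # /etc/nginx/common_headers.conf
         add_header Name1 Value1;
         add_header Name2 Value2;

         server {
             listen 80;
             server_name localhost;
             root /var/www;
             include /etc/nginx/common_headers.conf;

             location / {
                 include /etc/nginx/common_headers.conf;
                 add_header Name3 Value3;
                 add_header Name4 Value4;
                 echo "Nginx localhost site";
             }
         }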

  • Insufficient channel capacity of 1GBit

    - by Roman S
    There is a caching server (Varnish): it receives data from Amazon S3 on request, saves it for some time and serves it to the client. We have run into the problem of insufficient channel capacity on 1 Gbit: peak load within a 4-hour window completely saturates the channel. Server performance is sufficient for now. Approximately 4.5 TB of data is transmitted per day, and more than 100 TB accumulates per month. The first thought that comes to mind is simply to add one more 1 Gbit port and sleep peacefully until 2 Gbit is no longer enough (which may happen quite quickly) or until one server can no longer handle it, and then just add new caching servers. But then we need a load balancer that will send requests for one and the same URL always to one and the same server (to avoid multiple copies of the same cached objects). Here are the questions:
     - Does the balancer need bandwidth equal to the sum of the bandwidth of all the caching servers?
     - What do we do if the balancer runs out of ports? Should we add more balancers, or solve the problem by means of round-robin DNS?
     - What are the standard approaches to such problems?
     - Can anyone recommend hosting companies that can solve this problem? We are interested in the American and European markets.
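     For the "same URL always to the same cache" requirement, URI-hash balancing is the standard approach. A proxy-type balancer does have to carry the full response traffic, which is why LVS in direct-routing mode or plain round-robin DNS across several balancers (or across the caches themselves, if some duplication is acceptable) is the usual answer once a single box's ports become the limit. A minimal HAProxy sketch of the URI-hash idea (names and addresses are placeholders):

         frontend http-in
             mode http
             bind *:80
             default_backend varnish_pool

         backend varnish_pool
             mode http
             balance uri            # hash the URI so a given object maps to one cache
             hash-type consistent   # adding/removing a cache remaps only a small share of URIs
             server cache1 10.0.0.11:80 check
             server cache2 10.0.0.12:80 check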

  • How do I keep a bridge enabled on a bonded interface?

    - by jlawer
    I'm working on setting up a pair of CentOS 6.3 servers that will run a couple of KVM VMs, and I have come across a problem setting up a bridge on a bond. I am using mode 4 (802.3ad) bonding on a pair of stacked Dell PowerConnect 5524 switches connecting to R320 servers. There are 2 links (1 to each switch) that form a Link Aggregation Group (802.3ad / LACP bonding). On top of the bond I have VLAN tagging. I've verified this is a problem on multiple other bonding modes, so it isn't just a mode 4 issue. I am testing what happens when 1 link is dropped (i.e. a switch dies, a cable breaks, etc.). If I don't have a bridge (for KVM), everything works fine and failover happens as expected. If I have the bridge enabled, it works fine until failover (unplugging a cable). When failover happens, /var/log/messages shows the slave link going down, followed within a second by:

         kernel: br1: port 1(bond0.8) entering disabled state

     The thing is, /proc/net/bonding/bond0 shows the link is up as expected (simply with only 1 slave instead of 2). If I plug the cable back in, it recovers and brings the bridge back to an enabled state. I have actually tested this while a ping is occurring, and if the timing is right a packet will actually leave the system after the link is lost, but before the disabled message occurs. I assumed this disabled state was STP, but I have disabled STP in the bridge configuration and the issue still occurs; brctl showstp br1 still shows the link as disabled when it is running without a slave. I also switched between the NICs in the server (I have 2x Broadcom and 4x Intel); it doesn't matter which configuration I have. Does anyone know of a way to force the bridge to stay enabled, or why it's detecting the bond as disabled when it isn't?

  • Tunneling a public IP to a remote machine

    - by Jim Paris
    I have a Linux server A with a block of 5 public IP addresses, 8.8.8.122/29. Currently, 8.8.8.122 is assigned to eth0, and 8.8.8.123 is assigned to eth0:1. I have another Linux machine B in a remote location, behind NAT. I would like to set up a tunnel between the two so that B can use the IP address 8.8.8.123 as its primary IP address. OpenVPN is probably the answer, but I can't quite figure out how to set things up (topology subnet or topology p2p might be appropriate. Or should I be using Ethernet bridging?). Security and encryption are not a big concern at this point, so GRE would be fine too -- machine B will be coming from a known IP address and can be authenticated based on that. How can I do this? Can anyone suggest an OpenVPN config, or some other approach, that could work in this situation? Ideally, it would also be able to handle multiple clients (e.g. share all four spare IPs with other machines), without letting those clients use IPs to which they are not entitled.
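     Purely as a sketch of the GRE option (all angle-bracket values are placeholders; it assumes the upstream router delivers the /29 on-link to A's segment, so proxy ARP on A attracts traffic for .123, and that the NAT in front of B passes GRE, IP protocol 47 -- if it does not, OpenVPN in routed tun mode with a client-specific ifconfig-push would be the equivalent):

         # on server A (owns 8.8.8.122/29)
         ip addr del 8.8.8.123/29 dev eth0               # stop answering for .123 locally
         ip tunnel add gre1 mode gre local 8.8.8.122 remote <B_public_ip> ttl 255
         ip link set gre1 up
         ip route add 8.8.8.123/32 dev gre1
         echo 1 > /proc/sys/net/ipv4/ip_forward
         echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

         # on machine B (behind NAT)
         ip tunnel add gre1 mode gre local <B_lan_ip> remote 8.8.8.122 ttl 255
         ip link set gre1 up
         ip addr add 8.8.8.123/32 dev gre1
         ip route add 8.8.8.122/32 via <B_lan_gateway>    # keep a path to A outside the tunnel
         ip route replace default dev gre1                # send everything else out as 8.8.8.123

     Handing out the other spare IPs to additional machines would mean one tunnel (and one /32 route on A) per client; with OpenVPN, the per-client ccd/ifconfig-push mechanism enforces which client may use which address, which GRE by itself does not.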

  • Monitoring AWS Systems Behind ElasticBeanStalk

    - by A. Avadis
    So I'm getting a company set up in the Amazon cloud -- creating IaaS protocols/solutions/standardized implementations, etc. -- while also being the sysadmin for individual systems, app environments, and day-to-day uptime. One of the biggest issues I'm having is tracking various system/application logs, as well as logging/monitoring/archiving system metrics like memory usage, CPU usage, etc., in a centralized fashion, e.g. Nagios + Urchin. The biggest impediment to my endeavors is the following: the company application is deployed in the form of a Java *.WAR file, uploaded to an Elastic Beanstalk application environment that load-balances and auto-scales between 3 (min) and 10 (max) servers, and the EC2 instances that run the application are fired up and disposed of ad hoc. That is to say, I can't monitor the individual EC2 instances for very long, because so many are being terminated and then auto-provisioned/auto-scaled on the fly -- so I'd constantly have to "monitor what I'm monitoring", and continuously remove/add EC2 machine addresses to my monitoring lists. Is there some way to use monitoring tools like Zabbix or Nagios to monitor the Elastic Beanstalk environment, and have them automatically add new EC2 instances and remove terminated/failed ones from the monitoring list? Furthermore, is there anything I can do with Graylog to achieve similar results with the aggregation/centralization of my application logs from multiple EC2 instances into one consolidated set of logs/events? If not Graylog, is there anything like it that can automatically detect which EC2 members are being added to or removed from the environment, and collect the logs from them automatically? Any and all advice or direction is appreciated. Thanks much, and cheers!

  • Does IIS Sometimes Allocate More Worker Processes Than Configured?

    - by Paul Williams
    We have an IIS 7.5 web service on Windows Server 2008 that handles WCF requests from C# clients. This service is configured to have Maximum Worker Processes = 1, so it is not a web garden. IIS is set up to recycle itself at the same time every day (3 AM). I am trying to debug gnarly connection issues, so I wanted to be sure the application pool was not recycling itself. I configured the pool to log an event when it recycles itself. To my surprise, I see the following entries in the System event log:

         Level: Information
         Date/Time: 3/23/2012 3:00:00 AM - Source: WAS - Event ID: 5076
         A worker process with process id of '6636' serving application pool 'MyAppPool' has requested a recycle because it reached its scheduled recycle time.

         Level: Information
         Date/Time: 3/23/2012 2:59:39 AM - Source: WAS - Event ID: 5076
         A worker process with process id of '9364' serving application pool 'MyAppPool' has requested a recycle because it reached its scheduled recycle time.

     IIS is correctly recycling the application pool at 3 AM. However, I do not understand why I would be getting two recycle events in the log within a few seconds of each other. The maximum number of processes is 1. Does IIS sometimes allocate multiple processes for an application pool that is specified as having 1 process?
     -- edit --
     I connected at about 4 PM today and only saw 1 w3wp.exe process. There are no other event log entries that would indicate a crash.
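     One thing worth ruling out before suspecting a web garden: with the default overlapped recycling, IIS briefly runs the old and the new w3wp.exe side by side during a recycle, so seeing two PIDs around the recycle window does not by itself mean maxProcesses is being exceeded. A couple of appcmd checks/tweaks (a sketch; run from %windir%\system32\inetsrv):

         rem confirm the pool really is limited to one worker process
         appcmd list apppool "MyAppPool" /text:processModel.maxProcesses

         rem optionally disable overlapped recycling so the old process exits
         rem before the new one starts (at the cost of a brief outage at 3 AM)
         appcmd set apppool "MyAppPool" /recycling.disallowOverlappingRotation:true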

  • Internet Explorer will not open

    - by KCotreau
    I recently migrated a company to a Microsoft domain environment, logged the users in under their new domain accounts, and then copied the old profiles to the new profiles. I am not sure if that is related, since they did not complain about it right away and it may have been a subsequent patch or something, but I have two XP computers that will not open IE8. You click on it, and nothing happens graphically at all, but you can see a process in Task Manager. If you click many times, you get multiple instances; often TWO will appear per click. It still works in the old profile, so it is specific to the profile, and I would like to fix it rather than blow it away. Here is what I have done without success:
     - Tried opening without add-ons (the shortcut in System Tools)
     - Reinstalled IE8
     - Ran SFC /SCANNOW
     - Found a script that was supposed to repair any registry entries, and ran it
     - Tried exporting the whole HKCU\Software\Microsoft\Internet Explorer key and deleting it, hoping that when I restarted IE it would recreate it... No joy. I restored it.
     Any ideas?

  • IIS replication - Is it possible

    - by Ian
    Hi All, I have a requirement from a client to provide a centralised system that all his satellite branches can work on. Currently this is an ASP.NET Web Forms app running under IIS 7 on Windows Server 2008 R2, using a SQL backend. The client has now requested that each branch have a local server, so that in the event that the internet connection is down, the branch's productivity does not suffer. His other request is that everything can be updated via the central hub, with the updates filtering down to the individual sites by some mechanism. What are my options here? I see the following as possible options:
     - Multiple redundant internet connections controlled by load balancers
     - SQL replication for the DB (what is better: snapshot, merge or transactional?)
     - Rolling my own IIS sync service that periodically checks whether there is a new version of the web app and downloads it (I hope there are better options than this)
     - Something way better I don't yet know about (I hope this is the one I need)
     One of my client's concerns is that the branches are often in very remote areas where everything from technicians to internet is hard to find and very scarce. Any ideas, suggestions, tips etc. are welcome. Thanks all.
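     For the "push the web app from the hub to each branch" part specifically, Web Deploy (msdeploy) already does incremental synchronisation, so option 3 would not have to be hand-rolled. A sketch assuming the Web Deploy remote agent is installed on the branch servers (server and path names are placeholders):

         rem push site content from the hub to one branch IIS server
         msdeploy -verb:sync ^
             -source:contentPath="C:\inetpub\wwwroot\BranchApp" ^
             -dest:contentPath="C:\inetpub\wwwroot\BranchApp",computerName=BRANCH01

         rem or replicate the whole site definition (bindings, app pool settings) as well
         msdeploy -verb:sync ^
             -source:appHostConfig="Default Web Site" ^
             -dest:appHostConfig="Default Web Site",computerName=BRANCH01

     On the data tier, merge replication is the usual fit when branches must keep writing while offline and changes flow both ways; transactional replication suits one-way, hub-to-branch distribution.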

  • fwbuilder/iptables manually scripted + autogenerated rules at startup?

    - by Jakobud
    Fedora 11. Our previous IT guy set up iptables rules on our firewall in a way that is confusing me, and he didn't document any of it. I was hoping someone could help me make some sense of it. The iptables service is obviously starting at startup, but the /etc/sysconfig/iptables file was untouched (default values). I found that in /etc/rc.local he was doing this:

         # We have multiple ISP connections on our network.
         # The following is about 50+ rules to route incoming and outgoing
         # information. For example, certain internal hosts are specified here
         # to use ISP A connection while everyone else on the network uses
         # ISP B connection when access the internet.
         ip rule add from 99.99.99.99 table Whatever_0
         ip rule add from 99.99.99.98 table Whatever_0
         ip rule add from 99.99.99.97 table Whatever_0
         ip rule add from 99.99.99.96 table Whatever_0
         ip rule add from 99.99.99.95 table Whatever_0
         ip rule add from 192.168.1.103 table ISB_A
         ip rule add from 192.168.1.105 table ISB_A
         ip route add 192.168.0.0/24 dev eth0 table ISB_B
         # etc...

     and then near the end of the file, AFTER all the ip rules he just declared, he has this:

         /root/fw/firewall-rules.fw

     He's executing the firewall rules file that was auto-generated by fwbuilder. Some questions:
     1. Why is he declaring all these ip rules in rc.local instead of declaring them in fwbuilder like all the other rules? Any advantage or necessity to this? Or is this just a poorly organized way to implement firewall rules?
     2. Why is he declaring ip rules BEFORE executing the fwbuilder script? I would assume that one of the first things the fwbuilder script does is get rid of any existing rules before declaring all the new ones. Am I wrong about this? If that were the case, the fwbuilder script would basically just delete all the ip rules that were defined in rc.local. Does this make any sense?
     3. Why is he executing all this stuff at startup in rc.local instead of just using iptables-save to keep the firewall settings in /etc/sysconfig/iptables, which would be applied at startup?

  • Gathering buslogic SCSI hardware and virtual machine operating system

    - by Julian
    I'm trying to use PowerShell to get the BusLogic SCSI hardware from several virtual servers and get the operating system of each specific server. I've managed to get the specific SCSI hardware that I want to find with my code, however I'm unable to figure out how to properly get the operating system of each of the servers. Also, I'm trying to send all the data that I find into a CSV log file, however I'm unsure of how you can make a PowerShell script create multiple columns. Here is my code (almost works, but something's wrong):

         $log = "C:\Users\me\Documents\Scripts\ScsiLog.csv"

         Get-VM | Foreach-Object {
             $vm = $_
             Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic" } |
                 Foreach-Object { get-VMGuest -VM $vm } |
                 Foreach-Object { Write-output $vm.Guest.VmName >> $log }
         }

     I don't receive any errors when I run this code, however whenever I run it I'm only getting the name of the servers and not the OS. Also I'm not sure what I need to do to make the OS appear in a different column from the name of the server in the CSV log that I'm creating. What do I need to change in my code to get the OS version of each virtual machine and output it in a different column in my CSV log file?
     EDIT: Here's a more in-depth look at things I've tried that have all failed:

         Get-VM | Foreach-Object {
             $vm = $_
             $svm = Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic" }
             Foreach-Object { get-VMGuest -VM $svm } | Foreach-Object { Write-output $svm >> $log }
         }

         #Get-VM | Foreach-Object {
         #    $vm = $_
         #    Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic"} #| write-host $vm
         #    | Foreach-Object {
         #        #get-VMGuest -VM $_ |
         #        #write-host $vm
         #        #get-VMGuest -VM $vm } | Foreach-Object{
         #        #write-output $vm.VmName >> $log
         #        #write-output $vm.guest.VmName, get-VmGuest -VM $vm >> $log NO GOOD
         #        Write-host $vm.Guest.VmName #+ get-vmGuest -vm $VM >> $log
         #    }
         #}

     I'm not sure why Get-VMGuest fails though. I'm getting the SCSI hardware, filtering the hardware to only get BusLogic, and then wanting to get the operating system of just the filtered VMs. I don't see where my code fails.
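     A sketch of one way to get both columns, assuming VMware PowerCLI (Get-VM / Get-ScsiController / Get-VMGuest): select the VM name plus a calculated property for the guest OS and let Export-Csv create the columns. OSFullName is the guest OS string PowerCLI exposes; the CSV path mirrors the question:

         $log = "C:\Users\me\Documents\Scripts\ScsiLog.csv"

         Get-VM |
             Where-Object { Get-ScsiController -VM $_ |
                            Where-Object { $_.Type -eq "VirtualBusLogic" } } |
             Select-Object Name,
                           @{Name = 'GuestOS'; Expression = { (Get-VMGuest -VM $_).OSFullName }} |
             Export-Csv -Path $log -NoTypeInformation

     The original loop only ever wrote $vm.Guest.VmName, which is why the OS never showed up, and Write-Output with >> produces a plain text file rather than columns; Export-Csv takes care of the column layout.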

  • Apache2 server running Ruby on Rails application has GoDaddy cert that works in Chrome/Firefox and IE 9 but not IE 8

    - by ryan
    I have a Rails application up on a Linode Ubuntu 11 server, running Apache2. I have a cert purchased from GoDaddy (where we also bought our domain), and the cert is installed on my server. Part of my virtual host file:

         ServerName my_site.com
         ServerAlias www.my_site.com
         SSLEngine On
         SSLCertificateFile /path/my_site.com.crt
         SSLCertificateKeyFile /path/my_site.com.key
         SSLCertificateChainFile /path/gd_bundle.crt

     The cert works fine in Chrome, Firefox and IE 9+, but in IE 8 and below I get this error:

         There is a problem with this website's security certificate. The security certificate presented by this website was issued for a different website's address.

     I'm hosting multiple Rails apps on this same server (4 right now, plus some old PHP sites that don't need SSL). I have tried googling every possible combination of the error/situation that I could think of, but at this point I'm shooting in the dark. The closest I could come up with is that some versions of IE don't support SNI. But that doesn't seem to apply here, because I am getting the warning on Windows 7 machines running IE 8, and the SNI limitation only seemed to apply to IE 8 if the operating system was Windows XP. So why is this cert being accepted by all browsers but giving me a warning in IE 8?
     Edit: Doing a little more digging, I figured out some more. It turns out this is affecting IE 9 as well. However, the problem seems to be that IE is not traversing the SSL chain to get to the right cert. Firefox and Chrome, when I go to view the certificate, show the correct one, but IE is showing one of our other sites' certificates. REAL QUESTION HERE: That being the case, why is IE not getting the right certificate when others are, and how do I fix it?
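     If IE (even on Vista/7, where it does support SNI) is being handed a different site's certificate, Apache is answering with its default SSL virtual host for that connection. That usually means name-based vhosts on port 443 are not fully enabled, or this site is simply losing to whichever *:443 vhost is defined first, since the first one acts as the default for any client that does not send SNI. A sketch of what to double-check for Apache 2.2 (only the SSL directives are taken from the question; the rest is placeholder):

         # in ports.conf / the main config: enable name-based vhosts on 443
         NameVirtualHost *:443
         Listen 443

         <VirtualHost *:443>
             # the FIRST *:443 vhost in the config is what non-SNI clients receive,
             # so define either this site or a deliberate default site first
             ServerName my_site.com
             ServerAlias www.my_site.com
             SSLEngine On
             SSLCertificateFile      /path/my_site.com.crt
             SSLCertificateKeyFile   /path/my_site.com.key
             SSLCertificateChainFile /path/gd_bundle.crt
         </VirtualHost>

     If clients that genuinely lack SNI (IE on Windows XP) must also work, the only clean options are a dedicated IP address for this vhost or a certificate that covers all the hosted names.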

  • Is there any way to force my Linux box to always boot up with a self-assigned IP address?

    - by Jeremy Friesner
    This is perhaps an unusual request: I'm trying to get a Debian Linux box to always give itself a self-assigned IP address (i.e. 169.254.x.y) on boot. In particular, I want it to do that even when there is a DHCP server present on the LAN. That is, it should not request an IP address from the DHCP server. From what I can see in the "man interfaces" text, there is an option for "manual", and an option for "dhcp". Manual assignment won't do, since I need multiple boxes to work on the same LAN without requiring any manual configuration... and "dhcp" does what I want, but only if there is no DHCP server on the LAN. (A requirement is that the functionality of these boxes should not be affected by the presence or absence of a DHCP server). Is there a trick that I can use to get this behavior? EDIT: By "no manual configuration", I mean that I should be able to take this box (headless) to any LAN anywhere, plug in the Ethernet cable, and have it do its thing. I shouldn't have to ssh to the box and edit files to get it working each time it is moved to a different LAN.
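     Debian's ifupdown has a method for this once the avahi-autoipd package is installed: the ipv4ll method always self-assigns a 169.254.x.y address and never contacts a DHCP server, whether or not one is present. A sketch of /etc/network/interfaces (interface name assumed to be eth0):

         # /etc/network/interfaces
         auto eth0
         iface eth0 inet ipv4ll

     Because the address is chosen by link-local probing rather than manual assignment, multiple boxes configured this way can share a LAN without coordination, which matches the "no per-site configuration" requirement.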

  • Additional Security Measures for Syslog over SSH

    - by Eric
    I'm currently working on setting up some secure syslog connections between a few Fedora servers. This is my current setup:

         192.168.56.110 (syslog-server)  <----  192.168.57.110 (syslog-agent)

     From the agent, I am running this command:

         ssh -fnNTx -L 1514:127.0.0.1:514 [email protected]

     This works just fine. I have rsyslog on the syslog-agent pointing to @@127.0.0.1:1514, and it forwards everything to the server correctly on port 514 via the tunnel. My issue is that I want to be able to lock this down. I am going to use SSH keys so this is automated, because there will be multiple agents talking to the server. Here are my concerns:
     1. Someone getting on the syslog-agent and logging into the server directly. I have taken care of this by ensuring that syslog_user has a shell of /sbin/nologin, so that user can't get a shell at all.
     2. I don't want someone to be able to tunnel another port over SSH, e.g. 6666:127.0.0.1:21. I know my first line of defense against this is to just not have anything listening on those ports, and it's not an issue. However, I want to be able to lock this down somehow. Are there any sshd_config settings on the server that I can use to make it so that only port 514 can be tunneled over SSH?
     Are there any other major security concerns I'm overlooking at this point? Thanks in advance for your help/comments.
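     sshd has a directive aimed at exactly concern 2: PermitOpen restricts which host:port targets a -L forward may connect to, and it can be scoped to the syslog account with a Match block; the per-key permitopen option in authorized_keys does the same thing key-by-key. A sketch (the user name is taken from the question, everything else is standard OpenSSH syntax):

         # /etc/ssh/sshd_config on the syslog server
         Match User syslog_user
             AllowTcpForwarding yes
             PermitOpen 127.0.0.1:514       # only the syslog port may be a forward target
             X11Forwarding no
             AllowAgentForwarding no
             ForceCommand /bin/false        # no commands; tunnel only

         # alternatively, per key in ~syslog_user/.ssh/authorized_keys
         permitopen="127.0.0.1:514",no-pty,no-agent-forwarding,no-X11-forwarding ssh-rsa AAAA... agent1

     With ForceCommand /bin/false the account stays tunnel-only even if its shell were ever changed, and the existing ssh -fnNTx invocation already requests no command, so nothing on the agent side needs to change.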
