Search Results

Search found 11759 results on 471 pages for 'isolation level'.

Page 352 of 471

  • A star vs internet routing pathfinding

    - by alan2here
    In many respects, pathfinding algorithms like A* for finding the shortest route through a graph are similar to the pathfinding the internet performs when routing traffic. However, the pathfinding that routers perform seems to have remarkable properties. As I understand it:

    - It's very performant.
    - New nodes can be added at any time, using a free address from a finite (not tree-like) address space.
    - It's real routing: like A*, there's never any doubling back, for example.
    - IP addresses don't have to be geographically nearby.
    - The network reacts quickly to changes in the network's shape, for example if a line goes down.
    - Routers share information, and it takes time for new IPs to become known everywhere, but presumably no router has to store a list of all the addresses each of its directions leads most directly to.

    I can't find this information elsewhere, but I don't know where to look or what search terms to use. I'm looking for a basic, general, high-level description of the algorithm's workings, from the point of view of an individual router.
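
    A minimal sketch of the distance-vector idea, one building block behind protocols like RIP (BGP and OSPF are more involved); the node names, costs and the advertised vector here are all hypothetical:

        #!/usr/bin/env bash
        # Hypothetical distance-vector step at router A (bash 4+).
        # A keeps only a best cost and a next hop per destination -- never a full path.
        declare -A cost=( [B]=1 [C]=4 )      # A's current best known costs
        declare -A nexthop=( [B]=B [C]=C )   # where A forwards packets for each destination

        # Neighbour B advertises its own vector as "destination cost" pairs.
        advert=$'C 1\nD 2'
        link_cost_to_B=1

        while read -r dest c; do
          new=$(( link_cost_to_B + c ))
          if [[ -z ${cost[$dest]:-} || $new -lt ${cost[$dest]} ]]; then
            cost[$dest]=$new; nexthop[$dest]=B   # relax: a better route via B was found
          fi
        done <<< "$advert"

        for d in "${!cost[@]}"; do echo "to $d: cost ${cost[$d]} via ${nexthop[$d]}"; done

    Each router repeats this with every neighbour's advertisement, which is why no router needs the whole map: only a cost and a next hop per destination.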

    Read the article

  • Firewall for internal networks

    - by Cylindric
    I have a virtualised infrastructure here, with separate networks (some physical, some just VLANs) for iSCSI traffic, VMware management traffic, production traffic, etc. The recommendations are of course not to allow access from the LAN to the iSCSI network, for obvious security and performance reasons, and the same between DMZ/LAN, etc. The problem I have is that in reality, some services do need access across the networks from time to time:

    - The system monitoring server needs to see the ESX hosts and the SAN for SNMP.
    - vSphere guest console access needs direct access to the ESX host the VM is running on.
    - VMware Converter wants access to the ESX host the VM will be created on.
    - The SAN email notification system wants access to our mail server.

    Rather than wildly opening up the entire network, I'd like to place a firewall spanning these networks, so I can allow just the access required (see the sketch below). For example:

    - SAN -> SMTP server, for email
    - Management -> SAN, for monitoring via SNMP
    - Management -> ESX, for monitoring via SNMP
    - Target server -> ESX, for VMware Converter

    Can someone recommend a free firewall that will allow this kind of thing without too much low-level tinkering with config files? I've used products such as IPCop before, and it seems possible to achieve this with that product if I re-purpose their ideas of "WAN" and "WLAN" (the red/green/orange/blue interfaces), but I was wondering if there were any other accepted products for this sort of thing. Thanks.
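
    Not an answer to the product question, but here is the described rule set sketched in raw iptables form, to make the intent concrete; the addresses and subnets are placeholders for the real VLANs:

        #!/bin/sh
        # Hypothetical sketch of the allow-list on a firewall routing between the VLANs.
        iptables -P FORWARD DROP                                             # default deny between networks
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT   # replies to allowed flows

        # SAN -> SMTP server, for email notifications
        iptables -A FORWARD -s 10.0.3.10 -d 10.0.1.25 -p tcp --dport 25 -j ACCEPT
        # Management -> SAN and ESX, for monitoring via SNMP
        iptables -A FORWARD -s 10.0.2.0/24 -d 10.0.3.0/24 -p udp --dport 161 -j ACCEPT
        iptables -A FORWARD -s 10.0.2.0/24 -d 10.0.4.0/24 -p udp --dport 161 -j ACCEPT
        # Target server -> ESX, for VMware Converter (902 is the usual host agent port)
        iptables -A FORWARD -s 10.0.1.50 -d 10.0.4.0/24 -p tcp --dport 902 -j ACCEPT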

    Read the article

  • Performance: Nginx SSL slowness or just SSL slowness in general?

    - by Mauvis Ledford
    I have an Amazon Web Services setup with an Apache instance behind Nginx, with Nginx handling SSL and serving everything but the .php pages. In my ApacheBench tests I'm seeing this for my most expensive API call (which caches via Memcached):

        100 concurrent calls to API call (http):  115ms (median)  260ms (max)
        100 concurrent calls to API call (https): 6.1s (median)   11.9s (max)

    I've done a bit of research, disabled the most expensive SSL ciphers and enabled SSL caching (I know it doesn't help in this particular test). Can you tell me why my SSL is taking so long? I've set up a massive EC2 server with 8 CPUs, and even applying consistent load to it only brings it up to 50% total CPU. I have 8 Nginx workers set and a bunch of Apache workers. Currently this whole setup is on one EC2 box, but I plan to split it up and load balance it. There have been a few questions on this topic, but none of those answers (disable expensive ciphers, cache SSL) seem to do anything. Sample results below:

        $ ab -k -n 100 -c 100 https://URL
        This is ApacheBench, Version 2.3 <$Revision: 655654 $>
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Licensed to The Apache Software Foundation, http://www.apache.org/

        Benchmarking URL.com (be patient).....done

        Server Software:        nginx/1.0.15
        Server Hostname:        URL.com
        Server Port:            443
        SSL/TLS Protocol:       TLSv1/SSLv3,AES256-SHA,2048,256

        Document Path:          /PATH
        Document Length:        73142 bytes

        Concurrency Level:      100
        Time taken for tests:   12.204 seconds
        Complete requests:      100
        Failed requests:        0
        Write errors:           0
        Keep-Alive requests:    0
        Total transferred:      7351097 bytes
        HTML transferred:       7314200 bytes
        Requests per second:    8.19 [#/sec] (mean)
        Time per request:       12203.589 [ms] (mean)
        Time per request:       122.036 [ms] (mean, across all concurrent requests)
        Transfer rate:          588.25 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:       65  168  64.1    162     268
        Processing:   385 6096 3438.6   6199  11928
        Waiting:      379 6091 3438.5   6194  11923
        Total:        449 6264 3476.4   6323  12196

        Percentage of the requests served within a certain time (ms)
          50%   6323
          66%   8244
          75%   9321
          80%   9919
          90%  11119
          95%  11720
          98%  12076
          99%  12196
         100%  12196 (longest request)
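
    For reference, a minimal sketch of the nginx directives usually involved in this sort of tuning; the directive names are from the stock SSL module, but the values are assumptions to adapt, not recommendations:

        # Sketch: SSL session reuse + keepalive, in the http/server block.
        ssl_session_cache    shared:SSL:10m;   # share the session cache across workers
        ssl_session_timeout  10m;
        keepalive_timeout    65;               # lets ab -k actually reuse connections
        ssl_ciphers          HIGH:!aNULL:!MD5; # trim the most expensive/broken ciphers

    Note the "Keep-Alive requests: 0" in the ab output above: every one of the 100 requests paid for a full handshake, which is exactly where SSL hurts most.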

    Read the article

  • Creating an Apache virtual host, but updating Active Directory DNS

    - by SnoConeGod
    Hello all, I'm just getting started with the Zend Framework and am following a recommended procedure where I am supposed to create an Apache virtual host for the public-facing portion of a new Zend project. I don't THINK I had any issues creating the virtual host, but my knowledge of the required DNS changes is rather lacking. The dev server I'm using is on a Microsoft Windows Active Directory domain, so I've added A records for both the server name and the subdomain. Still, trying to browse to the site from a Windows 7 PC isn't working properly. What am I missing? What's the proper set of steps for getting an Apache-served subdomain to appear properly in a peer computer's web browser? Details below:

        server: Debian, command-line only, freshly installed today with the Zend Server CE LAMP stack
        server name: ZENDEV
        subdomain: SQUARE.ZENDEV
        AD domain functional level: 2008 mixed (run by a mishmash of 03 and 08 servers)
        attempting to visit the sites: http://square.zendev and http://square.zendev.domain.local
          (name of domain redacted, but using the .local (not .com) suffix)

    Virtual host added to httpd.conf:

        NameVirtualHost *:80
        <VirtualHost *:80>
            DocumentRoot "/var/www/square/public"
            ServerName square.localhost
        </VirtualHost>

    Is this only a problem with DNS? Or with DNS and my virtual host? Thanks! John
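
    One detail that stands out: the vhost's ServerName (square.localhost) doesn't match either name being requested. A hedged guess at the intended block, assuming the A records point at ZENDEV:

        # Hedged sketch -- hostnames taken from the question, not verified.
        NameVirtualHost *:80
        <VirtualHost *:80>
            DocumentRoot "/var/www/square/public"
            ServerName  square.zendev
            ServerAlias square.zendev.domain.local
        </VirtualHost>

    (With only one vhost defined, Apache serves it for any hostname anyway, so DNS is still the first thing to verify, e.g. with nslookup square.zendev from the Windows 7 PC.)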

    Read the article

  • Why does notepad crash on desktop files in the save-as dialog?

    - by deepc
    Here's a puzzling problem; maybe somebody has an idea, because right now I am out of ideas. On Win7 64-bit, the following crashes Notepad:

    1. On the Desktop, right-click and select "New | Text Document". This creates "New Text Document.txt".
    2. Right-click on that file and select "Edit". This opens Notepad with the empty file.
    3. Select "File | Save as": Notepad crashes and Win7 reports that "Notepad has stopped working".

    Now, move the file to c:\temp and repeat steps 2 and 3: no crash this time, and the save-as dialog appears normally. I can create similar steps for the "open" dialog. Things I have tried:

    - Safe mode: does not help, same problem.
    - Create a new user and try again logged in as that user: no crash.
    - Name the file differently, or create it elsewhere and then move it to the Desktop: same problem.
    - Use Wordpad instead: same problem.
    - Review shell extensions with ShellExView: nothing extraordinary there.
    - Stare at the event log entries for each of the crashes. Does not enlighten me.
    - At the time of the crash, look at the Process Explorer stack view. It hangs in a function "TaskDialog".
    - sfc.exe /scannow repaired some files, but the problem persists.

    This is what the event log entries look like:

        Log Name:      Application
        Source:        Application Error
        Date:          14.12.2010 00:33:48
        Event ID:      1000
        Task Category: (100)
        Level:         Error
        Keywords:      Classic
        User:          N/A
        Description:
        Faulting application name: NOTEPAD.EXE, version: 6.1.7600.16385, time stamp: 0x4a5bc9b3
        Faulting module name: COMCTL32.dll, version: 6.10.7600.16661, time stamp: 0x4c6f6e4b
        Exception code: 0xc000041d
        Fault offset: 0x00000000000db770
        Faulting process id: 0x198
        Faulting application start time: 0x01cb9b1e140ab92a
        Faulting application path: C:\Windows\system32\NOTEPAD.EXE
        Faulting module path: C:\Windows\WinSxS\amd64_microsoft.windows.common-controls_6595b64144ccf1df_6.0.7600.16661_none_fa62ad231704eab7\COMCTL32.dll

    What else should I try, short of dumping my user profile and starting over with a new one? Thanks...

    Read the article

  • How do I set up a fully featured small business network?

    - by JoshReedSchramm
    This has the potential to be a very large question, but I recently acquired a few rack-mount servers and the hardware necessary to run them. Unfortunately I'm a programmer with very little understanding of how to set up a good working network, so I'm hoping someone on here might be able to help. What I want to do is run a domain with a series of subdomains which would all be externally accessible. The setup would live inside my home, and my internet connection is your run-of-the-mill cable modem (which means a dynamic IP). I want to be able to set up a couple of sites, specifically:

    - www.mycompany.com (mycompany.com with no subdomain would redirect to this)
    - build.mycompany.com (for my continuous integration server)
    - ruby.mycompany.com (for Ruby projects)
    - win.mycompany.com (for Windows projects)
    - etc.

    Additionally this is still my home network, so our personal machines need to be able to get on via wifi with at least the same security we have now through an out-of-the-box router from Best Buy. I'm thinking I need a DNS server and a DHCP server, and one of those would run either No-IP or DynDNS to accommodate the dynamic IP (a rough DNS sketch follows below). I don't necessarily need mail, but it might be helpful to have some sort of mail server I could use for testing; it doesn't need to get out to the greater internet, though. So how do I set up this kind of network?

    tl;dr: Need to know how to set up your standard office-style network in my home off my normal consumer-level cable modem connection.
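
    For the internal half of the DNS story, a small sketch of what it could look like with dnsmasq; the host IPs are placeholders, and the public, dynamic-IP side would still be handled by a No-IP/DynDNS client updating the outward-facing records:

        # /etc/dnsmasq.conf -- hypothetical internal DNS + DHCP for the home LAN
        address=/www.mycompany.com/192.168.1.10
        address=/build.mycompany.com/192.168.1.11
        address=/ruby.mycompany.com/192.168.1.12
        address=/win.mycompany.com/192.168.1.13
        dhcp-range=192.168.1.100,192.168.1.200,12h   # leases for the wifi clients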

    Read the article

  • Mac OS X Disk Encryption - Automation

    - by jfm429
    I want to set up a Mac Mini server with an external drive that is encrypted. In Finder, I can use the full-disk encryption option. However, for multiple users, this could become tricky. What I want to do is encrypt the external volume, then set things up so that when the machine boots, the disk is unlocked so that all users can access it. Of course permissions need to be maintained, but that goes without saying. What I'm thinking of doing is setting up a root-level launchd script that runs once on boot and unlocks the disk. The encryption keys would probably be stored in root's keychain. So here's my list of concerns:

    - If I store the encryption keys in the system keychain, then the file in /private/var/db/SystemKey could be used to unlock the keychain if an attacker ever gained physical access to the server. This is bad.
    - If I store the encryption keys in my user keychain, I have to manually run the command with my password. This is undesirable.
    - If I run a launchd script with my user credentials, it will run under my user account but won't have access to the keychain, defeating the purpose.
    - If root has a keychain (does it?), then how would it be decrypted? Would it remain locked until the password was entered (like the user keychain), or would it have the same problem as the system keychain, with keys stored on the drive and accessible with physical access?

    Assuming all of the above works, I've found diskutil coreStorage unlockVolume, which seems to be the appropriate command, but the detail of where to store the encryption key is the biggest problem. If the system keychain is not secure enough, and user keychains require a password, what's the best option?
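
    For the mechanical half, a sketch of the unlock script a root LaunchDaemon could run at boot; it assumes the key sits in a root-only file, which is only as defensible as the protection on the boot volume, and the UUID is a placeholder:

        #!/bin/bash
        # /usr/local/sbin/unlock-external.sh -- hypothetical boot-time unlock.
        # Assumes the key lives in a root-only file (mode 0400, owned by root).
        UUID="11111111-2222-3333-4444-555555555555"   # logical volume UUID from 'diskutil coreStorage list'
        diskutil coreStorage unlockVolume "$UUID" -passphrase "$(cat /var/root/.diskkey)"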

    Read the article

  • Change Windows Authentication user for Sql Server Management Studio

    - by Asmor
    We're using SQL Server 2005 with Windows Authentication set up, so normally, when you log in using e.g. SQL Server Management Studio, it forces you to log in as MACHINE_NAME\Username. Anyway, on this one particular computer, the person said they had to make a new account called User01 to do something, and showed me where she'd created it under Security in the "master" system database. And so now when she logs in, it's listed as MACHINE_NAME\User01 (not the actual Windows user name). It's still set to Windows Authentication, though, and I'm unable to change the login name.

    Now here's where the real problem comes in: I didn't realize that she was being logged in under this user name at the time, and I disabled it to see what would happen. Now I can't log into the server under her account. I created a new account in Windows called test, and as expected SSMS showed the username as MACHINE_NAME\test, and I was able to log in fine. However, the area where the User01 account was listed is not visible to me as far as I can tell, so I can't re-enable it. I also tried running the following query:

        alter login User01 ENABLE

    And got this error:

        Msg 15151, Level 16, State 1, Line 1
        Cannot alter the login 'User01', because it does not exist or you do not have permission.

    So in a nutshell, ideally I'd like to re-enable User01 somehow, just to get things back to where they used to be. Failing that, how can I force SSMS to log in using the Windows account name as it should, rather than trying to use User01?
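
    For reference, a hedged sketch of checking and re-enabling the login from a connection that does have sysadmin rights (names as in the question; ALTER LOGIN needs the ALTER ANY LOGIN permission, which is presumably what Msg 15151 is complaining about):

        -- Hedged sketch: does a server-level login by that name exist, and is it disabled?
        SELECT name, type_desc, is_disabled
        FROM sys.server_principals
        WHERE name = 'User01';

        -- Re-enable it.
        ALTER LOGIN [User01] ENABLE;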

    Read the article

  • Configuring https access on HP A5120 Switch

    - by GerryEgan
    I am trying to configure HTTPS management on an HP A5120 switch running version 5.20.99, Release 2215, and not having much luck. I have followed the manual by creating an SSL policy first and then enabling the HTTPS server with that SSL policy:

        ssl server-policy sslpol
        ip https ssl-server-policy sslpol
        ip https enable

    When I try to log onto the switch with Google Chrome I get the following error:

        Error 107 (net::ERR_SSL_PROTOCOL_ERROR): SSL protocol error.

    When I look this up I find references to errors due to TLS being used in SSL. I can find no way to specify the SSL version in the server policy. The manual has a configuration example that uses MSCEP to retrieve a certificate, but in Windows 2008 R2 that feature is only available in the Enterprise and Datacenter editions, which I don't have. I have SSH configured, and it is using a locally generated certificate, so I'm not sure if I can use that, but I'd like to if possible. Has anybody been able to set up HTTPS management on HP A-series switches without MSCEP? Any and all help appreciated! Here is a copy of my config with the interfaces removed:

        version 5.20.99, Release 2215
        #
        sysname MYSYSNAME
        #
        irf domain 10
        irf mac-address persistent timer
        irf auto-update enable
        undo irf link-delay
        #
        domain default enable system
        #
        telnet server enable
        #
        vlan 1
        #
        vlan 100
         description Management
        #
        radius scheme system
         primary authentication 127.0.0.1 1645
         primary accounting 127.0.0.1 1646
         user-name-format without-domain
        #
        domain system
         access-limit disable
         state active
         idle-cut disable
         self-service-url disable
        #
        user-group system
         group-attribute allow-guest
        #
        local-user admin
         password cipher
         authorization-attribute level 3
         service-type ssh telnet terminal
         service-type web
        #
        stp enable
        #
        ssl server-policy sslpol
         pki-domain MYDOMAIN
        #
        interface NULL0
        #
        interface Vlan-interface199
         ip address 192.168.199.140 255.255.255.0
        #
        interface GigabitEthernet1/0/1
         poe enable
         stp edged-port enable
        #
        interface Ten-GigabitEthernet2/1/2
        #
        dhcp-snooping
        #
        ntp-service unicast-server 192.168.1.71
        #
        ssh server enable
        #
        ip https ssl-server-policy sslpol
        ip https enable
        #
        load xml-configuration
        #
        user-interface aux 0 1
        user-interface vty 0 15
         authentication-mode scheme

    Read the article

  • MGE UPS Cut power - What happened?

    - by JT.WK
    I have 3 x MGE Pulsar M 3000 2700W UPS units in my server room, which have run perfectly up until now. On Saturday morning I noticed that one of these UPS units was no longer outputting power; the LCD displayed a message saying "load not powered" and told me to press the power button to start output. Needless to say, the servers, switches and routers it was supporting were all turned off. I tried pressing and even holding the power button, but the unit refused to start back up again. Only power cycling the unit got it back up again. I have checked the logs on the UPS, although they were useless: nothing out of the ordinary, and no email notifications had been sent. The output level sits at about 51% and all battery checks are OK. It is now three days on and the UPS is still up and running (although I am scheduling an outage to get it out of there ASAP). Does anyone have any idea what could have gone wrong here? Is there anything else that I can check that could help?

    Read the article

  • Bluescreen Stop 0x00000027 RDR_FILE_SYSTEM after cloning system on new HDD

    - by Daniel
    A couple of months ago I got a new 500GB HDD for my no-name-brand laptop PC, and I cloned the complete Win 7 Pro 32-bit system with Clonezilla from the old 70GB drive to the new one. At first everything was great, and the new drive's driver was immediately updated. But since then I have been getting BSOD stop errors more and more frequently (it used to be every 2-3 days, but now it's more like 2-3 times a day). From the event log in Windows I know that two different error codes are showing up:

        0x00000027 (0xbaad0073, 0x9954f80c, 0x9954f3f0, 0x8ecd7c82) RDR_FILE_SYSTEM
        0x00000044 (0x85443230, 0x00000eae, 0x00000000, 0x00000000) MULTIPLE_IRP_COMPLETE_REQUESTS

    I checked for viruses and did a complete HDD check using the Windows tool and the Western Digital tool (WD is the maker of the new HDD), without results. I also looked for driver updates but couldn't find any. The name of the HDD as shown in the Device Manager is: WDC WD5000BPVT-00HXZT1 ATA Device. I'm really a noob regarding these kinds of problems, so if you have any idea what I can try without losing all my data, let me know; also ask if any additional information is required.

    Read the article

  • Problems with "Read Only" on a Samba share from Windows machines

    - by fistameeny
    We have an Ubuntu 10.04 server with a bunch of Samba shares on it that Windows workstations connect to. Each Windows workstation has a valid username/password to access the shares, which have restricted access governed by Samba. The problem we are experiencing is that Samba doesn't seem to be able to mimic the Windows way of handling "Read Only" attributes.

    Say I have two users, UserA and UserB, both in a group called Staff. UserA creates a file that is readable/writeable by the group (i.e. chmod rwxrwx---). If UserA then sets the "Read Only" flag, this changes the permissions to r-xr-x--- (i.e. no write for anyone). As UserB is in the same group as UserA, they should be able to remove the "Read Only" permission; however, they can't, as Samba won't allow it. Is there a way to force Samba to allow users within the same group to remove the "Read Only" flag from a file not created by them?

    Edit: The share is defined in smb.conf as follows:

        [global]
        log file = /var/log/samba/log.%m
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        obey pam restrictions = yes
        map to guest = bad user
        encrypt passwords = true
        passwd program = /usr/bin/passwd %u
        passdb backend = tdbsam
        dns proxy = no
        netbios name = ubsrv
        server string = ubsrv
        unix password sync = yes
        os level = 20
        syslog = 0
        usershare allow guests = yes
        panic action = /usr/share/samba/panic-action %d
        max log size = 1000
        pam password change = yes
        workgroup = workgroup

        [Projects]
        valid users = @Staff
        writeable = yes
        user = @Staff
        create mode = 0777
        path = /srv/samba/Projects
        directory mode = 0777
        store dos attributes = Yes

    The folder itself looks like this:

        ls -l /srv/samba/
        drwxrwxrwx 2 nobody Staff 4096 2010-11-04 10:09 Projects

    Thanks in advance, Matt
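
    One avenue worth testing: Samba's dos filemode option, which lets anyone with write access to a file change its permission bits, much closer to the Windows behaviour the workstations expect. A sketch of the share with it added (the rest as in the question):

        [Projects]
        valid users = @Staff
        writeable = yes
        create mode = 0777
        path = /srv/samba/Projects
        directory mode = 0777
        store dos attributes = Yes
        dos filemode = Yes    ; hedged: group members with write access may change permissions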

    Read the article

  • Virtual Fileserver

    - by Sergei
    Hi, we are planning to move our production servers to the datacenter and virtualize the remaining servers in the process. The datacenter will have HP blades with vSphere on top. Currently we are using a Celerra NS20 as our fileserver. Since the datacenter is using HP kit and an EVA 4400 as the SAN, we cannot have the Celerra there, as EMC support for Celerra does not cover non-EMC arrays.

    I have searched for possible options, and one of them was to have an HP NAS blade X3800sb instead of the Celerra. However, this seems like overkill to me: we are only using the Celerra for about 100 users and 50 servers, and I think the X3800sb could be a waste of resources. The other option would be to have a virtual fileserver as part of the VMware environment in the datacenter. We only need CIFS to be provided. The only option I can think of is Windows Storage Server. We had a bad experience with Windows servers used as fileservers (memory leaks, for one thing) in the past, and this was one of the reasons we moved to the Celerra.

    What are the other options? We need something as reliable as the Celerra with as many options as possible. For example, the Celerra has per-folder quotas, deduplication, dynamic volume allocation, automatic failover, VTLU, and replication. Also we would need to replicate the NAS data to the failover site. We could use block-level replication, SAN-to-SAN, but this would mean wasted bandwidth, as we only need a subset of folders to be replicated. We used CA XOsoft for Windows servers in the past, and the Celerra has an option for Celerra replication.

    Thank you very much in advance; please ask me if I missed any details!

    Read the article

  • Not able to find scripts present in /etc/profile.d directory [on hold]

    - by priya
    I am using Red Hat Linux 6.0 on a DaVinci board. I have to change the system clock resolution, so I am changing the HZ environment variable. For this I have written a script that sets HZ=1000 and placed it in /etc/profile.d, and I added a loop in /etc/profile so that, as usual, /etc/profile loads the scripts present in /etc/profile.d. But when I log into the system as root, it shows this error:

        -bash: ./etc/profile.d/resolution.sh: No such file or directory

    (resolution.sh is my script's name.) Also, why is it showing ./etc and not /etc here? Is something related to that? I also tried adding the script to /etc/init.d, but still no change in the value of HZ takes place. Please tell me where to make the change so that this environment variable actually gets changed. The script (resolution.sh) contains:

        #!/bin/bash
        export HZ=1000

    The content I entered in /etc/profile is:

        if [ -d /etc/profile.d ]; then
          for i in /etc/profile.d/*.sh; do
            if [ -r $i ]; then
              .$i
            fi
          done
          unset i
        fi

    And the relevant directory listing is:

        -rw-r--r-- 1 root root  535 Feb 4 2004 profile
        -rwxr-xr-x 2 root root 4096 Feb 2 2004 profile.d
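
    One observable detail: the stock loop in most distributions sources each script as ". $i" with a space, while the loop above has ".$i", which the shell glues into the single word "./etc/profile.d/resolution.sh", exactly the relative path shown in the error message. A corrected sketch of the loop:

        if [ -d /etc/profile.d ]; then
          for i in /etc/profile.d/*.sh; do
            if [ -r "$i" ]; then
              . "$i"    # note the space: source the script in the current shell
            fi
          done
          unset i
        fi

    (Whether exporting HZ from a login script can change the kernel's actual clock resolution is a separate question; the variable only affects programs that read it.)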

    Read the article

  • Loopback connection via PHP's getimage size crashes server (Magento's CMS)

    - by Alex
    We were able to trace a problem that is crashing our NGINX server running Magento down to the following point. Background info: the Magento backend has a CMS function with a WYSIWYG editor. This editor loads some pictures via a controller in Magento (cms/directive). When we set the NGINX error_log level to info, we get the following lines (line breaks inserted for better readability):

        2012/10/22 18:05:40 [info] 14105#0: *1 client closed prematurely connection,
        so upstream connection is closed too while sending request to upstream,
        client: XXXXXXXXX, server: test.local,
        request: "GET index.php/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL,,/ HTTP/1.1",
        upstream: "fastcgi://127.0.0.1:9024", host: "test.local"

    When checking the code in the debugger, the following call never returns (in Varien_Image_Adapter_Abstract::getMimeType()):

        # $this->_fileName is http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif
        # $_SERVER['REQUEST_URI'] = http://test.local/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL
        list($this->_imageSrcWidth, $this->_imageSrcHeight, $this->_fileType, ) =
            getimagesize($this->_fileName);

    The filename requested is a URL pointing back at the same server that is running the script: a link to a static .gif that does not exist. Sample URL:

        http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif

    When the above line is executed, any subsequent request to the NGINX server goes unanswered. After waiting for around 10 minutes, the NGINX server starts answering requests again. I tried to reproduce the error with a simple test script that only calls getimagesize() with the given URL, but this does not crash. It simply leads to an exception saying that the URL could not be loaded (which is fine, as the URL is wrong).
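
    If the root cause is getimagesize() blocking on a socket back to its own busy FastCGI pool, capping PHP's socket timeout should turn the hang into a fast failure. A hedged check from the shell (the URL is the one from the question; default_socket_timeout governs PHP's URL fopen wrappers, which getimagesize() uses):

        # Hypothetical check: with a 5s cap, the call should fail quickly
        # instead of holding a FastCGI worker for minutes.
        php -d default_socket_timeout=5 -r \
          'var_dump(@getimagesize("http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif"));'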

    Read the article

  • Is there a way to do something like LVM over NFS?

    - by warren
    I realize that since NFS is not block-level, LVM can't be used directly. However: is there a way to combine multiple NFS exports (from, say, 3 servers) into one mount point on a different server? Specifically, I'd like to be able to do this on RHEL 4 (or 5, and re-export the combined mount to my RHEL 4 server).

    Expansion: The reason I pegged LVM is that I want a bunch of exported mounts (servera:/mnt/export, serverb:/mnt/export, serverc:/mnt/export, etc.) to all mount at /mnt/space, so that /mnt/space on this server (serverx) appears as one large filesystem. Yes, I know that re-exporting is generally a Bad Thing™, but I thought it might work if there was a way to accomplish this on a newer release as opposed to an older one.

    From reading the unionfs docs, it appears that I can't use it over a remote connection; have I misread that? More accurately, since unionfs merges the contents of multiple branches but makes them appear as one, it doesn't seem to go in reverse: I'm trying to mount a bunch of NFS points in a merged fashion, then write to them, not caring where the data goes, a la LVM.
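
    If re-exporting is on the table anyway, the closest match to "one big pool, don't care which branch gets the data" is a FUSE union filesystem rather than LVM. A hedged sketch with mhddfs (availability on RHEL 4/5 and the exact options should be verified):

        # Sketch: mount the three NFS exports, then overlay them as one tree.
        mount -t nfs servera:/mnt/export /mnt/a
        mount -t nfs serverb:/mnt/export /mnt/b
        mount -t nfs serverc:/mnt/export /mnt/c

        # mhddfs presents the union at /mnt/space and picks a branch with
        # enough free space for each new write.
        mhddfs /mnt/a,/mnt/b,/mnt/c /mnt/space -o allow_other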

    Read the article

  • Using Varnish (only) for DDoS mitigation

    - by Martin Kanters
    My VPS is suffering from a (D)DoS: a SYN flood with spoofed IPs. Right now I'm searching for ways to defend against it, at least a bit. It's running a DirectAdmin Apache2 webserver, mainly used for serving PHP and MySQL. We are using CloudFlare, which says it can mitigate (D)DoS at some level, but the attacker knows our real IP address, so CloudFlare isn't helping at all. I've done some searching on the net and found out about enabling SYN cookies to defend against this. I've checked my settings, and it seems they were enabled all along. I've also read that Varnish is able to defend against SYN flooding and Slowloris attacks, and now I'm pretty interested in using it. The thing is that CloudFlare is already caching a lot for us, and I don't wish to spend too many resources on Varnish. Is it possible and sensible to set up Varnish only for the better handling of requests? Are there perhaps better ways which I've missed? Thanks in advance, Martin
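
    Worth noting: Varnish lives in userspace, so the kernel has already completed the TCP handshake before Varnish sees anything; SYN floods are really fought in the kernel (Slowloris is a different matter). A hedged sketch of the usual checks and knobs; the sysctl names are standard, but the limit numbers are placeholders:

        # Confirm SYN cookies really are on, and shorten the life of half-open connections.
        sysctl net.ipv4.tcp_syncookies            # expect: = 1
        sysctl -w net.ipv4.tcp_max_syn_backlog=4096
        sysctl -w net.ipv4.tcp_synack_retries=2

        # Crude box-wide SYN rate limit (per-IP limits are useless against spoofed sources).
        iptables -A INPUT -p tcp --syn -m limit --limit 50/s --limit-burst 100 -j ACCEPT
        iptables -A INPUT -p tcp --syn -j DROP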

    Read the article

  • iptables-restore: line 1 failed

    - by Doug
    Hello, I am new to servers, and I was following this guide when it failed on the first command instructed: http://wiki.debian.org/iptables

        ~ZORO~:/etc# iptables-restore < /etc/iptables.test.rules
        iptables-restore: line 1 failed

    Could anyone give me a hand? Edit: iptables.test.rules

        ~ZORO~:/etc# cat /etc/iptables.test.rules
        *filter

        # Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
        -A INPUT -i lo -j ACCEPT
        -A INPUT -i ! lo -d 127.0.0.0/8 -j REJECT

        # Accepts all established inbound connections
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allows all outbound traffic
        # You could modify this to only allow certain traffic
        -A OUTPUT -j ACCEPT

        # Allows HTTP and HTTPS connections from anywhere (the normal ports for websites)
        -A INPUT -p tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp --dport 443 -j ACCEPT

        # Allows SSH connections for script kiddies
        # THE -dport NUMBER IS THE SAME ONE YOU SET UP IN THE SSHD_CONFIG FILE
        -A INPUT -p tcp -m state --state NEW --dport 30000 -j ACCEPT

        # Now you should read up on iptables rules and consider whether ssh access
        # for everyone is really desired. Most likely you will only allow access from certain IPs.

        # Allow ping
        -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT

        # log iptables denied calls (access via 'dmesg' command)
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

        # Reject all other inbound - default deny unless explicitly allowed policy:
        -A INPUT -j REJECT
        -A FORWARD -j REJECT

        COMMIT
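
    Two quick things to rule out, sketched below: iptables-restore has to run as root, and newer iptables versions reject the old "-i ! lo" negation in favour of "! -i lo"; either problem can make the restore fail with an unhelpful line number:

        sudo iptables-restore < /etc/iptables.test.rules   # as root, not a normal user

        # If it still fails, try the modern negation syntax in the rules file:
        #   old:  -A INPUT -i ! lo -d 127.0.0.0/8 -j REJECT
        #   new:  -A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT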

    Read the article

  • How can I set the CD audio volume in Linux?

    - by user1296362
    In the Windows 7 Control Panel -> Sound -> Sound Properties window there's a slider for setting the CD Audio volume, and it's pretty strange that I can't find a corresponding one in the generic Linux mixers, alsamixer or amixer. I connected a CD drive and tried to set the CD audio volume with cdcd (CD Player):

        $ cdcd setvol 0
        Invalid volume

    It isn't actually an invalid volume; it is because an ioctl() call fails. I found that out after searching and changing the source code of this utility a bit (in libcdaudio):

        --- cdaudio.c.orig      2004-09-09 06:26:20.000000000 +0600
        +++ cdaudio.c   2012-05-30 21:34:34.167915521 +0600
        @@ -578,8 +578,10 @@
             cdvol_data.CDVOLCTRL_BACK_RIGHT_SELECT = CDAUDIO_MAX_VOLUME;
         #endif

        -   if(ioctl(cd_desc, CDAUDIO_SET_VOLUME, &cdvol) < 0)
        -     return -1;
        +   if(ioctl(cd_desc, CDAUDIO_SET_VOLUME, &cdvol) < 0) {
        +     printf("*** cd_set_volume: ioctl() returned error\n");
        +     return -1;
        +   }

             return 0;
         }

    By the way, cdcd's get volume command yields rather weird output:

        Left Right
        Front 1281734864 32767
        Back 0 0

    I also tried aumix:

        $ aumix -c 0

    But all with no success. I read in this manual, http://tldp.org/HOWTO/Alsa-sound-6.html (section 6.2, The mixer), that a CD channel can be present in the amixer output. Maybe some drivers for the sound card are missing from my Ubuntu 12.04 LTS installation, though I don't think that's the case:

        $ lsmod | grep snd
        snd_mixer_oss          22602  0
        snd_hda_codec_hdmi     32474  1
        snd_hda_codec_realtek 223867  1
        snd_hda_intel          33773  4
        snd_hda_codec         127706  3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel
        snd_hwdep              13668  1 snd_hda_codec
        snd_pcm                97188  3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec
        snd_seq_midi           13324  0
        snd_rawmidi            30748  1 snd_seq_midi
        snd_seq_midi_event     14899  1 snd_seq_midi
        snd_seq                61896  2 snd_seq_midi,snd_seq_midi_event
        snd_timer              29990  2 snd_pcm,snd_seq
        snd_seq_device         14540  3 snd_seq_midi,snd_rawmidi,snd_seq
        snd                    78855 19 snd_mixer_oss,snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        soundcore              15091  1 snd
        snd_page_alloc         18529  2 snd_hda_intel,snd_pcm

    All I need is to mute, or set to 0, the volume level of the CD Audio channel, like I did in Windows 7, to get rid of the sibilant noise in the speakers.
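
    If the sound driver exposes a CD control at all, amixer can address it directly; a hedged sketch (the control name 'CD' is an assumption, and many HDA codecs simply don't register one, in which case muting the analog input the drive's audio cable feeds is the fallback):

        amixer scontrols          # list simple mixer controls; look for 'CD'
        amixer set 'CD' mute      # mute it, if it exists
        amixer set 'CD' 0%        # or just zero the volume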

    Read the article

  • 403.4 won't redirect in IE7

    - by Jeremy Morgan
    I have a secured folder that requires SSL, and I have set it up in IIS 6 to require SSL. We don't want visitors to be greeted with the "must be a secure connection" error, so I have modified the 403.4 error page to contain the following:

        function redirectToHttps() {
            var httpURL = window.location.hostname + window.location.pathname;
            var httpsURL = "https://" + httpURL;
            window.location = httpsURL;
        }
        redirectToHttps();

    And this solution works great for every browser but IE7. In any other browser, if you type in http://www.mysite.com/securedfolder it will automatically redirect you to https://www.mysite.com/securedfolder with no message or anything (the intended action). But in Internet Explorer 7, and only there, it brings up a page that says:

        The website declined to show this webpage
        Most likely causes: This website requires you to log in

    This is something we don't want, of course. I have verified that JavaScript is enabled, and the security settings have no effect: even when I set them to the lowest level I get the same error. I'm wondering, has anyone else seen this before?
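
    One IE-specific behaviour worth ruling out: by default IE replaces server error pages smaller than roughly 512 bytes with its own "friendly" error page, in which case the script never runs at all. A hedged sketch of the same error page padded past that threshold (the padding content is arbitrary):

        <html><head><script type="text/javascript">
        function redirectToHttps() {
            var httpsURL = "https://" + window.location.hostname + window.location.pathname;
            window.location = httpsURL;
        }
        redirectToHttps();
        </script></head><body>
        <!-- hypothetical padding so IE does not substitute its "friendly" error page:
        0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789
        0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789
        0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789
        0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789
        0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789
        -->
        </body></html>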

    Read the article

  • Excel 2007 pivot table does not aggregate properly

    - by Patrick
    I am using an Excel pivot table to summarize some data and just found a problem. The problem concerns how aggregate values are calculated. Let's say I have a table of data with three columns, Name, Date, Value, and I create a pivot table with Name and then Date as row labels, and Value as the aggregated value (average). The pivot table will look something like this:

        +John        .3450
          5/14/2010  1.234
          5/15/2010  3.450
          5/16/2010  -3.25

    What I think should happen here is that the values for each date are averaged, and then those daily averages are averaged to produce the value in the same row as the name, John. But that is not what it does. It takes the average for each date, which it shows across from the date, but then, instead of taking the average of those numbers, it goes back to the raw data and computes the average over all of John's values. It should show the average of the daily averages to correspond with the tree hierarchy, but instead it just shows the average of all of John's values. It essentially aggregates at only one level, while visually creating sub-levels that it is not using. Does anyone know how to change this, or understand by what logic this makes sense? Why would I create any sub-groupings if I cannot compute aggregates on them?
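
    With the numbers shown, the two interpretations are easy to tell apart:

        average of the daily averages: (1.234 + 3.450 + (-3.25)) / 3 = 0.478
        average over all raw rows:     sum of every Value / total row count = 0.3450 (what Excel shows)

    The two agree only when every date has the same number of rows, which is why the parent row looks wrong whenever the dates carry different numbers of raw values.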

    Read the article

  • Wireless to Wireless Transfer Slow on a Linksys WRT54GL

    - by Kyle Brandt
    The situation: When I try to transfer a file from one computer to another, both connected via wireless to a WRT54GL (in an office) with dd-wrt firmware, I often get bad speeds; in general they average around 100 kilobytes per second. Either computer can download via wireless from the Internet at about 2 megabytes per second. The speed is slow with the transfer of one large file. There are about 20 other wireless networks that the computers can see, so there is a lot of noise, but I don't have the equipment to really monitor the frequencies well. Still, that seems pretty slow. I thought maybe it was the transmit power on each card, but even when they are 5 feet apart with a line of sight I still get these speeds. According to Linux, both cards are operating at 54g.

    My questions:

    - Is this normal for this sort of consumer-level wireless equipment?
    - Is there anything I can do to improve it?
    - Why is wireless-to-wireless transfer slow when everything else isn't?
    - What steps might I take to figure out what is happening? For example, are lots of packets not making it to the access point, requiring retransmissions?

    Above all, I want to find out what the problem actually is. This may seem odd, but at this point I am more interested in understanding the problem than fixing it.

    What I have tried: I have messed with lots of settings: different channels, xmit power, G-only, none of which has made anything any better. I've also tried upgrading to a newer dd-wrt firmware version and doing a reset to wipe out the settings.
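
    One structural point that explains part of it: a wireless-to-wireless copy crosses the same radio channel twice (sender to AP, then AP to receiver), so even a clean 54g link tops out at roughly half the one-way throughput, and contention with 20 neighbouring networks eats into the rest. Some rough Linux-side checks (the interface name is a placeholder):

        iwconfig wlan0                               # negotiated bit rate, link quality
        iwconfig wlan0 | grep -i -E 'retr|invalid'   # retry/error counters hint at retransmissions
        ping -c 100 -i 0.2 other-machine             # loss and jitter across the double hop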

    Read the article

  • Fail2Ban adds iptable rules but they are not working?

    - by EApubs
    Fail2Ban just blocked my IP for 3 SSH attempts. It added the iptables rule, and I can see it using the "sudo iptables -L -n" command. But I can still access the site and log in through SSH! What might be the problem? Is it because I'm using CloudFlare? I have set Nginx to write the real IPs to the access logs instead of the CloudFlare IPs. Isn't that enough?

        Chain fail2ban-ssh (1 references)
        target     prot opt source         destination
        DROP       all  --  119.235.14.8   0.0.0.0/0
        RETURN     all  --  0.0.0.0/0      0.0.0.0/0

    The INPUT chain:

        Chain INPUT (policy DROP)
        target                     prot opt source     destination
        fail2ban-NoAuthFailures    tcp  --  0.0.0.0/0  0.0.0.0/0   tcp dpt:80
        fail2ban-nginx-dos         tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 80,8090
        fail2ban-postfix           tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 25,465
        fail2ban-ssh-ddos          tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 22
        fail2ban-ssh               tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 22
        ufw-before-logging-input   all  --  0.0.0.0/0  0.0.0.0/0
        ufw-before-input           all  --  0.0.0.0/0  0.0.0.0/0
        ufw-after-input            all  --  0.0.0.0/0  0.0.0.0/0
        ufw-after-logging-input    all  --  0.0.0.0/0  0.0.0.0/0
        ufw-reject-input           all  --  0.0.0.0/0  0.0.0.0/0
        ufw-track-input            all  --  0.0.0.0/0  0.0.0.0/0
        LOG                        all  --  0.0.0.0/0  0.0.0.0/0   LOG flags 0 level 4
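
    A quick way to test both halves of this, sketched below. For the website: the DROP rule matches the packet's source address, and behind CloudFlare that is a CloudFlare edge IP, not the address nginx writes to its log, so the rule never fires for HTTP traffic. SSH does not pass through CloudFlare, so there the question is whether the packets really arrive from the banned address at all (or over IPv6, which these v4 rules don't cover):

        # Do packets from the banned address ever hit the wire? For web traffic
        # behind CloudFlare you will see CloudFlare edge IPs here instead.
        tcpdump -n -i eth0 host 119.235.14.8

        # Per-rule packet counters: zero on the DROP line means it never matches.
        iptables -L fail2ban-ssh -n -v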

    Read the article

  • Use Apache authentication + authorization to control access to Subversion subdirectories

    - by Stefan Lasiewski
    I have a single SVN repository root at /var/svn/ with a few subdirectories. Staff must be able to access the top-level directory and all subdirectories within it, but I want to restrict access to subdirectories using alternate htpasswd files. This works for our staff:

        <Location />
            DAV svn
            SVNParentPath /var/svn
            AuthType Basic
            AuthBasicProvider ldap   # mod_authnz_ldap
            AuthzLDAPAuthoritative off
            AuthLDAPURL "ldap.example.org:636/ou=people,ou=Unit,ou=Host,o=ldapsvc,dc=example,dc=org?uid?sub?(objectClass=PosixAccount)"
            AuthLDAPGroupAttribute memberUid
            AuthLDAPGroupAttributeIsDN off
            Require ldap-group cn=staff,ou=PosixGroup,ou=Unit,ou=Host,o=ldapsvc,dc=example,dc=org
        </Location>

    Now I am trying to restrict access to a subdirectory with a separate htpasswd file, like this:

        <Location /customerA>
            DAV svn
            SVNParentPath /var/svn
            # mod_authn_file
            AuthType Basic
            AuthBasicProvider file
            AuthUserFile /usr/local/etc/apache22/htpasswd.customerA
            Require user customerA
        </Location>

    I can browse to this folder fine with Firefox and curl:

        curl https://svn.example.org/customerA/ --user customerA:password

    But I cannot check out this SVN repository:

        $ svn co https://svn.example.org/customerA/
        svn: Repository moved permanently to 'https://svn.example.org/customerA/'; please relocate

    And in the server logs I get this strange error:

        # httpd-access.log
        192.168.19.13 - -         [03/May/2010:16:40:00 -0700] "OPTIONS /customerA HTTP/1.1" 401 401
        192.168.19.13 - customerA [03/May/2010:16:40:00 -0700] "OPTIONS /customerA HTTP/1.1" 301 244

        # httpd-error.log
        [Mon May 03 16:40:00 2010] [error] [client 192.168.19.13] Could not fetch resource information.  [301, #0]
        [Mon May 03 16:40:00 2010] [error] [client 192.168.19.13] Requests for a collection must have a trailing slash on the URI.  [301, #0]

    My question: Can I restrict access to Subversion subdirectories using Apache access controls? DocumentRoot is commented out, so it's not clear that the FAQ at http://subversion.apache.org/faq.html#http-301-error applies.
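
    One hedged alternative that sidesteps the nested-Location problem: keep the single <Location /> block and express the per-repository restrictions in mod_authz_svn's path-based access file, instead of a second <Location> carrying its own SVNParentPath. A sketch (the file path and group names are placeholders, and combining the LDAP accounts with the htpasswd account in one provider chain is its own exercise):

        # Added inside the existing <Location /> block:
        #   AuthzSVNAccessFile /usr/local/etc/apache22/svn-access

        # /usr/local/etc/apache22/svn-access (mod_authz_svn syntax, names hypothetical)
        [groups]
        staff = alice, bob

        [customerA:/]
        customerA = rw
        @staff = rw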

    Read the article

  • Wastage of resources in Virtualization

    - by Sabeen Malik
    I have asked this question on SO, but it was suggested that I ask it here on SF, so here goes: http://stackoverflow.com/questions/3010753/wastage-of-resources-in-virtualization

    I am not sure if this is the right place to ask the question; however, I hope it is. When looking for a VPS earlier today, I was trying to understand how each container would work in the background. Keeping in mind the fact that the operating system uses most of the memory and power on a system, wouldn't having multiple operating systems on the same machine mean more wastage of resources? For instance, if I was running CentOS on a dedicated box and it was running, let's say, 20 background OS-level processes, and then I went and installed a virtualization platform and installed 5 more CentOS virtual machines on the same system, exactly like the host operating system, doesn't this mean duplicating those 20 processes 6 times? So internally the context switching is happening between 120 processes instead of 20?

    Further notes, here is an example of what I am thinking: I have a master-slave configuration for a long-running, CPU + memory intensive process, which can be distributed to 4 machines. Let's say that when the process runs on these 4 machines, each with a 1 GHz CPU and 1 GB of RAM, I get 400 results per hour from the cluster (assuming 100 results from one machine). Now I get a bigger machine (let's say 4 GHz and 4 GB of RAM) and put 4 virtual hosts on it, each with a 1 GHz CPU and 1 GB of RAM. Will this configuration give me the same 400 results per hour from these 4 virtual hosts?

    Read the article
