Search Results

Search found 21028 results on 842 pages for 'single player'.

  • Per-client DNS server assignment using pfSense

    - by Trix
    I have a network where pfSense is the gateway, and I want two sets of clients: one with some restrictions on the network (for example, IM being blocked) and one with no restrictions. One easy way I thought of was assigning the two groups different DNS servers: one set could use OpenDNS, the other Google Public DNS. The set on OpenDNS would have the filter options turned on (through the OpenDNS dashboard I can simply check "block IM", so I don't need to manually block login.oscar.aol.com, meebo.com, Gmail chat, etc.). The problem is that the DHCP server looks like it will only assign a single set of DNS servers to all clients. Is there a way to set a per-client assignment, or a better way to achieve this? This is just a small home network. I don't need anything fancy, but I do need this functionality in one way or another.
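
    A sketch of what a per-host override could look like in raw ISC dhcpd syntax (pfSense's DHCP service is ISC dhcpd underneath); the MAC addresses below are hypothetical, and whether your pfSense version exposes per-static-mapping DNS fields in the GUI or needs these as custom options is version-dependent:

        host restricted-pc1 {
            hardware ethernet 00:11:22:33:44:55;                        # hypothetical MAC
            option domain-name-servers 208.67.222.222, 208.67.220.220;  # OpenDNS
        }
        host open-pc1 {
            hardware ethernet 66:77:88:99:aa:bb;                        # hypothetical MAC
            option domain-name-servers 8.8.8.8, 8.8.4.4;                # Google Public DNS
        }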

  • VLAN support on Juniper EX-2200 switches

    - by liv2hak
    I have six Juniper EX-2200 switches connected to each other as shown in the network topology below. I am using two PCs: PC1 for configuring the six switches via minicom, and PC2 to monitor the traffic between the switches via the ports marked with arrows in the diagram.

    Step 1: I create a new VLAN on switch 3 (SW3) that includes port 12 and port 22, and I assign an l3-interface to the VLAN (vlan_2) with IP address 192.168.1.7. Now I plug port 0 of SW3 into PC2 and try pinging 192.168.1.7 from PC2 (192.168.1.10). What will happen? My postulation is that I will not be able to ping SW3 from PC2, because SW3's ports 12 and 22 are part of vlan_2, and vlan_2 logically breaks up broadcast domains, so 192.168.1.7 will not be reachable from 192.168.1.10.

    Now I have an l3-interface on SW1 with IP 192.168.1.1 using the default VLAN (vlan-id 0). Similarly I have enabled IP on SW2 (192.168.1.2), SW3 (192.168.1.3), SW4 (192.168.1.4), SW5 (192.168.1.5) and SW6 (192.168.1.6), all using the default VLAN. Now I plug port 12 of SW3 (blue cable) into PC2 and try to ping 192.168.1.1 from PC2. What will happen at this stage? My postulation is that I will be able to ping SW1. Is this correct?

    Another question: can a single port on a switch be added to multiple VLANs? I am a beginner at network configuration, so any help would be highly appreciated. (Please ignore the Cisco symbol on the switches in the diagram; all switches are Juniper EX-2200.)
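
    On the last question: yes. On the EX series a port in access mode carries a single VLAN, but in trunk mode it can carry several. A minimal sketch of the relevant Junos configuration (the interface names here are assumptions, not taken from the diagram):

        set vlans vlan_2 vlan-id 2
        set vlans vlan_2 l3-interface vlan.2
        set interfaces vlan unit 2 family inet address 192.168.1.7/24
        # access port: member of exactly one VLAN
        set interfaces ge-0/0/12 unit 0 family ethernet-switching port-mode access vlan members vlan_2
        # trunk port: member of multiple VLANs at once
        set interfaces ge-0/0/22 unit 0 family ethernet-switching port-mode trunk vlan members [ vlan_2 default ]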

  • How to repair multiple KDC and Netlogon errors

    - by Keith Sirmons
    Howdy, I have several errors in the system event log of my single Windows 2003 SP2 domain controller. Multiple member computers on the domain are listed in these errors, and I am seeing two similar errors for each computer, one second apart in the event log:

        Event ID 7, Source KDC: The Security Account Manager failed a KDC request in an unexpected way. The error is in the data field. The account name was [email protected] and lookup type 0x8.

    followed by

        Event ID 7, Source KDC: The Security Account Manager failed a KDC request in an unexpected way. The error is in the data field. The account name was MEMBERNAME$ and lookup type 0x8.

    The lookup types also vary; I have 0x8, 0x28, 0x0 and 0x20. I am also receiving other authentication errors in the same time frame as all of the KDC errors:

        Event ID 5722, Source NETLOGON: The session setup from the computer MEMBERNAME failed to authenticate. The name(s) of the account(s) referenced in the security database is MEMBERNAME$. The following error occurred: Access is denied.

    I have run dcdiag /v to see if there was something wrong with Active Directory, but all tests passed. I also ran netdiag /v and it appears all of those tests ran. Any ideas on where to start with this issue? Thank you, Keith
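
    NETLOGON 5722 with "Access is denied" classically points at member machines whose computer-account password no longer matches what the domain controller has (a broken secure channel). A hedged first diagnostic pass, run on an affected member (the domain and computer names below stand in for the real ones): the first command verifies the member's secure channel, the second attempts to re-establish it, and netdom resets the computer-account password outright.

        nltest /sc_query:MYDOMAIN
        nltest /sc_reset:MYDOMAIN
        netdom reset MEMBERNAME /domain:MYDOMAIN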

  • qemu-img: Could not open $FILE

    - by HTTP500
    I received a single-file VMDK from a vendor that has a virtual appliance for a particular product I'm interested in evaluating. We run a KVM solution (Proxmox), so I tried converting the file, but on that system qemu-img blows up. (I was able to convert multipart VMDK files from Bitnami without error.) So I figured I'd just yum install qemu-img on a RHEL 6.3 VM and do it there. But although file identifies the file just fine, when I run qemu-img on it I get an error that it can't open the file:

        [root@host dir]# file 1.vmdk
        1.vmdk: VMware4 disk image
        [root@host dir]# qemu-img info 1.vmdk
        qemu-img: Could not open 'vmdk'

    I've seen some other people post on the interwebs that they've had this problem, but none of them seem to have a resolution. Does anyone have any ideas? I have checked the MD5SUM already.

    EDIT 1:

        [root@host dir]# qemu-img info -f vmdk 1.vmdk
        qemu-img: Could not open '1.vmdk'

    EDIT 2: Ran strace per suggestion. Not sure what to look for... here is a possible lead:

        ioctl(3, CDROM_DRIVE_STATUS, 0x7fffffff) = -1 ENOTTY (Inappropriate ioctl for device)
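
    One hedged possibility: older qemu-img builds often cannot open newer VMDK subformats (streamOptimized exports in particular), which would match a file that file(1) recognizes but qemu-img refuses. If a machine with a newer qemu is available, the conversion is a single step (the target filename is arbitrary):

        qemu-img convert -f vmdk -O qcow2 1.vmdk 1.qcow2
        qemu-img info 1.qcow2    # sanity-check the result before importing it into Proxmox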

  • How to configure Apache and Tomcat with vhosts?

    - by Umar Farooq Khawaja
    I have a server with a static, public IP address. I also have a registered domain name. For the sake of illustration, let's suppose they are IP address 12.34.56.78 and domain name example.com. I have a single machine on which I am running the following:

        A website (over IIS7), available locally at localhost:80
        A JetBrains TeamCity instance (over Tomcat), available locally at localhost:1234
        A VisualSVN Server instance (over Apache), available locally at localhost:5678/svn

    I have set up an A record for example.com and the following CNAME records: www.example.com, builds.example.com, sources.example.com. I would like to configure Tomcat and Apache such that if I point my browser at builds.example.com I end up at the JetBrains TeamCity instance, and if I point my browser at sources.example.com I end up at the VisualSVN Server instance.

    I thought I could configure Apache to vhost example.com:5678/svn to point to sources.example.com, and added the following lines to the Apache httpd.conf file:

        Listen 5678
        NameVirtualHost *:5678
        <VistualHost *:5678>
            ServerName sources.example.com
            DocumentRoot /svn
        </virtualHost>

    That broke the VisualSVN instance, so I had to revert it to just "Listen 5678". Help!
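
    Note that the opening tag in the snippet above is misspelled (<VistualHost> vs. </virtualHost>), which by itself would make Apache reject the config. Beyond that, subdomain routing is normally done with name-based vhosts on port 80 plus a reverse proxy, not extra Listen ports. A hedged sketch, assuming mod_proxy/mod_proxy_http are loaded and that Apache is the process bound to port 80 (in the setup described, IIS owns port 80, so in practice either IIS would have to do this proxying, e.g. with ARR, or IIS would need to move off port 80):

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName builds.example.com
            ProxyPass        / http://localhost:1234/
            ProxyPassReverse / http://localhost:1234/
        </VirtualHost>
        <VirtualHost *:80>
            ServerName sources.example.com
            ProxyPass        /svn http://localhost:5678/svn
            ProxyPassReverse /svn http://localhost:5678/svn
        </VirtualHost>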

  • Nginx dynamic upstream configuration / routing

    - by Dan Sosedoff
    I was experimenting with dynamic upstream configuration for nginx and can't find any good solution to drive upstream configuration from a third-party source like Redis or MySQL. The idea is to have a single configuration file on the primary server and proxy requests to various app servers based on environment conditions. Think of dynamic deployments where you have X servers that are running Y workers on different ports. For instance, I create a new app and deploy: the app manager selects a server, rolls out a worker (Ruby/PHP/Python), and then reports the ip:port to the central database with status "up". From that point on, when I go to the given URL, nginx should proxy all requests to the specified ip:port upstream. The whole thing is pretty similar to what Heroku does, except this proof of concept is not supposed to be production-ready; it's mostly for internal needs.

    The easiest solution I found was using a resolver with a Ruby-based DNS server. It works, and nginx gets the IP address correctly, but the only problem is that you can't define a port number for that IP. A second solution (which I haven't tried yet) is to roll something else as a proxy server, maybe written in Erlang; in that case we would need something else to serve static content. Any ideas how to implement this in a more flexible and stable way?

    P.S. Some research options:
    http://openresty.org/#DynamicRoutingBasedOnRedis
    https://github.com/nodejitsu/node-http-proxy
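
    A sketch of the OpenResty route from the first link, assuming the lua-resty-redis module and a hypothetical Redis key per hostname whose value is an "ip:port" string; because the looked-up value already contains the port, this sidesteps the DNS-resolver limitation:

        # inside an OpenResty server block
        location / {
            set $backend "";
            rewrite_by_lua '
                local redis = require "resty.redis"
                local red = redis:new()
                red:set_timeout(100)  -- milliseconds
                local ok, err = red:connect("127.0.0.1", 6379)
                if not ok then ngx.exit(502) end
                local addr, err = red:get(ngx.var.host)  -- e.g. "10.0.1.5:9001"
                if not addr or addr == ngx.null then ngx.exit(502) end
                ngx.var.backend = addr
            ';
            proxy_pass http://$backend;
        }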

  • Can compressing Program Files save space *and* give a significant boost to SSD performance?

    - by Christopher Galpin
    Considering solid-state disk space is still an expensive resource, compressing large folders has appeal. Thanks to VirtualStore, could Program Files be a case where it might even improve performance?

    Discovery. In particular I have been reading: "SSD and NTFS Compression Speed Increase?", "Does NTFS compression slow SSD/flash performance?", and "Will somebody benchmark whole disk compression (HD, SSD) please?" (may have to scroll up). The first link is particularly dreamy, but maybe has its head a little too far in the clouds. The third link has this sexy semi-log graph (logarithmic scale!). Quote (with notes): "Using highly compressable data (IOmeter), you get at most a 30x performance increase [for reads], and at least a 49x performance DECREASE [for writes]." Assuming I interpreted and clarified that sentence correctly, this single user's benchmark has me incredibly interested: although write performance tanks wretchedly, read performance still soars. It gave me an idea.

    Idea: VirtualStore. It so happens that thanks to sanity-saving security features introduced in Windows Vista, write access to certain folders such as Program Files is virtualized for non-administrator processes. This means that in normal (non-elevated) usage, a program or game's attempt to write data to its install location in Program Files (which is perhaps a poor location anyway) is redirected to %UserProfile%\AppData\Local\VirtualStore, somewhere entirely different. Thus, to my understanding, writes to Program Files should primarily only occur when installing an application. That makes compressing it not only a huge source of space gain, but also a potential candidate for performance gain.

    Testing. The beginning of this post has me a bit timid: it suggests benchmarking NTFS compression on a whole drive is difficult because turning it off "doesn't decompress the objects". However, it seems to me the compact command is perfectly capable of doing so for both drives and individual folders. Could it be only marking them for decompression the next time the OS reads from them? I need to find the answer before I begin my own testing.
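
    On that last question: compact processes files synchronously rather than just flagging them, and it can also report state without changing anything, so it's a quick way to verify rather than guess. A sketch using standard compact.exe switches (run elevated for Program Files); the first line only reports, /c compresses recursively, /u decompresses, and /i continues past errors:

        compact /s:"C:\Program Files"
        compact /c /i /s:"C:\Program Files"
        compact /u /i /s:"C:\Program Files"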

  • How do I enable the confluence-users group?

    - by M. Joanis
    I've got an issue with Atlassian Confluence: normal users can't log in, but administrators can. Details below!

    I manage users using an Apple Open Directory (LDAP) server. I created two groups, "confluence-administrators" and "confluence-users". I've added team leaders and managers to both groups, and I've added some users to "confluence-users". Everyone in "confluence-administrators" can log in easily; people in "confluence-users" can't log in at all. When I look at the user list (in Confluence) and select a user to examine the list of groups he or she belongs to, I can see that the Confluence administrators are indeed members of the "confluence-administrators" group, but not a single user is a member of the "confluence-users" group. Not even the Confluence administrators, who are members of both groups! So I tried to have one of the "confluence-users" log in while watching the Confluence logs. Here's the result:

        2012-07-05 14:50:19,698 ERROR [http-8090-11] [core.event.listener.AutoGroupAdderListener] handleEvent Could not auto add user to group: Group <confluence-users> is read-only and cannot be updated
        at com.atlassian.crowd.directory.DbCachingRemoteDirectory.addUserToGroup(DbCachingRemoteDirectory.java:461)
        ...

    So it says the group is read-only, and I'm not sure why that is a problem; "confluence-administrators" is read-only too, and it doesn't complain. Some things I don't think are part of the problem: I've synchronized Confluence with LDAP many, many times. I have verified many times that I didn't make a typo while setting up the groups on the LDAP server. LDAP synchronization goes well, with no errors in the logs (only INFO-level log messages). And the user exists; the errors in the logs are different when a user doesn't exist. Any help is most welcome!

  • How to scope access to a service to a set of users, using OpenLDAP and only OUs

    - by JDS
    Okay, here goes. Solving this will solve several problems for me (as I can reapply this knowledge to several extant, similar problems), but luckily I have a very specific, concise problem to describe. Enough preamble.

    Our hosting partner is setting up VPN access for us and is connecting it to our LDAP server. They are using a Cisco VPN; the docs on setting this up are here: http://www.cisco.com/en/US/products/ps6120/products_configuration_example09186a00808c3c45.shtml#maintask1 (specifically, note the screenshot in step 5, under "ASDM"). Now, I do NOT want to provide access to all of our users, only to our IT group, but I do not see a configuration option for LDAP groups in that web reference for the Cisco VPN. We are using:

        OpenLDAP 2.4
        Static groups (i.e. "Group has the following members...")
        A single user OU, "ou=users,dc=mycompany,dc=com"

    Is it possible to provide an alias of some kind in OpenLDAP that creates another OU, say "itusers", and lets me alias the members of that OU somehow? Something like

        "cn=Jeff Silverman,ou=itusers,dc=mycompany,dc=com"

    being an alias for

        "cn=Jeff Silverman,ou=users,dc=mycompany,dc=com"

    and NOT a separate, unique user account. Alternatively, should I just create a separate OU and manage it separately? It is a pain, but only 12-15 users would have to be managed that way, with two separate user accounts each. But I hate this option: messy, unmanageable, unscalable, you know what I mean. I am open to any options. I've searched and read all over, but I can't quite find a directly analogous example. I can't possibly be the only one who's had this problem! Thanks!
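
    For what it's worth, OpenLDAP does support alias entries that behave much as described, with the caveat that the client has to request alias dereferencing during searches, and not every consumer (possibly including the ASA) does. A sketch of such an entry, reusing the names from the question:

        dn: cn=Jeff Silverman,ou=itusers,dc=mycompany,dc=com
        objectClass: alias
        objectClass: extensibleObject
        cn: Jeff Silverman
        aliasedObjectName: cn=Jeff Silverman,ou=users,dc=mycompany,dc=com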

  • Converting Lighttpd config to NginX with php-fpm

    - by Le Dude
    Having a lot of trouble with my NginX configuration since I'm new to NginX; I'd been using Lighttpd for quite some time. Here is the base info.

    New machine: CentOS 6.3 64-bit, NginX 1.2.4-1.el6.ngx, PHP-FPM 5.3.18-1.el6.remi
    Old machine: CentOS 6.2 64-bit, Lighttpd 1.4.25-3.el6

    Original Lighttpd config file:

        #######################################################################
        ## /etc/lighttpd/lighttpd.conf
        ## check /etc/lighttpd/conf.d/*.conf for the configuration of modules.
        #######################################################################

        ## Some variable definitions which will make chrooting easier.
        ## If you add a variable here, add the corresponding variable in the
        ## chroot example as well.
        var.log_root = "/var/log/lighttpd"
        var.server_root = "/var/www"
        var.state_dir = "/var/run"
        var.home_dir = "/var/lib/lighttpd"
        var.conf_dir = "/etc/lighttpd"

        ## Run the server chrooted. This requires root permissions during
        ## startup. If you run chrooted, set the variables to directories
        ## relative to the chroot dir. Example chroot configuration:
        #var.log_root = "/logs"
        #var.server_root = "/"
        #var.state_dir = "/run"
        #var.home_dir = "/lib/lighttpd"
        #var.vhosts_dir = "/vhosts"
        #var.conf_dir = "/etc"
        #server.chroot = "/srv/www"

        ## Base directory for all virtual hosts; used in conf.d/evhost.conf,
        ## conf.d/simple_vhost.conf and vhosts.d/vhosts.template.
        var.vhosts_dir = server_root + "/vhosts"

        ## Cache for mod_compress; used in conf.d/compress.conf.
        var.cache_dir = "/var/cache/lighttpd"

        ## Base directory for sockets; used in conf.d/fastcgi.conf and
        ## conf.d/scgi.conf.
        var.socket_dir = home_dir + "/sockets"

        ## Load the modules.
        include "modules.conf"

        ## Basic configuration
        server.port = 80

        ## Use IPv6?
        #server.use-ipv6 = "enable"

        ## Bind to a specific IP
        #server.bind = "localhost"

        ## Run as a different username/groupname.
        ## This requires root permissions during startup.
        server.username = "lighttpd"
        server.groupname = "lighttpd"

        ## Enable core files.
        #server.core-files = "disable"

        ## Document root
        server.document-root = server_root + "/lighttpd"

        ## The value for the "Server:" response field.
        ## It would be nice to keep it at "lighttpd".
        #server.tag = "lighttpd"

        ## Store a pid file
        server.pid-file = state_dir + "/lighttpd.pid"

        ## Logging options (all logging options can be overwritten per vhost).
        ## Path to the error log file:
        server.errorlog = log_root + "/error.log"

        ## If you want to log to syslog you have to unset the
        ## server.errorlog setting and uncomment the next line.
        #server.errorlog-use-syslog = "enable"

        ## Access log config
        include "conf.d/access_log.conf"

        ## The debug options are moved into their own file;
        ## see conf.d/debug.conf for various options for request debugging.
        ## include "conf.d/debug.conf"

        ## Tuning/performance; corresponding documentation:
        ## http://www.lighttpd.net/documentation/performance.html
        ## Set the event-handler (read the performance section in the manual).
        ## Possible options on Linux are: select, poll, linux-sysepoll.
        ## linux-sysepoll is recommended on kernel 2.6.
        server.event-handler = "linux-sysepoll"

        ## The basic network interface for all platforms at the syscalls
        ## read() and write(). Every modern OS provides its own syscall to
        ## help network servers transfer files as fast as possible.
        ## linux-sendfile is recommended for small files;
        ## writev is recommended for sending many large files.
        server.network-backend = "linux-sendfile"

        ## As lighttpd is a single-threaded server, its main resource limit
        ## is the number of file descriptors, which is set to 1024 by default
        ## (on most systems). If you are running a high-traffic site you
        ## might want to increase this limit by setting server.max-fds.
        ## Changing this setting requires root permissions on startup (see
        ## server.username/server.groupname). By default lighttpd would not
        ## change the operating system default, but setting it to 2048 is a
        ## better default for busy servers. With SELinux enabled this is
        ## denied by default and needs to be allowed by running the
        ## following once: setsebool -P httpd_setrlimit on
        server.max-fds = 2048

        ## stat() call caching. lighttpd can utilize FAM/Gamin to cache stat
        ## calls. Possible values are: disable, simple or fam.
        server.stat-cache-engine = "simple"

        ## Fine tuning for the request handling.
        ## max-connections == max-fds/2 (maybe /3) means the other file
        ## handles are used for fastcgi/files.
        server.max-connections = 1024

        ## How many seconds to keep a keep-alive connection open until we
        ## consider it idle. Default: 5
        #server.max-keep-alive-idle = 5

        ## How many keep-alive requests until closing the connection.
        ## Default: 16
        #server.max-keep-alive-requests = 18

        ## Maximum size of a request in kilobytes. By default it is
        ## unlimited (0). Uploads to your server can't be larger than this.
        #server.max-request-size = 0

        ## Time to read from a socket before we consider it idle. Default: 60
        #server.max-read-idle = 60

        ## Time to write to a socket before we consider it idle. Default: 360
        #server.max-write-idle = 360

        ## Traffic shaping; see /usr/share/doc/lighttpd/traffic-shaping.txt.
        ## Values are in kilobytes per second. Keep in mind that a limit
        ## below 32kB/s might actually limit the traffic to 32kB/s; this is
        ## caused by the size of the TCP send buffer.
        ## Per server:
        #server.kbytes-per-second = 128
        ## Per connection:
        #connection.kbytes-per-second = 32

        ## Filename/file handling
        ## Files to check for if .../ is requested:
        ## index-file.names = ( "index.php", "index.rb", "index.html",
        ##                      "index.htm", "default.htm" )
        index-file.names += ( "index.xhtml", "index.html", "index.htm", "default.htm", "index.php" )

        ## Deny access to these file extensions: ~ is for backup files from
        ## vi, emacs, joe, ...; .inc is often used for code includes which
        ## should in general not be part of the document root.
        url.access-deny = ( "~", ".inc" )

        ## Disable range requests for pdf files; workaround for a bug in the
        ## Acrobat Reader plugin.
        $HTTP["url"] =~ "\.pdf$" {
            server.range-requests = "disable"
        }

        ## URL handling modules (rewrite, redirect)
        #url.rewrite = ( "^/$" => "/server-status" )
        #url.redirect = ( "^/wishlist/(.+)" => "http://www.example.com/$1" )

        ## Both rewrite/redirect support back references to regex
        ## conditionals using %n:
        #$HTTP["host"] =~ "^www\.(.*)" {
        #    url.redirect = ( "^/(.*)" => "http://%1/$1" )
        #}

        ## Extensions that should not be handled via static-file transfer
        ## (.php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi):
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi", ".scgi" )

        ## Error handler for status 404:
        #server.error-handler-404 = "/error-handler.html"
        #server.error-handler-404 = "/error-handler.php"

        ## Format: <errorfile-prefix><status-code>.html
        ## -> ..../status-404.html for 'File not found'
        #server.errorfile-prefix = "/srv/www/htdocs/errors/status-"

        ## Mimetype mapping
        include "conf.d/mime.conf"

        ## Directory listing configuration
        include "conf.d/dirlisting.conf"

        ## Should lighttpd follow symlinks?
        server.follow-symlink = "enable"

        ## Force all filenames to be lowercase?
        #server.force-lowercase-filenames = "disable"

        ## Defaults to /var/tmp as we assume it is a local hard disk.
        server.upload-dirs = ( "/var/tmp" )

        ## SSL support. To enable SSL for the whole server you have to
        ## provide a valid certificate and enable the SSL engine:
        ##   ssl.engine = "enable"
        ##   ssl.pemfile = "/path/to/server.pem"
        ## The HTTPS protocol does not allow you to use name-based virtual
        ## hosting with SSL. If you want to run multiple SSL servers with
        ## one lighttpd instance you must use IP-based virtual hosting:
        ##   $SERVER["socket"] == "10.0.0.1:443" {
        ##       ssl.engine = "enable"
        ##       ssl.pemfile = "/etc/ssl/private/www.example.com.pem"
        ##       server.name = "www.example.com"
        ##       server.document-root = "/srv/www/vhosts/example.com/www/"
        ##   }
        ## If you have a .crt and a .key file, cat them together into a
        ## single PEM file:
        ##   $ cat /etc/ssl/private/lighttpd.key /etc/ssl/certs/lighttpd.crt \
        ##       > /etc/ssl/private/lighttpd.pem
        #ssl.pemfile = "/etc/ssl/private/lighttpd.pem"

        ## Optionally pass the CA certificate here.
        #ssl.ca-file = ""

        ## Custom includes like vhosts.
        #include "conf.d/config.conf"
        #include_shell "cat /etc/lighttpd/vhosts.d/*.conf"

        ### Custom added by me
        #url.rewrite-once = ( ".*\.(js|ico|gif|jpg|png|css|jar|class)$" => "$0",
        #                     "" => "/index.php" )
        url.rewrite-once = ( ".*\?(.*)$" => "/index.php?$1",
                             "^/js/.*$" => "$0",
                             "^.*\.(js|ico|gif|jpg|png|css|swf|jar|class)$" => "$0",
                             "" => "/index.php" )
        # expire.url = ( "" => "access 1 days" )
        include "myvhost-vhosts.conf"

    Here is my vhost file for Lighttpd:

        $HTTP["host"] =~ "192.168.8.35$" {
            server.document-root = "/var/www/lighttpd/qc41022012/public"
            server.errorlog = "/var/log/lighttpd/error.log"
            accesslog.filename = "/var/log/lighttpd/access.log"
            server.error-handler-404 = "/e404.php"
        }

    And here is my nginx.conf file:

        user nginx;
        worker_processes 5;
        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/testsite/logs/access.log main;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            #gzip on;
            # include /etc/nginx/conf.d/*.conf;
            ## I added this ##
            include /etc/nginx/sites-available/*;
        }

    Here is my NginX vhost file:

        server {
            server_name 192.168.8.91;
            access_log /var/log/nginx/myapps/logs/access.log;
            error_log /var/log/nginx/myapps/logs/error.log;
            root /var/www/html/myapps/public;

            location / {
                index index.html index.htm index.php;
            }

            location = /favicon.ico {
                return 204;
                access_log off;
                log_not_found off;
            }

            # location ~ \.php$ {
            #     try_files $uri /index.php;
            #     include /etc/nginx/fastcgi_params;
            #     fastcgi_pass 127.0.0.1:9000;
            #     fastcgi_index index.php;
            #     fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            #     fastcgi_param SCRIPT_NAME $fastcgi_script_name;
            location ~ \.php.*$ {
                rewrite ^(.*.php)/ $1 last;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                # fastcgi_intercept_errors on;
                # fastcgi_param SCRIPT_FILENAME $document_root/index.php;
                # fastcgi_param PATH_INFO $uri;
                # fastcgi_pass 127.0.0.1:9000;
                # include fastcgi_params;
            }
        }

    We have a custom app that works great with Lighttpd; we went through some headaches too when we were first figuring out how to make it work there. This is the line that makes it work in Lighttpd:

        url.rewrite-once = ( ".*\?(.*)$" => "/index.php?$1",
                             "^/js/.*$" => "$0",
                             "^.*\.(js|ico|gif|jpg|png|css|swf|jar|class)$" => "$0",
                             "" => "/index.php" )

    but I couldn't figure out how to make it work in NginX. The web server runs just fine when we use a phpinfo.php test file, but as soon as I point it at my app, nothing comes up. I checked the error.log file and there are no errors. Very mind-boggling. I spent over a week trying to figure it out with no luck. Please help?
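
    The usual nginx idiom for a catch-all front-controller rule like that rewrite-once block is try_files: serve the file if it exists on disk (which covers the js/images/css branches), otherwise hand the request to /index.php with the original query string. A sketch to put in the server block above, in place of the bare location /, assuming the app reads its route from the query string the way the Lighttpd rule implies:

        location / {
            try_files $uri $uri/ /index.php?$args;
        }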

  • SAN with iSCSI-Target Performance Horrendous

    - by Justin
    We have a poor man's SAN: a 1U Ubuntu server running iSCSI-Target with two 300GB drives in RAID-0, used for block-level storage for virtual machines. The hypervisor is connected to the SAN via gigabit on a dedicated VLAN and interfaces. We have only a single virtual machine set up and are doing some benchmarks. If we run hdparm -t /dev/sda1 from the virtual machine, we get 'ok' performance of 75MB/s from the virtual machine to the SAN. Then we basically compile a package with ./configure and make. Things start okay, but then all of a sudden the load average on the SAN grows to 7+ and things slow down to a crawl. When we SSH into the SAN and run top, sure enough the load is 7+, but CPU usage is basically nothing, and the server has 1.5GB of memory available. When we kill the compile on the virtual machine, the load on the SAN slowly goes back to sub-1 figures. What in the world is causing this? How can we diagnose this further? Here are two screenshots from the SAN during high load: 1) output of iotop on the SAN; 2) output of top on the SAN.
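
    Worth keeping in mind while diagnosing: on Linux, the load average counts processes in uninterruptible (I/O) sleep as well as runnable ones, so a load of 7+ with an idle CPU usually means everything is queued on the disks rather than the processor. Two hedged starting points to run on the SAN during the stall:

        iostat -x 1    # watch await and %util per device; saturated RAID-0 members show up here
        vmstat 1       # the 'b' column counts processes blocked on I/O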

  • Multiple contacts with shared information

    - by Keith Thompson
    Background: I currently have several hundred contacts, synchronized between a Microsoft Exchange server and several mobile devices. I also save exported copies of the contacts in .vcf format.

    Is there a good way (application, file format, whatever) to maintain contacts with shared information? A very common scenario is that I have contacts for two or more people who live in the same house, for example:

        John Doe
        123 Main Street, Anytown USA
        Home: 555-555-1111
        Work: 555-555-2222
        Mobile: 555-555-3333
        E-mail: [email protected]

        Jane Doe
        123 Main Street, Anytown USA
        Home: 555-555-1111
        Work: 555-555-4444
        Mobile: 555-555-5555
        E-mail: [email protected]

    As you can see, both contacts have the same home address and phone number, but distinct names and work and mobile phone numbers. (Other information might also be either shared or distinct.) The applications and file formats I'm familiar with don't seem to have a good way to deal with this. If I use a single "John & Jane Doe" contact for both, it's difficult to distinguish the distinct information (if I want to call Jane's mobile phone rather than John's). If I use a separate contact for each, I have to remember to update both of them (or all of them, for N > 2) when they move or change their home phone number.

    An ideal solution would let me create a record containing information for their household, and have each of their contact records contain a reference to the household record, so that when I view John's contact record I see both shared and distinct information. Is there anything out there that has good support for this kind of thing? (I would think there would be, since it's a very common scenario.) (I suppose I could roll my own system that generates merged .vcf files from some extended format, but that wouldn't play well with synchronizing across multiple devices.)
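
    For reference, the closest thing in a standard file format is vCard 4.0 (RFC 6350), which defines KIND:group and MEMBER for exactly this household-record idea. A sketch with hypothetical UUIDs, the big caveat being that Exchange and most phones still sync vCard 3.0 and would ignore these properties:

        BEGIN:VCARD
        VERSION:4.0
        KIND:group
        FN:Doe household
        MEMBER:urn:uuid:11111111-2222-3333-4444-555555555555
        MEMBER:urn:uuid:66666666-7777-8888-9999-000000000000
        ADR;TYPE=home:;;123 Main Street;Anytown;;;USA
        TEL;TYPE=home:555-555-1111
        END:VCARD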

  • Browsing Audiobooks on an iPod Nano

    - by Electrons_Ahoy
    The situation: I've got a stack of audiobooks in MP3 format in my iTunes library, and both an iPhone and an iPod Nano. After this question, I've changed the Media Kind for the audiobook MP3s from Music to Audiobook. This has been, overall, spectacular: now I can resume where I was, they show up under Audiobooks, etc. On the iPhone it's also super convenient, since the interface shows what used to be an album with multiple songs as a single book with multiple chapters, and going into "Audiobooks" presents me with a list of books, not tracks.

    The Nano, on the other hand, is a little strange. After changing the media type and re-syncing the iPod, the files in question are now listed under Audiobooks rather than Music, and the extra audiobook features are present (resume playback and so on), but the Audiobooks menu just lists all the MP3 tracks on the iPod in alphabetical order, ignoring whichever book/album they belong to, and doesn't seem to let me browse them any other way. This is, clearly, a little sub-optimal.

    Did I screw something up? How do I get the Nano to treat the files the way the iPhone and iTunes do, as books with chapters? Is there a step I missed somewhere? Do I need to reformat the iPod? Is this even possible? (Footnote: shameless bump, since this just scored me a tumbleweed.)

  • Last (I think and hope) problems configuring an SSL certificate with Apache and VirtualHosts

    - by user65567
    Finally I set up apache2 to use a single certificate for all subdomains. [...]

        # Go ahead and accept connections for these vhosts
        # from non-SNI clients
        SSLStrictSNIVHostCheck off

        # Apache setup which will listen for and accept SSL connections on port 443.
        Listen 443

        # Listen for virtual host requests on all IP addresses
        NameVirtualHost *:443

        # Because this virtual host is defined first, it will
        # be used as the default if the hostname is not received
        # in the SSL handshake, e.g. if the browser doesn't support
        # SNI.
        <VirtualHost *:443>
            ServerName domain.localhost
            DocumentRoot "/Users/<my_user_name>/Sites/domain/public"
            <Directory "/Users/<my_user_name>/Sites/domain/public">
                Order allow,deny
                Allow from all
            </Directory>
            # SSL Configuration
            SSLEngine on
            ...
        </VirtualHost>

        <VirtualHost *:443>
            ServerName subdomain1.domain.localhost
            DocumentRoot "/Users/<my_user_name>/Sites/subdomain1/public"
            <Directory "/Users/<my_user_name>/Sites/subdomain1/public">
                Order allow,deny
                Allow from all
            </Directory>
            # SSL Configuration
            SSLEngine on
            ...
        </VirtualHost>

        <VirtualHost *:443>
            ServerName subdomain2.domain.localhost
            DocumentRoot "/Users/<my_user_name>/Sites/subdomain2/public"
            <Directory "/Users/<my_user_name>/Sites/subdomain2/public">
                Order allow,deny
                Allow from all
            </Directory>
            # SSL Configuration
            SSLEngine on
            ...
        </VirtualHost>

    So, for example, I can correctly access https://subdomain1.domain.localhost, https://subdomain2.domain.localhost, and so on. The problem is that I now cannot access http://subdomain1.domain.localhost, http://subdomain2.domain.localhost, etc. Since I'm on Mac OS, when accessing the http version I get a default page reading "Your website." instead of an error. Why does this happen?
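
    The config above only defines *:443 vhosts, so plain-HTTP requests on port 80 never match any of them and fall through to the OS X default document root (the stock "Your website." page). A hedged fix is a matching set of name-based vhosts for port 80, for example:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName subdomain1.domain.localhost
            DocumentRoot "/Users/<my_user_name>/Sites/subdomain1/public"
        </VirtualHost>
        # ...one per subdomain, or a single vhost with a Redirect to the https:// URL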

  • mdadm superblock hiding/shadowing partition

    - by Kjell Andreassen
    Short version: is it safe to do mdadm --zero-superblock /dev/sdd on a disk with a partition (/dev/sdd1), filesystem and data? Will the partition be mountable and the data still be there?

    Longer version: I used to have a RAID-6 array but decided to dismantle it. The disks from the array are now used as non-RAID disks, and the superblocks were cleared:

        sudo mdadm --zero-superblock /dev/sdd

    The disks were repartitioned with fdisk and filesystems created with mkfs.ext4. All disks were mounted and everything worked fine. Today, a couple of weeks later, one of the disks is failing to be recognized when trying to mount it, or rather the single partition on it:

        sudo mount /dev/sdd1 /mnt/tmp
        mount: special device /dev/sdd1 does not exist

    fdisk claims there to be a partition on it:

        sudo fdisk -l /dev/sdd

        Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xb06f6341

        Device Boot   Start   End      Blocks      Id  System
        /dev/sdd1     1       243201   1953512001  83  Linux

    Of course mount is right: the device /dev/sdd1 is not there. I'm guessing udev did not create it because of the mdadm data still on the disk:

        sudo mdadm --examine /dev/sdd
        /dev/sdd:
            Magic : a92b4efc
            Version : 1.2
            Feature Map : 0x0
            Array UUID : b164e513:c0584be1:3cc53326:48691084
            Name : pringle:0 (local to host pringle)
            Creation Time : Sat Jun 16 21:37:14 2012
            Raid Level : raid6
            Raid Devices : 6
            Avail Dev Size : 3907027120 (1863.02 GiB 2000.40 GB)
            Array Size : 15628107776 (7452.06 GiB 8001.59 GB)
            Used Dev Size : 3907026944 (1863.02 GiB 2000.40 GB)
            Data Offset : 2048 sectors
            Super Offset : 8 sectors
            State : clean
            Device UUID : 3ccaeb5b:843531e4:87bf1224:382c16e2
            Update Time : Sun Aug 12 22:20:39 2012
            Checksum : 4c329db0 - correct
            Events : 1238535
            Layout : left-symmetric
            Chunk Size : 512K
            Device Role : Active device 3
            Array State : AA.AAA ('A' == active, '.' == missing)

    My mdadm --zero-superblock apparently didn't work. Can I safely try it again without losing data? If not, are there any suggestions on what to do? Not starting mdadm at all on boot might be a (somewhat unsatisfactory) solution.
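
    A hedged reading of the --examine output above: the version-1.2 superblock sits at an 8-sector offset from the start of the whole disk ("Super Offset : 8 sectors"), which lands in the gap before /dev/sdd1 (a DOS-style partition starting at cylinder 1 begins at sector 63), so zeroing it again should not touch the filesystem inside the partition. With backups in place regardless, one possible sequence:

        sudo mdadm --examine /dev/sdd1          # confirm the partition itself carries no superblock
        sudo mdadm --zero-superblock /dev/sdd   # clear the stale whole-disk superblock
        sudo partprobe /dev/sdd                 # re-read the partition table so /dev/sdd1 reappears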

  • Asus P5Q doesn't POST at all

    - by Sebi
    I had a very strange incident this evening. I was watching TV on the computer when suddenly the sound started to stutter. A few seconds later, the screen went black. At first I thought, okay, some error, don't worry. Since the computer didn't react to any input from the mouse or keyboard, I shut it down by pressing the power button. I tried to reboot, but the monitor stayed black. All the fans are running, the motherboard doesn't make a beep, and the LEDs are all green as they should be. The HDD is running too, and the monitor is on. But nothing happens, not even a single message on the screen; just black. Here are the things I have already tried, with no success:

        Tried to boot with only one of the two RAM modules (each tested alone)
        Turned off all power (pulled the plug) for 10 minutes
        Tested the graphics card in another PC, where it worked
        Tested the RAM in another PC, where it worked

    Any hints on what to do next?

  • How come the ls command prints in multiple columns on a tty but only one column everywhere else?

    - by David Lou
    Even after using Unix-like OSes for a couple of years, this behaviour still baffles me. When I use the ls command in a directory that has lots of files, the output is usually nicely formatted into multiple columns. Here's an example:

        $ ls
        a.txt  C.txt  f.txt  H.txt  k.txt  M.txt  p.txt  R.txt  u.txt  W.txt  z.txt
        A.txt  d.txt  F.txt  i.txt  K.txt  n.txt  P.txt  s.txt  U.txt  x.txt  Z.txt
        b.txt  D.txt  g.txt  I.txt  l.txt  N.txt  q.txt  S.txt  v.txt  X.txt
        B.txt  e.txt  G.txt  j.txt  L.txt  o.txt  Q.txt  t.txt  V.txt  y.txt
        c.txt  E.txt  h.txt  J.txt  m.txt  O.txt  r.txt  T.txt  w.txt  Y.txt

    However, if I try to redirect the output to a file, or pipe it to another command, only a single column appears in the output. Using the same example directory as above, here's what I get when I pipe ls to wc:

        $ ls | wc
             52      52     312

    In other words, wc thinks there are 52 lines, even though the output on the terminal has only 5. I haven't observed this behaviour in any other command. Would you like to explain this to me?
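
    What's happening is that ls checks whether its stdout is a terminal (via isatty) and picks its output format accordingly: columns for humans, one name per line for programs. Both behaviours can be forced, which makes the mechanism easy to see:

        $ ls -1          # one name per line, even on a terminal
        $ ls -C | head   # column layout, even through a pipe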

  • How do you guys handle custom yum repositories?

    - by luckytaxi
    I have a bunch of tools (nagios, munin, puppet, etc.) that get installed on all my servers, and I'm in the process of building a local yum repository. I know most folks just dump all the rpms into a single folder (broken down into the correct path) and then run createrepo inside the directory. However, what happens when you have to update the rpms? I ask because I was going to give each piece of software its own folder.

    Example one, all packages inside one folder (custom_software):

        /admin/software/custom_software/5.4/i386
        /admin/software/custom_software/5.4/x86_64
        /admin/software/custom_software/4.6/i386
        /admin/software/custom_software/4.6/x86_64

    What I'm thinking of:

        /admin/software/custom_software/nagios/5.4/i386
        /admin/software/custom_software/nagios/5.4/x86_64
        /admin/software/custom_software/nagios/4.6/i386
        /admin/software/custom_software/nagios/4.6/x86_64
        /admin/software/custom_software/puppet/5.4/i386
        /admin/software/custom_software/puppet/5.4/x86_64
        /admin/software/custom_software/puppet/4.6/i386
        /admin/software/custom_software/puppet/4.6/x86_64

    This way, if I have to update to the latest version of puppet, I can safely manage the files accordingly. I wouldn't know which rpms belong to which software if I threw them all into one big folder. Makes sense?
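
    Either layout works as far as yum is concerned; what matters is re-running createrepo on a directory whenever its rpms change, since each directory a .repo file points at is its own repository. A sketch, using the hypothetical nagios path from above:

        createrepo /admin/software/custom_software/nagios/5.4/x86_64            # first time
        createrepo --update /admin/software/custom_software/nagios/5.4/x86_64   # after adding or replacing rpms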

  • Can't access IIS 7 server URL from the same IIS 7 server.

    - by Kevin Raffay
    We have an intranet site, xxx.yyy.com, that users access by entering http://xxx.yyy.com. Our problems started when we migrated to IIS 7 running on a new 2003 server. We got rid of our single-sign-on code and implemented a security model where we capture a user's domain credentials, which we then authenticate against a DB. In order to get the domain credentials passed to our ASP.NET app, we have the following settings:

        Anonymous Authentication: Disabled
        ASP.NET Impersonation: Enabled
        Basic/Digest/Forms Authentication: Disabled
        Windows Authentication: Enabled

    We allow "*" and deny "?" in the web.config. Browsing http://xxx.yyy.com from any client PC results in a domain login prompt, and if you enter a proper user/pwd, you can get in. However, browsing http://xxx.yyy.com while remoted into the server results in 3 domain login prompts and eventually a 401 error (unauthorized). We have traced this behavior to problems with our web site, where we have pages doing "screen scraping" using HttpRequest to call a URL on the same server. When doing an HttpRequest from any other client, using a test harness that passes authorized credentials, all is good. So internal HttpRequest calls on the server fail, just like attempts to browse that server's URL from within a remote session. Why would a request to http://xxx.yyy.com on server xxx.yyy.com fail authentication?
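
    These symptoms (Windows auth works from every machine except the server itself, three prompts then 401) are the classic signature of the Windows loopback check, which rejects NTLM authentication to the local machine under a name that isn't its computer name. A hedged test is to disable the check globally, or better, whitelist just the site's host name (both methods are from KB896861; restart the server or the IISAdmin service afterwards):

        reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1 /f
        reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0 /v BackConnectionHostNames /t REG_MULTI_SZ /d xxx.yyy.com /f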

  • Windows NT from VMware to KVM

    - by Luca Rossi
    I'm trying to convert a couple of old Windows NT virtual servers from VMware to KVM. I've tried almost all the guidelines and how-tos I found around the web, but with no luck. I have the VMware virtual disk Dlc1.vmdk, a partitioned image. I converted the vmdk into a qcow2 image with the qemu utility and tried to use it with KVM:

        kvm -hda test.qemu -vnc :1 -m 750

    but I receive "error loading operating system". I also tried with raw partitions that I can mount through losetup and kpartx, but nothing changed. I also tried to create a brand new image file:

        qemu-img create -f qcow2 test.qcow2 2G

    I partitioned the new image file and copied the original partition 1 to the new partition 1 with dd:

        dd if=/dev/mapper/loop1p1 of=/dev/mapper/loop0p1 bs=128M

    No luck again. I also tried with a single unpartitioned file:

        qemu-img create -f qcow2 test.qcow2 2G

    copying partition 1 directly onto the new image file:

        dd if=/dev/mapper/loop0p1 of=test.img bs=128M

    but when booting, I receive a black screen and the virtual machine hangs. The bootloader is loaded successfully, because I also tried with a GRUB live ISO and I receive the same screens and errors. Note that GRUB sees the Windows setup and gives me the boot choice. I suspect the problem is that the VMware machine is probably a SCSI guest, and in CentOS 6 (my system) SCSI emulation is no longer supported. But in that case, what do I change in Windows? I'm not so skilled with MS systems. Thank you for the help, Luca Rossi
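
    For what it's worth, copying a single partition into a bare file loses the MBR and partition table, which matches the "error loading operating system" and black-screen symptoms. A common path that avoids the by-hand partition copying is converting the whole disk in one step and attaching it explicitly as an IDE drive, which NT can boot without extra drivers (the output filename and -m 512 are arbitrary choices):

        qemu-img convert -f vmdk -O qcow2 Dlc1.vmdk nt.qcow2
        kvm -drive file=nt.qcow2,if=ide,format=qcow2 -m 512 -vnc :1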

  • IPCop Packet Mangling

    - by Zenham
    I've found myself in a pickle replacing an old firewall for a client this afternoon. I'm configuring their new IPCop firewall (1.4.21); the Zerina OpenVPN addon is installed. What I need to do: there are three network interfaces, currently set up as red (WAN), green (LAN, 192.168.20.0/24) and orange (remote network, 10.1.20.0/24). The orange interface is a direct fiber link to another organization.

    Simple description: traffic and networks appear to be properly configured at this point, but I have many (150+) specific IPs on the LAN which, when accessing the resources on the 10.1.20.x network, need to be mangled so they appear to be coming from the 10.1.20.0/24 network (with return traffic properly delivered). The routing on the far side was configured earlier and should be fine, but I need to redirect any packets destined for those IPs to their proper destination. The addressing is fixed and predictable (i.e. 192.168.20.125 maps to 10.1.20.125). I need to insert whatever rules I have into the IPCop ruleset through /etc/rc.local; I'm just not sure how I should structure this.

    There are CUSTOMOUTPUT and CUSTOMINPUT targets, both of which currently consist of the single rule redirecting packets to the OVPNOUTPUT/OVPNINPUT targets, so I'm guessing I should insert a rule matching outbound packets destined for the 10.1.20.x network and redirecting to a new target (maybe called TO-ORANGE), and a rule at the top of CUSTOMINPUT which redirects to a FROM-ORANGE target. Under those targets, I would have rules which do the IP matching and mangling. Am I approaching this right? If so, I'm not very familiar with mangle and would appreciate seeing examples of how to write that source-IP rewrite. If not, how would you suggest doing this? TIA!

    Edit: I notice additionally that the nat table has CUSTOMPREROUTING and CUSTOMPOSTROUTING targets, so I guess I could alternatively post the rules in there.
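
    A hedged sketch: a fixed 1:1 mapping of one /24 onto another is what the iptables NETMAP target does, and it belongs in the nat table, so the CUSTOMPREROUTING/CUSTOMPOSTROUTING chains mentioned in the edit are the right hooks rather than the filter-table CUSTOMINPUT/CUSTOMOUTPUT ones. The orange interface is assumed to be eth2 here, and conntrack takes care of reply traffic for connections initiated from green:

        # LAN-initiated traffic leaves orange with its source rewritten 192.168.20.x -> 10.1.20.x
        iptables -t nat -A CUSTOMPOSTROUTING -s 192.168.20.0/24 -o eth2 -j NETMAP --to 10.1.20.0/24
        # connections initiated from orange toward the mapped addresses, translated back 1:1
        iptables -t nat -A CUSTOMPREROUTING -i eth2 -d 10.1.20.0/24 -j NETMAP --to 192.168.20.0/24

    One caveat: if the far side ARPs for the mapped addresses instead of routing them toward IPCop, proxy-ARP entries for those 150+ addresses would also be needed on the orange interface.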

  • How to generate customized sudoers files in puppet depending on the environment they're deployed to?

    - by gozu
    The sysadmins are present in the sudoers files of all environments, but other sudoers are not, and different environments all have slightly different sudoers files. Most of the time 90% of the users are the same and 10% vary, so we cannot have only one sudoers file for everything. Right now we are using puppet with 10 different files, with names like sudoers.production1, sudoers.production2, sudoers.production3, sudoers.testing1, sudoers.staging1 and so forth. Puppet then picks the file to deploy based on the server's $domain (e.g. dbserver.staging1.acme.com) or $hardwaremodel. It works fine, but it's a nightmare to maintain so many files.

    I'd like to autogenerate the sudoers files based on the server's domain and have only one big file with all the sudoers permissions for all users and all environments. Something that looks like:

        User_Alias ADMINS = abe, bob, carol, dave
        case $domain {
            "staging1.acme.com" {
                # add dev1, dev2, tester1, tester2 to sudoers file
            }
            "testing2.acme.com" {
                # add tester1, tester3, tester4 to sudoers file
            }
        }

    What's the best way to go about this? Suggestions for alternatives are welcome; I'd appreciate any tips.

    Update 1: For security reasons, we'd rather not concatenate a bunch of files from a folder located on a puppet client, in case someone puts a file in there (maliciously or not) and either breaks the combined file or inserts something into it. Most importantly, for usability, we'd like to keep the number of sudoers-related files (fragment or complete) on the puppet server to either 3 (prod/stage/test) or preferably 1 file. This file would (somehow) generate sudoers files on the puppet server and send one customized file to each puppet client. The purpose of this is that searching for a username in a single file and removing it is quicker than doing it across 11 files. When adding a user to a bunch of environments it won't be as quick, but only one file would need to be opened and looked at, greatly reducing the chances of an omission. Our sudo version is 1.6.9p8, so we can't use a /etc/sudoers.d folder, only a single sudoers file.
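
    One way to get a single source-of-truth file with per-environment output is an ERB template rendered on the puppet master, branching on the same $domain fact already in use. A sketch (group names and grants are hypothetical, and the fact-access style matches older Puppet releases):

        # modules/sudo/templates/sudoers.erb
        User_Alias ADMINS = abe, bob, carol, dave
        ADMINS ALL=(ALL) ALL
        <% if domain == "staging1.acme.com" -%>
        User_Alias STAGING1 = dev1, dev2, tester1, tester2
        STAGING1 ALL=(ALL) ALL
        <% end -%>
        <% if domain == "testing2.acme.com" -%>
        User_Alias TESTING2 = tester1, tester3, tester4
        TESTING2 ALL=(ALL) ALL
        <% end -%>

        # manifest
        file { "/etc/sudoers":
            content => template("sudo/sudoers.erb"),
            owner   => "root",
            group   => "root",
            mode    => "0440",
        }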

  • HAProxy + NodeJS gets stuck on TCP Retransmission

    - by sled
    I have a HAProxy + NodeJS + Rails setup; I use the NodeJS server for file-upload purposes. The problem I'm facing is that if I'm uploading through HAProxy to NodeJS and a "TCP (Fast) Retransmission" occurs because of a lost packet, the TX rate on the client drops to zero for about 5-10 seconds and gets flooded with TCP retransmissions. This does not occur if I upload to NodeJS directly (the TCP retransmission happens too, but it doesn't get stuck with dozens of retransmission attempts). My test setup is a simple HTML4 form (method POST) with a single file input field; the NodeJS server only reads the incoming data and does nothing else. I've tested this on multiple machines, networks and browsers; it's always the same issue. Here's a TCP traffic dump from the client while uploading a file:

        .....
        TCP 1506 [TCP segment of a reassembled PDU]
        >> everything is uploading fine until:
        TCP 1506 [TCP Fast Retransmission] [TCP segment of a reassembled PDU]
        TCP 66 [TCP Dup ACK 7392#1] 63265 > http [ACK] Seq=4844161 Ack=1 Win=524280 Len=0 TSval=657047088 TSecr=79373730
        TCP 1506 [TCP Retransmission] [TCP segment of a reassembled PDU]
        >> the last message is repeated about 50 times for >>5-10 secs<< (TX drops to 0 on client, RX drops to 0 on server)
        TCP 1506 [TCP segment of a reassembled PDU]
        >> upload continues until the next TCP Fast Retransmission and the same thing happens again

    The haproxy.conf (haproxy v1.4.18 stable) is the following:

        global
            log 127.0.0.1 local1 debug
            maxconn 4096 # Total Max Connections. This is dependent on ulimit
            nbproc 2

        defaults
            log global
            mode http
            option httplog
            option tcplog

        frontend http-in
            bind *:80
            timeout client 6000
            acl is_websocket path_beg /node/
            use_backend node_backend if is_websocket
            default_backend app_backend

        # Rails Server (via nginx+passenger)
        backend app_backend
            option httpclose
            option forwardfor
            timeout server 30000
            timeout connect 4000
            server app1 127.0.0.1:3000

        # node.js
        backend node_backend
            reqrep ^([^\ ]*)\ /node/(.*) \1\ /\2
            option httpclose
            option forwardfor
            timeout queue 5000
            timeout server 6000
            timeout connect 5000
            server node1 127.0.0.1:3200 weight 1 maxconn 4096

    Thanks for reading! :)
    Simon

  • Apache Caching and Expires configuration

    - by mcondiff
    I'm looking for the best possible caching/expires configuration for my specific situation. I realize that some sites have advocated turning ETags off:

        Header unset ETag
        FileETag None

    I know that I should use either Expires or Cache-Control, and in addition, that I should use either Last-Modified or ETags (per the YSlow docs). I inherited a client's server that uses the following in .htaccess:

        <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf|xml|txt|html|htm)$">
            Header set Cache-Control "max-age=172800, public, must-revalidate"
        </FilesMatch>

    With this server I am not going to be able to rely on staff to rename images, CSS and JS in web applications, so I do not want to set the expires far in the future without knowing (with good certainty) that most or all browsers will check to see if content has changed. What I do not want is someone calling me to say the website is broken because they replaced an image and it's not showing up. But I do want to take as much advantage of caching and expires as I can while still ensuring that almost all browsers will check with the server to see if components have changed.

    I have access to both the .htaccess and the Apache .conf file, and it is a single server; the content is not deployed on multiple servers. What would be the best .htaccess or .conf configuration for me to achieve my goals for this client's server? Thanks for your help.
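
    Given those constraints, one hedged approach is to keep short max-age values with must-revalidate (exactly what the inherited rule does) but tier them by how often each content type is replaced, rather than chasing far-future expiries; for example:

        <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|swf)$">
            Header set Cache-Control "max-age=604800, public, must-revalidate"
        </FilesMatch>
        <FilesMatch "\.(js|css)$">
            Header set Cache-Control "max-age=172800, public, must-revalidate"
        </FilesMatch>
        <FilesMatch "\.(html|htm|xml|txt)$">
            Header set Cache-Control "max-age=0, private, must-revalidate"
        </FilesMatch>

    With must-revalidate set and Last-Modified/ETag left on, browsers send conditional requests once the max-age window lapses and get a cheap 304 if nothing changed.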

  • Kerberos & single sign-on for a website

    - by Dylan Klomparens
    I have a website running on a Linux computer using Apache. I've employed mod_auth_kerb for single-sign-on Kerberos authentication against a Windows Active Directory server. In order for Kerberos to work correctly, I've created a service account in Active Directory called dummy, and generated a keytab for the Linux web server using ktpass.exe on the Windows AD server with this command:

        ktpass /out C:\krb5.keytab /princ HTTP/[email protected] /mapuser [email protected] /crypto RC4-HMAC-NT /ptype KRB5_NT_PRINCIPAL /pass xxxxxxxxx

    I can successfully get a ticket from the Linux web server using this command:

        kinit -k -t /path/to/keytab HTTP/[email protected]

    ...and view the ticket with klist. I have also configured my web server with these Kerberos properties:

        <Directory />
            AuthType Kerberos
            AuthName "Example.com Kerberos domain"
            KrbMethodK5Passwd Off
            KrbAuthRealms EXAMPLE.COM
            KrbServiceName HTTP/[email protected]
            Krb5KeyTab /path/to/keytab
            Require valid-user
            SSLRequireSSL
            <Files wsgi.py>
                Order deny,allow
                Allow from all
            </Files>
        </Directory>

    However, when I attempt to log in to the website (from another desktop, as username 'Jeff'), my Kerberos credentials are not automatically accepted by the web server. It should grant me access immediately, but it does not. The only information I get from the mod_auth_kerb logs is:

        kerb_authenticate_user entered with user (NULL) and auth_type Kerberos

    However, more information is revealed when I change the mod_auth_kerb setting KrbMethodK5Passwd to On:

        [Fri Oct 18 17:26:44 2013] [debug] src/mod_auth_kerb.c(1939): [client xxx.xxx.xxx.xxx] kerb_authenticate_user entered with user (NULL) and auth_type Kerberos
        [Fri Oct 18 17:26:44 2013] [debug] src/mod_auth_kerb.c(1031): [client xxx.xxx.xxx.xxx] Using HTTP/[email protected] as server principal for password verification
        [Fri Oct 18 17:26:44 2013] [debug] src/mod_auth_kerb.c(735): [client xxx.xxx.xxx.xxx] Trying to get TGT for user [email protected]
        [Fri Oct 18 17:26:44 2013] [debug] src/mod_auth_kerb.c(645): [client xxx.xxx.xxx.xxx] Trying to verify authenticity of KDC using principal HTTP/[email protected]
        [Fri Oct 18 17:26:44 2013] [debug] src/mod_auth_kerb.c(1110): [client xxx.xxx.xxx.xxx] kerb_authenticate_user_krb5pwd ret=0 [email protected] authtype=Basic

    What am I missing? I've studied a lot of online tutorials and cannot find a reason why the Kerberos credentials are not allowing access.
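
    One detail worth checking from the client side: "user (NULL)" with no Negotiate exchange in the logs usually means the browser never offered a Kerberos token at all, and browsers only do SPNEGO for sites they trust (IE/Chrome: the site must be in the Local Intranet zone; Firefox: network.negotiate-auth.trusted-uris). A hedged way to take the browser out of the equation, from a client machine with Kerberos tools installed:

        kinit [email protected]                       # obtain a TGT as the test user
        curl --negotiate -u : -v https://www.example.com/     # "-u :" tells curl to use the ticket

    If curl gets a 200 where the browser fails, the server side is fine and the browser trust settings are the problem.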
