Search Results

Search found 29159 results on 1167 pages for 'xml configuration'.


  • Domain names timing out after VPS IP change

    - by Fourjays
    I rent a CentOS 5 VPS from a UK-based provider, with DirectAdmin also installed. On Thursday night, they carried out planned maintenance to change the two IPs I had been assigned to two new ones. On Friday, after the change had taken place, I updated my domain name records to reflect the IP change. Since then, all of the domains pointing to the VPS have been timing out. Additionally, DirectAdmin was not responding, but that was resolved by running the ipswap scripts found in the DirectAdmin knowledgebase. It did not fix my domains, though.

    I have contacted the VPS provider but have been waiting for a response for some time now. I have checked again and again, and all the IPs referenced in DirectAdmin are correct. If I go to the server IP in my browser it responds with "Apache is functioning normally." Email accounts on the server are also functioning correctly. But if I access a domain itself, it times out. Running a ping and a DNS lookup, I can confirm the nameserver IPs are correct. If I run a traceroute it reaches an IP that is similar to my VPS IPs (the last two blocks are different) before timing out (it never shows my server IP).

    I am relatively new to VPS management, so I don't have a vast wealth of experience with troubleshooting problems on them. I have checked all of the httpd configuration files, which don't seem to have any IP references in them at all. Looking in the Apache error logs, what errors there are do not coincide with times I have tried to access the site. Is this issue at my provider's end? Is there anything else I can check or test to rule out post-IP-change problems with my server configuration? It was all running fine prior to the IP change.
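
    A quick way to separate DNS problems from routing problems is to query the authoritative nameservers directly and then trace the path to the new address. A minimal check, assuming the domain is example.com and the new VPS IP is 203.0.113.10 (both placeholders):

      # What the domain's own nameserver hands out vs. what public resolvers have cached
      dig +short example.com NS
      dig +short example.com A @ns1.example.com
      dig +short example.com A @8.8.8.8

      # Trace the route to the new IP to see where packets stop
      traceroute -n 203.0.113.10

    If the authoritative answer is correct but the traceroute dies one hop before the VPS, the provider's routing for the new IP block is the likely culprit rather than anything in the server configuration.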

    Read the article

  • Configuring https access on HP A5120 Switch

    - by GerryEgan
    I am trying to configure HTTPS management on an HP A5120 switch running version 5.20.99, Release 2215, and not having much luck. I have followed the manual by creating an SSL policy first and then enabling the HTTPS server with that SSL policy:

      ssl server-policy sslpol
      ip https ssl-server-policy sslpol
      ip https enable

    When I try to log onto the switch with Google Chrome I get the following error: Error 107 (net::ERR_SSL_PROTOCOL_ERROR): SSL protocol error. When I look this up I find references to errors due to TLS being used in place of SSL, but I can find no way to specify the SSL version in the server policy. The manual has a configuration example that uses MSCEP to retrieve a certificate, but in Windows 2008 R2 that feature is only available in the Enterprise and Datacenter editions, which I don't have. I have SSH configured and it is using a locally generated certificate, so I'm not sure if I can use that, but I'd like to if possible. Has anybody been able to set up HTTPS management on HP A-series switches without MSCEP? Any and all help appreciated! Here is a copy of my config with the interfaces removed:

      version 5.20.99, Release 2215
      #
      sysname MYSYSNAME
      #
      irf domain 10
      irf mac-address persistent timer
      irf auto-update enable
      undo irf link-delay
      #
      domain default enable system
      #
      telnet server enable
      #
      vlan 1
      #
      vlan 100
       description Management
      #
      radius scheme system
       primary authentication 127.0.0.1 1645
       primary accounting 127.0.0.1 1646
       user-name-format without-domain
      #
      domain system
       access-limit disable
       state active
       idle-cut disable
       self-service-url disable
      #
      user-group system
       group-attribute allow-guest
      #
      local-user admin
       password cipher
       authorization-attribute level 3
       service-type ssh telnet terminal
       service-type web
      #
      stp enable
      #
      ssl server-policy sslpol
       pki-domain MYDOMAIN
      #
      interface NULL0
      #
      interface Vlan-interface199
       ip address 192.168.199.140 255.255.255.0
      #
      interface GigabitEthernet1/0/1
       poe enable
       stp edged-port enable
      #
      interface Ten-GigabitEthernet2/1/2
      #
      dhcp-snooping
      #
      ntp-service unicast-server 192.168.1.71
      #
      ssh server enable
      #
      ip https ssl-server-policy sslpol
      ip https enable
      #
      load xml-configuration
      #
      user-interface aux 0 1
      user-interface vty 0 15
       authentication-mode scheme
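
    One way to confirm whether this is a protocol-version mismatch (rather than a certificate problem) is to probe the switch's HTTPS port from a workstation and see which SSL/TLS versions it will negotiate. A rough check with OpenSSL, assuming the management address 192.168.199.140 from the config above and an OpenSSL build that still includes SSLv3 support:

      # Try an SSLv3-only handshake, then a TLS 1.0 handshake
      openssl s_client -connect 192.168.199.140:443 -ssl3 < /dev/null
      openssl s_client -connect 192.168.199.140:443 -tls1 < /dev/null

    If only the -ssl3 handshake completes, the firmware is limited to SSLv3 and either a browser that still allows SSLv3 or a newer firmware release is needed; the certificate enrolment method (MSCEP or local) would not be the cause of the Chrome error.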

    Read the article

  • Group policy not applying to security group

    - by ihavenoideawhatimdoing
    Preface: I have enough privileges to create GPOs in my OU, and have made a few of them for some simple tasks (like deploying a printer to certain users). I'm not actually a sysadmin... I'm a developer who is winging it.

    I wanted to create a GPO that would set a mapped folder for a certain security group (which I recently created and which contains only myself). I did the following:

      - Created the GPO in MyOU - Users
      - Removed the default Authenticated Users entry under Security Filtering
      - Added the security group with my account to Security Filtering
      - Set up the mapping via the User Configuration option
      - Changed GPO Status to "Computer configuration settings disabled"
      - Left WMI Filtering set to <none>
      - Closed the GPO

    At this point...

      - Logged in as the target user; ran gpupdate /force
      - Logged out, logged in, ran gpresult /r: no mention of my GPO
      - Rebooted
      - Logged in, re-ran gpupdate /force
      - Logged out, logged in, ran gpresult /r: still no mention of my GPO

    If I log in with another, completely different user, their RSOP information shows that the new GPO is being ignored due to a security restriction, so it appears to be "working" for other users. I just can't get it to actually show up in RSOP for the user it should be working for. Is there anything else I can do short of rebooting endlessly and crossing my fingers?

    Read the article

  • Performance-wise .htaccess

    - by purpler
    Here's my .htaccess template; I wonder if anything could be added to increase website performance.

      # Defaults
      AddDefaultCharset UTF-8
      DefaultLanguage en-US
      ServerSignature Off
      FileETag None
      Header unset ETag
      Options -MultiViews
      #Options All -Indexes

      # Force the latest IE version or ChromeFrame
      <IfModule mod_setenvif.c>
        <IfModule mod_headers.c>
          BrowserMatch MSIE ie
          Header set X-UA-Compatible "IE=Edge,chrome=1" env=ie
        </IfModule>
      </IfModule>

      # Proxy X-UA Setup
      <IfModule mod_headers.c>
        Header append Vary User-Agent
      </IfModule>

      # Rewrites
      Options +FollowSymlinks
      RewriteEngine On
      RewriteBase /

      # Redirect to non-WWW
      RewriteCond %{HTTPS} !=on
      RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
      RewriteRule ^(.*)$ http://%1/$1 [R=301,L]

      # Redirect to WWW
      RewriteCond %{HTTP_HOST} ^domain.com
      RewriteRule (.*) http://www.domain.com/$1 [R=301,L]

      # Redirect index to root
      RewriteRule ^(.*)index\.(php|html)$ /$1 [R=301,L]

      # Caching
      ExpiresActive On
      ExpiresDefault A0
      Header set Cache-Control "public"

      # 1 Year Long Cache
      <FilesMatch "\.(flv|fla|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|png|jpg|jpeg|gif|swf|js|css|ttf|eot|woff|svg|svgz)$">
        ExpiresDefault A31622400
      </FilesMatch>

      # Proxy Caching
      <FilesMatch "\.(css|js|png)$">
        ExpiresDefault A31622400
        Header set Cache-Control "private"
      </FilesMatch>

      # Protect against DOS attacks by limiting file upload size
      LimitRequestBody 10240000

      # Proper SVG serving
      AddType image/svg+xml svg svgz
      AddEncoding gzip svgz

      # GZip Compression
      <IfModule mod_deflate.c>
        <FilesMatch "\.(php|html|css|js|xml|txt|ttf|otf|eot|svg)$" >
          SetOutputFilter DEFLATE
        </FilesMatch>
      </IfModule>

      # Error page
      ErrorDocument 404 /404.html

      # Deny access to sensitive files
      <FilesMatch "\.(htaccess|ini|log|psd)$">
        Order Allow,Deny
        Deny from all
      </FilesMatch>
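
    Whatever else gets added, it is worth verifying that the caching and compression rules above actually take effect, since a single typo can silently disable a whole <IfModule> block. A quick check with curl, with www.domain.com and the asset path standing in for the real ones:

      # Should show Cache-Control/Expires a year out for static assets
      curl -sI http://www.domain.com/css/style.css | egrep -i 'cache-control|expires|etag'

      # Should show Content-Encoding: gzip when DEFLATE is active
      curl -sI -H 'Accept-Encoding: gzip' http://www.domain.com/index.php | grep -i content-encoding

    On the performance side, the biggest win is usually not inside .htaccess at all: if you control the server, moving these rules into the main configuration and setting AllowOverride None stops Apache re-reading .htaccess files on every request.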

    Read the article

  • "Error: Unknown error" when trying to start virtual machine from VMware server

    - by slhck
    Problem

    We are running VMware Server 2.0.0 build-116503 on an Ubuntu 10.04 LTS server. There is a virtual machine installed, running Lotus Domino on Windows Server 2003. Ever since a sudden power failure last week, the virtual machine won't start up properly. When I run the command:

      vmrun -T server -h https://127.0.0.1:8333/sdk -u root -p jk2x2208 start "[standard] lotus/test.vmx"

    ... after 30 seconds it displays:

      Error: Unknown error

    That's about everything I get. I know the command is right, since that's what we've used all the time. This first happened last Saturday after a scheduled backup shutdown, and somehow I was able to start the machine again. This week it happened again, and I can't get it back up. Occasionally, I also get:

      Error: Cannot connect to the virtual machine

    When I get this and I then run the start command, it seemingly works. Why is this so random? Which configuration could have been messed up?

    What I've tried / other info

    I already shut down VMware itself with /etc/init.d/vmware stop. This works. I tried to start VMware again with /etc/init.d/vmware start. It complains that it is "not configured", which is why I had to rm /etc/vmware/not_configured and then try to start it again. There have been no software updates on the machine, and no configuration changes.
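
    After an unclean shutdown, VMware Server often leaves stale lock files next to the .vmx, and the VM's own log usually says more than "Unknown error" does. A hedged starting point, assuming the "[standard]" datastore maps to /var/lib/vmware/Virtual Machines (adjust the path, and use the real password):

      # Look for stale lock directories left behind by the power failure
      ls -la "/var/lib/vmware/Virtual Machines/lotus/"*.lck 2>/dev/null

      # Check the VM's log for the real error behind "Unknown error"
      tail -n 50 "/var/lib/vmware/Virtual Machines/lotus/vmware.log"

      # Confirm whether the host agent still sees the VM as registered/running
      vmrun -T server -h https://127.0.0.1:8333/sdk -u root -p 'PASSWORD' list

    Removing a leftover .lck directory while the VM is definitely powered off is a common fix for machines that refuse to start after a power loss, but take a copy of it first.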

    Read the article

  • javac compiler throwing error in CentOS 5.7

    - by Julio Menendez
    I'm trying to install Red5 on a VPS running CentOS 5.7 at MediaTemple, using this how-to: (dv): Install Red5 Media Server. But on step 7 I get this error:

      BUILD FAILED
      /usr/local/red5/build.xml:217: The following error occurred while executing this line:
      /usr/local/red5/build.xml:238: Error running /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/bin/javac compiler

    Any idea how to fix it? I have already Googled it, and several users have had this same issue, but none posted the solution or they never solved it.

    UPDATE: Some more details: running ant -v dist shows that it is a memory problem:

      Caused by: java.io.IOException: Cannot run program "/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/bin/javac": java.io.IOException: error=12, Cannot allocate memory
          at java.lang.ProcessBuilder.start(ProcessBuilder.java:475)
          at java.lang.Runtime.exec(Runtime.java:610)
          at org.apache.tools.ant.taskdefs.Execute$Java13CommandLauncher.exec(Execute.java:862)
          at org.apache.tools.ant.taskdefs.Execute.launch(Execute.java:481)
          at org.apache.tools.ant.taskdefs.Execute.execute(Execute.java:495)
          at org.apache.tools.ant.taskdefs.compilers.DefaultCompilerAdapter.executeExternalCompile(DefaultCompilerAdapter.java:522)
          ... 32 more
      Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
          at java.lang.UNIXProcess.<init>(UNIXProcess.java:164)
          at java.lang.ProcessImpl.start(ProcessImpl.java:81)
          at java.lang.ProcessBuilder.start(ProcessBuilder.java:468)
          ... 37 more

    I defined _JAVA_OPTIONS="-Xms64m -Xmx128m" and export ANT_OPTIONS=-Xmx128m, and I also tried Xmx512m in both cases. No luck. Thanks!
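
    error=12 means the fork of the external javac process failed because the VPS had no free memory left to duplicate the JVM's address space, so shrinking -Xmx alone is often not enough. A sketch of what to check next (the 512 MB swap size is an arbitrary example):

      # How much memory and swap does the VPS actually have right now?
      free -m

      # If there is no swap and the platform allows it, add a small swap file
      sudo dd if=/dev/zero of=/swapfile bs=1M count=512
      sudo mkswap /swapfile
      sudo swapon /swapfile

    Note that on container-based virtualization (Virtuozzo/OpenVZ, which MediaTemple (dv) servers typically used) swap cannot be added from inside the container; in that case the options are raising the plan's memory allocation or stopping other services while the build runs.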

    Read the article

  • Cannot find FIS partition 'initramfs' - need help!

    - by vikramtheone
    Hi guys, I have Ubuntu 9.04 Linux running on Freescale's i.MX515 (ARM Cortex-based) board. There were about 250 updates pending and I applied them today; some of the updates failed because of the infamous errors:

      E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.
      E: Couldn't rebuild package cache
      E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.

    So, when I do the sudo dpkg --configure -a I get new errors related to the FIS partition:

      Cannot find FIS partition 'initramfs'
      User postinst hook script [/usr/sbin/flash-kernel] exited with value 1
      dpkg: error processing linux-image-2.6.28-18-imx51 (--configure):
       subprocess post-installation script returned error exit status 1
      dpkg: dependency problems prevent configuration of linux-image-imx51:
       linux-image-imx51 depends on linux-image-2.6.28-18-imx51; however:
        Package linux-image-2.6.28-18-imx51 is not configured yet.
      dpkg: error processing linux-image-imx51 (--configure):
       dependency problems - leaving unconfigured
      dpkg: dependency problems prevent configuration of linux-imx51:
       linux-imx51 depends on linux-image-imx51 (= 2.6.28.18.23); however:
        Package linux-image-imx51 is not configured yet.
      dpkg: error processing linux-imx51 (--configure):
       dependency problems - leaving unconfigured
      Processing triggers for initramfs-tools ...
      update-initramfs: Generating /boot/initrd.img-2.6.28-18-imx51
      Cannot find FIS partition 'initramfs'
      dpkg: subprocess post-installation script returned error exit status 1

    What's going wrong here? I need help! I'm a newbie.

    Regards,
    Vikram
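
    flash-kernel on these boards expects a RedBoot FIS (Flash Image System) partition named initramfs to write the new initrd into, and it aborts when it cannot find one. A hedged way to see what the kernel actually detected, and to let the packages finish configuring in the meantime:

      # List the MTD/FIS partitions the kernel knows about
      cat /proc/mtd

      # If there genuinely is no 'initramfs' FIS entry and the board boots by
      # other means, try skipping the flash-kernel hook for this run
      sudo FLASH_KERNEL_SKIP=1 dpkg --configure -a

    If the flash-kernel version on the board honours the FLASH_KERNEL_SKIP variable, the kernel packages will configure without trying to write to flash; the FIS entry itself would still have to be created from the RedBoot console for flash-kernel to work normally afterwards.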

    Read the article

  • Logging into Local Statusnet instance on Apache causes browser to download a file

    - by DilbertDave
    I've installed StatusNet 0.9.1 on a Windows Server via the WAMP stack and on the whole it seems to be fine. However, when logging in using IE7 or Chrome the browsers invoke a file download, i.e. the File Download dialog is displayed. In IE7 the file is called notice, with the content below (some parts starred out):

      <?xml version="1.0" encoding="UTF-8"?>
      <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
        <ShortName>Mumble Notice Search</ShortName>
        <Contact>david.carson@*****.com</Contact>
        <Url type="text/html" method="get" template="http://voice.*****.com/mumble/search/notice?q={searchTerms}"></Url>
        <Image height="16" width="16" type="image/vnd.microsoft.icon">http://voice.*****.com/mumble/favicon.ico</Image>
        <Image height="50" width="50" type="image/png">http://voice.******.com/mumble/theme/cloudy/logo.png</Image>
        <AdultContent>false</AdultContent>
        <Language>en_GB</Language>
        <OutputEncoding>UTF-8</OutputEncoding>
        <InputEncoding>UTF-8</InputEncoding>
      </OpenSearchDescription>

    In Chrome (Linux and Windows!) the file is called people and contains similar XML. This is not an issue when logging in using Firefox. This is obviously a configuration issue, but I'm not having much luck tracking it down. I tested the previous version of StatusNet on an Ubuntu Server VM on our network and it worked fine for months.

    Thanks in advance

    Read the article

  • Multi domain on my dedicated server with Apache2

    - by x4vier
    I set up a server with Ubuntu 10.04 Server Edition. It has worked for a long time with a single domain name. Now I want to add another domain which will point to a new directory. I tried to change my Apache2 configuration but it does not seem to work properly. Here is my /etc/apache2/sites-available/default:

      <VirtualHost *:80>
        DocumentRoot /var/www/
        <Directory />
          Options FollowSymLinks
          AllowOverride None
        </Directory>
        <Directory /var/www/>
          Options Indexes FollowSymLinks MultiViews
          AllowOverride None
          Order allow,deny
          allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
          AllowOverride None
          Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
          Order allow,deny
          Allow from all
        </Directory>
        ErrorLog /var/log/apache2/error.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        CustomLog /var/log/apache2/access.log combined
        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
          Options Indexes MultiViews FollowSymLinks
          AllowOverride None
          Order deny,allow
          Deny from all
          Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>
      </VirtualHost>

      <VirtualHost *:80>
        ServerName mydomain.com
        ServerAlias www.mydomain.com
        DocumentRoot /var/www/mydomain
      </VirtualHost>

    Here is my /etc/hosts:

      127.0.0.1       localhost
      **.***.133.29   sd-***.****.fr sd-****
      **.***.133.29   mediousgame.com
      # The following lines are desirable for IPv6 capable hosts
      ::1     localhost ip6-localhost ip6-loopback
      ****::0 ip6-localnet
      ****::0 ip6-mcastprefix
      ****::1 ip6-allnodes
      ****::2 ip6-allrouters
      ****::3 ip6-allhosts

    With this configuration, when I try to access mydomain.com it serves the /var/www/ content. Do you have any idea how to make it serve the right folder?
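
    The usual culprit with a layout like this is that the first <VirtualHost *:80> block has no ServerName, so it acts as the catch-all default, while name-based matching only kicks in when the request's Host header matches a ServerName or ServerAlias. A quick way to see how Apache has ordered the vhosts and to test each name explicitly (mydomain.com stands in for the real second domain):

      # Show which vhost is the default and which names map where
      apache2ctl -S

      # Ask for each site by name against the server's own IP
      curl -s -H 'Host: mydomain.com' http://127.0.0.1/ | head
      curl -s -H 'Host: www.mydomain.com' http://127.0.0.1/ | head

    If apache2ctl -S does not list the new name at all, the site file probably is not enabled (a2ensite followed by a reload); if it is listed but requests still land in /var/www, check that NameVirtualHost *:80 is set (it is in ports.conf on a stock 10.04 install) and that DNS for the new domain really points at this server.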

    Read the article

  • Problem installing SQLite3 RubyGem on Ubuntu

    - by misbehavens
    I am having a problem trying to install the SQLite3 RubyGem. Here's what I'm doing:

      $ sudo gem install --remote sqlite3-ruby

    Here's the output:

      Building native extensions.  This could take a while...
      ERROR:  Error installing sqlite3-ruby:
          ERROR: Failed to build gem native extension.

      /usr/bin/ruby1.8 extconf.rb
      checking for fdatasync() in -lrt... yes
      checking for sqlite3.h... no
      *** extconf.rb failed ***
      Could not create Makefile due to some reason, probably lack of
      necessary libraries and/or headers.  Check the mkmf.log file for more
      details.  You may need configuration options.

      Provided configuration options:
          --with-opt-dir
          --without-opt-dir
          --with-opt-include
          --without-opt-include=${opt-dir}/include
          --with-opt-lib
          --without-opt-lib=${opt-dir}/lib
          --with-make-prog
          --without-make-prog
          --srcdir=.
          --curdir
          --ruby=/usr/bin/ruby1.8
          --with-sqlite3-dir
          --without-sqlite3-dir
          --with-sqlite3-include
          --without-sqlite3-include=${sqlite3-dir}/include
          --with-sqlite3-lib
          --without-sqlite3-lib=${sqlite3-dir}/lib
          --with-rtlib
          --without-rtlib

      Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.5 for inspection.
      Results logged to /usr/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.5/ext/sqlite3_api/gem_make.out
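
    The line "checking for sqlite3.h... no" is the key: the gem's native extension needs the SQLite development headers, which Ubuntu ships separately from the sqlite3 binary. A minimal fix, using the package names Ubuntu has used for this:

      sudo apt-get install libsqlite3-dev build-essential
      sudo gem install sqlite3-ruby

    If the headers live somewhere non-standard, the --with-sqlite3-dir / --with-sqlite3-include options listed in the error output can point extconf.rb at them instead.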

    Read the article

  • django : nginx : jquery css not being served

    - by PlanetUnknown
    I'm using Apache + mod_wsgi for Django, and all CSS/JS/images are served through nginx. For some odd reason, when others (friends/colleagues) try accessing the site, jQuery/CSS is not getting loaded for them, hence the page looks jumbled up. My HTML files use code like this:

      <link rel="stylesheet" type="text/css" href="http://x.x.x.x:8000/css/custom.css"/>
      <script type="text/javascript" src="http://1x.x.x.x:8000/js/custom.js"></script>

    My nginx configuration in sites-available is like this:

      server {
          listen 8000;
          server_name localhost;

          access_log /var/log/nginx/aa8000.access.log;
          error_log /var/log/nginx/aa8000.error.log;

          location / {
              index index.html index.htm;
          }

          location /static/ {
              autoindex on;
              root /opt/aa/webroot/;
          }
      }

    There is a directory /opt/aa/webroot/static/ which has the corresponding css and js directories. The odd thing is that the pages show fine when I access them. I have cleared my cache etc., but the page loads fine for me from various browsers. Also, I don't see any 404s or other errors in the nginx log files. Actually, the nginx logs are not getting refreshed at all. I restarted the nginx server as root; is that incorrect? There is a user www-data defined in the nginx configuration file. Any pointers would be great.
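
    Two things are easy to rule out from the command line: whether nginx is actually the daemon answering on port 8000 (and on which interface), and whether a static URL resolves from a machine other than the server itself. A rough sketch, with x.x.x.x standing in for the public IP used in the HTML:

      # Is nginx listening on 8000, and on all interfaces or only 127.0.0.1?
      sudo netstat -tlnp | grep ':8000'

      # Validate the configuration that is actually loaded
      sudo nginx -t

      # From another machine: does the CSS come back at all?
      curl -sI http://x.x.x.x:8000/static/css/custom.css

    If the listen directive only binds localhost, or the HTML points at /css/... while nginx only serves /static/..., outside visitors will never hit nginx successfully, which would also explain why its logs never move.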

    Read the article

  • Server nearly unusable when doing disk writes

    - by Wikser
    My question closely relates to my last question here on Server Fault. I was copying about 5 GB from a 10-year-old desktop computer to the server. The copy was done in Windows Explorer. In this situation I would expect the server to be bored by the data flow, but as usual with this server, it really slowed down. At least I could work with the remote session, even though there was some serious latency. The copy took its time (20 min?).

    During this time I went to a colleague and he tried to log in to the same server via Remote Desktop (for some other reason). It took about a minute to get to the login screen, a minute to open the Control Panel, a minute to open Performance Monitor... Icons were loading maybe one per second. We saw the following (from memory):

      CPU: 2%
      Avg. Queue Length: 50
      Pages/sec: 115 (?)

    There was no other considerable activity on the server. The server seldom serves some ASP.NET pages, which also became very slow during this time. The relevant configuration is as follows:

      Windows 2003
      Seagate ST3500631NS (7200 rpm, 500 GB)
      LSI MegaRAID based RAID 5
      4 disks, 1 hot spare
      Write Through
      No read-ahead
      Direct Cache Mode
      Harddisk-Cache-Mode: off

    Is this normal behaviour for such a configuration? What measurements could give further clues? Is it reasonable to reduce the priority of such copy I/O and favour other processes like the Remote Desktop session? How would you do that? Many thanks!

    Read the article

  • How to replace the domain name in a Wordpress database?

    - by Cristian
    I have a WordPress database which was installed in a development environment... thus, all references to the site itself use a fixed IP address (say 192.168.16.2). Now I have to migrate that database to a new WordPress installation on a hosting provider. The problem is that the SQL dump contains a lot of references to the IP address, and I have to replace them with my_domain.com.

    I could use sed or some other command to change that from the command line; the problem is that there is a lot of configuration data which uses JSON. So what? Well, as you know, these arrays use things like s:4: to record how many characters an element has, and thus, if I just replace the IP with the domain name, the configuration data will get corrupted. I used an app for Windows some years ago that allows you to change values in a database and takes care of these arrays. Unfortunately, I forgot the name of the app... so the question is: do you know of any app that allows me to do what I want?
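
    For what it's worth, the length-prefixed s:4:"..." values are PHP-serialized data rather than JSON, which is exactly why a plain sed breaks them: serialization-aware tools recompute the lengths as they replace. A hedged example using WP-CLI's search-replace command, run from the WordPress directory on the new host (the addresses are the ones from the question):

      # Dry run first: shows how many replacements would be made per table
      wp search-replace 'http://192.168.16.2' 'http://my_domain.com' --dry-run

      # Do it for real, leaving the guid column alone as WordPress recommends
      wp search-replace 'http://192.168.16.2' 'http://my_domain.com' --skip-columns=guid

    The interconnect/it "Search Replace DB" script does the same job through a browser if installing WP-CLI on the host is not an option.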

    Read the article

  • LVM and cloning HDs

    - by jcea
    Using Linux, I have several backup levels. One of them is a periodic sector-by-sector copy (using dd) of my laptop hard disk to an external USB disk. Yes, I have other backups too, like remote rsync. This approach (the disk dd) is OK when cloning an HDD with no LVM volumes, since I can plug in the external disk anytime and mount the partitions simply by mounting /dev/sdb* instead of /dev/sda*. Trivial and handy.

    Today I moved ALL of my hard disk (including /boot) to LVM. Everything works fine. I will stress it for a couple of days, and then I will do a sector-by-sector copy to my external hard disk. Now I have a problem, I guess. If in the future I plug in the external USB HDD to recover any file, the OS will detect a duplicate LVM configuration, with the same name and the same UUID. Even doing a vgrename (and which LVM would be renamed, the one on the internal HDD or the external HDD?), the cloned UUID will not change. Is there any command to change the name and UUID? Ideally I would clone the HDD and then change the LVM group name and its UUID, but I don't know how to do it.

    Another related issue... In the past I have booted my laptop using the external disk, using the BIOS boot menu and changing GRUB entries manually to boot from /dev/sdb instead of /dev/sda. But now my current GRUB configuration boots directly from an LVM logical volume, something like set root='(LVM-root)' in my grub.cfg. So... what is going to happen with duplicated volumes? Any suggestion? I guess I could repartition my external hard disk and change my backup strategy from dd to rsync, but this disk has Windows installed too, and I really would like to have a physical "real" copy.
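
    LVM ships a tool for exactly this situation: vgimportclone renames a cloned volume group and regenerates its PV/VG UUIDs so the copy can coexist with the original. A hedged sketch, assuming the clone's PV ends up on /dev/sdb2 when the USB disk is plugged in:

      # Rename the cloned VG and give it fresh UUIDs, without touching the internal disk
      sudo vgimportclone --basevgname backupvg /dev/sdb2

      # Activate the renamed VG and mount one of its logical volumes read-only
      sudo vgchange -ay backupvg
      sudo mount -o ro /dev/backupvg/root /mnt

    With distinct names and UUIDs, the set root='(LVM-root)' entry in grub.cfg keeps resolving to the internal disk only; booting from the clone would still require editing the GRUB entry (or the clone's own grub.cfg) to point at the renamed group.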

    Read the article

  • Using proxy.pac to access Apache 2 with a hostname?

    - by leeand00
    Note that I do not have a DNS server on my network, and that is why I am resorting to a proxy.pac file. I would like to be able to access my development Apache 2 server using a name instead of an IP without setting up a full-blown DNS. I am aware of setting names in the C:\Windows\System32\drivers\etc\hosts file and in /etc/hosts; however, I cannot edit the hosts file on all of the devices that I am testing the site on.

    I've added a proxy.pac file to my Apache 2 server and pointed my browser settings to it at http://192.168.2.221/proxyutils/proxy.pac, where 192.168.2.221 is the hostname's IP address. I set the above URL in Firefox in the following manner:

      - From the menu bar, select "Edit - Preferences"
      - In the resulting "Firefox Preferences" window, click the "Advanced" tab
      - Click the "Network" tab
      - Click the "Settings" button
      - Select the "Automatic proxy configuration URL:" radio button
      - Enter http://192.168.2.221/proxyutils/proxy.pac and press OK

    The contents of the proxy.pac file on the Apache server:

      function FindProxyForURL(url, host)
      {
          if( dnsDomainIs(host, "thehostname") )
              return "PROXY 192.168.2.221:80";
          return "DIRECT";
      }

    In Firefox I then access the following URL:

      http://thehostname/wp-blog/

    And instead of the development version of the WordPress blog I am trying to access, I get a URL of http://thehostnamehttp/thehostname/wp-blog/ in my address bar and a 404 Not Found page in the browser window.

    Looking over proxy.pac, it seems like calling dnsDomainIs shouldn't work considering I don't have a DNS set up on my network, but I've also tried just comparing the host argument with the string "hostname", and it yielded the same result, even after modifying the proxy.pac file and clicking the reload button near the proxy settings. This could also be a WordPress problem, since I've noticed that directories without WordPress seem to function perfectly normally. (See cross post here.)

    Is there any way I can modify my configuration so that I can access the site using http://thehostname/wp-blog/ ?

    Read the article

  • Not able to run Firefox on a headless Ubuntu Server 9.10

    - by Julio J.
    I need to run Firefox on my server in order to execute some Selenium tests from Hudson. I would love not to have to install a complete GUI, so I installed Xvfb in order to fake one (I understand it this way, correct me if my assumptions are wrong). After some time trying to make it work, I'm stuck with the following situation:

      $ sudo Xvfb -ac :99 &
      [dix] Could not init font path element /var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType, removing from list!
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)

      $ firefox
      [dix] Could not init font path element /var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType, removing from list!
      [config/dbus] couldn't register object path
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)
      Xlib: extension "RANDR" missing on display ":99.0".
      GConf Error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://projects.gnome.org/gconf/ for information. (Details - 1: Failed to get connection to session: /bin/dbus-launch terminated abnormally without any error message)

    I'm running Firefox without installing it from the repositories, and I'm getting a socket timeout when I try to run the Selenium tests, so I guess the problem is in Firefox and Xvfb. I have already installed this package:

      i gconf-defaults-service - GNOME configuration database system (system defaults service)

    which some forums suggest as a fix, but in my case it doesn't work. Any explanation about the problem and ways of solving it without installing a full GUI would be very helpful.
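
    One detail that is easy to miss in this setup is that Firefox has to be told which display to use; without DISPLAY pointing at the Xvfb instance it will not attach to the virtual framebuffer at all. A minimal sketch of the usual headless pattern, keeping the :99 display number from the question:

      # Start the virtual framebuffer with an explicit screen definition
      Xvfb :99 -ac -screen 0 1280x1024x24 &

      # Point clients at it, then start the browser
      export DISPLAY=:99
      firefox &

      # Or let xvfb-run wire it all up for a single command
      xvfb-run --server-args="-screen 0 1280x1024x24" firefox

    For Hudson/Selenium, the export DISPLAY=:99 (or the xvfb-run wrapper) has to happen in the same environment the tests are launched from, otherwise Selenium's browser sessions will time out exactly as described.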

    Read the article

  • Wastage of resources in Virtualization

    - by Sabeen Malik
    I have asked this question on SO, but it was suggested that I ask it here on SF, so here it goes: http://stackoverflow.com/questions/3010753/wastage-of-resources-in-virtualization

    I am not sure if this is the right place to ask the question; however, I hope it is. When looking for a VPS earlier today, I was trying to understand how each container would work in the background. Keeping in mind the fact that the operating system uses most of the memory and power on a system, wouldn't having multiple operating systems on the same machine mean more wastage of resources? For instance, if I was running CentOS on a dedicated box and it was running, let's say, 20 background OS-level processes, and then I went and installed a virtualization platform and installed 5 more CentOS virtual machines on the same system, all exactly the same as the host operating system, doesn't this mean duplication of those 20 processes 6 times? So internally the context switching is happening between 120 processes instead of 20?

    Further notes: here is an example of what I am thinking. I have a master-slave configuration for a long-running, CPU- and memory-intensive process, which can be distributed to 4 machines. Let's say that when the process runs on these 4 machines, each with a 1 GHz CPU and 1 GB of RAM, I get 400 results per hour from the cluster (assuming 100 results from one machine). Now I get a bigger machine (let's say 4 GHz and 4 GB of RAM) and have 4 virtual hosts on it, each with a 1 GHz CPU and 1 GB of RAM. Will this configuration give me the same 400 results per hour from these 4 virtual hosts?

    Read the article

  • Relation between server_name in nginx sites-available, /etc/hosts file and A-records

    - by user2818584
    I have the following two server blocks in my config file in sites-available:

      server {
          listen 80;
          server_name www.mydomain.be;
          root /usr/share/nginx/html;
          index index.html index.htm;

          location / {
              try_files $uri $uri/ =404;
          }
      }

      server {
          listen 80;
          server_name sub.mydomain.be;
          root /usr/share/nginx/sub;
          index index.html index.htm;

          location / {
              try_files $uri $uri/ =404;
          }
      }

    I also created an A record for both www.domain.be and sub.domain.be with the IP of my server as the value. Yet when I try to reload my nginx configuration with service nginx reload, it fails. When I remove the second server block, it reloads as expected.

    I know this topic is popular, and that there are loads of such [nginx][subdomain] questions here, but none of them seems to discuss explicitly how the following three things hang together:

      - virtual hosts or server blocks in nginx (especially server_name matching)
      - the effect of A records on how nginx processes requests
      - the need to add hosts to /etc/hosts

    Right now I have the impression that a lack of knowledge of this bigger picture, rather than of specific nginx configuration, prevents me from making this work.
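
    The three pieces are less entangled than they look: DNS A records (or /etc/hosts entries) only decide which IP the browser connects to; once a request arrives, nginx picks a server block purely by comparing the Host header against each server_name. A reload failure is therefore almost always a configuration-syntax problem rather than a DNS one, and nginx will say exactly which line. A quick way to check each layer, using the names from the question:

      # 1. Does the configuration itself parse? This prints the offending line on failure.
      sudo nginx -t

      # 2. Do both names resolve to this server's address?
      dig +short www.mydomain.be
      dig +short sub.mydomain.be

      # 3. Does nginx route each Host header to the right root?
      curl -s -H 'Host: sub.mydomain.be' http://127.0.0.1/ | head

    /etc/hosts only matters for lookups made on machines that cannot (or should not) use public DNS; nginx itself never consults it when matching server_name.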

    Read the article

  • ADFS Relying Party

    - by user49607
    I'm trying to set up an Active Directory Federation Services relying party and I get the following error. I've tried adding <pages validateRequest="false"> to web.config and it doesn't make a difference. Can someone help me out?

      Server Error in '/test' Application.

      A potentially dangerous Request.Form value was detected from the client (wresult="<t:RequestSecurityTo...").

      Description: Request Validation has detected a potentially dangerous client input value, and processing of the request has been aborted. This value may indicate an attempt to compromise the security of your application, such as a cross-site scripting attack. To allow pages to override application request validation settings, set the requestValidationMode attribute in the httpRuntime configuration section to requestValidationMode="2.0". Example: <httpRuntime requestValidationMode="2.0" />. After setting this value, you can then disable request validation by setting validateRequest="false" in the Page directive or in the <pages> configuration section. However, it is strongly recommended that your application explicitly check all inputs in this case. For more information, see http://go.microsoft.com/fwlink/?LinkId=153133.

      Exception Details: System.Web.HttpRequestValidationException: A potentially dangerous Request.Form value was detected from the client (wresult="<t:RequestSecurityTo...").

      Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

      Stack Trace:
      [HttpRequestValidationException (0x80004005): A potentially dangerous Request.Form value was detected from the client (wresult="<t:RequestSecurityTo...").]
        System.Web.HttpRequest.ValidateString(String value, String collectionKey, RequestValidationSource requestCollection) +11309476
        System.Web.HttpRequest.ValidateNameValueCollection(NameValueCollection nvc, RequestValidationSource requestCollection) +82
        System.Web.HttpRequest.get_Form() +186
        Microsoft.IdentityModel.Web.WSFederationAuthenticationModule.IsSignInResponse(HttpRequest request) +26
        Microsoft.IdentityModel.Web.WSFederationAuthenticationModule.CanReadSignInResponse(HttpRequest request, Boolean onPage) +145
        Microsoft.IdentityModel.Web.WSFederationAuthenticationModule.OnAuthenticateRequest(Object sender, EventArgs args) +108
        System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +80
        System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +266

    Read the article

  • Samba4/Ubuntu Shares Incorrectly Available to All Users

    - by Dan
    I've got my Ubuntu server working with Samba4 and set up as the primary domain controller on my network, with AD and all that goodness. However, I'm trying to get my Samba configuration to work with the users and groups I've defined with the Active Directory tools from Windows. For instance, I've got a share X which I want users A and B (as part of the 'management' group, known as LLGrpManager in my setup) to see, but nobody else. However, after making changes to the configuration and restarting Samba, I test by connecting to the share with my Mac over Samba as user C, which isn't part of the management group, and I can, incorrectly, see the X share. I've tried all sorts of combinations of specifying the group, with no luck at all. I've got a feeling that my global config might be too lenient, or that it has something to do with file permissions, but being a bit green, I'm without a clue.

    My /etc/samba/smb.conf:

      # Global parameters
      [global]
        server role = domain controller
        server string = Office Server
        workgroup = LLDOMAIN
        realm = lldomain.local
        netbios name = DUMBO
        passdb backend = samba4
        logon path = \\%L\profiles\%U
        logon drive = L:
        log file = /var/log/samba/%m.log
        max log size = 50
        security = ads
        domain logons = yes
        domain master = auto
        usershare allow guests = no
        valid users = %S

      [netlogon]
        path = /var/lib/samba/sysvol/lldomain.local/scripts
        read only = no
        guest ok = no

      [sysvol]
        path = /var/lib/samba/sysvol
        read only = No
        guest ok = no
        valid users = @LLDOMAIN\LLGrpManager

      [ShareX]
        path = /data
        comment = Entire Data Volume
        guest ok = no
        comment = Entire Data Volume
        guest ok = no
        valid users = @LLDOMAIN\LLGrpManager
        admin users = @LLDOMAIN\LLGrpManager
        browsable = no
        inherit acls = yes
        inherit permissions = yes
      ...

    My /etc/nsswitch.conf: I've also instructed the system to use the nss winbind library when searching for users or groups by adding the passwd and group stanzas in /etc/nsswitch.conf:

      passwd: compat winbind
      group:  compat winbind
      shadow: compat

    Permissions on the folder in question:

      drwxrwxrwt 8 root root 4.0K Oct 28 19:11 data
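
    Before touching the share definitions it is worth confirming that the group name resolves the same way on the Samba side as it does in AD, since valid users can silently stop restricting anything if the group lookup fails. A hedged set of checks, using the names from the configuration above (userC is a placeholder for the test account):

      # Does the config parse, and what does Samba think ShareX's effective settings are?
      testparm -s | less

      # Can winbind and NSS see the domain group, and is user C really outside it?
      wbinfo -g | grep -i LLGrpManager
      getent group 'LLDOMAIN\LLGrpManager'
      id 'LLDOMAIN\userC'

    If getent/wbinfo cannot resolve LLDOMAIN\LLGrpManager, the valid users line may effectively end up empty, in which case Samba lets any authenticated user connect and the wide-open drwxrwxrwt on /data does the rest.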

    Read the article

  • Get Internal IP Address From DHCP Hostname

    - by ell
    I would like to be able to get the internal IP address of one of the computers on my network. The reason for this is that I have a little home server box downstairs, but every time I want to SSH into it I have to open my router configuration and go to the DHCP client table to look up its IP address. For example, I would like to be able to run ssh ell-server instead of ssh 192.168.1.105 or whatever it happens to be. My network configuration is like so:

      - Router downstairs that is connected to the Internet and is running a DHCP server
      - My server computer (ell-server), a headless PC connected to the router via Ethernet cable, running Ubuntu 11.04 Server Edition
      - My laptop upstairs (ell-laptop), running Ubuntu 11.10 Desktop Edition, connected wirelessly
      - Other (irrelevant) computers: 2 x Windows XP, 1 x Xubuntu, all connected with cables

    (It seemed to me the method of connection isn't useful information, but I put it in anyway, just in case. If I have missed any information please tell me.)

    Do I have to run a DNS server on one of my computers? If so, which one? And does that mean I will have to run a DDNS client on each computer?

    Thanks in advance, ell.
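
    A full DNS/DDNS setup is not the only option on a small LAN of Ubuntu machines. Two lighter approaches, sketched with the hostnames from the question (the IP is the example one above):

      # Option 1: mDNS. With avahi-daemon installed on both ends (default on
      # Ubuntu desktop, one apt-get away on the server), .local names just work:
      sudo apt-get install avahi-daemon    # on ell-server
      ssh ell-server.local                 # from ell-laptop

      # Option 2: give the server a fixed address (static DHCP lease on the
      # router) and teach only SSH the name, with no DNS involved:
      cat >> ~/.ssh/config <<'EOF'
      Host ell-server
          HostName 192.168.1.105
      EOF
      ssh ell-server

    Option 2 only helps for SSH and assumes the address stops changing (hence the static lease); mDNS covers any protocol but needs both machines on the same subnet.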

    Read the article

  • Enabling mod_wsgi in Apache for a Django app on Gentoo

    - by hobbes3
    I installed Apache, Django, and mod_wsgi on Gentoo using emerge (on Amazon EC2). I know that mod_wsgi is configured in /etc/apache2/modules.d/70_mod_wsgi.conf:

      <IfDefine WSGI>
      LoadModule wsgi_module modules/mod_wsgi.so
      </IfDefine>
      # vim: ts=4 filetype=apache

    So in /etc/conf.d/apache I added the WSGI define:

      APACHE2_OPTS="-D DEFAULT_VHOST -D INFO -D SSL -D SSL_DEFAULT_VHOST -D LANGUAGE -D WSGI"

    But when I try to list the loaded modules, mod_wsgi isn't listed:

      root ~ # apache2 -M | grep wsgi
      Syntax OK

    I also know that mod_wsgi isn't loading properly because the Apache configuration doesn't recognize WSGIScriptAlias. By the way, for Django to work I need to include a custom Apache configuration file. Where should I insert the line below?

      Include "/var/www/localhost/htdocs/mysite/apache/apache_django_wsgi.conf"

    I currently have that in the httpd.conf file, but I feel like that file will get reset whenever I upgrade Gentoo or related packages.

    EDIT: it seems the mod_wsgi file is located at /usr/lib64/apache2/modules/mod_wsgi.so. Here are my detailed Apache settings:

      root@ip-99-99-99-99 /usr/portage/eclass # apache2 -V
      Server version: Apache/2.2.21 (Unix)
      Server built:   Mar 7 2012 06:52:30
      Server's Module Magic Number: 20051115:30
      Server loaded:  APR 1.4.5, APR-Util 1.3.12
      Compiled using: APR 1.4.5, APR-Util 1.3.12
      Architecture:   64-bit
      Server MPM:     Prefork
        threaded:     no
        forked:       yes (variable process count)
      Server compiled with....
       -D APACHE_MPM_DIR="server/mpm/prefork"
       -D APR_HAS_SENDFILE
       -D APR_HAS_MMAP
       -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
       -D APR_USE_SYSVSEM_SERIALIZE
       -D APR_USE_PTHREAD_SERIALIZE
       -D APR_HAS_OTHER_CHILD
       -D AP_HAVE_RELIABLE_PIPED_LOGS
       -D DYNAMIC_MODULE_LIMIT=128
       -D HTTPD_ROOT="/usr"
       -D SUEXEC_BIN="/usr/sbin/suexec"
       -D DEFAULT_PIDLOG="/var/run/httpd.pid"
       -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
       -D DEFAULT_LOCKFILE="/var/run/accept.lock"
       -D DEFAULT_ERRORLOG="logs/error_log"
       -D AP_TYPES_CONFIG_FILE="/etc/apache2/mime.types"
       -D SERVER_CONFIG_FILE="/etc/apache2/httpd.conf"
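
    One thing to note when debugging this on Gentoo: running apache2 -M by hand does not read the conf.d file, so the -D WSGI define never reaches the binary and the module appears missing even when the init script would load it fine. A hedged way to check, and to keep the Django include out of httpd.conf (on current Gentoo the options file is usually /etc/conf.d/apache2, and vhosts.d/ is included by the stock httpd.conf):

      # Pass the same define the init script would use, then look for the module
      apache2 -D WSGI -M 2>/dev/null | grep wsgi

      # Restart through the init script so APACHE2_OPTS is actually applied
      sudo /etc/init.d/apache2 restart

      # Keep site-specific config in a drop-in vhost fragment instead of httpd.conf
      sudo cp /var/www/localhost/htdocs/mysite/apache/apache_django_wsgi.conf /etc/apache2/vhosts.d/10_mysite.conf

    Files dropped into /etc/apache2/vhosts.d/ (like modules.d/) survive package upgrades far better than edits to httpd.conf itself; the 10_mysite.conf name is just an example.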

    Read the article

  • keepalived issues on xen domU

    - by David Cournapeau
    Hi, I cannot manage to get keepalived to run correctly on Xen domUs. I am following this link for configuration, and it works great on some local VMs (running under KVM). If I set up the exact same configuration on Xen domUs, it does not work: the two servers do not see each other and both decide to be master (10.120.100.99 being the virtual IP):

      $ sudo ip addr sh eth0    # host1
      2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
          link/ether 00:16:3e:78:f5:31 brd ff:ff:ff:ff:ff:ff
          inet 10.120.100.104/24 brd 10.120.100.255 scope global eth0
          inet 10.120.100.99/32 scope global eth0
          inet6 fe80::216:3eff:fe78:f531/64 scope link
             valid_lft forever preferred_lft forever

      $ sudo ip addr sh eth0    # host2
      2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
          link/ether 00:16:3e:51:36:20 brd ff:ff:ff:ff:ff:ff
          inet 10.120.100.105/24 brd 10.120.100.255 scope global eth0
          inet 10.120.100.99/32 scope global eth0
          inet6 fe80::216:3eff:fe51:3620/64 scope link
             valid_lft forever preferred_lft forever

    Is there a way I could debug this? It seems some people are able to use keepalived on Xen, judging by some mailing lists, but without much info on their configuration.
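
    Both nodes claiming the VIP is the classic symptom of the VRRP advertisements never arriving, and on Xen that is usually the bridge or a firewall rule eating multicast (VRRP is IP protocol 112, sent to 224.0.0.18). A hedged way to see where the advertisements die, run on each domU:

      # Watch for VRRP advertisements from the peer (IP protocol 112)
      sudo tcpdump -ni eth0 ip proto 112

      # Check that nothing is filtering VRRP or its multicast group
      sudo iptables -L -n -v | grep -Ei 'vrrp|112|224\.0\.0\.18'

    If host1 only ever sees its own advertisements and never host2's, the dom0 bridge (or the provider's network) is dropping multicast; recent keepalived versions can work around that with unicast_peer entries in the vrrp_instance block, otherwise multicast has to be allowed on the bridge.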

    Read the article

  • postfix and iRedMail- Relaying Denied

    - by Lock
    I am trying to set up iRedMail and am way over my head here. I have installed it, and can send emails internally, but not externally. When I send an email from outside, I get the following return email:

      The error that the other server returned was: 550 550 5.7.1 <[email protected]>... Relaying denied (state 13).

    Now I have no idea where to start looking! Any ideas? I have really only just installed iRedMail, so I am unsure what else I need to do to get it working. I've pointed my MX records to that server, so that shouldn't be the problem.

    Also, if I stop Postfix (so nothing is listening on port 25) and send a test email, I get the same reply back. Why would I get the same reply back even if Postfix is stopped? I have run tcpdump on port 25 and can see the packets coming in/out, so it's definitely a configuration issue!

    I suppose my question is not really "what is my problem", but more "what configuration needs to be completed on Postfix and iRedMail?"
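
    Two observations follow from the symptoms: "Relaying denied" is a sendmail-style rejection that Postfix does not normally emit, and still getting it with Postfix stopped strongly suggests the mail is being answered by something other than this box (an old MX target, an ISP intercepting port 25, or a NAT rule). A hedged way to see who really answers for the domain, run from outside the network, with example.com standing in for the real domain:

      # Where does the world think mail for the domain should go?
      dig +short example.com MX

      # Who actually answers on port 25 of that host? The SMTP banner names the software.
      nc -v mail.example.com 25

      # On the iRedMail box itself: which domains will Postfix accept mail for?
      postconf mydestination virtual_mailbox_domains relay_domains

    If the banner is not this server's Postfix, fix the MX record or port forwarding first; if it is, the recipient domain has to be listed among iRedMail's virtual domains (it keeps them in MySQL or LDAP, managed through its admin panel) before Postfix will accept mail for it.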

    Read the article

  • Exchange 2013 attachments too big?

    - by KPS
    I am having the toughest time sending large attachments, everywhere I have checked my file size limit for send/receive is 100mb but yet users are unable to receive files even at the size of 14mb. I'm using a spam filter (Appriver) and have worked with there support for a very long time, we see the following errors in logs 13:32:40.260 4 SMTP-000036([myserverIP]) rsp: 354 Start mail input; end with <CRLF>.<CRLF> 13:33:41.038 3 SMTP-000033([myserverIP]) write failed. Error Code=connection reset by peer 13:33:41.038 3 SMTP-000033([myserverIP]) [659500] failed to send. Error Code=connection reset by peer 13:33:41.038 4 SMTP([myserverIP]) [659500] batch reenqueued into tail Windows firewall is disabled on the exchange server, all other emails that are of smaller value come through just fine. Here is a print out of size limits: ConnectorType ConnectorName MaxReceiveMessageSize MaxSendMessageSize ------------- ------------- --------------------- ------------------ Send InternetSendConnector - 35 MB (36,700,160 bytes) Send Appriver-Smarthost - 35 MB (36,700,160 bytes) Receive Default EXCHSRVR 100 MB (104,857,600 bytes) - Receive Client Proxy EXCHSRVR 100 MB (104,857,600 bytes) - Receive Default Frontend EXCHSRVR 100 MB (104,857,600 bytes) - Receive Outbound Proxy Frontend EXCHSRVR 100 MB (104,857,600 bytes) - Receive Client Frontend EXCHSRVR 100 MB (104,857,600 bytes) - Receive ExchangeRelay 100 MB (104,857,600 bytes) - TransportConfig - 100 MB (104,857,600 bytes) 10 MB (10,485,760 bytes) ADSiteLink DEFAULTIPSITELINK Unlimited Unlimited There is a no anti-virus on the server either that could be interfering, I am out of ideas at this point :( EDIT 1 After running BPA, it gives and error: Exchange Organization: Check whether the incoming message(CN=MyDomain,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=WG,DC=local) size isn't set The maximum incoming message size isn't set in organization 'CN=MyDomain,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=WG,DC=local'. This can cause reliability problems. Here are the sizes as of now: [PS] C:\Temp>Get-TransportConfig | ft MaxSendSize, MaxReceiveSize MaxSendSize MaxReceiveSize ----------- -------------- Unlimited Unlimited [PS] C:\Temp>Get-ReceiveConnector | ft name, MaxMessageSize Name MaxMessageSize ---- -------------- Default EXCHSRVR 100 MB (104,857,600 bytes) Client Proxy EXCHSRVR 100 MB (104,857,600 bytes) Default Frontend EXCHSRVR 100 MB (104,857,600 bytes) Outbound Proxy Frontend EXCHSRVR 100 MB (104,857,600 bytes) Client Frontend EXCHSRVR 100 MB (104,857,600 bytes) ExchangeRelay 100 MB (104,857,600 bytes) Again, smaller emails come through just fine. Seems like there is a 10mb receive limit somewhere that I cannot find.

    Read the article
