Search Results

Search found 19061 results on 763 pages for 'load factor'.


  • Vista - Profile not Loaded Correctly (Cannot Access Registry)

    - by Geoff
    Every so often, I log on and get the following message: "User profile was not loaded correctly. You have been logged on with a temporary profile. Changes you make to this profile will be lost when you log off. Please see the event log for details or contact your administrator." This almost always happens when somebody else has been on the computer for a while and then I log on. It never used to happen, but now it happens pretty often. My profile is not permanently corrupted; all I have to do is restart my computer, but this annoys me and I would like to fix it. I was curious about the root cause, so I looked into the Event Log and found that the ntuser.dat file in the profile I was logging on to was locked at logon time. This prevented the current user's registry hive from being loaded, which in turn caused the profile load to fail. What could be locking this file? Is there any way to get a process list without logging on, so that I can identify which process has the file locked? Any other suggestions? Hopefully I can find a solution.
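
    One way to see which process holds the lock, assuming you can still get a session under the temporary profile or another account, is Sysinternals Handle; a minimal sketch (the search string only has to match part of the file path, and -u adds the owning user):

        handle.exe -u ntuser.dat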

    Read the article

  • Pros and Cons of a proxy/gateway server

    - by Curtis
    I'm working with a web app that uses two machines, a BSD server and a Windows 2000 server. When someone goes to our website, they are connected to the BSD server which, using Apache's proxy module, relays the requests and responses between them and the web server on the Windows machine. The idea (designed and deployed about 9 years ago) was that it was more secure to have outside people connect to the BSD server rather than to the Windows server running the web app. The BSD server is a bare-bones install with all unnecessary services and applications removed. These servers are about to be replaced, and the big question is: is a cut-down, bare-bones server still necessary for security in this setup? From my research online I don't see anyone else running a setup like this (at least, I don't see anyone questioning it). If they have a server between the user and the web app server(s), it is caching, compressing, and/or load balancing. Is there anything I'm overlooking by letting people connect directly from the internet** to a Windows 2008 R2 server that's running the web application? **There's a good hardware firewall between the internet and the server, with only minimal ports open. Thank you.
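
    For reference, the relay described above comes down to a couple of mod_proxy directives on the BSD box; a minimal sketch (the internal hostname is hypothetical):

        ProxyRequests Off
        ProxyPass        / http://internal-win2000/
        ProxyPassReverse / http://internal-win2000/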

    Read the article

  • Some free cloud solutions to enhance your business

    - by Saif Bechan
    I am co-owner of a small internet business. I am in charge of IT, and I try to get things done at as low a cost as possible. When investing in servers, resources, and overall business costs, your project can soon turn into a financial disaster. Cloud solutions can help you solve some financial problems, scalability problems, and overall performance problems of your server or web application. Recently I moved the whole internal/external communication (email, calendar, documents) of my business to the cloud, using the free version of Google Apps. This works great and is a big advantage on multiple levels: I do not have to fight spam on my system anymore, fewer resources are used on my system, and switching servers will go a lot more easily. Questions: Can you name some cloud solutions that you have used, or that you would simply recommend? They can range from financial benefits to organizational or performance benefits; anything counts, as long as it helps you spread the load of your business.

    Read the article

  • Which is more secure: Tomcat standalone or Tomcat behind Apache?

    - by NoozNooz42
    This question is not about performance, nor about load balancing, etc. Which would be more secure: running Tomcat in standalone mode or running Tomcat behind Apache? The thing is, Tomcat is written in Java and hence is pretty much immune to buffer overruns/overflows (unless a buffer overrun can be triggered in a C-written lib used by Tomcat, but those are rare [the last I remember was in zlib, many many moons ago] and one heck of a hack to actually exploit), which gets rid of a lot of potential exploits. This page: http://wiki.apache.org/tomcat/FAQ/Security has this to say: "There have been no public cases of damage done to a company, organization, or individual due to a Tomcat security issue... there have been only theoretical vulnerabilities found. All of those were addressed even though there were no documented cases of actual exploitation of these vulnerabilities." This, combined with the fact that buffer overruns/overflows are pretty much non-existent in Java, makes me believe that Tomcat in standalone mode is pretty secure. In addition to that, I can install both Java and Tomcat on Linux without needing to be root. The only moment I need root is to set up transparent forwarding from port 8080 to port 80 (and 8443 to 443). Two iptables lines as root; that's all root is needed for. (I don't know about Apache.) Apache is much more used than Tomcat and definitely does not have a security track record as good as Tomcat's. What would make Tomcat + Apache more secure? What would make Tomcat + Apache less secure? In short: which is more secure, Tomcat standalone or Tomcat with Apache? (Remembering that performance isn't an issue here.)
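
    For context, the transparent forwarding mentioned above is typically just these two lines; a sketch, assuming iptables with the default nat table:

        iptables -t nat -A PREROUTING -p tcp --dport 80  -j REDIRECT --to-ports 8080
        iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8443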

    Read the article

  • Webapp in Jetty can't find properties file after running a couple days

    - by Cuga
    I have a webapp running in Jetty on Mac OS 10.6. After a few days of running, without the server losing power or rebooting, it stops working, saying it can't find a properties file. This properties file is included inside the .war file deployed to the /webapps directory. If I restart Jetty as the superuser, the web service works again just fine. Can anyone lend any advice as to what's going on and how I can fix it? The error shown when it isn't working is:

        Problem accessing /my-web-service. Reason: INTERNAL_SERVER_ERROR
        Caused by: java.lang.NullPointerException
            at com.company.service.Dao.readFromPropertiesFile(BwDao.java:35)
            at com.company.service.ServletHandler.doGet(ProxyClass.java:66)
            ...
            at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
            at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
            at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

    Here's where the properties files exist inside the .war file it's trying to read from (screenshot omitted), and this is how the properties are being read from the classpath:

        Properties properties = new Properties();
        properties.load(Thread.currentThread().getContextClassLoader()
                .getResourceAsStream("app.properties"));

    Again, this works just fine right after I have restarted the server, but it seems to fail after running a few days.
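
    A guard worth adding while debugging, as a sketch: the NullPointerException suggests getResourceAsStream() is returning null after a while (a redeploy or context classloader change could cause that), so falling back to another loader and failing loudly makes the symptom unambiguous. Dao is taken from the stack trace above; the fallback is hypothetical:

        InputStream in = Thread.currentThread().getContextClassLoader()
                .getResourceAsStream("app.properties");
        if (in == null) {
            // hypothetical fallback: try the webapp class's own loader
            in = Dao.class.getClassLoader().getResourceAsStream("app.properties");
        }
        if (in == null) {
            throw new IllegalStateException("app.properties not found on classpath");
        }
        Properties properties = new Properties();
        properties.load(in);
        in.close();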

    Read the article

  • qsub: How can I find out what DRM middleware exactly is installed on a cluster?

    - by gojira
    I have a user account on a very big cluster. I have previous experience with Grid Engine and want to use the cluster for array jobs. The documentation tells me to use "qsub" for load balancing / submission of many jobs, so I assumed the cluster runs Grid Engine. However, all my Grid Engine scripts failed to run. I checked the documentation and it is a bit weird. Now I slowly suspect that this cluster does not actually have Grid Engine; maybe it's running something called Torque (?!). The whole terminology in the man pages is a bit weird for me as a Grid Engine user; for example, they talk about "bulk jobs" instead of "array jobs". There is no mention of variables I rely on, like SGE_TASK_ID; instead they refer to variables starting with PBS_. Still, there are qsub and qstat commands. Also, qsub behaves differently: apparently it is not possible to specify the command-line parameters with bash-script comments, etc. There is documentation for the cluster system, but it does not say what the DRM middleware actually is; it refers to the entire DRM system simply as "qsub". I tried:

        qsub --version
        qsub: 1.2 2010/8/17

    I am not sure what I am actually running when I invoke qsub on that cluster! My question is: how can I find out whether I am running Grid Engine or Torque (or whatever it is), and which version?
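
    A few probes that usually tell the products apart; a sketch, since tooling and package names vary by site:

        man qsub | head -5                     # the man page header names the product
        pbsnodes -a >/dev/null 2>&1 && echo "Torque/PBS tools present"
        qconf -help >/dev/null 2>&1 && echo "Grid Engine tools present"
        rpm -qa | egrep -i 'torque|pbs|gridengine|sge'   # if it is an RPM-based system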

    Read the article

  • Cluster File System

    - by Ben
    We are looking to choose a clustered file system for our in-house application. Let me first highlight my requirements. We have a storage unit and 2 servers at present. We get data files from remote servers onto our server, and on both servers we run our application to access that data and produce a final result as per our requirements. In the future, maybe after 3-4 months, we may add more servers to the current cluster pool to handle more data load from the remote data senders. So my requirement is to mount the same storage partition on 2-3 servers (it might be 4-5 more servers in the future); my application reads data from the storage partition and writes back to it. Is there any bottleneck or limitation in RHCS, GFS2, or anything else? We are new to RHCS + GFS and all of this. Is there a better approach, or some lightweight way to deal with our requirement? What is the best OS version for this; how is RHEL 6.4 64-bit? Please share case studies or guide references from past experience with such environments. Regards, Ben

    Read the article

  • bind9 named.conf zones size limit

    - by mox601
    I am trying to set up a test environment on my local machine, and I am trying to start a DNS daemon that loads the configuration from a named.conf.custom file. As long as that file is only 3-4 zones long, the bind9 daemon loads fine, but when I use the config file I actually need (around 10,000 lines long), bind can't start up, and in the syslog I find this message:

        starting BIND 9.7.0-P1 -u bind
        Jun 14 17:06:06 cibionte-pc named[9785]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=no' '--with-dlz-bdb=yes' '--with-dlz-filesystem=yes' '--with-dlz-ldap=yes' '--with-dlz-stub=yes' '--with-geoip=/usr' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
        Jun 14 17:06:06 cibionte-pc named[9785]: adjusted limit on open files from 1024 to 1048576
        Jun 14 17:06:06 cibionte-pc named[9785]: found 1 CPU, using 1 worker thread
        Jun 14 17:06:06 cibionte-pc named[9785]: using up to 4096 sockets
        Jun 14 17:06:06 cibionte-pc named[9785]: loading configuration from '/etc/bind/named.conf'
        Jun 14 17:06:06 cibionte-pc named[9785]: /etc/bind/named.conf.saferinternet:1: unknown option 'zone'
        Jun 14 17:06:06 cibionte-pc named[9785]: loading configuration: failure
        Jun 14 17:06:06 cibionte-pc named[9785]: exiting (due to fatal error)

    Is there any limit on the size of the files bind9 is allowed to load?
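
    Before assuming a size limit: the "unknown option 'zone'" error points at line 1 of the included file, which reads like a syntax problem rather than a length problem. A sketch of how to validate the configuration directly with BIND's own checker:

        named-checkconf /etc/bind/named.conf
        named-checkconf /etc/bind/named.conf.saferinternet   # check the included file on its own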

    Read the article

  • Firefox not using Kerberos despite being configured to

    - by Nicolas Raoul
    I am deploying Linux/Firefox on a corporate Kerberos network. I followed this Kerberos-on-Firefox procedure, but Firefox still does not connect via the company's Kerberos. I am using Firefox 3.0.18 on RedHat EL Server 5.5. Here is what I did:

    1. Run kinit on the command line to create a Kerberos ticket.
    2. Check with klist: the ticket is valid until tomorrow, service principal is krbtgt/[email protected].
    3. In Firefox, set network.negotiate-auth.trusted-uris and network.negotiate-auth.delegation-uris to .dc.thecompany.com.
    4. Load the company's portal page via its full hostname: http://server37.thecompany.com/alfresco. (Note: server37 is actually the machine I am running Firefox on, but that should not be a problem, I guess.)

    PROBLEM: the company's intranet portal still serves me the login/password page. The same portal correctly uses Kerberos on Internet Explorer/Windows 7 machines with the same settings, and shows the user's personal page. The server does not see any Kerberos request coming. Did I do something wrong? I enabled NSPR_LOG_MODULES=negotiateauth:5 as explained here, but the log file stays empty.
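
    On the empty log: a Firefox launched from the desktop may not inherit the variables, so exporting them in a shell and starting Firefox from there is a quick check; a sketch:

        export NSPR_LOG_MODULES=negotiateauth:5
        export NSPR_LOG_FILE=/tmp/negotiateauth.log
        firefox http://server37.thecompany.com/alfresco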

    Read the article

  • WAMP server won't run with PHP 5.3.4 but will with PHP 5.2.11

    - by Ben Williams
    I have a 64-bit Windows 7 Professional machine. I'm running WampServer Version 2.1 with Apache 2.2.4. It was installed on a clean machine, and I'm using the default ini/conf files as they come. Wamp is installed in C:\wamp\, with PHP 5.2 at C:\wamp\bin\php\php5.2.11 and PHP 5.3 at C:\wamp\bin\php\php5.3.4. Both folders have the same permissions. When I run WAMP with 5.2.11 picked, it starts fine. When I run it with 5.3.4 picked, there are no errors in the Apache or PHP error logs, but I get the following in my system application error logs:

        The Apache service named reported the following error:
        httpd.exe: Syntax error on line 115 of C:/wamp/bin/apache/apache2.2.4/conf/httpd.conf:
        Cannot load C:/wamp/bin/php/php5.3.4/php5apache2_2.dll into server:
        The Apache service named is not a valid Win32 application.

    5.2.11 calls C:/wamp/bin/php/php5.2.11/php5apache2_2.dll and that doesn't throw an error. What am I doing wrong?

    Read the article

  • FastGate A20 Line And Himem.sys Issue With Updating BIOS

    - by Boris_yo
    I have been meaning to perform my first-ever BIOS update through MS-DOS, but kept postponing the task until today. Despite people telling me any bootable ISO will do it, either through CD-ROM or a RAMDRIVE, I am still having problems. First is the problem with the CD-ROM driver: trying to make it work with 4 driver files (cd1.SYS, cd2.SYS, cd3.SYS, cd4.SYS), as well as starting the RAMDISK, proved to be a failure:

        CD-ROM XMS Allocation Error
        RAMDISK XMS Allocation Error
        (X: and R: drives not working)

    The A20 line seemed to be the obstacle, and a couple of searches pointed me to this article on the Microsoft website. It seems that FastGate is the culprit: it takes over the A20 line and conflicts with himem.sys, which should be handling it, leaving the driver unable to allocate memory resources. Although the article suggests two workarounds, disabling the FastGate option or adding a switch, I read that the former could cause problems involving later tinkering with the BIOS, disabling shadow copy, etc., while the latter can just hang the system, as stated in the link above. I assume it just hangs the boot process from the image file, though. Summing up the above, I am cautious and think it is risky to follow either workaround, because disabling FastGate or trying the available switches from 1-14 or 16 could crash the BIOS update process by itself. I could do this without needing himem.sys, using a bootable USB thumb drive made to be seen as USB-HDD, but some time ago I read that it is never a good idea to update a BIOS from a hard drive, so even though it is a simulation, who knows... maybe it will deactivate the hard drive in the middle of the BIOS update process, or even the USB thumb drive per se? One forum discussion was about updating a BIOS, and somebody suggested not loading himem.sys for some reason, but now that I think of it, what if the BIOS update needs upper memory?
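
    For reference, the switch workaround from the Microsoft article comes down to a single CONFIG.SYS line; a sketch, where the driver path is assumed to be on the boot disk and the machine code (the 1-14 or 16 values mentioned above) is the part to experiment with:

        DEVICE=A:\HIMEM.SYS /MACHINE:1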

    Read the article

  • Connect Chrome to TOR

    - by Jack M
    I'm having difficulty connecting Chrome to Tor. I started trying yesterday. I started Vidalia and the Tor Browser and then followed the advice at http://lifehacker.com/5614732/create-a-tor-button-in-chrome-for-on+demand-anonymous-browsing, downloading Proxy Switchy and setting it up as stated. This resulted in Error 130 (net::ERR_PROXY_CONNECTION_FAILED) in Chrome when I tried to load a webpage. So I looked into Vidalia's settings and noticed that it appeared to be using port 9051, so I set that instead of the 8118 everyone on the internet seems to be suggesting. Then I got a new error: Error 111 (net::ERR_TUNNEL_CONNECTION_FAILED). Digging a bit, I found that Tor should be set as a SOCKS proxy, not an HTTP proxy, so I unticked "use same settings for all protocols" in Proxy Switchy and just set localhost:9051 for SOCKS. That got me Error 7 (net::ERR_TIMED_OUT). And that's when I came here for help. I typed up the above question, but then at the last minute decided to do a bit more reading and found someone here suggesting command-line arguments via a Windows shortcut:

        "C:\snip\chrome.exe" --proxy-server=";socks=127.0.0.1:9051;sock4=127.0.0.1:9051;sock5=127.0.0.1:9051" --incognito check.torproject.org

    And that worked perfectly. Yesterday. Today it doesn't, so I'm having to post this question after all. check.torproject.org gives me a "no" with Chrome, but a "yes" with the default Tor Browser. I tried closing Chrome and restarting it (yes, with the correct shortcut) after Vidalia started, but still nothing. The port number hasn't changed or anything. What gives? EDIT: I realized I had a "non-Tor" instance of Chrome running, and that possibly that was causing the command-line args to be ignored when I started the new instance. I closed all instances of Chrome and ran my Chrome Tor shortcut, and it did get rid of the "not using Tor" message -- because I got another Time Out error instead. Vidalia's bandwidth graph didn't even blink.
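
    One variable worth isolating, as a sketch rather than a verified fix: Tor's SOCKS listener conventionally sits on port 9050 (9051 is the control port), and Chrome also accepts a single SOCKS URL form, so a simpler shortcut target to compare against would be:

        "C:\snip\chrome.exe" --proxy-server="socks5://127.0.0.1:9050" --incognito check.torproject.org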

    Read the article

  • Virtual memory committed

    - by vinu
    Around 40-45 days after a server bounce, we start receiving continuous "Committed Virtual Memory" alerts, which indicate swap space usage on the order of 4 GB. This also causes the application to perform very slowly and to experience a number of stalled transactions.

    Server setup: 4 Tomcat servers (version 7.0.22) that are load balanced (not clustered) by 2 Apache servers; the Apache servers themselves serve static content and route requests to the 4 Tomcat servers.
    Java runtime version: java version "1.6.0_30", Java(TM) SE Runtime Environment (build 1.6.0_30-b12), Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode).
    Memory startup parameters: MEMORY_OPTIONS="-Xms1024m -Xmx1024m -Xss192k -XX:MaxGCPauseMillis=500 -XX:+HeapDumpOnOutOfMemoryError -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled"
    Monitoring: Wily monitoring is available on all the production servers; it monitors key server parameters and sends out configurable alert emails based on predefined settings.
    Note: each of the servers also hosts two other separate Tomcat domains that run different applications.

    Investigated areas: There is no heap memory leak, and GC runs fine without any issues over any period of time. The current busy thread count corresponds directly to application usage: weekends and nights have fewer threads than business hours. ThreadLocal uses a WeakReference internally; if the ThreadLocal is not strongly referenced, it will be garbage-collected, even though various threads have values stored via that ThreadLocal. Additionally, ThreadLocal values are actually stored in the Thread; if a thread dies, all of the values associated with that thread through a ThreadLocal are collected. If you have a ThreadLocal as a final class member, that's a strong reference, and it cannot be collected until the class is unloaded. But this is how any class member works, and isn't considered a memory leak. The cited problem only comes into play when the value stored in a ThreadLocal strongly references that ThreadLocal, a sort of circular reference. In this case the value (a SimpleDateFormat) has no backward reference to the ThreadLocal, so there's no memory leak in this code. Can anyone please let me know what could be the cause of this, and what should be monitored?
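
    A couple of low-overhead things to watch between a bounce and day 40, as a sketch (the PID is whichever Tomcat instance is alerting):

        vmstat 60                                    # si/so columns show actual swap-in/swap-out
        grep -E 'VmSwap|VmRSS' /proc/<pid>/status    # per-process swap and resident set
        jmap -histo <pid> | head -25                 # top heap consumers without a full dump

    Since -Xmx is only 1 GB per instance, comparing VmRSS across all three Tomcat domains on a box may show which JVM's native (non-heap) footprint is the one growing.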

    Read the article

  • Why is my mono/XSP site loading slowly?

    - by acidzombie24
    I have two sites on the same server. One loads perfectly and incredibly fast. The other is an equally complex site, with a bit less JavaScript and zero images, yet it takes several seconds to load, and there is a 1 in 3 chance that I get an HTTP 500 error. WTF. I grabbed the latest 2.6.x versions of mono, mod_mono and xsp (libgdiplus-2.6.7, xsp-2.6.5, mod_mono-2.6.3 and mono 2.6.7). This is what's in Apache's error.log:

        [Mon Jan 03 19:33:40 2011] [error] (70014)End of file found: read_data failed
        [Mon Jan 03 19:33:40 2011] [error] Command stream corrupted, last command was 1
        [Mon Jan 03 19:34:52 2011] [error] (70014)End of file found: read_data failed
        [Mon Jan 03 19:34:52 2011] [error] (70014)End of file found: read_data failed
        [Mon Jan 03 19:34:52 2011] [error] Command stream corrupted, last command was 1
        [Mon Jan 03 19:34:52 2011] [error] Command stream corrupted, last command was 1

    And this is the page error:

        Internal Server Error
        The server encountered an internal error or misconfiguration and was unable to complete your request.
        Please contact the server administrator, webmaster@localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error.
        More information about this error may be available in the server error log.
        Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny9 with Suhosin-Patch mod_mono/2.6.3 Server

    Read the article

  • How To Find Reasons Why a Site Goes Online/Offline

    - by HollerTrain
    Today a website I manage has been going online and offline throughout the entire day. I have no idea what is causing the issue, so I am seeking guidance on where to start. It is a WordPress-based site. Here is what I DO know: I use a program that pings the server every minute and emails me when the server is not responding, so I know exactly when the site is online and offline. The site went up and down between 8pm and 12pm on 12.28, and around the 1am hour early in the morning of 12.29 (New York City timezone; all times below are in the same timezone). At the times of the ups/downs I see a lot of strain on memory usage; look at the load average when the site is going online/offline (http://screencast.com/t/BRlfXkqrbJII). Then I ran this command to restart http (http://screencast.com/t/usVtYWZ2Qi) and the memory usage went down to this (http://screencast.com/t/VdTIy3bgZiQB). An hour after I restarted http, the site went offline/online again, so restarting http didn't help much. While the site was going offline/online, I ran the top command and got this (http://screencast.com/t/zEwr7YQj3). Here is a top command when the site is at its lowest (http://screencast.com/t/eaMfha9lbT - so this would be dubbed "normal"). Here is a bandwidth report (http://screencast.com/t/AS0h2CH1Gypq). The traffic doesn't seem to be that much (http://screencast.com/t/s7hrWNNic1K), but looking at the times the site goes up/down, could this be one of the reasons? I have the dvp Nitro package at Media Temple (http://mediatemple.net/webhosting/nitro/). So at this point I would appreciate some help in figuring out the cause, and how I can go about pinpointing this issue. ANY HELP is greatly appreciated.
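
    One way to capture what is eating memory at the moment of an outage, rather than afterwards, sketched as a cron entry (the path and interval are arbitrary):

        # snapshot the top memory consumers every minute
        * * * * * (date; ps aux --sort=-%mem | head -15) >> /var/log/memwatch.log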

    Read the article

  • What is the best way to recover from a mysql replication failure?

    - by Itai Ganot
    Today, the replication between our master MySQL DB server and the two replica servers dropped. I have a procedure here which was written a long time ago, and I'm not sure it's the fastest method to recover from this issue. I'd like to share the procedure with you; I'd appreciate it if you could give your thoughts about it, and maybe even tell me how it can be done more quickly.

    At the master:

        RESET MASTER;
        FLUSH TABLES WITH READ LOCK;
        SHOW MASTER STATUS;

    Copy the values of the result of the last command somewhere. Without closing the connection to the client (because it would release the read lock), issue the command to get a dump of the master:

        mysqldump mysq

    Now you can release the lock, even if the dump hasn't finished. To do so, perform the following command in the mysql client:

        UNLOCK TABLES;

    Now copy the dump file to the slave using scp or your preferred tool. At the slave, open a connection to mysql and type:

        STOP SLAVE;

    Load the master's data dump with this console command:

        mysql -uroot -p < mysqldump.sql

    Sync the slave and master logs:

        RESET SLAVE;
        CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;

    where the values of the above fields are the ones you copied before. Finally type:

        START SLAVE;

    To check that everything is working again, type SHOW SLAVE STATUS; and you should see:

        Slave_IO_Running: Yes
        Slave_SQL_Running: Yes

    That's it! At the moment I'm at the stage of copying the DB from the master to the other two replica servers, and it has taken more than 6 hours so far; isn't that too slow? The servers are connected through a 1 Gb switch.
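
    On the copy speed: streaming the dump compressed, straight into the slave, avoids writing and copying a huge intermediate file and often cuts the wall time considerably. A sketch (the host name and credentials are placeholders):

        mysqldump --all-databases --master-data | gzip -1 \
            | ssh slave-host 'gunzip | mysql -uroot -p<password>'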

    Read the article

  • Is .htaccess slowing down my dedicated server?

    - by David Robles
    First of all, I consider myself more a programmer than a server guy. I have a website that receives about 3,000 visits per day, which I think is a lot less than the max capacity of a dedicated server. However, I've noticed that the connection to the website is pretty slow, e.g., loading images, connecting via SSH, etc. I recently configured .htaccess to prevent hotlinking of images on my server (i.e., .jpg, .gif and .png), and I was wondering if that could be slowing down my website. This is the configuration I have:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteEngine on
        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http://www.mysite.com/.*$ [NC]
        RewriteCond %{HTTP_REFERER} !^http://www.mysite.com$ [NC]
        RewriteRule .*\.(jpg|jpeg|gif|png|bmp|swf)$ http://www.google.com/ [R,NC]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    I found some code to do that on Google and just copied it into .htaccess, since I'm not an expert in Apache. It works, but I don't know if it is the best way to do it. How can I tell whether this is the reason the server is slow? Are there any tools to monitor it? What would you do, guys? Thanks in advance!
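
    A direct way to test the hypothesis is to benchmark an image URL with the hotlink rules in place, and again with them commented out; a sketch using ApacheBench (the URL is hypothetical):

        ab -n 100 -c 10 http://www.mysite.com/wp-content/uploads/some-image.jpg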

    Read the article

  • Installed MySQL using yum but mysqld doesn't start: Fedora 16

    - by Sumit Singh Bir
    I installed mysql, mysql-server and mysql-libs. After installation I tried to start the mysql service with:

        systemctl start mysqld.service

    which gave:

        Failed to issue method call: Unit mysqld.service failed to load: Invalid argument. See system logs and 'systemctl status mysqld.service' for details.

    systemctl status mysqld.service confirmed that it had an invalid argument. Then I changed the content of mysqld.service to this:

        File: /etc/systemd/system/mysqld.service

        [Unit]
        Description=MySQL database server
        After=syslog.target
        After=network.target

        [Service]
        User=mysql
        Group=mysql
        ExecStart=/usr/sbin/mysqld --pid-file=/var/run/mysqld/mysqld.pid
        ExecStop=/bin/kill -15 $MAINPID
        PIDFile=/var/run/mysqld/mysqld.pid
        # We rely on systemd, not mysqld_safe, to restart mysqld if it dies
        Restart=always
        # Place temp files in a secure directory, not /tmp
        PrivateTmp=true

        [Install]
        WantedBy=multi-user.target

    Now the "invalid argument" error is resolved and mysqld.service is loaded but not enabled; systemctl start mysqld.service works fine. It worked! But enabling it with systemctl enable mysqld.service, or running service mysqld start, does nothing: the cursor just keeps blinking after I press enter. The thing is, I have given over HDD space on this F16 install to development work, and I cannot figure out a way forward ... please, somebody help.
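
    One step that is easy to miss after hand-editing a unit file, sketched as the usual sequence:

        systemctl daemon-reload            # make systemd re-read the edited unit
        systemctl enable mysqld.service
        systemctl status mysqld.service    # confirm it is loaded and enabled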

    Read the article

  • Firefox proxy authentication with Kerberos: one service ticket per connection (Linux)

    - by Dari
    I am trying to enable proxy authentication via Kerberos for Firefox. The setup is: an Active Directory domain (for LDAP and Kerberos; this works, and I can log in to the computer and get Kerberos tickets without problems); a Microsoft Windows witness machine (on which Firefox runs fine with no ticket problem); a CentOS 6.3 system with Firefox (the tests were performed both with the 10.0.1 ESR found in the CentOS package repositories and with the 15.0.1 downloaded from Mozilla's website); and a BlueCoat proxy with Kerberos authentication enabled. At the moment, Firefox requests an element of a website, gets an HTTP "407 Proxy Authentication Required" from the proxy, gets a ticket-granting service (TGS) ticket from the domain for the proxy, and performs the request again while passing the ticket. That transaction runs fine. However, when more elements are requested (in parallel), Firefox requests one more ticket per proxy connection. This takes many DNS queries and Kerberos interactions with the domain controllers and costs a lot of time (for example, the home page of Adobe takes several minutes to load, and at the end I have about 30 valid Kerberos tickets). I have been stuck on this for a while, and help would be greatly appreciated. Minor information: the CentOS operating system is virtualized with VMware Player 3.1.3, but I do not think this is a game changer.

    Read the article

  • Walkthrough/guide for building an application server for a multi-tenant web app [on hold]

    - by Khalid Adisendjaja
    The web app will detect a subdomain such as tenant1.app.com, tenant2.app.com, etc. to identify the tenant environment. Each tenant environment will have different database credentials (port, db name, etc.) but will still connect to the same database server. Each tenant should use app.com as their main domain; using their own domain is prohibited. Each tenant will have their own REST API endpoint, such as tenant1.app.com/api/v1/xxxx, tenant2.app.com/api/v1/xxxx, tenant3.app.com/api/v1/xxxx. I've come to a simple solution by setting a wildcard subdomain (*.app.com) in the web server's Apache/Nginx vhost configuration file (a minimal sketch follows below). I have googled many concepts for building a multi-tenant app server but still don't understand how to really do it, what the right way is, and what is actually required for this task. So I've come to these questions: Do I need a proxy server, DNS masking, etc.? How do I monitor each tenant's activity? What about server performance, load balancing, and scalability? How do I set up an SSL certificate for each tenant? What about application cache for each tenant? Is this setup reliable enough for production? Etc. I have very little experience with server infrastructure, so I'm looking for a DIY walkthrough, a step-by-step guide, or a sophisticated solution ready to be implemented in production.
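
    A minimal nginx sketch of the wildcard vhost idea, assuming nginx is the front end; the capture name, header, and upstream address are hypothetical:

        server {
            listen 80;
            server_name ~^(?<tenant>[a-z0-9-]+)\.app\.com$;

            location / {
                proxy_set_header X-Tenant $tenant;    # pass the tenant id to the app
                proxy_set_header Host $host;
                proxy_pass http://127.0.0.1:8080;     # app server behind the proxy
            }
        }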

    Read the article

  • Managing disk in a VM

    - by dst
    I'm replacing my two old rack servers with a new one that has plenty of power to take over the functionality of my current servers. The server is a 4U rack mount with 16 3.5" SAS drive bays, two 2.5" bays, a Xeon E3-1230v2 CPU and 32GB of ECC RAM. My issue is the following: I would like to have a FreeBSD file server with ZFS managing the disks, but I also need other VMs, e.g. a shell/git server, a mail server, etc. I'm wondering how to deal with the following issues: (1) I want ZFS to fully manage the disks, so I'm not using any hardware RAID. Should I pass the SAS controller directly to the FreeBSD system as PCI passthrough? (2) I want to maximize the reliability of the setup. On what disks should I install the hypervisor and keep the VM system disks? For (2) I have the option of a RAID setup on the SAS controller, using that as the system disk to store the hypervisor as well as the VM images; however, this makes PCI passthrough to the file server impossible. Another option is using the two 2.5" bays. In terms of reliability, how do SSDs compare to e.g. WD RE4 disks? Would it make sense to have two SSDs in software RAID as boot disks for the hypervisor, or should I just go with e.g. WD RE4 disks in a software RAID setup? I also need to think about where to store the mail for the mail server, but this could be done over NFS between the VMs. BTW, this is for home use, so the load is not really that big. What I'm looking for is best practices for splitting up a server.

    Read the article

  • Excel fails to open Python-generated CSV files

    - by johnjdc
    I have many Python scripts that output CSV files. It is occasionally convenient to open these files in Excel. After installing OS X Mavericks, Excel no longer opens these files properly: Excel doesn't parse the files, and it duplicates the rows of the file until it runs out of memory. Specifically, when Excel attempts to open the file, a prompt appears that reads: "File not loaded completely." Example of code I'm using to generate the CSV files:

        import csv
        with open('csv_test.csv', 'wb') as f:
            writer = csv.writer(f)
            writer.writerow([1,2,3])
            writer.writerow([4,5,6])

    Even the simple file generated by the above code fails to load properly in Excel. However, if I open the CSV file in a text editor and copy/paste the text into Excel, parse it with text-to-columns, and then save as CSV from Excel, then I can reopen the CSV file in Excel without issue. Do I need to pass an additional parameter in my scripts to make Excel parse the CSV files the same way it used to? Or is there some setting I can change in OS X Mavericks or Excel? Thanks.
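
    One variable worth isolating is the line terminator: csv.writer defaults to '\r\n', and Excel's import heuristics can be sensitive to it. A sketch to compare against (an experiment, not a confirmed Mavericks fix):

        import csv
        # same output, but with bare '\n' line endings
        with open('csv_test.csv', 'wb') as f:
            writer = csv.writer(f, lineterminator='\n')
            writer.writerow([1, 2, 3])
            writer.writerow([4, 5, 6])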

    Read the article

  • How do I get basic ProxyPass to work on Apache 2.2.17?

    - by Ansis Malins
    I'm trying to get around the ERR_UNSAFE_PORT restriction in Chrome by having Apache reverse proxy other HTTP servers on the machine.

    1. I load mod_proxy with sudo a2enmod proxy.
    2. I add ProxyPass /znc/ http://localhost:6667/ to my httpd.conf.
    3. I restart Apache with sudo /etc/init.d/apache2 restart.

    When I open up /znc/, I get 500 Internal Server Error. I added LogLevel debug, restarted Apache, tried again, and got nothing suspicious:

        [Fri Oct 19 18:55:17 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 0 in child 21528 for worker http://localhost:6667/
        [Fri Oct 19 18:55:17 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 0 in child 21528 for (localhost)
        [Fri Oct 19 18:55:17 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 1 in child 21528 for worker proxy:reverse
        [Fri Oct 19 18:55:17 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 1 in child 21528 for (*)
        [Fri Oct 19 18:55:17 2012] [notice] Apache/2.2.17 (Ubuntu) PHP/5.3.8 configured -- resuming normal operations
        [Fri Oct 19 18:55:17 2012] [info] Server built: Feb 14 2012 17:59:20
        [Fri Oct 19 18:55:17 2012] [debug] prefork.c(1018): AcceptMutex: sysvsem (default: sysvsem)
        [Fri Oct 19 18:55:22 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 0 in child 21532 for worker http://localhost:6667/
        [Fri Oct 19 18:55:22 2012] [debug] proxy_util.c(1837): proxy: worker http://localhost:6667/ already initialized
        [Fri Oct 19 18:55:22 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 0 in child 21532 for (localhost)
        [Fri Oct 19 18:55:22 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 1 in child 21532 for worker proxy:reverse
        [Fri Oct 19 18:55:22 2012] [debug] proxy_util.c(1837): proxy: worker proxy:reverse already initialized
        [Fri Oct 19 18:55:22 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 1 in child 21532 for (*)

    So I'm stumped at this point. What to do? I'm running Ubuntu Server 11.10. ZNC responds with a correct 200 OK and HTML when queried directly, both from the local machine and from the Internet.
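
    Worth checking first: on Debian/Ubuntu, mod_proxy is split into protocol modules, and a ProxyPass to an http:// backend needs proxy_http as well; with only the core proxy module enabled, a 500 on the proxied path is the classic symptom. A sketch:

        sudo a2enmod proxy proxy_http
        sudo /etc/init.d/apache2 restart
        apache2ctl -M | grep proxy    # expect both proxy_module and proxy_http_module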

    Read the article

  • Is the exhaust fan necessary?

    - by Borek
    On my new PC, the component making the most noise is the rear exhaust fan on my case (it is the only exhaust fan in my PC). I tried disconnecting it and watched temperatures in SpeedFan: the CPU was usually at about 35C, peaking at about 50C when the system was under load, which doesn't look too bad. So I'm considering leaving the exhaust fan disconnected permanently, after which the computer is very quiet; the only noise-making components are the Arctic Cooling Freezer 7 Pro Rev.2 (CPU fan) and the PSU fan (Enermax Pro 82+), both quiet enough as far as I can tell. (My GPU has a passive cooler.) Also, those 2 components are moving parts, so they will provide some airflow in the case; even better, the PSU fan sucks air out of the case, so it kind of is an exhaust fan in itself. Does anyone run with the exhaust fan disconnected? You don't have to tell me that it's always better to have more airflow than less, I know that, but noise is also a consideration for me, and temperatures around 40C should be fine, shouldn't they? (I might also consider getting a quieter case fan, but I'm specifically interested in your opinion on the no-exhaust-fan scenario.)

    Read the article

  • How do I fix a super slow MacBook?

    - by MakingScienceFictionFact
    I'm running a black MacBook 4.1: Intel Core 2 Duo @ 2.4 GHz, 2 GB RAM, 250 GB hard disk drive, 800 MHz bus speed. It's about three years old and in excellent shape externally; I treat this thing like a baby. It used to run great, but now it's super slow at everything. I get the spinning pizza of death constantly. It takes a long time to boot up or load any program, even Safari and iTunes. iPhoto is terribly slow. The Internet doesn't work properly, and it reminds me of a buggy PC. I've formatted it and re-installed Mac OS X 10.6 (with all updates), and I've run the disk repair process. As an iOS developer this is driving me crazy, but luckily I have an iMac to work on during the day, which is fast. I'm ready to format it again, but that didn't work last time. After the last format, I copied files back from an external drive, so maybe the offending files were hidden in there somewhere. Here are the hard disk drive and RAM specifications (it is upgradeable to 4 GB of RAM). Hard disk drive: the Fujitsu Mobile MHY2250BH is a 250 GB, standard hard disk drive with a burst transfer rate of 150 MB/s; it is a 5400 RPM drive with an 8 MB buffer. RAM: two sticks of 1 GB DDR2 SDRAM, 667 MHz.

    Read the article
