
  • High disk I/O activity in CentOS server

    - by triiim
    I have about 16 websites on a dedicated CentOS server, and I am having some problems during high-traffic hours: there seems to be heavy disk I/O activity causing a general slowdown. I installed atop, and this is what I see at the bottom (the server has just been restarted, which is why the values are so low):

        *** system and process activity since boot ***          1/18
        PID   RDDSK   WRDSK   WCANCL   DSK  CMD
        2176  1.7G    7.3G    854.4M   39   mysqld
        671   1248K   3.0G    0K       13   flush-8:0
        566   0K      1.1G    0K       5    jbd2/sda2-8
        2401  124.2M  529.1M  22408K   3    crond
        2032  2.2G    502.0M  0K       12   nginx
        2360  425.8M  115.3M  4188K    2    httpd

    flush-8:0 and jbd2/sda2-8 are the processes I see in iotop at 99% in the IO column, and they write the most to the disk (after mysqld). From what I found on Google, this could be caused by an ext4-related bug. The current kernel is:

        Linux srvr.com 2.6.32-71.29.1.el6.x86_64 #1 SMP Mon Jun 27 19:49:27 BST 2011 x86_64 x86_64 x86_64 GNU/Linux

    I asked the hosting support to update the kernel. They tried, but they now say the server won't boot with the newly installed kernel and they had to go back to the previous one; they are not helping very much. Does anyone have any idea how I could solve the high disk usage caused by the flush-8:0 and jbd2/sda2-8 processes?
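
    One direction worth trying while stuck on this kernel (a hedged sketch, not a confirmed fix): flush-8:0 and jbd2/sda2-8 are the kernel's dirty-page writeback thread and the ext4 journal thread, so smoothing out writeback bursts and lengthening the journal commit interval can reduce the spikes they cause. The values below are illustrative assumptions; test them before relying on them in production.

        # Inspect the current writeback thresholds
        sysctl vm.dirty_ratio vm.dirty_background_ratio

        # Start background writeback earlier and in smaller bursts
        sysctl -w vm.dirty_background_ratio=5
        sysctl -w vm.dirty_ratio=15

        # Commit the ext4 journal every 30s instead of the default 5s
        # (trades a little durability for fewer jbd2 wake-ups)
        mount -o remount,commit=30 /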

  • Apache, error logs and logrotate: what is the best method?

    - by OlivierDofus
    Hi! Here's a vhost example from my sites:

        <VirtualHost *:80>
            DocumentRoot /datas/web/woog
            ServerName woog.com
            ServerAlias www.woog.com
            ErrorLog "|/httpd-2.2.8/bin/rotatelogs /logs/woog/error_log 86400"
            CustomLog "|/httpd-2.2.8/bin/rotatelogs /logs/woog/access_log 86400" combined
            DirectoryIndex index.php index.htm
            <Location />
                Allow from All
            </Location>
            <Directory /*>
                Options FollowSymLinks
                AllowOverride Limit AuthConfig
            </Directory>
        </VirtualHost>

    I've got 12 sites running now, which gives something like this:

        [Shake]:/sources/software/mod_log_rotate# ps x | grep rotate
        /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/error_log 86400
        /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/error_log 86400
        [snip (as many error_log processes as virtual hosts)]
        /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/access_log 86400
        /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/access_log 86400
        [snip (as many access_log processes as virtual hosts)]
        grep rotate

    I've been looking everywhere, but I've only found mod_log_rotate. The "little" problem is that its author (a very good C developer) explains: "Unfortunately Apache error logs are handled in such a way that we can't work the same log rotation magic on them. Like transfer logs they support piped logging though so you can still use rotatelogs for them."

    So my question is: what would be the best way to handle multiple logs? If I just write very classical log files and use the system's logrotate program, couldn't that be a good deal? How would/do you deal with that? Thank you!
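
    For reference, a hedged sketch of the classic logrotate route mentioned above: point each vhost at plain log files (no rotatelogs pipe) and let the system's logrotate rotate everything daily with a graceful restart. The paths mirror the question's layout; the retention count is an illustrative assumption.

        # Create /etc/logrotate.d/apache-vhosts (run once, as root)
        cat > /etc/logrotate.d/apache-vhosts <<'EOF'
        /logs/*/error_log /logs/*/access_log {
            daily
            rotate 14
            compress
            delaycompress
            missingok
            notifempty
            sharedscripts
            postrotate
                /httpd-2.2.8/bin/apachectl graceful > /dev/null 2>&1 || true
            endscript
        }
        EOF

    This avoids spawning two rotatelogs processes per vhost, at the cost of a graceful restart once a day so Apache reopens its log files.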

  • CPU usage from the top command

    - by kairyu
    How can I get a result like the following example? Any command or scripts?

        u1234 3971  1.9  0.0      0      0 ? Z 20:00 0:00 [php] <defunct>
        u1234 4243  3.8  0.2  64128  43064 ? D 20:00 0:00 /usr/bin/php /home/u1234/public_html/index.php
        u1234 4289  5.3  0.2  64128  43064 ? R 20:00 0:00 /usr/bin/php /home/u1234/public_html/index.php
        u1234 4312  9.8  0.2  64348  43124 ? D 20:01 0:00 /usr/bin/php /home/u1234/public_html/index.php
        u1235 4368  0.0  0.0  30416   6604 ? R 20:01 0:00 /usr/bin/php /home/u1235/public_html/index.php
        u1236 4350  2.0  0.0  34884  13284 ? D 20:01 0:00 /usr/bin/php /home/u1236/public_html/index.php
        u1237 4353 13.3  0.1  51296  30496 ? S 20:01 0:00 /usr/bin/php /home/u1237/public_html/index.php
        u1238 4362 63.0  0.0      0      0 ? Z 20:01 0:00 [php] <defunct>
        u1238 4366  0.0  0.1  51352  30532 ? R 20:01 0:00 /usr/bin/php /home/u1238/public_html/index.php
        u1239 4082  3.0  0.0      0      0 ? Z 20:00 0:01 [php] <defunct>
        u1239 4361 26.0  0.1  49104  28408 ? R 20:01 0:00 /usr/bin/php /home/u1239/public_html/index.php
        u1240 1980  0.4  0.0      0      0 ? Z 19:58 0:00 [php] <defunct>

        CPU TIME = 8459.71999999992

    I got this result from HostGator support :) I used "top -c", but it does not show the full command line such as /home/u1239/public_html/index.php. Thanks
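
    A hedged sketch of one way to reproduce that output: the listing itself is standard ps output (top truncates commands, while ps aux prints the full command line), and the total can be approximated by summing the accumulated TIME column. The awk conversion below is an illustration, not the exact script the host used.

        # Full command lines for PHP processes, sorted by CPU usage
        ps aux --sort=-%cpu | grep '[p]hp'

        # Rough "CPU TIME" total: convert MM:SS (or HH:MM:SS) to seconds and sum
        ps -eo comm,time | awk '/php/ {
            n = split($2, t, ":"); sec = 0
            for (i = 1; i <= n; i++) sec = sec * 60 + t[i]
            total += sec
        }
        END { print "CPU TIME =", total }'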

  • URL autocomplete no longer working in Chrome

    - by Yuji Tomita
    The browser URL autocomplete started behaving differently yesterday. I used to reach my top URLs by typing the first one or two letters of a URL and pressing Enter. Now I have to visually fish for the right one and press the down arrow to select the URL. Big difference. Does anybody know if I can get the old functionality back somehow? Have I messed up a setting?

    An example of how my browser used to work:

        Gmail.com:         CMD+L, type "g", Enter
        Stackoverflow.com: CMD+L, type "s", Enter

    Normally the address bar would already be completed to gmail.com after typing the first "g". It would narrow the matches depending on which characters were typed next, or simply go there if I pressed Enter.

    UPDATE: I just realized my history tab looks suspicious: no entries. But Chrome is clearly pulling some data from my history, as I get very personalized suggestions when typing in a letter.

    UPDATE: Fixed! I saved my bookmarks, removed my ~/Library/Application\ Support/Google/Default directory (careful, it looks like absolutely everything is stored here), restarted Chrome, and within one visit to Gmail.com my autocomplete was filling in my URLs like before. Beautiful.
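
    A hedged sketch of that fix as shell steps on macOS. Note that the conventional profile path includes a Chrome component (~/Library/Application Support/Google/Chrome/Default); adjust to whatever actually exists on your machine, and back the directory up first, since it holds bookmarks, extensions, and all saved state.

        # Quit Chrome first
        PROFILE="$HOME/Library/Application Support/Google/Chrome/Default"

        # Keep a copy in case anything important lives there
        cp -R "$PROFILE" "$HOME/Desktop/chrome-profile-backup"

        # Remove the profile; Chrome recreates it (and autocomplete data) on relaunch
        rm -rf "$PROFILE"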

  • What is the maximum number of Remote Desktop connections for a small server?

    - by Jay Wen
    I have a small server running MS Server 2012. The CPU is a Xeon E3-1230 V2 @ 3.30 GHz (4 cores, 8 logical processors) with 8 GB RAM. The main drive is a Samsung 840, and bulk storage is a 4-disk WD Black RAID 10 array in a Synology NAS enclosure.

    My question is: given this hardware, approximately how many users can the system support via "Remote Desktop Connection"? Assume there are no licensing limits. These are not admin users (I know there is a two-admin limit).

    This boils down to: what resources does one remote connection require? RAM? A percentage of the CPU? Network bandwidth? I guess the base case would be a connection where the user is inactive or simply browsing CNN. Once you know this, you know how many connections you could fit on the machine before something is maxed out. In reality, users would mostly be in Excel (multi-MB spreadsheets), and I know the approximate resources currently required by each copy of Excel.

  • Why is there no /usr/bin/ in Windows? Would it be dangerous to add the entire Program Files to the path?

    - by dotancohen
    I am a Linux user spending some time in Windows, and I'm trying to understand some of the Windows paradigms instead of fighting them. I notice that each program installed in the traditional manner (i.e. via orgasmic installers: Yes, Yes, Yes, Finish) puts its executables under C:/Program Files/foo/bar.exe and then adds a shortcut to the Desktop / Start Menu containing the entire path. However, there is no common directory with links to the software, i.e. no C:/bin/bar.exe that would link to C:/Program Files/foo/bar.exe.

    Therefore, after installing an application, the only way to use it is via the clicky-clicky menus or by navigating to the executable in the filesystem. One cannot simply press Win-R to open the Run dialogue and then type bar or bar.exe, as is possible with notepad or mspaint. I realize that Windows 8 improves on this with the otherwise horrendous Start Screen, which does support typing the name of the app, but again this depends on the app having registered itself for such.

    Would I be doing any harm by adding C:/Program Files recursively to the Windows path? I do realize that there will be name collisions (i.e. uninstall.exe), but could there be other issues?

  • Ubuntu on VPS becomes unresponsive: BUG: soft lockup - CPU#0 stuck for 22s

    - by Bhante Nandiya
    We have a VPS running Ubuntu, on Xen. The problem is this: about once a day, at a random time and for about 20-50 minutes, the server becomes completely unresponsive to the outside world. After this period it becomes responsive again as if nothing had happened; it doesn't lose uptime and it doesn't restart. It just starts responding again as if it had been in suspended animation.

    These outages occur under unexceptional memory and CPU conditions, for example 70% memory and 5% CPU. I have stopped all non-essential services, so the usage is very even. The outages don't particularly coincide with times of increased memory/CPU use (during daily tasks); they sometimes occur at times of very low CPU use (<2%), but in the past they also occurred during swapping. The blackouts have occurred under both Ubuntu 12.04 LTS and Ubuntu 14.04 LTS with no change at all (I upgraded Ubuntu specifically to see if it helped this problem).

    It is possible to log into our webhost's site and use their administration console to see error messages from during this time. Presumably these messages come from the Xen virtualization; the main one goes like this:

        BUG: soft lockup - CPU#0 stuck for 22s! [ksoftirqd/0:3]   (repeats many times)
        SysRq : Emergency Sync

    (Sometimes this is the only message in the console.) Others seen previously under different load situations include:

        BUG: soft lockup - CPU#0 stuck for 22s! [swapper/0:0]   (repeated many times)

    or:

        INFO: rcu_sched detected stall on CPU 0 (t=15000 jiffies)   (repeated many times with t getting bigger)

    From googling around I've tried various kernel parameters such as nohz=off and acpi=off, to no avail. All tech support has said is that other Ubuntu installations are not suffering the same problem. Has anyone got any ideas or experience with this problem?
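
    A hedged diagnostic sketch: on a Xen guest, soft-lockup messages often mean the hypervisor starved the VM of physical CPU for a stretch, rather than that the guest itself hung, and the "steal" column in vmstat is one way to gather evidence for that from inside the guest. The threshold value below is an illustrative assumption.

        # Watch the "st" (steal) column; sustained high values point at the
        # host, not the guest
        vmstat 1 10

        # The soft-lockup watchdog threshold, in seconds; raising it reduces
        # warning noise but does not fix the underlying starvation
        cat /proc/sys/kernel/watchdog_thresh
        sysctl -w kernel.watchdog_thresh=30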

  • I need to preserve a tape using Symantec Backup Exec. I'm having trouble doing so

    - by MrVimes
    Please forgive me if this is the wrong Stack Exchange site; please suggest which one I should post this to if it is.

    There's an automatic tape machine running in a remote location, with software (Symantec Backup Exec 11d). Recently one of the servers being backed up had problems with its RAID controller, so one of the drives has become invisible. I need to preserve the last good backup of that drive, so I am trying to swap the tape holding the most recent backup of that drive with one of the scratch tapes (blank tapes) present in the machine. I've tried the following:

    1. Associate the blank media with the media set in question (Wednesday).
    2. For the existing media (the tape with the data I want to keep), click 'move to vault' and move it to the offline vault.
    3. Associate it with something other than 'Wednesday' (a media set called 'keep data infinitely...').
    4. Run an inventory on that slot.

    The steps above are, I'm led to believe, supposed to put the fresh tape in the slot that held the tape I want to keep. But after the inventory (and refreshing the device tree), the slot keeps showing up as containing the tape I want to keep.

    I am a complete newbie with this software. Can you tell me what I'm doing wrong, and/or how to achieve my goal?

    Edit: Just want to point out that I did try to get help directly from Symantec with this, but having jumped through countless hoops to create an account and a support ticket, my progress was halted at the final step by a required 'technical contact ID', with no explanation of what it is or how to get one.

  • Exchange 2010: update the time zone of all calendar items

    - by Andrew
    We are currently operating an Exchange 2010 server with Outlook 2010 clients on a ship, and today we changed time zones for the first time in quite a while. Is there any way to rebase all the calendars and/or update all the calendar items to the new time zone at the same time? I have already looked at the following tools:

    - Microsoft Exchange Calendar Update Configuration Tool - http://www.microsoft.com/en-us/download/details.aspx?id=6266 (doesn't support Exchange 2010)
    - Time Zone Data Update Tool for Microsoft Office Outlook - http://www.microsoft.com/en-us/download/details.aspx?id=17291

    The Time Zone Data Update Tool for Microsoft Office Outlook does work for individual users, but it has some serious downsides: each user needs to run it (approx. 400 users), and it only seems to work on the default account in Outlook 2010. A lot of our users have role accounts as well that we would need to run the tool on, and the only way I can find to get the tool to run on a role account is to make it the default account in Outlook, which is itself quite an involved process, especially with 2 or 3 role accounts.

    So is there a way to just change all calendar items on our Exchange server to a different time zone in one go? We are a little unique in that the whole organisation can change time zones overnight, meeting rooms and all, but surely a product as advanced as Exchange 2010 allows us to do what we need.

  • SFTP, SCP, or secure WebDAV: which is the most suitable?

    - by Xavier Maillard
    Hi, I currently host a WebDAV share set up to store files I need anywhere I am; it is available via HTTPS. The thing is, I do not need all the HTTP machinery, i.e. my nginx HTTP server is only there for this WebDAV folder, and I am not sure I made the best choice. My requirements on the client side are:

    - secure transfers
    - mountable as a network drive at work, with near-realtime sync
    - usable from any OS I might use (including my Android mobile)

    At first I chose WebDAV since it passes through my work proxy (which refuses anything that is not HTTP/S on port 80 or 443). Today I am not satisfied with the setup: even if nginx's memory footprint is pretty small, its WebDAV support is not really "clean" and full.

    What would you recommend between SFTP, SCP and the current WebDAV solution? I think SFTP is the closest solution, but I still have to find out how to pass through my proxy ;) SCP seems quite limited from what I've read about it (only file transfers, if I read right). Cheers
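
    On the proxy point, a hedged sketch of the usual trick for SFTP through an HTTP/S-only proxy: tunnel SSH inside an HTTP CONNECT using corkscrew, with the server's sshd listening on port 443 so it passes the proxy's port filter. Host names and the proxy address are illustrative assumptions.

        # Append a host entry to ~/.ssh/config
        cat >> ~/.ssh/config <<'EOF'
        Host files.example.com
            Port 443
            ProxyCommand corkscrew proxy.work.lan 3128 %h %p
        EOF

        # Then SFTP works from behind the proxy as usual
        sftp files.example.com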

  • Goal setting/tracking packages for software projects

    - by Avi
    I'm a developer working by myself, and I'm looking for a computerized tool to manage my goals and activities.

    I own Microsoft Project, but I don't like it: I've started many "projects" but could never keep using it. Too complex and heavyweight for me. I use MS Outlook tasks, but they are not what I need: no planning capability, and the tracking is not nice. I'm using the Pomodoro Technique and I like it, but I'm looking for something more comprehensive and with better computerized support. Something that would allow me to define goals with dependencies and time estimation, keep daily prioritized lists, etc.

    So, I'm looking for a solution. One I've found is GoalPro, but I'm uneasy because I could not find a cross-product "top ten" style review. Are you using any goal-setting package such as GoalPro? Which one? Does it help? Pros and cons?

  • Upgrading memory in a laptop

    - by ulidtko
    I'm a bit confused by all the memory types and various bus frequencies of modern consumer PCs, so I'm requesting expert help on the subject. So far I'm confident that:

    - I have an Asus X51L laptop with an unknown set of configuration options.
    - The CPU supports PAE, so I still have a chance to extend the memory beyond 3 GiB; the upper limit of the system is 8 GiB (?).
    - The laptop has two SODIMM slots, one of which is occupied by a 2 GiB module, and the other one is empty.
    - The dmidecode and lshw tools consistently report a frequency of 533 MHz for the installed module.

    The last one confuses me the most. I failed to find the characteristics of the northbridge in this laptop, and still can't figure out which DDR2 to seek. Is it DDR2-1066? Or rather PC2-8500/PC2-8600? Wouldn't a DDR2-800 module harm the system's performance? Which kind of modules should I look up in stores?

    Update: I bought a 2 GiB DDR2-800 SODIMM, and it seems the system can't handle 4 GiB of memory. When installed by itself in either slot, each module (both the new one and the old one, which happens to be marked DDR2-667) works perfectly; i.e. any configuration resulting in 2 GiB works. When both modules are installed though (totalling 4 GiB), the memtest86 tool produces horrible artifacts and crashes, and the system reboots; Ubuntu can be started and even logged into a Unity session, but the system likewise reboots under even a minor RAM load. So it's pretty obvious to me now that this laptop doesn't support 4 GiB of RAM or more.
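
    A minimal sketch for interrogating the board before buying RAM (run as root; field names vary a bit by BIOS vendor, so treat the output as a guide rather than gospel):

        # What the chipset claims to support
        dmidecode -t memory | grep -E 'Maximum Capacity|Number Of Devices'

        # Size, type and speed of what is currently installed
        dmidecode -t memory | grep -E 'Size|Type:|Speed'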

  • Write once, read many (WORM) using Linux file system

    - by phil_ayres
    I have a requirement to write files to a Linux file system that cannot subsequently be overwritten, appended to, updated in any way, or deleted - not by a sudo-er, root, or anybody. I am attempting to meet the requirements of the financial services recordkeeping regulation FINRA 17a-4, which basically requires that electronic documents be written to WORM (write once, read many) devices. I would very much like to avoid having to use DVDs or expensive EMC Centera devices.

    Is there a Linux file system, or can SELinux support, the requirement for files to be made completely immutable immediately (or at least soon) after write? Or is anybody aware of a way I could enforce this on an existing file system using Linux permissions, etc.?

    I understand that I can set read-only permissions and the immutable attribute, but of course I expect that a root user would be able to unset those. I considered storing data on small volumes that are unmounted and then remounted read-only, but then I think root could still unmount and remount them as writable again.

    I'm looking for any smart ideas, and in the worst case I'm willing to do a little coding to 'enhance' an existing file system to provide this, assuming there is a file system that is a good starting point - and to put in place a carefully configured Linux server to act as this type of network storage device, doing nothing else. After all of that, encryption on the files would be useful too!
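
    For concreteness, a hedged sketch of the stock primitive the question already anticipates, and its weakness: the ext2/3/4 immutable attribute blocks every kind of modification, but root can simply lift it again. (A hardened variant drops CAP_LINUX_IMMUTABLE from the kernel's capability bounding set at boot so even root can no longer flip the flag, though that needs careful system configuration.) File names below are illustrative.

        touch /archive/record-0001.pdf
        chattr +i /archive/record-0001.pdf

        rm /archive/record-0001.pdf          # fails: Operation not permitted
        echo x >> /archive/record-0001.pdf   # fails, even as root

        chattr -i /archive/record-0001.pdf   # ...but root can lift the flag
        rm /archive/record-0001.pdf          # and now the record is gone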

  • Problem running "Central Administration" website after Windows Update on Windows Server 2003 Standard

    - by Magdy Roshdy
    I had WSS 2.0 and upgraded to WSS 3.0. The old installation database was SQL Server 2000; now I also have another SQL Server instance called server_name\MICROSOFT##SSEE. After the upgrade everything worked fine: our team started to use the portal, uploaded a lot of documents and did a lot of work on it.

    The problem started after installing Windows updates: the website suddenly stopped working, giving me the error "Cannot connect to the configuration database". If I try to open the SharePoint Products and Technologies Configuration Wizard, it gives me a strange error:

        An exception of type Microsoft.SharePoint.PostSetupConfiguration.PostSetupConfigurationTaskException
        was thrown. Additional exception information: SharePoint Products and Technologies cannot be
        configured. The current installation mode does not support SKU to SKU upgrades because there exists
        an older version of Windows SharePoint Services that must be upgraded first

    In this post, http://stackoverflow.com/questions/114398/iis-error-cannot-connect-to-the-configuration-database/249494#249494, the author of the second answer has the same problem and suggests a solution, but I don't understand it well. As he suggested, I tried setting the identity of the app pool of the SharePoint website to "IWAM_server_name"; after that the error changed, as he said it would, and the website now gives me "Server Application Unavailable". Checking the Event Viewer on the server, I found that ASP.NET 2.0 throws this exception:

        Could not load file or assembly 'System.Web, Version=2.0.0.0, Culture=neutral,
        PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. Access is denied.

    I don't know how to solve this problem. I really want to get the website working, because our team really needs those documents. I hope someone can help me.

  • SPF for two different outgoing servers?

    - by Marcus
    I have run into a problem that I think someone should have a really clever answer for.

    Today we have our own mail server, "mail.domain.com", which we use to send mail to our customers (with a modified PHPMailer script) - usually around 5000 mails every day. Everything from customer support to invoices goes through there, and the From header is set to "[email protected]".

    We are now thinking of migrating to Google Apps for internal use (with 70+ users). However, we cannot use Gmail's SMTP for sending "bulk" mail (there is a limit of 500 outgoing mails per day), so we really want to keep using our current system for sending automated mail to our customers and use Gmail's SMTP for our internal use.

    So, how do we set up our SPF (Sender Policy Framework) records for this? We do not want to get stuck in any filters for "spoofing" the sender from either type of account (mail sent from our own server and mail sent through Gmail). In short: we want to be able to use the same e-mail address (for sending) on two different SMTP servers, and therefore two different IP addresses. Anyone with good knowledge of SPF who knows how to go about this, or whether it is even possible? Anything else I should think of when switching to Google Apps?
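
    A hedged sketch of what such a record could look like: SPF happily authorizes multiple sources in one record, so listing your own server's address alongside Google's published include mechanism should cover both paths. The IP below is an illustrative placeholder for mail.domain.com's real address.

        # TXT record to publish on domain.com:
        #   v=spf1 ip4:203.0.113.25 include:_spf.google.com ~all

        # Check what is currently published:
        dig +short TXT domain.com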

  • Which virtualization technology is right for me?

    - by Chris
    I need a little help getting this sorted out. I want to set up a Linux virtualization server that I can use to run both server and desktop systems. I want a minimalist Linux system, as all the main OS will be doing is acting as a hypervisor.

    The machine will be running a file server, Windows 7, Ubuntu 10.04, Windows XP and a firewall/gateway security system, with all the client OSes accessing and storing files on the file server, and all network traffic routed through the gateway guest OS. The file server will need direct disk access, while the other guests can run on disk images. All of this will run on the same computer, so I won't be remoting in to access the guests. Also, if possible, I would like to be able to use my triple-head setup in the guest OSes.

    I've looked at Xen, KVM and VirtualBox, but I don't know which is best for me. I'm really debating between KVM and VirtualBox, as KVM seems to support direct hardware access.
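
    On the point of direct disk access, a minimal KVM sketch (device names and sizes are illustrative assumptions): a guest can be handed a whole physical disk rather than an image file simply by pointing -drive at the block device.

        # File-server guest: /dev/sdb passed through as a raw disk, NIC attached
        # to bridge br0 so the gateway guest can sit in the traffic path
        qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
            -drive file=/dev/sdb,format=raw,cache=none \
            -net nic -net bridge,br=br0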

  • HAProxy crashes on all requests in 1.5-dev12

    - by Daniel Hough
    I'm having an issue where HAProxy crashes with no explanation when I switch from 1.4.12 to 1.5-dev12. The reason I'm switching is the SSL offloading. My config file has no errors; it's quite simple and works well with 1.4. But for some reason, when I run it with 1.5-dev12, I see log entries noting that my two backends have been set up, and then when I hit one of the frontends I get an HTTP 400 in the browser, and suddenly HAProxy isn't running anymore when I check.

    I understand that a common workaround for the lack of SSL support in HAProxy is to use stud, and I may go with that since I need an SSL solution for my service. But before I delve into that world, I thought I might see if anybody has experienced the same problems and knows how to fix them. The server is Ubuntu 10.04 and I followed the make instructions on the Exceliance blog here.

    EDIT: On the advice of Kyle Brandt, I did a bit more investigation. I attached gdb to the haproxy process, and when the crash occurred this is what I got:

        Program received signal SIGSEGV, Segmentation fault.
        0x0804e5c2 in dequeue_all_listeners (list=0x9e1a418) at src/protocols.c:184
        184     list_for_each_entry_safe(listener, l_back, list, wait_queue) {

    P.S. HAProxy is awesome, so thank you Exceliance for providing us with something so useful :)
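
    Since this is a development snapshot, the most useful next step is probably a full backtrace to send upstream. A hedged sketch (the build variables are assumptions based on haproxy's standard Makefile; adjust them to match the Exceliance instructions you followed):

        # Rebuild with debug symbols and no optimization
        make clean
        make TARGET=linux26 USE_OPENSSL=1 DEBUG="-g -O0"

        # Run in the foreground under gdb and reproduce the failing request
        gdb --args ./haproxy -d -f /etc/haproxy/haproxy.cfg
        # at the (gdb) prompt: run
        # after the SIGSEGV:   bt full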

  • How to Upgrade PHP 5.2 to 5.3 on Windows Plesk Panel 8.2

    - by Jagat Sheth
    I need to upgrade the PHP version on my server because new versions of WordPress do not support PHP 5.2. The server runs Windows 2003 Standard Edition x64 with SP1, IIS 6.0, MySQL 4.1 and Plesk Panel 8.2. I have followed the steps listed in the Plesk KB article http://kb.parallels.com/6670, "How to update PHP 5 on a Windows server with Parallels Plesk Control Panel 8.x and 9.x installed":

    1. Stop the Plesk services ('Control Panel' and everything in the 'Plesk Run-Time' section).
    2. Rename the folder %plesk_dir%\Additional\PleskPHP5 to orig_PleskPHP5.
    3. Create a new folder %plesk_dir%\Additional\PleskPHP5.
    4. Download the necessary version of PHP, unzip it, and copy its contents to the newly created PleskPHP5 folder.
    5. Copy php.ini from the old orig_PleskPHP5 folder to the new one.
    6. Make sure the permissions are inherited.
    7. Start the Plesk services.
    8. Click the "Refresh" button in the Components Management section of Parallels Plesk Panel and check whether the new PHP version shows up there.

    After following these steps, when I open a phpinfo() page it shows me "The specified module could not be found." If anybody knows a solution, kindly help me; this is high priority, and I would be very thankful to anyone who helps me solve it ASAP. Thanks and regards, Jagat Sheth

  • ShoreTel Upgrade Path

    - by Brian
    I currently have a ShoreTel server running Server 2003 x32 as a virtual machine, paired with a ShoreGear 90 switch and another unused switch of the same model reserved for manual failover. I am getting the software mailed to me by my partner, but with limited support, since the server is in a relatively remote area. I am tempted to upgrade the OS at the same time as performing the upgrade, but want to know if there are any horror stories or advice I should hear before diving in.

    I'm upgrading from ShoreTel 9.2: first to version 10.1, then finally to 11.1. The system has been bulletproof since it was installed, and we are upgrading mainly to get a more modern client. My question boils down to: should I even bother with an OS upgrade, or even possibly a fresh Windows install with ShoreTel 11.1 and just transfer the configuration? Or should I just stay with Server 2003, since it is supported in my target version of ShoreTel and the upgrade will be more than enough to keep a novice like me busy?

  • RAID-capable 3.5" SATA Drives

    - by nroam
    I recently purchased a pair of 1 TB Western Digital WD1002FBYS RE3 drives for use in an external RAID enclosure, and found that they tend to drop out of the array after a while. Thinking it was the enclosure, I tried them in another one, but found the same issue. A bit of googling turned up http://www.tomshardware.com/forum/251076-32-raid-issues-western-digital-hard-disk which suggests:

        "WD's "RE" (RAID Edition) HDDs support Time-Limited Error Recovery ("TLER"):
        http://www.wdc.com/en/products/productcatalog.asp?language=en
        As a non-TLER HDD fills up with data, the error detection firmware might take
        too long, and the RAID controller may drop that HDD from a RAID array."

    So now I wonder: which SATA drives have firmware that is compatible with RAID arrays (especially RAID 1 and 5, but not 0)? I have not been able to come up with the magic set of keywords to elicit the answer from Google, though various sites suggest that Seagate and Hitachi are in general OK. Does anyone have any generic (or even specific) guidance on how to work out whether a drive's firmware may harbour code that is potentially an issue in a RAID setting, other than stating that it must be 'enterprise' ready?
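
    A hedged sketch of a more direct check than keyword-hunting: on drives that expose it, the TLER/ERC timeout can be queried and set with smartmontools' SCT commands. Not all consumer drives accept this, and on some the setting resets at power cycle, so verify against the drive's documentation.

        # Read the current error-recovery-control timeouts (read, write)
        smartctl -l scterc /dev/sda

        # Cap recovery at 7.0 seconds (values are in tenths of a second)
        smartctl -l scterc,70,70 /dev/sda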

  • Why does Apache httpd tell me that my name-based virtual hosts only work with SNI-enabled browsers (RFC 4366)?

    - by Arlukin
    Why does Apache give me this error message in my logs? Is it a false positive?

        [warn] Init: Name-based SSL virtual hosts only work for clients with TLS server name indication support (RFC 4366)

    I recently upgraded from CentOS 5.7 to 6.3, and with that to a newer httpd version. I have always configured my SSL virtual hosts as below, where all domains that share the same certificate (mostly/always wildcard certs) share the same IP. But I never got this error message before (or did I, and just haven't looked closely enough at my logs?). From what I have learned, this should work without SNI (Server Name Indication).

    Here are the relevant parts of my httpd.conf file. Without the second VirtualHost I don't get the error message.

        NameVirtualHost 10.101.0.135:443

        <VirtualHost 10.101.0.135:443>
            ServerName sub1.domain.com
            SSLEngine on
            SSLProtocol -all +SSLv3 +TLSv1
            SSLCipherSuite ALL:!aNull:!EDH:!DH:!ADH:!eNull:!LOW:!EXP:RC4+RSA+SHA1:+HIGH:+MEDIUM
            SSLCertificateFile /opt/RootLive/etc/ssl/ssl.crt/wild.fareoffice.com.crt
            SSLCertificateKeyFile /opt/RootLive/etc/ssl/ssl.key/wild.fareoffice.com.key
            SSLCertificateChainFile /opt/RootLive/etc/ssl/ca/geotrust-ca.pem
        </VirtualHost>

        <VirtualHost 10.101.0.135:443>
            ServerName sub2.domain.com
            SSLEngine on
            SSLProtocol -all +SSLv3 +TLSv1
            SSLCipherSuite ALL:!aNull:!EDH:!DH:!ADH:!eNull:!LOW:!EXP:RC4+RSA+SHA1:+HIGH:+MEDIUM
            SSLCertificateFile /opt/RootLive/etc/ssl/ssl.crt/wild.fareoffice.com.crt
            SSLCertificateKeyFile /opt/RootLive/etc/ssl/ssl.key/wild.fareoffice.com.key
            SSLCertificateChainFile /opt/RootLive/etc/ssl/ca/geotrust-ca.pem
        </VirtualHost>
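
    A minimal sketch for checking whether the warning matters in practice: compare the certificate served with and without SNI. Because both vhosts share one wildcard certificate here, both calls should return the same certificate, in which case the warning is effectively harmless for this setup.

        # Without SNI (what an old client would see)
        openssl s_client -connect 10.101.0.135:443 </dev/null 2>/dev/null \
            | openssl x509 -noout -subject

        # With SNI
        openssl s_client -connect 10.101.0.135:443 -servername sub2.domain.com \
            </dev/null 2>/dev/null | openssl x509 -noout -subject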

  • What games work well on MacBook Pro (i7/GeForce GT 330M) within VMWare Fusion?

    - by webworm
    I have a 15" MacBook Pro (2.66 GHz i7 with 8 GB RAM) with the GeForce GT 330M 512 MB graphics card. I use it primarily for development (Mac/Web/Windows), though I would like to play the occasional game with my son, who uses a desktop PC at home. I prefer to use VMware Fusion for virtualization rather than Boot Camp, for a number of reasons:

    - heat/fan issues with the i7 under Boot Camp
    - I prefer to keep the virtual machine as a single file rather than a dedicated partition (easier to move and back up)
    - I have heard that Windows support for the GeForce GT 330M in Boot Camp is not all that good

    So, that being said, I was wondering what sort of games I would be able to play in a Fusion environment running Windows 7. I have 8 GB RAM and usually dedicate 4 GB to the virtual machine. I don't expect to be able to play the latest FPS games such as Battlefield: Bad Company 2 or Call of Duty; rather, I am looking at games such as Total War II, Civilization IV, Supreme Commander, and other RTS-type games. I should mention the native screen resolution of my MacBook Pro is 1680x1050, which is what I would most likely run the VM at (fullscreen). Thank you for any advice.

  • Nokia E75 Mail for Exchange

    - by Sebastian
    Hi, I have an SBS 2003 server running Exchange Server 2003 SP2. My OWA has a GoDaddy certificate, valid for 3 more years, installed; HTTPS works fine for OWA, and the certificate has also been copied onto the Nokia E75. I am trying to synchronize my Nokia E75 with my mail account on the Exchange server via Mail for Exchange. These are the steps I use:

    1. Menu > Email > New > Start
    2. Select Internet Gateway
    3. Enter the details: [email protected]
    4. Select company email > Mail for Exchange
    5. In the domain menu, enter: mydomain
    6. In the username/password menu, enter: myusername/mypassword
    7. In the server menu, enter: mail.mydomain.com (where DNS resolves to the server's IP address)
    8. In secure access, select: Internet / Secure / 443

    NOTE: port 443 has been opened on my SBS box and forwarded to the Exchange server. In the IIS default website properties, under Directory Security > Secure communications > Edit, "Require Secure Channel (SSL)" is enabled.

    However, when I try to sync my phone I get the following error: "Mail for Exch permissions illegal. Check permission configuration." The phone log gives this information: "Username or Password Illegal. Correct Username and/or Password in the profile options."

    I've tried speaking with the phone service support, but they cannot identify the problem. Any help will be much appreciated.

  • How to fix GMail time stamps in Outlook?

    - by SWB
    One of my email accounts is hosted at an ISP with unreliable IMAP support, and I can't change it. Fortunately, I have my personal email set up on Google Apps for Domains, so I created another GMail account there and turned on the GMail features that allow me to send and receive mail through the ISP account using GMail ("Send mail as" and "Get mail from other accounts" in GMail settings on the Accounts tab). I'm now using Outlook to retrieve mail from the GMail account through IMAP, which in turn retrieves mail from the ISP account through POP3.

    This basically works great, except for one very significant issue. Prior to setting this up, I already had several months of mail in the ISP account that I had been accessing via IMAP. GMail grabbed all of this mail via POP3 at, let's say, noon on April 5. In GMail's web interface (and on my iPod touch, and in Mozilla Thunderbird), all is well: the messages are all shown with their original time stamps. But when Outlook downloads these messages from GMail via IMAP, the time stamps are all set to noon on April 5 (the time GMail downloaded them from the ISP via POP3). That's not good, especially since we're talking about hundreds of messages over a time span of several months.

    How can I fix this and get Outlook to display the original time stamps?

  • Dual-boot install of Ubuntu 12.04 and Windows 7 (64-bit) on a non-UEFI system fails

    - by Randnum
    I cannot seem to install the correct boot loader for a non-UEFI-firmware system. I'm trying to install Ubuntu 12.04 and Windows 7 (64-bit), which are both technically compatible with GPT, but in Windows' case only if the firmware is UEFI-enabled. My system uses the old BIOS firmware and does not support UEFI. Therefore, whenever I finish my Ubuntu install and try to install Windows, I get a "cannot install to GPT partition type" error. Even if I use GParted to format a dedicated NTFS partition for Windows, it can't handle the GPT partition style, because there is no UEFI. But my Ubuntu install always uses GPT during installation and never asks whether I want to install the old BIOS-style MBR instead.

    How do I resolve this? Both OSes install fine on their own; the problem is that when I try to install the second OS, it doesn't recognize any of the other's partitions and tries to write its own on top. I've tried both OSes first and always run into the same problem. Since there is no way to make Windows recognize GPT without upgrading my motherboard, how do I tell Ubuntu to use the old BIOS MBR during the install? Do I have to download a special Ubuntu image with a specific GRUB version, or should I manually configure my partition table somehow to force it not to use GPT? Thank you,
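
    A hedged sketch of the usual way out on a BIOS-only board: wipe the GPT label and start the Ubuntu install against a plain MBR (msdos) disk label, which the installer will then keep. THIS DESTROYS ALL PARTITIONS on the target disk, so back up first; /dev/sda is an illustrative device name.

        # From the Ubuntu live session, before running the installer:
        sudo parted /dev/sda mklabel msdos
        sudo parted /dev/sda print   # should now report: Partition Table: msdos

    With an msdos label in place, both the Ubuntu installer and the Windows 7 installer see an ordinary MBR disk, and each can be installed into its own partition.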
