Search Results

Search found 25009 results on 1001 pages for 'content encoding'.


  • Moodle serves on IP only - will not work with mod_proxy

    - by Jon H
    I'm trying to set a moodle server up on an Ubuntu box, which already serves Plone & Trac via Apache. In my Moodle config I have $CFG->wwwroot = 'http://www.server-name.org/moodle' The configuration below works fine for the first two, but when I visit www.server-name.com/moodle I get: Incorrect access detected, this server may be accessed only through "http://xxx.xxx.xxx.xxx:8888/moodle" address, sorry. It then forwards to the IP address, where Moodle functions fine. What am I missing to get the server name approach working correctly? Apache Config follows: LoadModule transform_module /usr/lib/apache2/modules/mod_transform.so Listen 8080 Listen 8888 Include /etc/phpmyadmin/apache.conf <VirtualHost xxx.xxx.xxx.xxx:8080> <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPreserveHost On <Location /> ProxyPass http://127.0.0.1:8082/ ProxyPassReverse http://127.0.0.1:8082/ </Location> </VirtualHost> <VirtualHost xxx.xxx.xxx.xxx:80> ServerName www.server-name.org ServerAlias server-name.org ProxyRequests Off FilterDeclare MyStyle RESOURCE FilterProvider MyStyle XSLT resp=Content-Type $text/html TransformOptions +ApacheFS +HTML TransformCache /theme.xsl /home/web/webapps/plone/theme.xsl TransformSet /theme.xsl FilterChain MyStyle ProxyPass /issue-tracker ! ProxyPass /moodle ! <Location /issue-tracker/login> AuthType Basic AuthName "Trac" AuthUserFile /home/web/webapps/plone/parts/trac/trac.htpasswd Require valid-user </Location> Alias /moodle /usr/share/moodle/ <Directory /usr/share/moodle/> Options +FollowSymLinks AllowOverride None order allow,deny allow from all <IfModule mod_dir.c> DirectoryIndex index.php </IfModule> </Directory> </VirtualHost>
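    A hedged aside: Moodle checks the incoming URL against $CFG->wwwroot, so a minimal config.php sketch for the name-based setup (assuming Moodle should only ever be reached through the public hostname) would look like the lines below; the reverseproxy flag is commented out because Moodle is served directly by the :80 vhost here rather than proxied.

        // config.php (sketch) - wwwroot must match the URL visitors actually use
        $CFG->wwwroot = 'http://www.server-name.org/moodle';
        // $CFG->reverseproxy = true;   // only if Apache proxied Moodle to another host/port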

    Read the article

  • Automatically starting svnserve on Snow Leopard

    - by Cleggy
    Note: I originally asked this question on Server Fault (http://serverfault.com/questions/148052/automatically-starting-svnserve-on-snow-leopard), but I thought this may be a more appropriate place to ask. I have installed Subversion onto my iMac running Snow Leopard, but am having trouble getting svnserve to start up automatically. As I understand it (I'm still fairly green with OSX), the best way to do that is to utilize launchd. To that end, I have created the following .plist file in the /Library/LaunchDaemons folder. If I use launchctl to execute this file, svnserve starts as expected, but it doesn't automatically start when the system starts up or I log in. <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>Disabled</key> <false/> <key>Label</key> <string>org.tigris.subversion.svnserve</string> <key>UserName</key> <string>Dave</string> <key>ProgramArguments</key> <array> <string>/opt/subversion/bin/svnserve</string> <string>--inetd</string> <string>--root=/Users/Shared/SVNrep</string> </array> <key>ServiceDescription</key> <string>Subversion Standalone Server</string> <key>Sockets</key> <dict> <key>Listeners</key> <array> <dict> <key>SockFamily</key> <string>IPv4</string> <key>SockServiceName</key> <string>svn</string> <key>SockType</key> <string>stream</string> </dict> <dict> <key>SockFamily</key> <string>IPv6</string> <key>SockServiceName</key> <string>svn</string> <key>SockType</key> <string>stream</string> </dict> </array> </dict> <key>inetdCompatibility</key> <dict> <key>Wait</key> <false/> </dict> </dict> </plist> I have tried many different configs in the .plist, including auto-starting, simplifying the listeners section, removing dependence on inetd, but they all show the same symptom. The files work when started using launchctl load, but do not automatically start up svnserve if the iMac is rebooted. If anyone here could provide any suggestions as to how to get this to work, I'd really appreciate it.
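    A hedged checklist that often explains "loads fine with launchctl load, ignored at boot": launchd skips items in /Library/LaunchDaemons whose ownership or permissions it considers unsafe, so the sketch below (assuming the plist path shown) fixes those and re-registers the job.

        # a sketch, assuming this is where the plist lives
        sudo chown root:wheel /Library/LaunchDaemons/org.tigris.subversion.svnserve.plist
        sudo chmod 644 /Library/LaunchDaemons/org.tigris.subversion.svnserve.plist
        sudo launchctl load -w /Library/LaunchDaemons/org.tigris.subversion.svnserve.plist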

    Read the article

  • Apache, suexec, PHP, suPHP

    - by Chris_K
    While I'm quite comfortable as a Linux user, my Linux Admin-fu is a bit weak. Thus, I'm here looking for guidance with a CentOS server I'm about to build. I need to setup an Apache2 web server for a few of our clients. I want each client's web content to be under their home directory (USERDIR in apache.conf, right?) for the static HTML sites. I want Apache to run as the client (suexec?). Some of their stuff will be PHP apps and I'm under the impression I'll want to look at suphp as well then. So basically I want to look like a small version of a shared web hosting company. Considering how common those are I thought I'd easily find a nice current How-To guide on setting this all up but so far I've had very little luck. I suspect my search words are off. So the questions (feel free to answer any or all): Anyone have some solid links to current/modern guides that would help me set this all up? No, the apache documentation site is not a guide ;-) Since I have a mix of static sites and PHP apps do I want/need both suexec and suphp installed? If so, does that introduce any challenges I should be aware of? Should I be looking at other options instead of suexec and suphp? I plan to give the end users SSH, SFTP or SCP access to their stuff (if that affects anything). Thanks in advance for your help.
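    A hedged per-client vhost sketch, assuming mod_suphp is installed and built with a setid mode that allows suPHP_UserGroup; the user name "client1" and the paths are placeholders, not part of the original question.

        # per-client vhost (sketch)
        <VirtualHost *:80>
            ServerName client1.example.com
            DocumentRoot /home/client1/public_html
            suPHP_Engine on
            suPHP_UserGroup client1 client1
            <Directory /home/client1/public_html>
                Options -Indexes +FollowSymLinks
                AllowOverride All
            </Directory>
        </VirtualHost>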

    Read the article

  • Can't reset Windows 7 Registry permissions.

    - by n10i
    Hi all, I am trying to reset Windows 7 registry permissions using secedit /configure /cfg %windir%\inf\defltbase.inf /db defltbase.sdb /verbose /areas REGKEYS but I am receiving the following error: An extended error has occurred. The task has completed with an error. See log %windir%\security\logs\scesrv.log for detail info. The content of the log file: ------------------------------------------- Friday, April 16, 2010 1:50:43 PM ----Configuration engine was initialized successfully.---- ----Reading Configuration Template info... ----Configure 64-bit Registry Keys... Configure users.default. Warning 5: Access is denied. Error taking ownership of users.default\software\SetID. Warning 5: Access is denied. Error opening users.default\software\SetID. Warning 5: Access is denied. Error setting security on users.default\software\SetID. Configure machine\software. Warning 5: Access is denied. Error setting security on machine\software. Warning 1336: The access control list (ACL) structure is invalid. Error setting security on machine\software\Macrovision. Configuration of Registry Keys was completed with one or more errors. ----Configure 32-bit Registry Keys... Configure machine\software. Warning 1336: The access control list (ACL) structure is invalid. Error setting security on machine\software\Audible. Configuration of Registry Keys was completed with one or more errors. ----Un-initialize configuration engine... Please help me out, guys!

    Read the article

  • Google Desktop Search problems - unable to disable indexing and no index status page

    - by Howiecamp
    I've been running Google Desktop (I've used both the regular and Enterprise versions) on my Windows 7 64-bit PC for a long time with no issues. I just paved my PC and reinstalled Google Desktop (regular edition). Initially, I want to use it only for the CTRL-CTRL launcher capability - I don't want it to do any indexing. So I unchecked all the items in the configuration dialog as well as excluded C:. Despite this, it is indexing all the content on my hard drive as well as Outlook email, etc. In my previous installations I had the same options set and they worked properly. The other problem I have is that when you right-click on the Google Desktop system tray icon, there used to be an option for "Index Status" which showed you how far along the index creation process was, as well as an option to clear and rebuild the index. Those are no longer there. I uninstalled the regular edition and installed the Enterprise Edition, but I have the same problems. Any ideas?

    Read the article

  • Archive software for big files and fast index

    - by AkiRoss
    I'm currently using tar for archiving some files. Problem is: archives are pretty big, contain a lot of data, and tar is very slow when listing and extracting. I often need to extract single files or folders from the archive, but I don't currently have an external index of files. So, is there an alternative for Linux, allowing me to build uncompressed archive files, preserving the file attributes AND having a fast access list/table? I'm talking about archives of 10 to 100 GB, and it's pretty impractical to wait several minutes to access a single file. Anyway, any trick to solve this problem is welcome (but a single archive file is a requirement, so no rsync or similar). Thanks in advance! EDIT: I'm not compressing archives, and using tar I think they are too slow. To be precise about "slow", I'd like that: listing archive content should take time linear in the number of files inside the archive, but with a very small constant (e.g. if a list of all the files is included at the head of the archive, it could be very fast). extraction of a target file/directory should (filesystem permitting) take time linear with the target size (e.g. if I'm extracting a 2MB PDF file in a 40GB directory, I'd really like it to take less than a few minutes... if not seconds). Of course, this is just my idea and not a requirement. I guess such performance could be achievable if the archive contained an index of all the files with their respective offsets and that index were well organized (e.g. tree structure).
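    One possible direction (a sketch, not from the question): an uncompressed SquashFS image preserves ownership and permissions and carries an internal index, so listing and single-file extraction do not have to scan the whole archive.

        # build the image without compression, so it behaves like a plain container
        mksquashfs bigdir/ archive.sqsh -noI -noD -noF -noX
        # fast listing from the internal index
        unsquashfs -l archive.sqsh
        # extract a single file into out/
        unsquashfs -d out/ archive.sqsh dir/inside/file.pdf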

    Read the article

  • If spambots are filling out forms on my website, should I mark the mail as spam?

    - by rob
    Apparently spambots have discovered the contact forms on my website. I'm considering adding a CAPTCHA, but in the meantime I've been getting dozens of completed website forms in my Gmail account each day. I know I can configure a filter to always allow e-mails from my website to be delivered (i.e., "never send to spam"), but I'm wondering whether there are also larger implications. At first, I was marking the spam forms as spam because I figured it would block the sender's e-mail address from getting through again. But since then, I've given it some more thought. I know many spam filters use Bayesian filters and other techniques to identify spam based on its content. If I mark all these e-mails as spam, will legitimate e-mails from my website be marked as spam? Even if I stop marking the messages as spam, I can see another potential problem. My form is pretty standard (in fact, it's probably very similar to other online forms). If I mark my spambot-submitted website forms as spam, will Gmail begin to mark other people's similar-looking messages as spam?

    Read the article

  • SpamAssassin bayesian score discrepancies

    - by CaptSaltyJack
    This makes my brain hurt. For some reason, SpamAssassin is giving high scores to certain emails, but when I test them on the command line, they get a low score. This one particular email has this in the header: X-Spam-Flag: YES X-Spam-Score: 8.521 X-Spam-Level: ******** X-Spam-Status: Yes, score=8.521 tagged_above=-9999 required=5 tests=[BAYES_99=3.5, BAYES_999=0.2, HTML_MESSAGE=0.001, NO_RECEIVED=-0.001, NO_RELAYS=-0.001, RAZOR2_CF_RANGE_51_100=0.5, RAZOR2_CF_RANGE_E8_51_100=1.886, RAZOR2_CHECK=0.922, URIBL_RHS_DOB=1.514] autolearn=no Yet when I dump the raw email into a file msg and run sudo su amavis -c 'spamassassin -t msg', I get this output: Content analysis details: (3.8 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- 1.5 URIBL_RHS_DOB Contains an URI of a new domain (Day Old Bread) [URIs: cliobeads.com] -1.0 ALL_TRUSTED Passed through trusted hosts only via SMTP 0.0 HTML_MESSAGE BODY: HTML included in message -0.0 BAYES_20 BODY: Bayes spam probability is 5 to 20% [score: 0.1855] 1.9 RAZOR2_CF_RANGE_E8_51_100 Razor2 gives engine 8 confidence level above 50% [cf: 100] 0.5 RAZOR2_CF_RANGE_51_100 Razor2 gives confidence level above 50% [cf: 100] 0.9 RAZOR2_CHECK Listed in Razor2 (http://razor.sf.net/) I'm really confused as to why when the email comes in, it gets a completely different score attached to it than when I run spamassassin -t. Is there some other way I should be testing emails? Also, my users have the ability to drag false positives into a folder called "False Positives," and every day a cron job fires off that runs this on every message in every user's folder: sa-learn --dbpath=/var/lib/amavis/.spamassassin --ham /tmp/*-*.eml >/dev/null I ran sudo locate bayes_toks and there's definitely only one bayes DB on the system, in /var/lib/amavis/.spamassassin. I'm clueless, any help would be great and may help restore my sanity!
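    A hedged diagnostic sketch, on the assumption that the mismatch comes from testing with a different user, Bayes DB, or trusted-networks view than amavis uses: run the check as the amavis user with Bayes debugging on, and dump the Bayes database amavis actually reads.

        # same user and Bayes DB as amavis, with bayes debug output
        sudo -u amavis spamassassin -t -D bayes < msg 2>&1 | less
        # inspect the token/ham/spam counts in the DB amavis uses
        sudo -u amavis sa-learn --dbpath /var/lib/amavis/.spamassassin --dump magic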

    Read the article

  • When running PowerShell script as a scheduled task some Exchange 2010 database properties are null

    - by barophobia
    Hello, I've written a script that intends to retrieve the DatabaseSize of a database from Exchange 2010. I created a new AD user for this script to run under as a scheduled task. I gave this user admin rights to the Exchange Organization (as a last resort during my testing) and local admin rights on the Exchange machine. When I run this script manually by starting powershell (with runas /noprofile /user:domain\user powershell) everything works fine. All the database properties are available. When I run the script as a scheduled task a lot of the properties are null including the one I want: DatabaseSize. I've also tried running the script as the domain admin account with the same results. There must be something different in the two contexts but I can't figure out what it is. My script: Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010 Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "Starting script" $databases = get-mailboxdatabase -status if($databases -ne $null) { Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "Object created" $databasesize_text = $databases.databasesize.tomb().tostring() if($databasesize_text -ne $null) { $output = "echo "+$databasesize_text+":ok" Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "Path check" if(test-path "\\mon-01\prtgsensors\EXE\") { Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "Path valid" Set-Content \\mon-01\prtgsensors\EXE\ex-05_db_size.bat -value $output } Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "Exiting program" } else { Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "databasesize_text is empty. nothing to do" } } else { Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "object not created. nothing to do" } exit 0
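    A hedged alternative sketch (the server name is a placeholder): build the Exchange cmdlets through a remote session instead of the local snap-in, so computed properties such as DatabaseSize are filled in on the server side; note that over remoting DatabaseSize arrives as a deserialized value, so it is formatted as a string here.

        # connect to the Exchange remote PowerShell endpoint (hostname is a placeholder)
        $session = New-PSSession -ConfigurationName Microsoft.Exchange `
            -ConnectionUri http://exchange01/PowerShell/ -Authentication Kerberos
        Import-PSSession $session | Out-Null
        # print each database name with its size as reported by the server
        Get-MailboxDatabase -Status | ForEach-Object { "$($_.Name): $($_.DatabaseSize)" }
        Remove-PSSession $session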

    Read the article

  • Need assistance making a batch file for renaming files in separate folders

    - by Carnaxus
    Ok, here's one for you. I'm trying to use a batch file to rename a bunch of files, but none of them are in the same folder as the batch file itself. The command prompt keeps telling me that the directory can't be found. I suppose I could just rename all the files in all the folders that match the filename, but I don't want to do that either; I only want to change certain ones. My batch file as it stands is: @echo off ren "engine/info.txt" "disabled.txt" ren "gravplating/info.txt" "disabled.txt" ren "HAWX content/info.txt" "disabled.txt" ren "laserz/info.txt" "disabled.txt" ren "NeuroNaval/info.txt" "disabled.txt" ren "NeuroPlanes/info.txt" "disabled.txt" ren "NeuroTanks/info.txt" "disabled.txt" ren "NeuroWeapons/info.txt" "disabled.txt" ren "WAC Base/info.txt" "disabled.txt" ren "WAC DamageSystem/info.txt" "disabled.txt" ren "WAC GravityController/info.txt" "disabled.txt" ren "WAC Helicopters/info.txt" "disabled.txt" ren "WAC Sweps/info.txt" "disabled.txt" ren "weapons/info.txt" "disabled.txt" ren "AFF_ships/info.txt" "disabled.txt" ren "AntiTakeRifle/info.txt" "disabled.txt" ren "Catmull-Rom Cameras/info.txt" "disabled.txt" ren "Displacer Cannon/info.txt" "disabled.txt" ren "Drumdevil's Trains/info.txt" "disabled.txt" ren "EVEOnline/info.txt" "disabled.txt" ren "gm_botmap_v3/info.txt" "disabled.txt" ren "gm_construct_flatgrass_v5-2/info.txt" "disabled.txt" ren "gm_mobenix_v3_final/info.txt" "disabled.txt" ren "gm_mobenix_v3_highquality_Water/info.txt" "disabled.txt" ren "gm_snabbansairfield_b1/info.txt" "disabled.txt" ren "gm_XhS_construct/info.txt" "disabled.txt" ren "linedraw/info.txt" "disabled.txt" ren "ModelManipulator/info.txt" "disabled.txt" ren "NeuroCars/info.txt" "disabled.txt" ren "Propeller Engine/info.txt" "disabled.txt" ren "VanDookie and Predaaator's pack/info.txt" "disabled.txt" ren "WAC ECM/info.txt" "disabled.txt" ren "WAC Extra Helicopters/info.txt" "disabled.txt" echo Done! pause
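    A hedged sketch of the first few entries, assuming the folders sit next to the batch file: cmd's ren is happier with backslashes, and %~dp0 pins the relative paths to the batch file's own folder regardless of where the script is launched from.

        @echo off
        rem anchor relative paths to the folder this batch file lives in
        pushd "%~dp0"
        ren "engine\info.txt" "disabled.txt"
        ren "gravplating\info.txt" "disabled.txt"
        ren "HAWX content\info.txt" "disabled.txt"
        popd
        echo Done!
        pause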

    Read the article

  • NetApp NDMP backup with BE 2010 R2 works, restore fails

    - by uuwe
    Hi, I'm having some issues with a new Backup Exec 2010 R2 installation. I configured a NetApp FAS2020 as an NDMP device and want to backup files from the NAS to a tape drive connected to my backup server. I set up ndmpd according to this document (http://www.symantec.com/business/support/index?page=content&id=TECH48957) and created a separate backup user (http://filers.blogspot.com/2006/09/setting-veritas-netbackup-with-non.html). Backup works perfectly, but restoring any file gives me an authentication failed error. The NDMP device has a "global" ndmp user configured in the device tab (tried this with the newly created ndmpd backup user and the netapp root) and I can also configure separate resource credentials in the BE restore job. I have tried setting the same accounts for the "global" ndmp device and the restore credentials and have also tried setting different accounts for them. NDMP debug level is at 5 and this is what shows up in /etc/messages. The session is closed immediately after it has been granted. 16:12:07 PST [Java_Thread:info]: ndmpdserver: ndmpd.access allowed for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000 16:12:07 PST [Java_Thread:info]: Ndmpd51: ndmpd session closed successfully for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000 Running wireshark on the backup server doesn't produce much. It shows a SYN - SYN/ACK - NDMP CONNECT_CLOSE Request from the backup server. The Resource Credentials for the restore job behave very oddly. If I enter NDMP credentials and do "Test All" it fails. If I use my regular domain backup account, it is successful. There are no failed or succeeded logons in the NetApp ndmp log and tracing this check shows that it doesn't even connect to the NAS. This makes me think that this is more likely flaky BE behaviour rather than misconfiguration of the NAS. Here is the options ndmp output: FAS2020-1 options ndmp ndmpd.access all ndmpd.authtype challenge ndmpd.connectlog.enabled on ndmpd.enable on ndmpd.ignore_ctime.enabled off ndmpd.offset_map.enable on ndmpd.password_length 16 ndmpd.preferred_interface disable ndmpd.tcpnodelay.enable off
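    A hedged note: with ndmpd.authtype set to challenge, Data ONTAP 7-mode expects the NDMP-specific password it generates for the account, not the account's normal login password, so the restore credentials in Backup Exec would be filled with the output of something like the sketch below (the user name is a placeholder, and the output shown is illustrative).

        FAS2020-1> ndmpd password backupuser
        password XXXXXXXXXXXXXXXX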

    Read the article

  • Tomcat performing terribly for no apparent reason

    - by John
    We're running a game application .WAR on Tomcat 6 on an Amazon EC2 server, 8 core processor, 7 GB RAM. The application uses a MySQL database hosted on Amazon RDS. This Facebook application takes ages to access when a mere 20-30 users are playing it. Big difference from 1-2 users. The entire .WAR is ~4 MB, all static content hosted elsewhere. The server has never been close to running out of RAM. The CPU utilization has never been higher than 13.5-14%. Even with ~500 users that completely slowed everything to a standstill. The thread count or threadpools isn't close to being maxed out. I raised maxThreads but it didn't make a noticeable difference. My theory is that Tomcat can only use one processor core, which would explain why it was slowed to a halt even though CPU usage was stable at 13-14% at the activity spike. But I'm struggling to understand why it would only use one CPU core. There is no processor cap in server.xml. The app contains several servlets (4 or 5). There is no mention of SingleThreadModel in the Java code. WHAT could be causing the application to run extremely slowly? If there are only 1-5 people on the application it runs fine. With 20-30 people it's barely reachable.
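    A hedged server.xml sketch with illustrative values, useful mainly for ruling out connector thread starvation; Tomcat itself is not limited to one core, so if this changes nothing the usual next step is a thread dump to see what the request threads are blocked on (often the database).

        <!-- shared thread pool plus NIO connector (values are illustrative) -->
        <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
                  maxThreads="200" minSpareThreads="20"/>
        <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
                   executor="tomcatThreadPool" connectionTimeout="20000" acceptCount="100"/>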

    Read the article

  • Can't seem to get python to work

    - by Justin Johnson
    I'm just starting out in Python. The Python interpreter works from the command line (I have 2.4.3), but I can't seem to get Apache to execute Python scripts. All I end up with is a blank screen and nothing in the Apache error logs. I enabled Python via the Plesk control panel. Here's the snippet that was generated in the httpd.include: <Files ~ (\.py$)> SetHandler python-program PythonHandler mod_python.cgihandler </Files> My test script is one of the examples that comes with the Python downloads at http://python.org/download/ #!/usr/local/bin/python """CGI test 1 - check server setup.""" # Until you get this to work, your web server isn't set up right or # your Python isn't set up right. # If cgi0.sh works but cgi1.py doesn't, check the #! line and the file # permissions. The docs for the cgi.py module have debugging tips. print("Content-type: text/html") print() print("<h1>Hello world</h1>") print("<p>This is cgi1.py") That wasn't working, so I changed #!/usr/local/bin/python to #!/usr/bin/python which is what which python tells me but the results were the same. Like I said, I'm ending up with a blank page. No errors that I know of, unless I'm checking the wrong error log (I'm checking the Apache error log). I'm on a MediaTemple (dv) running CentOS.
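    A hedged observation about the pasted test script itself: under a 2.4 interpreter the bare print() call emits "()" rather than the blank line that must terminate the CGI headers, so the 3.x-style example cannot work as-is; a 2.4-compatible version of the same test looks like this.

        #!/usr/bin/python
        # Python 2.4-compatible CGI test (sketch)
        print "Content-type: text/html"
        print
        print "<h1>Hello world</h1>"
        print "<p>This is cgi1.py"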

    Read the article

  • Returning a 404 page when a folder is accessed from one domain, but allowing access from other domains and IP addresses

    - by okw
    Situation: I want to return a 404 page ("404.php") when a folder ("hidden") is accessed from the example.com domain. I want the same folder to be accessible from a subdomain ("hidden.example.com") or from a different domain ("hidden.com") which are both configured in a single VirtualHost entry. The server has multiple IP addresses that it listens on. Each IP address serves identical content from the example.com domain (sharing a VirtualHost entry.) I want the folder to be accessible from the IP address. The server is configured to use SSL/TLS/HTTPS. HTTPS is optional on example.com, but HTTPS is enforced in the .htaccess file for the hidden folder using a rewrite rule shown below. /www/hidden/.htaccess RewriteCond %{HTTPS} !=on RewriteRule .* https://%{SERVER_NAME}%{REQUEST_URI} [R,L] I know that {SERVER_ADDR} gives the server's IP address, but does it return the one that the client is requesting from? I'm also starting to think that something in the VirtualHosts file would be more appropriate. Any thoughts on this? What should be allowed: http://87.65.43.21/hidden/ https://87.65.43.21/hidden/ http://12.34.56.78/hidden/ https://12.34.56.78/hidden/ http://hidden.example.com/ https://hidden.example.com/ http://hidden.com/ https://hidden.com/ http://www.hidden.com/ https://www.hidden.com/ What should be 404-ed with 404.php http://example.com/hidden/ https://example.com/hidden/ http://www.example.com/hidden/ https://www.example.com/hidden/ http://example.com/hidden/hiddenfile.php https://example.com/hidden/hiddenfile.php etc. Thanks.
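    A hedged sketch for /www/hidden/.htaccess, placed alongside the HTTPS rule already there: answer 404 only when the request arrived on the example.com names, and let 404.php render the error page (this assumes AllowOverride permits FileInfo so ErrorDocument is honored).

        RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$ [NC]
        RewriteRule ^ - [R=404,L]
        ErrorDocument 404 /404.php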

    Read the article

  • Windows 2008 R2 AWS CloudFormation Elastic beanstalk configuration

    - by Webmonger
    I'm looking for some configuration advice. I have a need for a load-balanced Windows environment with shared media across all instances that are hosting the app. The best explanation I can give is that there will be multiple Windows 2008 servers with IIS hosting the app, going through an ELB to load balance. Users must be able to upload content (images, video etc...) to the site that will be hosted. When a user uploads media it needs to be kept in a shared location so all Windows IIS instances can access the files. I can't host the files on S3 because of the app architecture, so they need to be in a place where all IIS servers will have access. In addition I need to run an update on each IIS server instance that updates a local memory cache when SQL data is updated. I was thinking of a configuration like this: [ELB] - [Win 2008 IIS (multiple servers)] - [Win 2008 File & SQL Server (possibly RDS?)] Does this configuration make sense? If not, could you provide an idea of how I should configure it? Thanks in advance

    Read the article

  • Batch Script to Find Certain words and delete those lines in a file

    - by SuperUserMan
    Edited the question regarding the type of solutions: I am on Windows, and some suggested sed etc., so I am OK with third-party standalone exe's run from the command line. Say I have the following lines in the file abc.txt: "@yuy007 what are you doing friend #disneyrocks" "STFU, i dont care what you think @happy55" "@social88 @gg99 ok mate see you at the subway :)" "btw arnold was great in that movie @tt11 @gg11 #disneyrocks" "we are going to disney. Do you want to? #disneyrocks" "We dont like disney.#disneyrocks we are not going" ".@socialguy what are you upto #disneyrocks " I need to apply 5 filters to the above file to get def.txt: Delete all lines which start with the @ character, like the 1st and 3rd. Delete all lines which start with the .@ characters, like the 7th. Delete all lines which don't have any word starting with #, like the 2nd and 3rd. In the leftover lines, delete all words starting with the @ character (keeping the lines intact), like @happy55 in the 2nd, @social88 & @gg99 in the 3rd, etc. In this case we still need to preserve the quotes " at the start and end of each line. Delete all the blank lines left after the above lines are removed. EDIT: if I have the following line, it wrongly deletes the content after the @words: "btw arnold was great in that movie @tt101 @gb1997 #whatthehell" is edited to "btw arnold was great in that movie" Thanks
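    A hedged sketch using GNU sed for Windows: each expression maps to one of the five filters, and the @-word removal is bounded to word characters so the text after a mention is kept rather than being swallowed.

        sed -e "/^\"@/d" -e "/^\"\.@/d" -e "/#[A-Za-z0-9_]/!d" -e "s/ *@[A-Za-z0-9_]*//g" -e "/^[ \"]*$/d" abc.txt > def.txt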

    Read the article

  • Install self-signed certificate on local server (iis)

    - by ile
    On this page there are instructions on how to create a self-signed cert (on Apache) and how to install this certificate on the server. I found this page (http://www.visualwin.com/SelfSSL/) with instructions on how to create a self-signed certificate on Windows (IIS). I followed the instructions, and when I type https://myip/myapp (this leads to localhost because I set my router's port forwarding to go to localhost on my PC) this part works. From the first link, the most important part is this: What needs to be installed in IE is actually the Root CA Certificate. In the how-to above, the Root CA Certificate is called ca.crt. Copy this file to the server that is running QuickBooks. The following is for IE6: - Open IE - Tools - Internet Options - Content - Certificates - Trusted Root Certification Authorities Tab - Import, Next, Browse to 'ca.crt' - Next, Next, Finish, Close, OK The part that is missing in the second link is that there is no instruction on how to get the .crt file, so I tried to get it myself. What I did was the following: I opened https://myip/myapp in Firefox and then the "This Connection is Untrusted" screen appeared. Then I clicked on "Add Exception" and then below "Certificate Status" I clicked "View". Under the Details tab I clicked on Export and chose Save as type: "X 509 Certificate (PEM)" and the file was saved with a .crt extension. Then I opened IE8 and followed the above instructions. After opening https://myip/myapp in IE8 I always get the warning screen. Does anyone know what I am doing wrong? Thanks, Ile
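    A hedged one-liner alternative to the IE import dialog (the path is a placeholder): a self-signed certificate is its own root, so putting the exported .crt into the machine-wide Trusted Root store covers all users and browsers that use the Windows store.

        rem import the exported certificate into the machine's Trusted Root store
        certutil -addstore -f Root C:\certs\myapp.crt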

    Read the article

  • Exchange 2010 Recovery: Mailbox not found using Restore-Mailbox

    - by user146665
    Exchange 2010 SP1 Update Rollup 5 server information store database was restored to a Recovery Database using EMC Networker successfully. The Recovery Database is in a mounted state with mailboxes listed within it. However, restoring the mailbox content using the following command: Restore-Mailbox -Identity MYMAILBOX -RecoveryDatabase MYRECOVERYDB -RecoveryMailbox LOSTMAILBOX -TargetFolder FOLDERFORLOSTMAILBOX returns the following error: Mailbox "LOSTMAILBOX" doesn't exist on database "MYRECOVERYDB". + CategoryInfo : NotSpecified: (0:Int32) [Restore-Mailbox], ManagementObjectNotFoundException + FullyQualifiedErrorId : 66265C53,Microsoft.Exchange.Management.RecipientTasks.RestoreMailbox Note: I've used the correct alias name for the mailbox name; I've also tried combinations such as first name, or last name, or both, and so forth. Issuing Get-MailboxStatistics -Database MYRECOVERYDB to see if the mailbox is there shows that it is: DisplayName ItemCount StorageLimitStatus LOSTMAILBOX 39495 MailboxDisabled Note: The StorageLimitStatus shows a strange output of MailboxDisabled. Perhaps this may be the culprit. Going by the article's documentation I cannot complete the restore of the mailbox, as I'm stuck at the Restore-Mailbox error that it cannot be found. Please advise, and thank you! Source of article: http://www.testlabs.se/blog/2012/07/05/exchange-2010-restore-to-recovery-database-using-emc-networker/
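    A hedged diagnostic sketch: -RecoveryMailbox is commonly matched against the display name as stored in the recovery database, so listing exactly what that database calls the mailbox (and its GUID) gives a value to feed back into the restore command.

        Get-MailboxStatistics -Database MYRECOVERYDB |
            Format-Table DisplayName, MailboxGuid, DisconnectReason -AutoSize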

    Read the article

  • Extract archive here, autodetect subfolder

    - by lorancou
    I use 7-Zip under Windows 7 to extract all kinds of archives. The ever-running problem is that there is not a single way to handle subfolders. Some people create their archives like this (let's call it type A): . +-- subfolder │   +-- file 1 │   +-- file 2 │   +-- ... "Extract Here" will extract everything to subfolder/, which is fine; "Extract to <Folder>" will create an extra subfolder level, which is annoying. And other people create them like that (let's call it type B): . +-- file1 +-- file 2 +-- ... "Extract Here" will populate the current folder, which can bring a lot of mess if this folder already contains tons of other files, and which is very annoying; "Extract to <Folder>" will create a subfolder and conveniently stuff it with the archive content, which is fine. So this leaves two options: always choose to "Extract to <Folder>" and live with the extra subfolder for type A archives. Or open the archive before extracting it to check if this is a type A or B, and then appropriately select "Extract Here" or "Extract to <Folder>". I usually do the latter, but I just discovered that KDE is offering a very handy Extract Archive Here, Autodetect Subfolder feature that automates that. It'll "Extract Here" if there's only one subfolder in the archive, otherwise it'll "Extract to <Folder>". I love this idea. Is there a Windows 7 tool offering this option? Preferably integrated into the Explorer context menu? Maybe it's hidden somewhere in 7-Zip and I just didn't find it?

    Read the article

  • nginx connection pool race condition?

    - by wlf
    I have a shared hosting server with high traffic. I have a lightweight Apache mod_proxy for static content that from time to time has a "504 proxy error" problem proxying to apache/mod_php. Error log says: error reading status line from remote server 127.0.0.1:8080 Error reading from remote server returned by / This is what the Apache documentation says about it. proxy-initial-not-pooled If this variable is set no pooled connection will be reused if the client connection is an initial connection. This avoids the "proxy: error reading status line from remote server" error message caused by the race condition that the backend server closed the pooled connection after the connection check by the proxy and before data sent by the proxy reached the backend. It has to be kept in mind that setting this variable downgrades performance, especially with HTTP/1.0 clients. I am really concerned about this downgrade in performance, therefore I started to look at nginx immediately. I am new to nginx and time is crucial right now; I can't afford to waste days to study it just to find out there is the same race condition issue. Is nginx affected by this connection pool race condition? Thanks
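    For reference, a hedged sketch of how the documented Apache-side workaround would be applied inside the mod_proxy vhost; the variable only needs to exist, its value is irrelevant.

        # mark every client connection so mod_proxy never reuses a pooled backend connection
        SetEnv proxy-initial-not-pooled 1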

    Read the article

  • Webserver optimization

    - by f-aminov
    Hi guys! I have a website hosted on a VPS (512 MB minimum guaranteed memory, 510 MHz processor, Debian 5.0 Lenny, Apache 2.2.9 with nginx 0.7.65 as a frontend to serve static content, MySQL 5.1.44, PHP 5.3.2 with APC caching). I'm a web developer, so I'm not very good at optimizing servers, but I've managed to install and set up all those necessary components (LAMP, nginx, etc.). After that I decided to stress test my website (which uses Drupal 6.16 with caching and all possible optimization enabled) using a utility called "Webserver Stress Tool 7". And it seems to me that the results aren't any good - here is a graph (sorry, as a new user I'm not allowed to post images) As you can see, the response time increases very quickly with the number of simultaneous users. With 10 simultaneous users the time is about 1000ms, with 100 simultaneous users it's about 15000ms (15s!). The question is: do you think this is normal behavior for such a server, or is something wrong with the settings and optimization? If you think something is wrong, what in particular could be wrong? Any other suggestions on how to speed this up a little bit?

    Read the article

  • apache 2.4 redirect within virtualhost

    - by user129545
    I have a couple of HTTP (port 80) vhosts that I want to redirect to http if an https request is made to them. Apparently some things have changed with Apache 2.4 (NameVirtualHost not used like it was in the past, etc). Apache 2.4 on CentOS 5.5. This is all using a single IP for all the vhosts below; I don't have multiple IPs on this box. My /usr/local/apache2/conf/extra/httpd-vhosts.conf: # <VirtualHost www.dom1.com:80> ServerName www.dom1.com ServerAlias dom1.com DocumentRoot /usr/local/apache2/htdocs/dom1/wordpress </VirtualHost> <VirtualHost webmail.dom2.com:443> ServerName webmail.dom2.com DocumentRoot /usr/local/apache2/htdocs/webmail SSLEngine On SSLCertificateFile /usr/local/apache2/webmail.crt SSLCertificateKeyFile /usr/local/apache2/webmail.key </VirtualHost> # my /usr/local/apache2/conf/extra/httpd-ssl.conf, # Listen 443 SSLPassPhraseDialog builtin SSLSessionCache shmcb:/var/cache/mod_ssl/scache(512000) SSLSessionCacheTimeout 300 Mutex default SSLRandomSeed startup file:/dev/urandom 512 SSLRandomSeed connect builtin SSLCryptoDevice builtin # webmail.dom2.com works fine. The problem is I can connect to https://www.dom1.com, and it serves up the content from webmail.dom2.com. I want any https requests for www.dom1.com on port 443 to simply redirect to http://www.dom1.com on port 80. Thanks
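    A hedged sketch: give www.dom1.com its own name-based 443 vhost so it stops falling through to the webmail vhost, then bounce it back to plain HTTP; the browser will still warn once, because the only certificate on hand is webmail's.

        <VirtualHost *:443>
            ServerName www.dom1.com
            ServerAlias dom1.com
            SSLEngine On
            SSLCertificateFile /usr/local/apache2/webmail.crt
            SSLCertificateKeyFile /usr/local/apache2/webmail.key
            Redirect permanent / http://www.dom1.com/
        </VirtualHost>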

    Read the article

  • Sharepoint Workflow "Failed on Start" only when powershell import script is called from task scheduler

    - by Matt Keller
    I created a simple PowerShell script that takes an XML file in a local directory on our SharePoint server and imports it into a specific SharePoint form library. (Content management enabled library, if that makes any difference) This script works flawlessly if I run it from the PowerShell command line manually. I call it like this: ".\script_name.ps1". It completes without error and the item is imported into the form library successfully. The workflow begins on the item and everything is happy dandy. However, I run into issues when I set up a scheduled task using Windows Server 2008 R2's Task Scheduler. The task runs the script without error and it does actually import the XML into the form library. It looks perfectly normal, just as if I had run the script manually. However, after about 10 or 20 minutes the workflow status for that item changes from "In progress" to "Failed on Start (Retrying)". The scheduled task in question is a basic task and has only one action. (Start a program) The "program/script" box is set to "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" and the "Add arguments" box is set to the path of the actual ps1 script. (C:\scripts\sharepoint_import.ps1) I've tried running the task as various users. I've also tried with and without the "Run with highest privileges" check box. Nothing seems to work. For reference, here is the script I am using to import items into the form library.

    Read the article

  • How can I tell GoogleBot that a subdirectory is now a subdomain? [migrated]

    - by cwd
    I had about a million pages of a catalog indexed under a subdirectory, and now that's moved to a subdomain. GoogleBot is crawling each one of them and getting a 301 redirect to the new location. Even though I have set up the redirect rule in the apache sites-enabled configuration file, (i.e. it's early on when apache does the redirect - PHP is not even getting loaded), even though I have done that, the server isn't handling the load well. GoogleBot is making around 5 requests per second, and on top of my normal traffic that is hiking up the CPU for a few hours at a time. I checked in Webmaster Tools and the corresponding documentation for a way to let Google know that the content had been moved from a subdirectory to a subdomain, but with little luck. Basically the most helpful thing I saw said to just send 301 headers for the new location. How can I tell GoogleBot that a subdirectory is now a subdomain? If that is not an option, how can I more efficiently send 301 redirects out for a particular subdomain? I was thinking perhaps the Nginx server but I'm not sure that I can run both Apache and Nginx side by side on port 80 for different subdomains.
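    A hedged sketch of the nginx idea (the /catalog/ path and catalog.example.com name are placeholders, not from the question): nginx and Apache cannot both bind the same IP on port 80, so nginx would take port 80 itself and Apache would move to another port; the old URLs then get their 301s without ever reaching Apache or PHP.

        server {
            listen 80;
            server_name www.example.com example.com;

            # answer the moved catalog URLs directly with a 301
            location /catalog/ {
                rewrite ^/catalog/(.*)$ http://catalog.example.com/$1 permanent;
            }

            # everything else still goes to Apache, now listening on 8080
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
            }
        }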

    Read the article

  • SQL 2008 disk layout on a budget (this is for database mirroring)

    - by user22215
    Guys, I'm rolling out a SQL database server that will be used to back SharePoint 2007. Right now I need some advice on my disk layout. I have two Dell servers that are configured a little differently in terms of storage. The principal server will be using a combination of local storage and SAN storage. I have to work with what I have; the organization is currently all allocated on SAN storage, and it was like pulling teeth to even get what I have to work with now. My disk setup on the principal is as follows: RAID 1 for the OS, RAID 10 for logs, RAID 10 fiber on the SAN for high-IO databases, RAID 10 SATA on the SAN for content databases. My question with regard to the principal server is: where should I place tempdb? I thought about placing it on the fiber RAID 10, which will be hosting my high-IO SharePoint SSP databases; my only other choice is to move it to the RAID 1 OS partition, which I'm sure you guys will be against. Now let's talk about the mirror server: it is not connected to the SAN; it is all local, 6 15k SAS drives. Now my question is the same: do I put tempdb on the OS partition, or do I leave the OS partition alone and use a single RAID 10 for everything? Any help you can provide is much appreciated.

    Read the article
