Search Results

Search found 17188 results on 688 pages for 'browser plugins'.

Page 578/688

  • USPTO site asks for Quicktime Plug-in which I already have installed. Why?

    - by Kensai
    Whenever I try to view the images of a patent on the USPTO site (example) using Firefox, the browser asks me to download the latest QuickTime manually. This is totally strange because I already HAVE the latest plug-in (it even appears in my Firefox add-ons list). In the past I have only been able to see patent images using Safari, never with Firefox. Is it a USPTO problem or a Mozilla one? Is there a way to fix it? Edit: I can't see TIFF images with Internet Explorer (both the 32-bit and 64-bit versions) or with Chrome either. None of these browsers knows how to open the embedded TIFF images, because they don't recognize the installed QuickTime plugin. A USPTO conspiracy to promote Safari? Come to think of it, I had this problem on my old computer as well; it ran 32-bit Vista, and now I have 64-bit Windows 7. I hate TIFF and can't find Mozilla-specific information anywhere. Argh, am I the only one here with this freak problem?!

    Read the article

  • Problem with network after malware attack

    - by Cruelio
    I'm trying to help some friends with a Win XP machine. I got rid of the malware using Malwarebytes and HiJackThis, but now they (and I) have another problem. When the computer boots into Windows it seems fine. When I start Internet Explorer the browser window opens just fine, but nothing happens for a minute or two. After the two minutes of waiting, the network icon appears in the taskbar next to the clock, and then everything works. The computer is connected to the internet using an Ethernet adapter. I have looked at the Event Log and found an error from PerfNet with event ID 2004: <Provider Name="PerfNet" /> <EventID Qualifiers="49152">2004</EventID> <Level>2</Level> <Task>0</Task> <Keywords>0x80000000000000</Keywords> What I have tried so far: in Device Manager I uninstalled the Ethernet adapter and installed it again; I uninstalled and reinstalled the Windows File and Printer Sharing service; and I verified that both the Server and Workstation services are started. What should I do next?
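
    The delayed network startup described above is often a leftover of a Winsock LSP that the malware installed and the cleanup half-removed. A hedged sketch of commands worth running from a command prompt on the XP box (these are standard Windows commands, but treat the sequence as a suggestion rather than a verified fix):

        rem confirm the two services mentioned above really are running
        sc query LanmanWorkstation
        sc query LanmanServer
        rem rebuild the Winsock catalog, which malware removal often leaves broken
        netsh winsock reset
        rem on XP, repairing the TCP/IP stack may also help
        netsh int ip reset resetlog.txt

    A reboot after the resets is needed before judging whether the delay is gone.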

    Read the article

  • Weird caching bug where old version of the same web page (same filename) is still called (Windows 2008 R2, Tomcat 5.5)

    - by user717236
    This is definitely one of the strangest errors I've seen, and it occurs intermittently. I am running Windows 2008 R2, IIS 7.5, and Apache Tomcat 5.5, by the way. Let's say I have two machines, A and B, both running Windows 2008 R2. I have a web page called login.jsp on machine A, and a newer, modified version of login.jsp on machine B. Now, I copy the new login.jsp from machine B and paste it onto machine A, replacing the older version with the same filename. For whatever reason, when I open the web page in my browser from a local machine (i.e. my laptop), it still serves the old version of the page, even though it's been replaced! I tried restarting IIS and Apache Tomcat; that didn't work. I tried restarting machine A; that didn't work. I tried a cold reboot of my local machine, and that didn't work either. So I spoke to someone I can confide in for help. He said to open the login.jsp page in Notepad, put a space in, save the file, and try again. Sure enough, it worked. He said he hasn't seen this on Windows 2003, but it is occurring with Windows 2008. What I don't understand is why that worked, what this error actually is, and how I can really diagnose and resolve it for good instead of relying on the hack my colleague proposed. Is this bug related to Windows 2008, Windows 2008 R2, Tomcat, or something else entirely? Anyone else have the same problem? Thank you for any help.
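
    For what it's worth, the Notepad trick amounts to bumping the file's last-modified timestamp: Tomcat's JSP compiler decides whether to recompile by comparing timestamps, and a file pasted from another machine keeps the (possibly older) modification time it had there. A hedged sketch of forcing the same thing without editing the file, in PowerShell on machine A (the path is hypothetical):

        # touch the copied JSP so its mtime is newer than Tomcat's compiled servlet
        (Get-Item 'C:\Tomcat5.5\webapps\myapp\login.jsp').LastWriteTime = Get-Date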

    Read the article

  • maximum number of connections Squid

    - by Isaac
    I have a Squid proxy server that controls all internet traffic for my network. I need a way to stop users from downloading big files (say 50MB) on my network. I banned some well-known ports (e.g. torrent), but some downloads are still possible over the HTTP port, and obviously I cannot ban port 80! A simple solution is limiting the maximum number of simultaneous connections for each IP (e.g. 3 connections). That's possible in Squid with this config: acl ACCOUNTSDEPT 192.168.5.0/24 acl limitusercon maxconn 3 http_access deny ACCOUNTSDEPT limitusercon But this solution has a really bad impact on web browsing, because any smart browser fetches different parts of a website over several simultaneous connections to speed things up. With a cap on connections, the browser fails to get some parts, so the website is shown partially and some parts/images/frames are missing. So, can we limit the maximum number of persistent connections instead? I think this policy would work: limit the number of connections that stay alive for more than 10 seconds, but leave the number of simultaneous connections per IP unlimited. How can we implement this policy in Squid? With which config? UPDATE: artifex and Tom Newton suggested a bandwidth-limiting approach to fight downloaders. But bandwidth limiting in Squid has a shortcoming: it's static and cannot change dynamically, so a person gets the same limited bandwidth no matter how many people are using the internet (maybe nobody!). Also, it doesn't actually stop people from downloading; they can still download, just at a lower speed. But if we find a way to terminate persistent connections (or any connection that stays alive longer than a specific time), downloading big files will be almost impossible (there is always some way!).
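
    If the underlying goal is "no single download over 50 MB" rather than limiting connections as such, Squid has a directive aimed at exactly that. A minimal sketch, assuming a Squid 3.x-style configuration (check the exact syntax against your version; the ACL name is made up):

        # refuse to deliver any HTTP reply body larger than 50 MB to the local network
        acl localnet src 192.168.5.0/24
        reply_body_max_size 50 MB localnet

    This does not terminate long-lived connections, but it makes fetching large files through the proxy fail outright instead of merely slowing them down.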

    Read the article

  • Are there any Graphical PowerShell tools?

    - by Dai
    As a developer for the .NET platform, I like to "explore" a platform, framework or API by browsing through the API documentation which explains what everything is - everything is covered and when I use tools like Reflector or Object Browser then I get to know for certain what I'm working with. When I'm writing my own software I can use tools like the Object Test Bench to explore and work with my classes directly. I'm looking for something similar, but for PowerShell - and ones that avoid text-mode. PowerShell is nice, and there are a lot of cool "discoverability"-things it has, such as the "Verb-Noun" syntax, however when I'm working with Exchange Server, for example, I wanted to get a list of AD Permissions on a Receive Connector and I got this list: [PS] C:\Windows\system32>Get-ADPermission "Client SVR6" -User "NT AUTHORITY\Authenticated Users" | fl User : NT AUTHORITY\Authenticated Users Identity : SVR6\Client SVR6 Deny : False AccessRights : {ExtendedRight} IsInherited : False Properties : ChildObjectTypes : InheritedObjectType : InheritanceType : All User : NT AUTHORITY\Authenticated Users Identity : SVR6\Client SVR6 Deny : False AccessRights : {ExtendedRight} IsInherited : False Properties : ChildObjectTypes : InheritedObjectType : InheritanceType : All User : NT AUTHORITY\Authenticated Users Identity : SVR6\Client SVR6 Deny : False AccessRights : {ExtendedRight} IsInherited : False Properties : ChildObjectTypes : InheritedObjectType : InheritanceType : All User : NT AUTHORITY\Authenticated Users Identity : SVR6\Client SVR6 Deny : False AccessRights : {ExtendedRight} IsInherited : False Properties : ChildObjectTypes : InheritedObjectType : InheritanceType : All User : NT AUTHORITY\Authenticated Users Identity : SVR6\Client SVR6 Deny : False AccessRights : {ExtendedRight} IsInherited : False Properties : ChildObjectTypes : InheritedObjectType : InheritanceType : All User : NT AUTHORITY\Authenticated Users Identity : SVR6\Client SVR6 Deny : True AccessRights : {ReadProperty} IsInherited : True Properties : {ms-Exch-Availability-User-Password} ChildObjectTypes : InheritedObjectType : ms-Exch-Availability-Address-Space InheritanceType : Descendents [PS] C:\Windows\system32> Note how the first few entries contain identical text - there's no way to tell them apart easily. But if there was a GUI presumably it would let me drill-down into the differences better. Are there any tools that do this?
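
    Not a full graphical object browser, but one built-in way to put exactly this kind of output into a sortable, filterable window is Out-GridView (a hedged sketch; it ships with PowerShell 2.0 when the ISE feature is installed, and the property list below is just the fields visible in the output above):

        Get-ADPermission "Client SVR6" -User "NT AUTHORITY\Authenticated Users" |
            Select-Object User, Deny, AccessRights, IsInherited, Properties, InheritedObjectType, InheritanceType |
            Out-GridView -Title "AD permissions on Client SVR6"

    The grid lets you sort and filter on any column, which makes near-identical entries like the ones above much easier to tell apart.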

    Read the article

  • Slow Local Network, Windows 7, Snow Leopard, WiFi/Wired

    - by WerkkreW
    Hello - I am experiencing really poor local network performance in my home. I was recently using a Linksys WRT54G router with DD-WRT on it and a couple of comparable Linksys G PCI cards for connectivity, but decided to upgrade hoping it would help with my performance issues. The computers in my house are connected as follows: Comcast Business Class Commercial 25mbps/10mbps (verified with SpeakEasy and Speedtest.net); D-Link DGL-4500 Wireless N router; Windows 7 x64 - D-Link DWA-552 Wireless-N; Windows 7 x64 - D-Link DWA-552 Wireless-N; Mac Mini 10.6.2 - AirPort Extreme N; PlayStation 3, hard wired; Xbox 360, hard wired. Essentially the problem is very specific. Web browsing and uploading/downloading files from the internet is fine, more than fine. But if I want to, say, stream a video from one of my Windows 7 computers to my PS3, or copy a large video file between either of the PCs or the Mac, I get a consistent 500-900Kbps throughput at the high end. If I open my network browser, or try to browse my homegroup, the response time is horrible. Both of my Windows computers show strong wireless signals with a connection speed of 300Mbps. I know I can never expect to achieve anything near those speeds, but 500Kbps? Here is what I have tried so far: enabled single-mode N-only and N/G-only on the router; WPA2 with AES encryption; disabled "Remote Differential Compression" in Windows 7; disabled TCP "Auto-Tuning"; used other software for file copies, such as TeraCopy. I am at the end of my rope. Unfortunately I live in a 75-year-old home with plaster walls, so hard-wiring my entire house isn't really an option I can handle right now. Any ideas to help me get decent speed when transferring files across my network would be greatly appreciated.
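
    For reference, disabling TCP "Auto-Tuning" (mentioned in the list above) is normally done with netsh; a hedged sketch of checking and toggling it from an elevated prompt on the Windows 7 machines:

        netsh interface tcp show global
        netsh interface tcp set global autotuninglevel=disabled
        rem to restore the default later:
        netsh interface tcp set global autotuninglevel=normal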

    Read the article

  • Write permissions on uploaded files - PHP & Linux

    - by letseatfood
    I am working on a PHP script that transfers files using FTP functions. It has always worked on my production server (which is a hosting service). The development server I have just set up (I am a novice with servers) is Debian Lenny with Apache2, PHP5, and MySQL5. The file transfer works correctly, but once the file has been written to the server, it has permissions of 600. This makes it impossible for me to view the file (a JPEG) in the web browser, as permission is denied. I have scoured the internet and even broken my server installation and reinstalled it trying to figure this out (which has been fun, nonetheless!). I know it is unwise to set 777 permissions on publicly accessible files, but even that will not solve the problem. The only thing that works is if I chmod 777 thefile.jpg after it has been transferred, which is not a workable solution. I tried changing the owner of my site files to www-data per this post, but that also does not work. My user is mike, and it still does not work whether the owner of the files is mike or root. Would somebody point me in the right direction? Thanks! And, of course, let me know if I can clarify anything.
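
    One way to avoid the manual chmod is to have the PHP script set sane permissions itself right after the upload, over the same FTP connection. A minimal sketch, assuming the built-in FTP functions are already in use (variable names are hypothetical):

        <?php
        // after a successful upload, make the JPEG world-readable (644) so Apache can
        // serve it; 777 is never needed for a plain image file
        if (ftp_put($conn, $remoteFile, $localFile, FTP_BINARY)) {
            ftp_chmod($conn, 0644, $remoteFile);
        }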

    Read the article

  • port forwarding problem

    - by Claudiu
    I want to set up an SVN server on my computer so it's available from anywhere. I think I set up the repository correctly, using CollabSVN. If I go to Repo-Browser with TortoiseSVN and point it to svn://localhost:3690, it shows the proper repository. The problem now is that I'm behind a router. My local IP is 192.168.1.45. Doing svn://192.168.1.45:3690 also works. My global IP is, say, x.x.x.x. Just doing svn://x.x.x.x:3690 doesn't work, which makes sense, since I have to set up port forwarding. I'm using a Verizon router. Using their web interface (on 192.168.1.1) I added the following port forwarding rule: IP Address forward to: 192.168.1.45 Source Ports: Any Dest Ports: 3690 Forward to: 3690 Protocol: TCP However, even after applying this rule, going to svn://x.x.x.x:3690 doesn't work. It takes a few seconds to fail, then says that the connection couldn't be established because the server connected to didn't respond properly after a period of time. What's interesting is that a random port, like svn://x.x.x.x:36904, fails immediately, saying that the target machine actively refused the connection. So I figure that the forwarding rule did something, but not fully what was necessary. Any ideas on how to get this working? The router model is MI424-WR and the firmware version is 4.0.16.1.56.0.10.12.3. UPDATE: I also tried setting the destination port to 45000, still forwarding to 3690, in case something was wrong with the lower-numbered ports, but to no avail. I also tried port 80 to port 3690, still all in vain.
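
    Before digging deeper into the router, it may be worth confirming on 192.168.1.45 that svnserve is actually listening on all interfaces and that the Windows firewall allows inbound TCP 3690; a hedged sketch of both checks (the repository path is hypothetical):

        netstat -an | find "3690"
        rem if the output shows 127.0.0.1:3690 rather than 0.0.0.0:3690, restart svnserve
        rem bound to all interfaces:
        svnserve -d -r C:\svn\repos --listen-host 0.0.0.0

    The "didn't respond" symptom is what you typically see when the forward reaches the machine but the service only listens on localhost or a host firewall silently drops the packets.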

    Read the article

  • Why is crontab giving "No such file or directory" error when the file DOES exist?

    - by fettereddingoskidney
    I am getting the following three error lines in /var/mail/username after the following job runs in crontab: 15 * * * * /Applications/MAMP/htdocs/iconimageryidx/includes/insertPropertyRESI.php Errors: /applications/mamp/htdocs/iconimageryidx/includes/insertpropertyRESI.php: line 1: ?php: No such file or directory /applications/mamp/htdocs/iconimageryidx/includes/insertpropertyRESI.php: line 3: syntax error near unexpected token `'initialize.php'' /applications/mamp/htdocs/iconimageryidx/includes/insertpropertyRESI.php: line 3: `require_once('initialize.php'); The PHP script I am trying to execute DOES in fact exist, and I have made absolutely sure the spelling is correct several times. I ran a crontab on another script before and it worked just fine... any ideas? The 2nd and 3rd errors are from line 3 of the following script (the one I am trying to run with the crontab): <?php require_once('initialize.php'); require_once('insertPropertyTypes.php'); $sDate; if(isset($_GET['startDate'])) { $sDate = $_GET['startDate']; } else { $sDate = ''; } $insertResi = new InsertPropertyTypes('Listing', $sDate, 'RESI'); ?> When I run insertPropertyRESI.php in the browser, it runs just fine. Also, initialize.php and insertPropertyTypes.php are in the same directory as insertPropertyRESI.php. I am using MAMP with PHP 5.3.5. Thanks for the help!
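
    The first error ("?php: No such file or directory") suggests cron is handing the .php file straight to the shell instead of to PHP, so the shell chokes on line 1. A hedged sketch of a crontab entry that calls the MAMP PHP binary explicitly and runs from the script's directory so the relative require_once() calls still resolve (the exact path to MAMP's php binary varies by version, so adjust it):

        15 * * * * cd /Applications/MAMP/htdocs/iconimageryidx/includes && /Applications/MAMP/bin/php/php5.3.5/bin/php insertPropertyRESI.php

    Adding a shebang line to the script and marking it executable would also let cron run it, though it would then use the system PHP rather than MAMP's.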

    Read the article

  • How can I copy this quote from PDF?

    - by isme
    I'm reading a PDF copy of Jerome H. Friedman's paper "Data Mining and Statistics: What's the Connection?" using Google Chrome and the Adobe Reader plugin. It contains an amusing quote that I want to copy and paste to my blog. I used the mouse to select the text of the quote and pressed CTRL + C to copy it. The document looks like this: When I paste the text into Notepad, Stack Overflow, or anywhere else, the product is Wingdings-like gibberish: ????????????????????????|?????????|????? ?????|??????????????????????????????????????????????????|?????????????? ??????????????P????? ?????????????????????P?|?????????|?????????????????????????????????????????????????????? ????????????Þ?????????????????????????????????|???|??????????????????????????? The text should instead look like this: A difference between statisticians and computer scientists in this field seems to be that when a statistician has an idea he or she writes a paper; a computer scientist starts a company. I had to type that text out manually. This is feasible for such a small quote, but how do I actually copy what I see? Is it something unusual about the PDF, the browser, the plugin, or some combination of the three?

    Read the article

  • PHP sessions corrupt

    - by Baversjo
    I have created a website using the symfony 1.4 framework, with sfGuard for authentication. This is working great on WAMP (Windows): I can log in to several accounts in different browsers and use the website. I also have Ubuntu Server 9.10 running Apache (everything up to date and default configuration). On that server, when I log in to the website in one browser it works great. But when I log in with another user account from my other computer on the public website, the login is successful, and then when I refresh or go to another page the first user is shown as logged in instead! Also, when I press logout, it doesn't show that I'm logged out after the page loads; when I press F5 again, I'm logged out. As mentioned, all this works as expected on my local installation. I'm thinking there's something wrong with the PHP session configuration on my Ubuntu server, but I've never touched it. Please help me. This is a school project and I'm presenting it today :(
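
    Two things may be worth ruling out quickly, sketched here with no guarantee either is the culprit: that PHP's session directory on the Ubuntu box is writable, and that symfony's cache from the deployment is not serving one user's rendered pages to another:

        # where PHP is told to store sessions, and whether www-data can write to the
        # default Debian/Ubuntu session directory
        php -i | grep -i session.save_path
        ls -ld /var/lib/php5
        # clear symfony 1.4's cache from the project root after deploying
        php symfony cache:clear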

    Read the article

  • Reconfiguring PHP with OpenSSL Extension on CentOS

    - by Evan
    Hi guys - long-time browser, first-time poster! I have a CentOS dedicated server running just fine. I'm trying to reconfigure PHP to include the OpenSSL extension so I can use some of the YouTube APIs. I installed OpenSSL with yum, so it's in place on the server; I'm just having trouble getting PHP to use it as an extension. I got the latest PHP tarball, untarred it, set my configure string (./configure) with the proper parameter for OpenSSL (--with-openssl=/usr), and it checked out just fine. I ran make, then make install. This is where I get hung up: after it writes the PEAR config file it seems to quit. I'm not sure, but it seems like there is a LOT more that should be happening. Here is a screenshot: http://www.evanfell.com/screencaps/6iamks.png Restarting Apache shows no change to the PHP running on the server. Is there a PEAR issue killing the install process? Or is it some other issue? Thanks in advance. Happy to clarify and provide more info.
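
    A quick way to tell whether the build actually produced an OpenSSL-enabled PHP, and whether Apache is even using that build, is to interrogate the new binary directly. A hedged sketch (it assumes make install put the CLI on the PATH):

        ./configure --with-openssl=/usr
        make
        make install
        # does the freshly installed CLI have the extension compiled in?
        php -m | grep -i openssl
        # which php binary and which ini file are actually being picked up?
        which php && php --ini

    If php -m lists openssl but a phpinfo() page served by Apache does not, Apache is most likely still loading the old libphp5.so rather than the module this build installed.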

    Read the article

  • Why can't I connect to my router's config page with Windows 7?

    - by user17940
    I've got a Belkin wireless router, and just bought a new Dell computer with Windows 7 pre-installed. I can connect to the Internet and my home network just fine, but when I try to visit my router's configuration page at http://192.168.2.1, I get a "Connection was reset" error. Nothing I do will make the router's configuration page come up in my web browser. More background information: I could always get to the router's config page from my Windows XP machine. I never had any trouble prior to getting this Windows 7 computer. I can ping 192.168.2.1 successfully from my Windows 7 computer. My PC is connected to the router by a physical CAT5 cable, not via wireless. Every device connected to my router, including the new computer, can get to the Internet with no problem. Here are some things that did not solve the problem: I tried turning off IPV6 in Windows. I tried turning off my firewall and antivirus software I tried using https instead of http I tried disabling and then enabling the network connection in Windows I tried reverting my network card driver back to an older version I have tried both Firefox and Internet Explorer web browsers. Has anyone experienced something like this before, and solved it? Thanks a lot for your help!

    Read the article

  • 500 Internal Server Error when setting up Apache on localhost

    - by Martin Hoe
    I downloaded and installed XAMPP, and to keep my projects nicely separated I want to create a VirtualHost for each one based on its future domain name. For example, in my first project (we'll say it's project.com) I've put this in my Apache configuration: NameVirtualHost 127.0.0.1 <VirtualHost 127.0.0.1:80> DocumentRoot C:/xampp/htdocs/ ServerName localhost ServerAdmin admin@localhost </VirtualHost> <VirtualHost 127.0.0.1:80> DocumentRoot C:/xampp/htdocs/sub/ ServerName sub.project.com ServerAdmin [email protected] </VirtualHost> <VirtualHost 127.0.0.1:80> DocumentRoot C:/xampp/htdocs/project/ ServerName project.com ServerAdmin [email protected] </VirtualHost> And this in my hosts file: # development 127.0.0.1 localhost 127.0.0.1 project.org 127.0.0.1 sub.project.org When I go to project.com in my browser, the project loads up successfully. Same if I go to sub.project.com. But, if I navigate to: http://project.com/register (one of my site pages) I get this error: Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request. The error log shows this: [Sun May 20 02:05:54 2012] [error] [client 127.0.0.1] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://project.com/ [Sun May 20 02:05:54 2012] [error] [client 127.0.0.1] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://project.com/ Any idea what config items I got wrong or how to get this working? It happens on any page that's not in the root directory of project.com. Thanks.
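
    The error log itself names two directives that make this easier to see; a hedged sketch of what could be added temporarily to the affected vhost while debugging (the usual culprit behind this particular message is a RewriteRule, often in an .htaccess file, that keeps matching its own rewritten target):

        # raise the internal redirect ceiling and log each rewrite step while debugging
        LimitInternalRecursion 20
        LogLevel debug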

    Read the article

  • MacOS creates a new mount on AFP path calls

    - by jAndy
    Hi folks, here's the scenario: in my webapp, my customers use Firefox as the target browser, and they need to open afp:// folders via JavaScript. To make a long story short, this really works: you need to set up Firefox via about:config and set the value network.protocol-handler.external.afp to true. The operating system (OS X) then takes care of that path and correctly opens a Finder window. The problem: OS X creates a new mount every time. It cannot distinguish between afp://host/path/111 and afp://host/path/222, for instance. Furthermore, even if the afp path is 100% identical, a new mount is created. It looks like this is the default behavior of OS X regardless of Firefox. So, is there any chance I can tell OS X not to create a new mount for certain subdirectories which should be accessed over afp://? Update: it looks like there are OS X applications which can change the default handler for network protocols, so you can configure "somewhere" which application OS X should call for a protocol. If that is true, wouldn't it be possible to create a script which just opens the local path without the afp:// prefix? The question is where that configuration lives, to tell OS X which application to use for a specific protocol. Any help welcome!

    Read the article

  • Host Primary Domain from a subfolder

    - by TandemAdam
    I am having a problem making a subdirectory act as the public_html for my main domain, and getting a solution that works with that domain's subdirectories too. My hosting allows me to host multiple sites, which are all working great. I have set up a subfolder under my ~/public_html/ directory called /domains/, where I create a folder for each separate website. e.g. public_html domains websiteone websitetwo websitethree ... This keeps my sites nice and tidy. The only issue was getting my "main domain" to fit into this system. It seems my main domain is somehow tied to my account (or to Apache, or something), so I can't change the "document root" of this domain. I can define the document roots for any other domains ("Addon Domains") that I add in cPanel no problem. But the main domain is different. I was told to edit the .htaccess file to redirect the main domain to a subdirectory. This seemed to work great, and my site works fine on its home/index page. The problem I'm having is that if I try to navigate my browser to, say, the images folder (just for example) of my main site, like this: www.yourmaindomain.com/images/ then it seems to ignore the redirect and shows the entire server directory in the URL, like this: www.yourmaindomain.com/domains/yourmaindomain/images/ It still actually shows the correct "Index of /images" page, and shows the list of all my images. Here is an example of the .htaccess file that I am using: RewriteEngine on RewriteCond %{HTTP_HOST} ^(www.)?yourmaindomain.com$ RewriteCond %{REQUEST_URI} !^/domains/yourmaindomain/ RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ /domains/yourmaindomain/$1 RewriteCond %{HTTP_HOST} ^(www.)?yourmaindomain.com$ RewriteRule ^(/)?$ domains/yourmaindomain/index.html [L] Does this .htaccess file look correct? I just need to make my main domain behave like an addon domain, with its subdirectories adhering to the redirect rules.

    Read the article

  • How can I control which sound card Ubuntu uses for playback?

    - by GorillaSandwich
    I am dual-booting Ubuntu 9.04 and Windows XP but am new to Ubuntu. In Windows, I use an M-Audio Audiophile 2496 sound card for recording (because it has RCA input jacks for my mixer), but I don't use it for playback (because my speakers use a 1/8 inch jack); instead, I use the motherboard's built-in sound card. I tried to recreate this arrangement in Ubuntu, but despite selecting the built-in card for all playback under System > Preferences > Sound, I still have inconsistent results. Rhythmbox plays back through the integrated card, but Flash content in the browser and games in the OS send their audio to the Audiophile card. I have seen recommendations to use a program called "Jack" to control this, but I installed it and found it baffling. How can I control which card is used for playback, other than disabling one card (as I discovered how to do and explain below)? Also, is there a GUI for disabling hardware, or is it necessary to edit a configuration file?
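
    If the aim is simply "everything plays through the on-board card unless told otherwise," a per-user ALSA default can do that without Jack. A minimal sketch, assuming aplay -l lists the on-board card as card 0 (put the two lines in ~/.asoundrc; applications routed through PulseAudio may still need the output chosen in the Sound preferences):

        # run `aplay -l` first to see the card numbers, then pin the default device
        defaults.pcm.card 0
        defaults.ctl.card 0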

    Read the article

  • Why am I getting this error in the logs?

    - by Matt
    OK, so I just started a new Ubuntu Server 11.10 instance and added the vhost, and all seems OK. I also restarted Apache, but when I visit the server IP (http://23.21.197.126/) in the browser I get a blank page, and when I tail the log (tail -f /var/log/apache2/error.log) I see: [Wed Feb 01 02:19:20 2012] [error] [client 208.104.53.51] File does not exist: /etc/apache2/htdocs [Wed Feb 01 02:19:24 2012] [error] [client 208.104.53.51] File does not exist: /etc/apache2/htdocs But my only file in sites-enabled is this: <VirtualHost 23.21.197.126:80> ServerAdmin [email protected] ServerName logicxl.com # ServerAlias DocumentRoot /srv/crm/current/public ErrorLog /srv/crm/logs/error.log <Directory "/srv/crm/current/public"> Order allow,deny Allow from all </Directory> </VirtualHost> Is there something I am missing? The document root should be /srv/crm/current/public and not /etc/apache2/htdocs as the error suggests. Any ideas on how to fix this? UPDATE: sudo apache2ctl -S gives: VirtualHost configuration: 23.21.197.126:80 is a NameVirtualHost default server logicxl.com (/etc/apache2/sites-enabled/crm:1) port 80 namevhost logicxl.com (/etc/apache2/sites-enabled/crm:1) Syntax OK UPDATE: <VirtualHost *:80> ServerAdmin [email protected] ServerName logicxl.com DocumentRoot /srv/crm/current/public <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /srv/crm/current/public/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost>

    Read the article

  • Why is Outlook 2007 resizing images in outgoing and incoming HTML email? How can I fix this?

    - by Mikuso
    In my Outlook 2007 client, embedded images in incoming emails appear resized when the message is viewed. The incoming images are resized to 198px wide, despite the original size. If it was larger, it will be shrunk; if it was smaller, it will be enlarged; the aspect ratio remains the same (the image is not stretched). This is local to my client only (i.e. another Outlook 2007 client reading the same IMAP inbox will see the image in the correct size. Similarly, viewing the email message in a browser will display the correct size). This happens regardless of whether width/height attributes are set on the image tag in the HTML. Zoom is set to 100% in my message window; text and other elements are not distorted from the original. In addition to this, outgoing HTML messages with images embedded in the same way are resized as they are sent. The outgoing images are all scaled to have a width of 247px. The source HTML of the outgoing message is changed after pressing Send so that the tag's width attribute is 247 pixels and the image file itself is also resized. These problems only occur with HTML messages; Rich Text messages do not have the same problem. I have already tried reinstalling Outlook and have it fully patched up to date. Why is this happening and how can I stop this?

    Read the article

  • SVN, Samba and Symbolic Links. How to get them all to play together?

    - by Camsoft
    I've got a website project under version control that relies on files from an unversioned directory on the same server via symbolic links. I'm currently storing the symbolic links in the repository. The idea is that if someone checks out a working copy onto the same server, they can edit and test that working copy before committing it back to the repository. When they check out their working copy, it successfully sets up the symlinks so that the entire site works when testing. The users who work on the project are Windows users, so I've set up Samba shares on the server and then mapped them to network drives in Windows. People can edit their working copies directly on the server via the network shares and then test them in the web browser before committing their changes back to the repository via TortoiseSVN. The problem: Samba resolves the symlinks as expected, but when a user tries to commit their changes back to the repository, TortoiseSVN thinks the linked files are part of the project and tries to commit the target files to the repository rather than the symlinks themselves. I tried turning off symlink support in Samba, which means the linked files cannot be resolved - I don't really want people to have access to the linked files, nor do I want to import them into the repository - but the problem with that is I get: Can't stat '\webserver\projects\working\project\symlinked_file.php'. Access is denied. Apart from the symlink problem everything works 100% perfectly. Users can either check out website projects to their own machine and work on them (but can't test), or check them out to their space on the dev web server, work on them, and fully test. So I don't want to change the workflow; I just need a solution to the symbolic link issue. Many thanks. Originally posted on StackOverflow: http://stackoverflow.com/questions/2400917/svn-samba-and-symbolic-links-how-to-get-them-all-to-play-together

    Read the article

  • Why does Windows Firewall want to block Google Chrome today?

    - by hippietrail
    I've been using the same public Wi-Fi (staying in a guesthouse) for over a week now. But this morning for the first time I got this puzzling warning from Windows Firewall: Why does Windows Firewall want to block one of the world's most popular web browsers today after being fine with it for years, and being fine with it on this connection for a week? Could it hinge on the words, some features? If so could it be something like a rare or new feature of Chrome that uses a different HTTP port? And if so why doesn't the security alert tell me any more about it? Or could it be a known bug in Windows Firewall? Or perhaps a known virus etc attaching itself to Google Chrome? Or is there a chance it's related to "Other browser makers follow Google's lead, revoke rogue certificates"? I haven't restarted Chrome for days and have downloaded but not installed a Windows update from a few days ago. So I'm not sure what may have managed to change on my machine since yesterday.

    Read the article

  • Apache httpd processes and PHP out of memory

    - by Ofri
    I have a VPS running Apache/PHP/MySQL on CentOS with a single Drupal website installed. The VPS has 256MB of RAM (which could be the root cause of all my problems... maybe I just need more). Whenever I try to open my website from multiple browser tabs (about 8... not 800) all at once, Apache crashes! I have this in the log: [Wed Oct 24 11:26:31 2012] [error] [client xxx] PHP Fatal error: Out of memory (allocated 28049408) (tried to allocate 201335 bytes) in xxx on line 2139, referer: xxx I have read many, many posts here, but I think there is something fundamental that I'm missing. If I understand correctly, some PHP script tried to allocate 200K after already allocating 28MB, and failed to do so. First question: should this cause Apache to crash? Next, I looked at 'top' while running my little test. Indeed I see 7 httpd processes, each reserving about 30MB - which explains why my RAM runs out. How do I prevent Apache from creating new processes until it runs out of memory? I tried configuring /etc/httpd/conf/httpd.conf like this: <IfModule prefork.c> StartServers 1 MinSpareServers 1 MaxSpareServers 1 ServerLimit 1 MaxClients 1 MaxRequestsPerChild 100 </IfModule> But I got the same exact result! What am I missing? Thanks a lot!
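
    The arithmetic in the post is the heart of it: roughly 30 MB per httpd child times 7-8 children already exceeds 256 MB once MySQL and the OS are counted. A hedged sketch of a prefork section sized for that much RAM (the numbers are starting guesses to tune, not a verified fix), ideally paired with a lower memory_limit in php.ini so no single request can balloon:

        <IfModule prefork.c>
            StartServers          2
            MinSpareServers       1
            MaxSpareServers       2
            MaxClients            4
            MaxRequestsPerChild 200
        </IfModule>

    One detail that may explain the "same exact result" above: if memory serves, changes to ServerLimit are only picked up by a full stop and start of httpd, not a graceful restart.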

    Read the article

  • Monitor displays at 1024x768; scrolls screen for higher resolutions

    - by Matt
    I have a dual monitor setup. Normally, they both display at 1680x1050. They have been setup this way for about a year. I'm using Windows XP Professional 2003 x64 SP2. Today, out of nowhere, one of the monitors kicked back to a lower resolution. I was not playing with any configuration at the time.. in fact all I had done was close a window (maybe a browser). But the thing is that the resolution is still preserved partially by the fact that the screen will scroll when you move the mouse. So it's like looking through a 1024x768 window into a 1680x1050 world. The monitor itself does not appear to be damaged, because I also have it connected to my netbook (via KVM) and higher resolutions work fine. I tried uninstalling/reinstalling the drivers to no avail. System restore doesn't help either. I'm unsure of the exact ATI card I'm using.. Device Manager lists it as "Radeon X300/X550/X1050". There is no Catalyst Control Center software installed. I tried to install it, but there doesn't seem to be a way to install it by itself ... it forces you to install another driver, which breaks both of my displays, forcing me to go into safe mode and run system restore again. Any ideas? Thanks

    Read the article

  • How do I run multiple MVC apps within a subdomain on IIS7?

    - by Matthew Patrick Cashatt
    Hello and thanks for looking. Background: I am currently wrapping up a development contract, and the client would like me to push a build of the application to their IIS 7-based server, on which they would like to run multiple MVC apps. One of the issues I have right off the bat is that this server is already a subdomain on their larger network. So, if I enter SERVERNAME in my browser, it automatically directs to SERVERNAME.COMPANYNAME.COM. Now, this is just fine if I place my application in the default website/root. In this scenario, clicking a link that requests admin.html directs to SERVERNAME.COMPANYNAME.COM/admin.html as usual. BUT they want me to place the app in a subdomain on this server so that they can also run other apps on the same server. So I assume that I need MYAPP.SERVERNAME.COMPANYNAME.COM, but I have no idea how to do that. Complicating matters is that my app and the future ones they wish to install are all MVC-based, which means they intercept and rewrite URLs. I assume this takes care of itself if I can just successfully get my app into a subdomain to begin with. What I have tried: creating a new site on the server in its own app pool; setting the binding for that site to MYAPP.SERVERNAME.COMPANYNAME.COM; setting it to MYAPP; setting it to MYAPP.SERVERNAME; setting it to MYAPP.SERVERNAME.COM; setting it to MYAPP.COMPANYNAME.COM. Nothing is working. Am I missing something simple here? Thanks, Matt
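
    In case it helps, the usual recipe is exactly the first thing tried above (a separate site bound to the full host name), plus a DNS record so the name actually resolves: without an A or CNAME entry for MYAPP.SERVERNAME.COMPANYNAME.COM pointing at the server, the binding alone does nothing. A hedged sketch of adding the host-header binding from the command line (appcmd syntax quoted from memory; the site name and host name are placeholders):

        %windir%\system32\inetsrv\appcmd.exe set site /site.name:"MyApp" ^
            /+bindings.[protocol='http',bindingInformation='*:80:myapp.servername.companyname.com']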

    Read the article
