Search Results

Search found 3603 results on 145 pages for 'andrew james watt'.

Page 37 of 145

  • Win7 - Some pinned programs' icons are corrupt, show default

    - by Andrew Backer
    I have both FF 3.6 & Chrome pinned to my taskbar in Win7. The icons for these two programs show up as the ugly default icon from yesteryear. Strange. Is there some way to force an icon refresh for pinned programs? When I first added them they showed properly, but several days later they reverted to this state. Un-pinning a program causes its icon to show up properly, and re-pinning it causes it to break again. These other programs show up fine: Media Center, Media Player Classic HC, Hulu Desktop, WMP, and the folders. I have 2 user accounts on this box, and both are showing this behavior. I have tried changing the taskbar icon size to 'small' and back, but with no effect. Edit (Add): The icons show up as broken in the Start menu too, but I can navigate to the EXE directly. When I click "Change Icon" in the properties for the Start menu entry I get the error: Cannot find %ProgramFiles%\Google\Chrome...\chrome.exe.
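
    A common generic remedy for stale taskbar/Start menu icons on Windows 7 is to force the icon cache to be rebuilt by deleting IconCache.db and restarting Explorer. A hedged sketch from an elevated command prompt - this is the stock cache-rebuild recipe, not a confirmed fix for this particular machine:

        taskkill /f /im explorer.exe
        del /a "%LocalAppData%\IconCache.db"
        start explorer.exe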

    Read the article

  • Windows Server 2012 Metro shortcut icons do not show for other users

    - by Andrew
    I have installed SQL Server 2012 and SharePoint 2013 on my Windows Server 2012 machine using a dedicated domain install account. When I log into the same machine with a user account, all the icons for these applications are missing! I can still access the applications by finding them in Program Files, however it is very annoying. (For example, I'm not exactly sure where the SharePoint PowerShell is located, and frankly I don't want to have to know.) In previous versions of Windows Server, the icons always showed up in the Start menu. Does anyone know how I can copy the shortcuts from one account to another?
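
    If the installers dropped their shortcuts into the install account's profile rather than the all-users Start menu, one workaround is simply to copy them across. A hedged PowerShell sketch - 'installacct' is a hypothetical profile name standing in for the dedicated install account:

        # Run elevated on the server; 'installacct' is a placeholder profile name
        $src = 'C:\Users\installacct\AppData\Roaming\Microsoft\Windows\Start Menu\Programs'
        $dst = 'C:\ProgramData\Microsoft\Windows\Start Menu\Programs'
        Copy-Item -Path "$src\*" -Destination $dst -Recurse -Force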

    Read the article

  • Apache 2.2 on Mountain Lion ignoring ProxyPass and sending request to DocumentRoot

    - by James H
    I have sickbeard running at 127.0.0.1:8081/sickbeard, and the following in httpd.conf:

        ProxyRequests Off
        ProxyPass /sickbeard http://127.0.0.1:8081/sickbeard
        ProxyPassReverse /sickbeard http://127.0.0.1:8081/sickbeard

    And yet when I try to access http://example.com/sickbeard/ it gives me a 404, with this in the error log:

        File does not exist: /Library/Server/Web/Data/Sites/Default/sickbeard

    Which I think means it's ignoring the ProxyPass and ProxyPassReverse directives? Anyone know why this may be? For what it's worth, this setup used to work under Lion. I have the following modules loaded:

        LoadModule proxy_module libexec/apache2/mod_proxy.so
        LoadModule proxy_connect_module libexec/apache2/mod_proxy_connect.so
        LoadModule proxy_ftp_module libexec/apache2/mod_proxy_ftp.so
        LoadModule proxy_http_module libexec/apache2/mod_proxy_http.so
        LoadModule proxy_scgi_module libexec/apache2/mod_proxy_scgi.so
        LoadModule proxy_ajp_module libexec/apache2/mod_proxy_ajp.so
        LoadModule proxy_balancer_module libexec/apache2/mod_proxy_balancer.so

    Thanks for your time!
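
    One thing worth checking: the error path (/Library/Server/Web/Data/Sites/Default) is the document root used by Server.app's web service, which reads its configuration from /Library/Server/Web/Config/apache2 rather than from /etc/apache2/httpd.conf. A hedged sketch of the same directives placed inside the virtual host that actually answers for example.com - the vhost block here is illustrative, and the exact file to edit depends on which Apache instance is serving the request:

        <VirtualHost *:80>
            ServerName example.com
            ProxyRequests Off
            ProxyPass        /sickbeard http://127.0.0.1:8081/sickbeard
            ProxyPassReverse /sickbeard http://127.0.0.1:8081/sickbeard
        </VirtualHost>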

    Read the article

  • How to install PHP, Pear, PECL, and APC with Homebrew on Mac OS X?

    - by Andrew
    I'm trying to install APC for PHP 5.3 in the easiest way possible. I love Homebrew so I started down that route. I was able to install PHP 5.3.6 with this command:

        brew install https://github.com/adamv/homebrew-alt/raw/master/duplicates/php.rb --with-mysql

    I think this is supposed to install PHP, PEAR, and PECL. It seems to install these just fine. Now when I try to install APC:

        $ pecl install apc
        downloading APC-3.1.9.tgz ...
        Starting to download APC-3.1.9.tgz (155,540 bytes)
        .................................done: 155,540 bytes

        Warning: require_once(Archive/Tar.php): failed to open stream: No such file or directory in PackageFile.php on line 305
        Warning: require_once(Archive/Tar.php): failed to open stream: No such file or directory in /usr/local/Cellar/php/5.3.6/lib/php/PEAR/PackageFile.php on line 305
        Fatal error: require_once(): Failed opening required 'Archive/Tar.php' (include_path='/usr/local/Cellar/php/5.3.6/lib/php') in /usr/local/Cellar/php/5.3.6/lib/php/PEAR/PackageFile.php on line 305

    How can I fix this?
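
    The fatal error means PEAR cannot find its own Archive_Tar package anywhere on the include_path shown. A hedged sketch of things to try, with paths taken from the error output above (not a confirmed fix for this particular Homebrew build):

        # Is Archive_Tar present at all under this PHP's PEAR directory?
        ls /usr/local/Cellar/php/5.3.6/lib/php/Archive/Tar.php

        # If it is missing, installing it via PEAR is one option
        pear install Archive_Tar

        # If pear itself dies with the same error, re-initialising PEAR is another
        curl -O https://pear.php.net/go-pear.phar
        php go-pear.phar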

    Read the article

  • Lynx web browser usage

    - by Andrew
    Does anyone still use the Lynx text-only web browser? It would seem useful for certain classes of low-end mobile devices, especially if one is billed per KB of data transfer.

    Read the article

  • Can I set up NAT for the same service, with two public IPs on different routers, to the same private IP?

    - by James
    This might be needlessly complex, but here goes. I've got two Firebox x550e devices. The first has a local IP of 10.0.0.1 and a public IP of 64.x.x.x. The second has a local IP of 10.0.0.10 and a public IP of 70.x.x.x. There is an FTP server on our LAN with a private IP of 10.0.0.55. I've set up NAT rules in each of the Fireboxes: on the first it is 64.x.x.x -> 10.0.0.55 tcp 21, on the second 70.x.x.x -> 10.0.0.55 tcp 21. The first rule works fine: I can FTP to 64.x.x.x and everything's good. The second rule doesn't work; FTP to 70.x.x.x results in a connection timeout, even though the second Firebox's logs say the connection is being allowed through. The default gateway on the FTP server is 10.0.0.1 (the first Firebox). If I change the default gateway on the server to 10.0.0.10, the rule on the second Firebox starts working, but the rule from the first Firebox stops. Is there some way to make this work for both rules?

    Read the article

  • How to establish real-time communication between a shopping cart running MySQL and an internal system running PostgreSQL [closed]

    - by Andrew
    I am thinking about ways of establishing some sort of real-time connection between a MySQL-powered shopping cart and an internal system running on PostgreSQL. Could you give me some insight on this topic? For example, I could write some sort of CSV export application, then enable remote MySQL access over an internet connection and import the CSV into MySQL directly from a PC, or upload the CSV and run a cron job on the server. But this kind of import-export causes delays, so I would like to link the databases (or something of that sort) instead. I have never done this before and would like to hear some opinions about it. Another thought might be to implement triggers that would initiate the update process via CSV, but again, I would like to avoid CSV. Do you have any good advice? Maybe some specific examples?
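
    As one illustration of a CSV-free link, a small script can read recently changed rows from the cart database and upsert them into the internal one (triggers or a foreign data wrapper would be alternatives). A minimal Python sketch under heavy assumptions - the hosts, credentials and the 'orders' table are hypothetical placeholders, and the bookkeeping for incremental sync is omitted:

        # Minimal one-way sync sketch: MySQL (cart) -> PostgreSQL (internal)
        from datetime import datetime, timedelta

        import pymysql      # MySQL client library
        import psycopg2     # PostgreSQL client library

        last_sync = datetime.utcnow() - timedelta(minutes=5)   # placeholder; persist this in real use

        mysql_conn = pymysql.connect(host="cart-db", user="sync", password="secret", database="cart")
        pg_conn = psycopg2.connect(host="internal-db", user="sync", password="secret", dbname="internal")

        with mysql_conn.cursor() as src, pg_conn.cursor() as dst:
            src.execute(
                "SELECT id, customer, total, updated_at FROM orders WHERE updated_at > %s",
                (last_sync,),
            )
            for row in src.fetchall():
                # Upsert; ON CONFLICT needs PostgreSQL 9.5+ and a unique constraint on id
                dst.execute(
                    "INSERT INTO orders (id, customer, total, updated_at) "
                    "VALUES (%s, %s, %s, %s) "
                    "ON CONFLICT (id) DO UPDATE SET customer = EXCLUDED.customer, "
                    "total = EXCLUDED.total, updated_at = EXCLUDED.updated_at",
                    row,
                )
        pg_conn.commit()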

    Read the article

  • How do I configure permissions for a cluster share using Powershell on 2008?

    - by Andrew J. Brehm
    I have a cluster resource of type "file share", but when I try to configure the "security" parameter I get the following error (excerpt):

        Set-ClusterParameter : Parameter 'security' does not exist on the cluster object

    Using cluster.exe I get a better result, namely the usual nothing when the command worked. But when I check in Failover Cluster Manager the permissions have not changed. In Server 2003 the cluster.exe method worked. Any ideas? Update - entire command and error:

        PS C:\> $resource=get-clusterresource testshare
        PS C:\> $resource

        Name                State       Group       ResourceType
        ----                -----       -----       ------------
        testshare           Offline     Test        File Share

        PS C:\> $resource|set-clusterparameter security "domain\account,grant,f"
        Set-ClusterParameter : Parameter 'security' does not exist on the cluster object 'testshare'. If you are trying to
        update an existing parameter, please make sure the parameter name is specified correctly. You can check for the
        current parameters by passing the .NET object received from the appropriate Get-Cluster* cmdlet to
        "| Get-ClusterParameter". If you are trying to update a common property on the cluster object, you should set the
        property directly on the .NET object received by the appropriate Get-Cluster* cmdlet. You can check for the current
        common properties by passing the .NET object received from the appropriate Get-Cluster* cmdlet to "| fl *". If you
        are trying to create a new unknown parameter, please use -Create with this Set-ClusterParameter cmdlet.
        At line:1 char:31
        + $resource|set-clusterparameter <<<< security "domain\account,grant,f"
            + CategoryInfo          : NotSpecified: (:) [Set-ClusterParameter], ClusterCmdletException
            + FullyQualifiedErrorId : Set-ClusterParameter,Microsoft.FailoverClusters.PowerShell.SetClusterParameterCommand
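
    As the error text itself suggests, the quickest way to see which parameter names the file share resource actually exposes on 2008 (and how, or whether, the security descriptor is surfaced there) is to list them:

        PS C:\> Get-ClusterResource testshare | Get-ClusterParameter
        PS C:\> Get-ClusterResource testshare | Format-List *

    The second command shows the resource's common properties, per the same error message.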

    Read the article

  • Desktop search combined with Intranet search

    - by James S
    Hello, I'm looking for software similar to Windows Desktop Search or Google Desktop that can also display results from our intranet search engine in the same manner it displays regular results (files/emails/etc.). So far I have managed to add intranet search capabilities to Windows Desktop Search, but it doesn't show the results in the program's UI; instead it requires the user to press a "Search Intranet" button that opens the browser. Would be happy to hear any suggestions. Thank you.

    Read the article

  • Copying files within a Workgroup

    - by Andrew La Grange
    I have three boxes operating in a Windows Server workgroup within a closed network (no domain, no AD). There are several variations of the scenario that I'm about to outline, but I'm sure I will be able to retool the solution as and when I need to. Essentially the boxes are:

        2 x Windows Server 2008 R2 x64 Standard
        1 x Windows Server 2000 Standard

    I need to be able to schedule the copying and/or moving of files between various directories on each of the boxes. Each box has a different username and password for the administrator. I have PowerShell 2.0 on the two Win2K8 boxes (obviously). Previously I have used mapped network drives and command-line batches to copy the files, but I'd much rather use PowerShell if possible (with shares and/or $ notation). However, the Copy-Item cmdlet doesn't seem to be processing the credential correctly. Perhaps some PowerShell gurus out there might be able to help me. Essentially I'd like to schedule a PowerShell script to push backup files onto my Win2K box (the old file server) periodically.
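
    For what it's worth, Copy-Item has no usable -Credential for the FileSystem provider; the usual pattern is to map the remote share under the remote box's credentials first, either with New-PSDrive or plain net use, and then copy. A hedged PowerShell 2.0 sketch - 'oldserver' and 'Backups' are hypothetical names for the Win2K box and its share:

        $cred = Get-Credential 'oldserver\Administrator'
        New-PSDrive -Name Z -PSProvider FileSystem -Root '\\oldserver\Backups' -Credential $cred | Out-Null

        Copy-Item -Path 'D:\Backups\*.bak' -Destination 'Z:\' -Force

        Remove-PSDrive -Name Z

        # If -Credential misbehaves on PowerShell 2.0, the old-fashioned mapping works too:
        # net use Z: \\oldserver\Backups /user:oldserver\Administrator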

    Read the article

  • Debian VM refusing all traffic apart from HTTP

    - by james lewis
    I've got a VM with a fresh install of Debian (Wheezy) and I've installed Node and Mongo on it. The VM is using a bridged network connection, so I was expecting to be able to point my host machine's browser at the IP address of the Debian VM (port 1337 for my Node example, or port 28017 for the Mongo status page) and see one of the two services (Node or Mongo). My requests are refused though. As far as I can tell Debian allows all traffic by default and you have to manually configure iptables to drop traffic. I've checked iptables and it says it's set up to allow anything through. It looks like this:

        root@devbox:/home/jlewis# iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    As a test I set up nginx and I was able to get to the nginx landing page from my host with no problems, so obviously HTTP traffic is allowed. I then set nginx up to forward all traffic upstream to Mongo - no problems there, I was able to see the status page. I then did the same for my example Node server and again, no problems. So HTTP traffic is fine, but all other traffic is blocked. Anyone know why Debian might be refusing all other traffic, other than iptables being set up to drop it? EDIT - output from netstat -nltp:

        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
        tcp        0      0 127.0.0.1:28017         0.0.0.0:*               LISTEN      1762/mongod
        tcp        0      0 0.0.0.0:51028           0.0.0.0:*               LISTEN      1541/rpc.statd
        tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      2462/sshd
        tcp        0      0 127.0.0.1:1337          0.0.0.0:*               LISTEN      2794/node
        tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      2274/exim4
        tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      1762/mongod
        tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1510/rpcbind
        tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      2189/nginx
        tcp6       0      0 :::22                   :::*                    LISTEN      2462/sshd
        tcp6       0      0 :::45335                :::*                    LISTEN      1541/rpc.statd
        tcp6       0      0 ::1:25                  :::*                    LISTEN      2274/exim4
        tcp6       0      0 :::111                  :::*                    LISTEN      1510/rpcbind
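
    Worth noting from the netstat output above: node (127.0.0.1:1337) and mongod (127.0.0.1:28017 and 27017) are bound to the loopback interface only, while nginx and sshd listen on 0.0.0.0 - which would produce exactly this "only HTTP works from outside" behaviour even with a wide-open iptables policy. A hedged sketch of binding them to all interfaces, assuming the stock Debian paths:

        # MongoDB: in /etc/mongodb.conf, change the bind address (then restart the service)
        bind_ip = 0.0.0.0

        # Node example: pass an explicit host when listening, e.g. in the example app:
        # server.listen(1337, '0.0.0.0');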

    Read the article

  • Apache2 - rewrite a bunch of specified pathname URLs to one URL

    - by James Nine
    I need to rewrite a bunch of URLs (about 100 or so) for SEO purposes, and there may be more added in the future (probably another 50-100 later on). I need a flexible way of doing this, and so far the only way I can think of is to edit the .htaccess file using the rewrite engine. For example, I have a bunch of URLs like this (please note that the query string is irrelevant and dynamic; it could be anything. I am only focusing on the pathname - the part between the hostname and the query string):

        http://example.com/seo_term1?utm_source=google&utm_medium=cpc&utm_campaign=seo_term
        http://example.com/another_seo_term2?utm_source=facebook&utm_medium=cpc&utm_campaign=seo_term
        http://example.com/yet_another_seo_term3?utm_source=example_ad_network&utm_medium=cpc&utm_campaign=seo_term
        http://example.com/foobar_seo_term4
        http://example.com/blah_seo_term5?test=1
        etc...

    And they are all being rewritten to (for now):

        http://example.com/

    What's the most efficient/effective way of doing this so that I can add more terms in the future? One solution I came across is to do this (in the .htaccess file):

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ / [NC,QSA]

    However, the problem with this solution is that even invalid URLs (such as http://example.com/blah) will be rewritten to http://example.com instead of returning a 404 (which is what it is supposed to do anyway). I'm still trying to figure out how all this works, and the only way I can think of is to write 100 more RewriteCond statements (such as RewriteCond %{REQUEST_URI} =/seo_term1 [NC,OR]) before the RewriteRule directive. For example:

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} =/seo_term1 [NC,OR]
        RewriteCond %{REQUEST_URI} =/another_seo_term2 [NC,OR]
        RewriteCond %{REQUEST_URI} =/yet_another_seo_term3 [NC,OR]
        RewriteCond %{REQUEST_URI} =/foobar_seo_term4 [NC,OR]
        RewriteCond %{REQUEST_URI} =/blah_seo_term5 [NC]
        RewriteRule ^(.*)$ / [NC,QSA]

    But that doesn't sound very efficient to me. Is there a better way?
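
    One scalable alternative for a long and growing list of terms is a RewriteMap backed by a plain text file, so adding a term means adding a line to the map instead of another RewriteCond. A hedged sketch - note that RewriteMap can only be declared in server or virtual-host context (not .htaccess), and the map path here is just an assumption:

        # In the server or <VirtualHost> config:
        RewriteEngine on
        RewriteMap seoterms txt:/etc/apache2/seo_terms.map

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond ${seoterms:$1|0} =1
        RewriteRule ^/?([^/]+)$ / [NC,QSA,L]

    where seo_terms.map lists one term per line:

        seo_term1          1
        another_seo_term2  1
        blah_seo_term5     1

    Requests whose pathname is not in the map fall through to the filesystem and 404 as before.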

    Read the article

  • Inconsistent black levels in Windows 7 Media Center

    - by James G
    I've got an HTPC running Windows 7 64-bit, hooked up to a Samsung LCD TV. My problem is that different types of video are displaying different black levels on the TV. When I play a Blu-ray through ArcSoft Total Media Theater I have to set the "HDMI Black Level" to "Normal" in the TV picture options menu. When I play recorded TV through WMC I have to set it to "Low", otherwise the black colors in the video are washed out and grey. Is there any way to configure the system so all videos are displayed with the same black level? The HDMI black level setting is deep in Samsung's menus, so it's becoming a chore to keep switching it every time I watch a different type of video. I'm using an ATI 4670 graphics card with HDMI output going straight to the TV. In the ATI Catalyst Control Center I've got the pixel format set to RGB 4:4:4 (Full RGB), since the TV won't allow me to change the HDMI black level if I choose one of the other settings.

    Read the article

  • Cause of slow download speed on a particular EC2 instance?

    - by James
    I have a networking issue I'm trying to solve. I have two EC2 instances, same zone, same type. On one of the two EC2 instances (the 'bad' instance), the download speed is really poor (200 KB/s), while on the other (the 'good' instance) the download speed is fine (comfortably 30 MB/s+). To clarify, I'm talking about downloading files to the EC2 instance while ssh'd into the server, e.g. running wget with a large file. I've tried different files, including S3 objects and a large Linux ISO from elsewhere. Running ethtool eth0 only returns 'Link detected: yes' for both. When running ifconfig, both return much the same, apart from the fact that the good instance shows no error packets while the bad instance shows many:

        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:168372370 errors:5075643 dropped:0 overruns:0 frame:0
        TX packets:122116480 errors:0 dropped:0 overruns:0 carrier:0

    Both servers are configured the same, or at least were supposed to be. How can I go about diagnosing the cause of the slow download speed? Is there anything particular to EC2 instances that could cause this? Having trouble knowing where to start. Thanks for any help!
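
    A hedged sketch of first diagnostic steps, given the RX errors shown above - all standard Linux tools, nothing EC2-specific is assumed, and the iperf target is a placeholder:

        # Per-NIC error counters, to see what kind of errors are accumulating
        ethtool -S eth0 | grep -i err

        # Protocol-level statistics (retransmits, bad segments, etc.)
        netstat -s | grep -iE 'retrans|error'

        # Raw throughput between the two instances over the private network,
        # to separate host/NIC problems from the path to the outside world
        # (run "iperf -s" on the good instance first)
        iperf -c <good-instance-private-ip>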

    Read the article

  • What are the pros and cons of AWS Elastic Beanstalk compared with other deployment strategies?

    - by James van Dyke
    I'm pretty new to the whole Netflix OSS stack and deployments in general. As background for my current level of knowledge ops-wise, my main role is as a front-end application engineer. However, I enjoy the operations side of things, so I'm attempting to set up a new deployment strategy and the tooling for a new project. Our goals:

        - Super easy deploys (we want to push a button to update production)
        - Automated deploys to test environments (using Jenkins)
        - Ease of maintenance (we have an app to write, and don't want to spend our time fiddling with production issues)
        - Ability to handle a service-oriented architecture (many small apps, various languages and data stores)
        - Enough flexibility to ensure we won't have to change strategies any time soon (we're already trying to get away from RightScale)

    We're OK with a little more initial setup time if doing so will save us some headaches in the future. So, along these lines, I've been listening to podcasts, watching ops talks, and reading tons of blog posts, and based on our goals and what I've taken to be some forming best practices, we started forming a plan using Asgard, rolling our package into a JAR and rolling that into an AMI. We had this all planned out and liked the advantages of that process versus using a Chef server and converging instances on the fly (we felt this was error-prone given our limited timeline and lack of understanding of a Chef server workflow). However, a coworker did a little looking around on his own and felt that Elastic Beanstalk met our needs. I've looked into it and spun up a test environment with a WAR file and an attached RDS database. Things seem to work, and I believe we can automate deploys to a testing environment using Jenkins via the AWS API. Seems simple enough... perhaps too simple. What I'm wondering is, what's the catch? If Elastic Beanstalk is so simple and effective, why isn't it talked about more? I'm having a hard time finding enough objective opinions and facts about the two different deployment strategies, so I thought I'd ask around.

        - Do you use Elastic Beanstalk? If so, why and what factors led to that decision? What do you like and dislike?
        - If you don't use Elastic Beanstalk but considered it, what do you use and why didn't you use Elastic Beanstalk?
        - What are the advantages and disadvantages of an Elastic Beanstalk based deployment strategy for an SOA? That is, will Elastic Beanstalk work well with many small applications that rely on each other to work?

    Read the article

  • Can I get "disk utilization" from a NetApp filer via SNMP?

    - by Andrew
    On a NetApp filer's command line I'm running "sysstat -u" to show disk utilization (actually the utilization of the single busiest disk). By disk utilization, I mean "percent of time the disk is busy", not "how much space on the disk is being used to store data/metadata". Is there a way to get disk utilization info through SNMP? The netapp.mib file doesn't appear to expose this. It does have CPU utilization, disk usage and capacity information, etc., but not disk utilization. The MIB-II (RFC 1213) seems to be the only other information exposed by the filer through SNMP. I hope I am missing something. The "CP (consistency point) time" metric is exposed through the NETAPP-MIB in SNMP, but this seems to only partially correlate with disk utilization under write load, and not really at all under read load.
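
    In case it helps anyone poking at the same question, a hedged sketch of dumping everything the filer actually exposes under NetApp's enterprise OID (1.3.6.1.4.1.789), to confirm whether any per-disk busy metric is present in a given ONTAP version - the hostname and community string are placeholders:

        snmpwalk -v 2c -c public filer01 1.3.6.1.4.1.789 > netapp-snmp-dump.txt
        grep -i busy netapp-snmp-dump.txt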

    Read the article

  • After reinstallation, Disk Cleanup disappears when I click OK.

    - by James
    After I reinstalled Windows 7, Disk Cleanup stopped working. I can start Disk Cleanup and select the drive to clean, but when I click the OK button, the window disappears. Any solutions? Here's the data from Windows Logs > Application. The first event (logged with an Information icon):

        EventData
        1744235005
        1
        APPCRASH
        Not available
        0
        cleanmgr.exe
        6.1.7600.16385
        4a5bc5e1
        Csi.dll
        14.0.4733.1000
        4b5662be
        c0000005
        00135213
        F:\Users\Jacob\AppData\Local\Temp\WER419.tmp.WERInternalMetadata.xml
        F:\Users\Jacob\AppData\Local\Microsoft\Windows\WER\ReportArchive\AppCrash_cleanmgr.exe_6514b6ecb633f97cbf78e3a5bcae2c4bd74351_0d3b109c
        0
        75fa9599-41b1-11e0-b864-001966b2bcb6
        0

    The second event (logged with an Error icon):

        EventData
        cleanmgr.exe
        6.1.7600.16385
        4a5bc5e1
        Csi.dll
        14.0.4733.1000
        4b5662be
        c0000005
        00135213
        bbc
        01cbd5be36b572bf
        F:\Windows\system32\cleanmgr.exe
        F:\Program Files\Common Files\Microsoft Shared\OFFICE14\Csi.dll
        75fa9599-41b1-11e0-b864-001966b2bcb6

    I also used Process Explorer: when I started Disk Cleanup, a cleanmgr.exe process appeared under explorer.exe. When I clicked the OK button after selecting the drive, cleanmgr.exe stayed there for some seconds before it disappeared. A new process, WerFault.exe, appeared under svchost.exe a few seconds after I clicked OK; it disappeared from the process list after some time too (I think along with cleanmgr.exe).

    Read the article

  • What are "Excess Fragments" in defragmenting a hard drive?

    - by Andrew Swift
    I'm defragmenting my hard drive (XP SP3) with PerfectDisk 7.0, and it finds 816,659 excess fragments when I ask for an analysis. [Update] Specifically, it shows that the 1 TB disk is 14% fragmented with 19,693 fragments and 816,659 excess fragments. About 20% of the disk is still free space. What does "excess fragments" refer to? What is the difference between fragments and excess fragments? I have had problems in the past where I defragmented a fragmented disk and many files were corrupted. It seemed as though "excess fragments" referred to orphan pieces that the program couldn't figure out where to put. If that were true, then defragmenting a disk would result in many incomplete files, and in fact I defragmented a disk full of MP3s and got a lot of corrupted files as a result. Instead, I started simply formatting a separate disk and copying everything from one to the other. That way there were no orphan bits and no file corruption. Does anybody know what "excess fragments" really are?

    Read the article

  • Extracting Windows executable installers on Mac OS X?

    - by James McMahon
    Is there a program or a script out there to extract the contents of a Windows installer under Mac OS X? Under Windows there is Universal Extractor; I am looking for something like that for Mac. I don't know if there is a universal solution to this problem or if I would need an extractor specific to the type of installer. In my case I am actually trying to get installers from Gog.com to extract so I can use them with Boxer.
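
    For what it's worth, GOG's downloadable installers are typically built with Inno Setup, and for that specific installer type there is a command-line extractor, innoextract, that runs on OS X. A hedged sketch assuming Homebrew is installed - whether it handles any given .exe depends on the installer technology used:

        brew install innoextract
        innoextract setup_some_game.exe    # extracts the installer's files into the current directory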

    Read the article

  • How do I back up Credential Manager passwords (Windows 7)?

    - by Andrew J. Brehm
    I am trying to create a backup of my stored passwords in Credential Manager. But after Windows switches to the secure desktop to get the password for the backup file, it simply announces that "Your stored logon credentials could not be backed up" and gives as explanation "Element not found", neither of which is helpful. (In fact I hate the "X could not Y" type of error message.) I am an administrator on the machine and there is only one password in Credential Manager. The sole point of the backup is to create a nearly empty Credential Manager, so that I don't have to manually delete hundreds of password entries every time I have to change my domain password. (I think Microsoft haven't thought this through properly. There appears to be no way to delete more than one entry at a time.) Any ideas?
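
    On the bulk-deletion point, the built-in cmdkey tool can list and delete stored credentials from a script, which may be less painful than clicking through the Control Panel UI one entry at a time. A hedged sketch - the target name below is only an example of what /list might show:

        cmdkey /list

        rem delete one stored credential by the target name shown in the listing
        cmdkey /delete:TERMSRV/someserver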

    Read the article

  • Is there a way to use an inline PNG image in an Outlook e-mail?

    - by James McMahon
    In my work as a developer I sometimes find myself sending detailed emails with screenshots to illustrate some point or problem. The content of these screenshots is often text. Knowing that PNG is much better than JPEG at compressing images of text, I save my screenshots as PNG and insert them into my email. However, whenever I check my sent mail, the images have clearly been sent as JPEG because they look horribly compressed. I'm using Outlook 2003 as my email program. Is there some setting I can change to make Outlook send inline images as PNGs?

    Read the article

  • Cannot find Power Management Tab in XP

    - by Andrew Heath
    I have the problem that when I send my computer to sleep, it wakes if you bump the table or the floor, burp, etc. I have read many threads that say to go to Device Manager > Mouse Properties > Power Management tab and uncheck the box that allows the device to wake the computer. My problem is I do not have a Power Management tab! Anyone know how to enable the tab, or otherwise stop the mouse from waking my machine? And no, turning it upside down doesn't work either!

    Read the article

  • Remote logging for multiple Apache virtual hosts using syslog-ng

    - by James
    I'm running a couple of Apache web servers that each have 4-8 separate virtual hosts on them. I'm trying to set up a dedicated log server that stores each virtual host's access and error logs in a separate directory for that virtual host. For example, on the logging server:

        /var/log/remote/10.0.0.2/virtualhost1  contains access_log and error_log
        /var/log/remote/10.0.0.2/virtualhost2  contains access_log and error_log
        /var/log/remote/10.0.0.3/virtualhost3  contains access_log and error_log

    and so on. Right now I have it split up by host, but I can't figure out how to additionally split it by virtual host. Here are the relevant lines from the logging server's syslog-ng.conf:

        source r_src { tcp(ip("0.0.0.0") port(5140)); };
        destination r_all { file("/opt/splunk/logs/$HOST"); };
        log { source(r_src); destination(r_all); };

    Any help would be appreciated. Thanks!
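
    One common way to carry the virtual host name across is to tag each Apache log at the source (for example by piping CustomLog/ErrorLog through logger with a per-vhost tag) and then use syslog-ng's $PROGRAM macro in the destination path. A hedged sketch of both halves - the tag names, facility and paths are assumptions to illustrate the pattern, and it assumes the web servers already forward their syslog traffic to the log host on port 5140:

        # On each web server, inside the <VirtualHost> block for virtualhost1:
        CustomLog "|/usr/bin/logger -t virtualhost1_access -p local6.info" combined
        ErrorLog  "|/usr/bin/logger -t virtualhost1_error  -p local6.err"

        # On the logging server, in syslog-ng.conf:
        destination r_all { file("/var/log/remote/$HOST/$PROGRAM"); };
        log { source(r_src); destination(r_all); };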

    Read the article
