Search Results

Search found 29706 results on 1189 pages for 'app launcher'.

Page 522/1189 | < Previous Page | 518 519 520 521 522 523 524 525 526 527 528 529  | Next Page >

  • Why is .htaccess not allowed in one directory but allowed in another?

    - by JD Isaacks
    I have Apache 2 installed on Ubuntu 10.04. Inside my /var/www/ directory (among others) I have cakephp and dvdcatalog directories, each of which has CakePHP 1.3 installed. I can access both via localhost/cakephp and localhost/dvdcatalog, but dvdcatalog shows up with no CSS styling. They both have these files:

        /var/www/cakephp/app/webroot/css/cake.generic.css
        /var/www/dvdcatalog/app/webroot/css/cake.generic.css

    When I go to http://localhost/cakephp/css/cake.generic.css it sees the file, but it does not see the file when I go to http://localhost/dvdcatalog/css/cake.generic.css. I think this means the cakephp folder is able to use .htaccess and dvdcatalog is not. I set up the cakephp directory last month when I was following the blog tutorial. I am setting up the dvdcatalog directory now for a different tutorial, so I am not sure if I am missing a step. In my /etc/apache2/apache2.conf file I have this:

        <Directory "/var/www/*">
            Order allow,deny
            Allow from all
            AllowOverride All
        </Directory>

    which I thought gave .htaccess to all. Does anyone have any ideas what the problem is?
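
    A couple of quick checks, sketched under the assumption of a stock Ubuntu Apache layout: confirm mod_rewrite is enabled (CakePHP's CSS URLs depend on it), then diff the .htaccess files between the working and broken installs.

        # enable mod_rewrite if it is not already, then restart Apache
        sudo a2enmod rewrite
        sudo /etc/init.d/apache2 restart

        # CakePHP 1.3 ships three .htaccess files; compare each pair
        diff /var/www/cakephp/.htaccess /var/www/dvdcatalog/.htaccess
        diff /var/www/cakephp/app/.htaccess /var/www/dvdcatalog/app/.htaccess
        diff /var/www/cakephp/app/webroot/.htaccess /var/www/dvdcatalog/app/webroot/.htaccess

    If the .htaccess files match, the difference almost has to be in the vhost or conf.d configuration that applies to one path but not the other.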

    Read the article

  • How to install service packs onto Thin Apps and other questions

    - by Buzlightyear
    I have a VM that I use to build all my ThinApps. Once an application is built, I copy just the bin folder to a server share, then create a shortcut to the program exe on users' desktops. This all seems to work OK. Firstly, is this correct?

    If so, how do I then apply a service pack to that application after the ThinApp VM has been reverted to a base image? I have Corel Draw, and a service pack for it, but it won't install; it says it can't find the package installed on the computer. I have a ThinReg batch file, so I have included Corel in that, and the program appears in the program list. I have the sandbox redirected to the server share mentioned earlier, into separate user folders created when the user logs in. Obviously, once I have done an app I revert to a previous snapshot so my ThinApp VM is back to the base image, so the Corel install is lost. Sorry if I'm missing something very simple. Is it the case that if you have an exe/msp you have to reinstall the whole app and start over? Thanks for any help.

    Read the article

  • Http-Only cookies in WebLogic: what versions support them/how and why are they supported?

    - by John
    We want to make all cookies set by our webapp HTTP-only. I only have a basic understanding of the benefits of doing this, but I'm told by security people that it's a Good Thing (tm). Our app is running under JDK 1.6.05 and WebLogic 10.3.0. After way too much digging around Oracle's website for documentation, I've found good evidence that the first version of WebLogic to support HTTP-only cookies is 10.3.1. By "support," I mean the cookie-http-only deployment-descriptor element. Before we go about upgrading, it'd be nice to have these questions answered:

    1a) Is it accurate that WL 10.3.1 is the first version to support HTTP-only cookies, and that we're out of luck with 10.3.0?
    1b) If we do indeed need to upgrade, is there an easy way to do so under Windows? I've heard people mention an "upgrade jar" that you just stick in the classpath, but I can't find any mention of this by Oracle. Does an easy way exist, or do we need to do a full install of the new version?
    2) What does the cookie-http-only deployment-descriptor element do when enabled? Will it ensure all cookies set by the application have an HttpOnly attribute? Will it do more or less? Is there anything I'll have to do programmatically?
    3) Is there anything in general I should know about HTTP-only cookies, getting my web app to take advantage of them, or other security concerns?
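
    For reference, a sketch of how that element is set in WEB-INF/weblogic.xml on versions that support it (placement per the 10.3.1-era session-descriptor syntax; verify against your version's schema):

        <weblogic-web-app xmlns="http://www.bea.com/ns/weblogic/weblogic-web-app">
            <session-descriptor>
                <!-- mark the container's session cookie as HttpOnly -->
                <cookie-http-only>true</cookie-http-only>
            </session-descriptor>
        </weblogic-web-app>

    Note this governs the container-managed session cookie; cookies your own code sets may still need the HttpOnly attribute appended programmatically.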

    Read the article

  • Will Parallel-port dongle work on USB-to-Parallel Adapter?

    - by Gary M. Mugford
    We have a niche program running on a Win2K laptop that uses a security dongle connected to a parallel port for authentication. The laptop is getting creaky, and I spent a frustrating night last night shopping various websites for a new laptop that had a parallel port. Seems I'm about three years late [G].

    The question I have is: if I buy a new(ish) laptop and use a USB-to-parallel-port adapter, will the security dongle work? I know I'm not being specific about the app, but it's one most people wouldn't have heard of anyway. I've been guessing the answer to my question is no, since the app won't know to send a request out to the non-existent port. But if the process actually is that the dongle sends a message INTO the computer every now and then, then it might work. And I'm not sure whether the dongle is only needed at program startup or randomly. The dongle is a 'permanent' addition to the old laptop.

    This is all about the money. We can have a newly-updated version of the program (which won't add any features we need) for the princely sum of $2700. Or we can spend $500 on a refurbed laptop still running WinXP, add a 30-buck adapter, and keep the same solid, stolid performance we've come to appreciate. But it all comes down to the dongle behaviour. Oh, and a dock won't work; the whole laptop issue is about moving about the various nooks and crannies of the building with laptop in hand. Thanks for any suggestions/guidance. GM

    Read the article

  • 32 cores (all physical) at 2.2 GHz or 12 cores (6 physical) at 3.0 GHz?

    - by Tejaswi Rana
    I am working on a multithreaded application (a Forex trading app built in C#) and had the client upgrade from a 12-core 3.0 GHz machine (Intel) to a 32-core 2.2 GHz machine (AMD). The PassMark benchmark results were significantly higher when using multiple cores doing integer, floating-point and other calculations, while for a single-core calculation it was a bit slower than the pack (others being compared with a similar config to the 12-core one). It also comes with 64 GB of RAM (four times the other machine) and a much faster SSD.

    So after configuring and running the application on that machine, not only did it not perform as well, it was significantly slower. We're talking about 30 seconds to 1 minute slower on an app that usually completes processing within 5-20 seconds. The application uses MaxDegreeOfParallelism (TPL), which I've tried setting to the number of cores and also to half of that. I've also tried running single-threaded and without setting any limits on parallel threading. While the hardware may have some issues, I am wondering if the CPU clock speed is the issue. I can overclock to 3.0 GHz, but is that even a good idea?

    Server info - AMD: http://www.passmark.com/forum/showthread.php?4013-AMD-Dual-6272-performance-is-60-lower-than-benchmarks (seems that benchmark was wrong to start with - officially). The Intel is an i7 3930K. OS (same on both): Windows 7 Professional 64-bit.
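
    For what it's worth, a minimal sketch of capping TPL parallelism the way the question describes (the workload is a stand-in for the real trading calculations):

        using System;
        using System.Threading.Tasks;

        class ParallelismDemo
        {
            static void Main()
            {
                int n = 1000000;
                var results = new double[n];

                // One of the settings tried: cap at half the logical cores.
                var options = new ParallelOptions
                {
                    MaxDegreeOfParallelism = Environment.ProcessorCount / 2
                };

                // Dummy CPU-bound work standing in for the real calculations.
                Parallel.For(0, n, options, i =>
                {
                    results[i] = Math.Sqrt(i) * Math.Sin(i);
                });

                Console.WriteLine("done: " + results[n - 1]);
            }
        }

    On a dual-socket Opteron box like this, cross-socket memory locality can matter more than raw core count, so sweeping this cap across several values is worth doing before resorting to overclocking.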

    Read the article

  • Why can't I unblock postgres with shorewall?

    - by ryeguy
    I can't seem to unblock the port needed for Postgres using Shorewall. I am developing a PHP app on my Windows machine here, and then I upload it to my Linux box to actually use it. The Linux box runs the PHP files as well as hosting the DB server. Since I need it working from both machines, in my PHP code I refer to the database by its full IP instead of localhost. I can easily connect to Postgres from my Windows machine, but ironically, my PHP app can't connect to Postgres even though it's on the same box. Here's what I have in /etc/shorewall/rules:

        #macro/action        src    dest
        PostgreSQL/ACCEPT    net    $FW
        PostgreSQL/ACCEPT    loc    $FW
        PostgreSQL/ACCEPT    loc    dmz
        PostgreSQL/ACCEPT    net    dmz
        PostgreSQL/ACCEPT    loc    net
        PostgreSQL/ACCEPT    dmz    $FW
        PostgreSQL/ACCEPT    dmz    loc
        PostgreSQL/ACCEPT    dmz    net
        PostgreSQL/ACCEPT    dmz    dmz

    Clearly I have a ton of crap there. The first line is all I needed to make it allow a connection from my Windows machine. All the lines after it are me just trying everything to get it to work. What am I missing?
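
    One gap in that list, offered as a guess: a connection made on the box itself to its own public IP originates from the firewall zone, so none of the net/loc/dmz source rules match it. A sketch of the missing rule, using the same PostgreSQL macro:

        #macro/action        src    dest
        PostgreSQL/ACCEPT    $FW    $FW

    followed by a shorewall restart to load it.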

    Read the article

  • Mod_rewrite (CakePHP routing functionality) forbidden after Snow Leopard upgrade

    - by Ryan Ballantyne
    Hello ServerFault, I am using the standard Apple-provided installations of PHP 5.3 and Apache 2 to do web development on a Mac Pro that I just upgraded to Mac OS X 10.6 (Snow Leopard). The upgrade went well enough, if I ignore the fact that it destroyed my ability to get work done. ;)

    After the update, the CakePHP application I was developing started giving me 403 Forbidden errors when accessed. Based on the errors in the log file, I've determined that Apache is choking on the mod_rewrite rules in Cake's .htaccess file. Here's the file, in its entirety:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteRule ^$ app/webroot/ [L]
            RewriteRule (.*) app/webroot/$1 [L]
        </IfModule>

    It's not that the rules themselves are wrong, but that Apache is forbidding the use of mod_rewrite altogether. All other pages on the machine work fine, and the 403 errors go away if I comment out the .htaccess file (but nothing works, of course). In my httpd.conf file, I've tried changing this:

        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
        </Directory>

    to this:

        <Directory />
            Options FollowSymLinks
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>

    ...which has no effect. I don't know much about Apache configuration files, and I'm quite stuck on this. In fact, I know little enough that I'm not sure which information about my setup is needed to enable people to provide useful answers. I'm just using the vanilla OS X setup, nothing fancy. Googling has yielded no fruits for me this time, so I'm turning to you. Any ideas?
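
    A sketch of a more targeted change, assuming the default DocumentRoot: AllowOverride needs to be All on the <Directory> block that actually maps the site, since a more specific block overrides <Directory />.

        <Directory "/Library/WebServer/Documents">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    If the app lives under ~/Sites instead, the per-user file in /etc/apache2/users/<username>.conf is the one the Snow Leopard upgrade reportedly resets, and it needs the same AllowOverride All.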

    Read the article

  • IIS7: large worker process request queue creating request blocking; aspnet.config & machine.config amended (bottleneck)

    - by scott_lotus
    ASP.NET 2.0 app, .NET 2.0 framework, IIS7. I am seeing a large queue of requests appear under the worker process option. The states recorded are mostly Authenticate Request and Execute Request Handler. I have amended aspnet.config in C:\Windows\Microsoft.NET\Framework64\v2.0.50727 (32-bit path and 64-bit path) to include:

        maxConcurrentRequestsPerCPU="50000"
        maxConcurrentThreadsPerCPU="0"
        requestQueueLimit="50000"

    I have amended machine.config in C:\Windows\Microsoft.NET\Framework64\v2.0.50727\CONFIG (32-bit and 64-bit path) to include:

        autoConfig="true"
        maxIoThreads="100"
        maxWorkerThreads="100"
        minIoThreads="50"
        minWorkerThreads="50"
        minFreeThreads="176"
        minLocalRequestFreeThreads="152"

    Still I get the issue. It manifests as a large number of processes in the worker process queue. The number of current connections to the website shows 500 when this occurs; I don't think I have seen concurrent connections over 500 without the issue occurring. The web application slows as the requests block. Recycling the app pool resolves it for a while (as expected) as the load is spread between the two pools. The application pool in question has FIXED REQUESTS set to recycle at 50000. Thank you for any help. Scott

    Quick edit: my developers are telling me the project was built against the .NET 3.5 framework. Looking at C:\Windows\Microsoft.NET\Framework64\v3.5, there does not appear to be an aspnet.config or a machine.config... is there a 3.5 equivalent? After a little searching: apparently 3.5 uses the 2.0 framework files for the ones it is missing. So back to the original question: where is my bottleneck?
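
    For context, a sketch of where those attributes normally live (standard .NET 2.0/3.5 locations; values taken from above). In aspnet.config:

        <configuration>
            <system.web>
                <applicationPool maxConcurrentRequestsPerCPU="50000"
                                 maxConcurrentThreadsPerCPU="0"
                                 requestQueueLimit="50000" />
            </system.web>
        </configuration>

    and in machine.config (the first two settings belong to processModel, the last two to httpRuntime):

        <processModel autoConfig="true"
                      maxIoThreads="100" maxWorkerThreads="100"
                      minIoThreads="50" minWorkerThreads="50" />
        <httpRuntime minFreeThreads="176" minLocalRequestFreeThreads="152" />

    One caveat worth checking: as far as I know, autoConfig="true" tells the runtime to manage the processModel thread limits itself, so the explicit values may be ignored until it is set to false.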

    Read the article

  • Stop Picasa (Mac) from scanning my hard drive

    - by Bodyscanner
    I want to use the Picasa desktop app instead of the tedious, clunky web interface to share a couple of photos. Every time I launch Picasa, it opens an annoying pop-up/tool-tip which flicks through every file on my HD, using up to 95% of CPU. I don't want this, so I click the X. It appears again. I try to drag it somewhere less annoying onscreen, but it pings back. I look in prefs for an option to turn it off. I give up and quit the app until a new build comes out, which I download, and repeat the above. WTF?!

    I understand Google can't be as cool as Apple - iPhoto isn't perfect by any means, but at least it looks nice and 'just works'. I want to launch Picasa, not have it go through everything, not have thousands of random pics and HD cruft on display in the list, and then perhaps drag in a few photos and upload them. Any idea if that is possible? </rant>

    Read the article

  • Installing sqlite gem fails on AWS Linux instance with sqlite-devel libraries installed

    - by Scott
    Hi, I'm running an instance built off ami-595a0a1c. I am trying to install the sqlite3 (or sqlite) gem and it's failing with the error below:

        $ sudo gem install sqlite3
        Building native extensions.  This could take a while...
        ERROR:  Error installing sqlite3:
            ERROR: Failed to build gem native extension.
        /usr/bin/ruby extconf.rb
        checking for sqlite3.h... no
        sqlite3.h is missing. Try 'port install sqlite3 +universal'
        or 'yum install sqlite3-devel' and check your shared library search path (the
        location where your sqlite3 shared library is located).
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers.  Check the mkmf.log file for more
        details.  You may need configuration options.

        Provided configuration options:
            --with-opt-dir
            --without-opt-dir
            --with-opt-include
            --without-opt-include=${opt-dir}/include
            --with-opt-lib
            --without-opt-lib=${opt-dir}/lib
            --with-make-prog
            --without-make-prog
            --srcdir=.
            --curdir
            --ruby=/usr/bin/ruby
            --with-sqlite3-dir
            --without-sqlite3-dir
            --with-sqlite3-include
            --without-sqlite3-include=${sqlite3-dir}/include
            --with-sqlite3-lib
            --without-sqlite3-lib=${sqlite3-dir}/lib

        Gem files will remain installed in /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3 for inspection.
        Results logged to /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3/ext/sqlite3/gem_make.out

    Typically, this just means you need to install the development libraries and everything is cool. However, I have installed the sqlite-devel packages and still no dice. Since this is an Amazon Linux instance, I'd rather not add more repositories than the ones Amazon provides, if possible. What can I do to get this thing to compile? Thanks for any insight!

    From a brand new instance, here's what I've done:

        $ sudo yum install rubygems ruby-devel
        $ sudo gem update --system
        $ sudo gem install rails
        $ rails new app
        $ cd app
        $ rails server
        Could not find gem 'sqlite3 (= 0)' in any of the gem sources listed in your Gemfile.
        $ sudo yum install sqlite-devel
        $ sudo gem install sqlite (or sqlite3 -- same result)

    See breakage above.
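
    A sketch of what tends to work on yum-based systems (package names assumed to exist in the Amazon repos): make sure the compiler toolchain is present, then point the gem at the headers explicitly using the options the error output itself lists.

        sudo yum install -y gcc make sqlite-devel ruby-devel
        sudo gem install sqlite3 -- --with-sqlite3-include=/usr/include \
                                    --with-sqlite3-lib=/usr/lib64

    If that still fails, mkmf.log will show the exact compile line that could not find sqlite3.h, which usually points at the missing piece.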

    Read the article

  • Unzipping archives, preserving folder hierarchy

    - by Hydrangea
    I've got a problem and am not sure what it is, but hope someone can help me think this through, because this has me stumped.

    Backstory: I wrote a Java app (Android) that unzips some zip files downloaded from the network. Until now, this was working great. Then, this week, the archives that I'm creating on my PC (in Ubuntu 12.04) unzip on the Android phone into a flat hierarchy instead of preserving the folders. I'm creating the archives the same way (right-click on folder, Compress), but even though my old archives (created in 10.04) still unzip as expected, the new ones don't. On Ubuntu, the new zip files look the same to me as the old ones. When unzipped on my PC, the folders in these new archives are restored the same as the old ones... it's the Android app that extracts the old ones fine and the new ones flat. What I really want to know, though, is what the difference between the archives is.

    Question: How could one determine why one zip archive would be extracted with folder hierarchy preserved, when an identical one (to all appearances on Ubuntu 12.04) is extracted with no hierarchy? Are there different ways in which a .zip file can "have" folders, but Ubuntu doesn't distinguish between them?
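
    One way to compare the two archives directly is with the Info-ZIP tools, which show exactly how each entry's path is stored (archive names hypothetical):

        # list stored entry names: a "flat" archive typically has no directory
        # entries and/or no slashes in the stored file names
        unzip -l old.zip
        unzip -l new.zip

        # verbose per-entry metadata (version made by, external attributes, ...)
        zipinfo -v new.zip | less

    A usual suspect is an archive that stores bare file names with no explicit directory entries, which strict extractors flatten; that difference shows up clearly in these listings.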

    Read the article

  • How do I get the F1-F12 keys to switch screens in gnu screen in cygwin when connecting via SSH?

    - by Mikey
    I'm connecting to a desktop running Cygwin via SSH from the Terminal app in Mac OS X. I have already started screen on the Cygwin side and can connect to it over the SSH session. Furthermore, I have the following in the .screenrc file:

        bindkey -k k1 select 1 # F1 = screen 1
        bindkey -k k2 select 2 # F2 = screen 2
        bindkey -k k3 select 3 # F3 = screen 3
        bindkey -k k4 select 4 # F4 = screen 4
        bindkey -k k5 select 5 # F5 = screen 5
        bindkey -k k6 select 6 # F6 = screen 6
        bindkey -k k7 select 7 # F7 = screen 7
        bindkey -k k8 select 8 # F8 = screen 8
        bindkey -k k9 select 9 # F9 = screen 9
        bindkey -k F1 prev # F11 = prev
        bindkey -k F2 next # F12 = next

    However, when I start multiple windows in screen and attempt to switch between them via the function keys, all I get is a beep. I have tried various settings for $TERM (e.g. ansi, cygwin, xterm-color, vt100) and they don't really seem to affect anything. I have verified that the Terminal app is in fact sending the escape sequence for the function key I'm expecting, and that my bash shell (running inside screen) is receiving it. For example, for F1 it sends the following (hexdump is a Perl script I wrote that takes STDIN in binmode and outputs it as a hexadecimal/ASCII dump):

        % hexdump
        [press F1 and then hit ^D to terminate input]
        00000000: 1b4f50    .OP

    If things were working correctly, I don't think bash should receive the escape sequence, because screen should have caught it and turned it into a command. How do I get the function keys to work?
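
    Since the terminal is demonstrably sending ESC O P (0x1b 0x4f 0x50) for F1, one workaround sketch is to bind the raw sequences instead of the termcap k1..k9 names, which only match when screen's notion of $TERM agrees with what the terminal actually sends:

        # bind xterm-style function keys by their literal escape sequences
        bindkey "\033OP" select 1    # F1 = ESC O P
        bindkey "\033OQ" select 2    # F2
        bindkey "\033OR" select 3    # F3
        bindkey "\033OS" select 4    # F4

    F5 and above use CSI-style sequences (e.g. ESC [ 1 5 ~ for F5) that vary by terminal, so check each one with the same hexdump trick before binding it.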

    Read the article

  • how to properly set environment variables

    - by avorum
    I've recently started using Windows (having used Ubuntu up until now) and I find myself unable to properly set environment variables. Whenever I set them, they don't seem to work. I've been going to Start -> "Edit environment variables for your account" and editing the PATH value in the upper half of the GUI. Here's what I've got so far:

        ;C:\Chocolatey\bin;C:\tools\mysql\current\bin;C:\Program Files (x86)\Git\bin;C:\Program Files\MySQL\MySQL Server 5.6\bin\;C:\Python33\Scripts;

    These are each the parent directories of the executables I'd like to be able to run by name from CMD, but mysql, git, and pip aren't being recognized. Am I doing something wrong syntactically, or at a general understanding level? I'd like to be able to run these commands without having to specify the full path to the executables every time.

    EDIT: The full PATH extracted from CMD:

        PATH=C:\Python33\;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files (x86)\AMD APP\bin\x86_64;C:\Program Files (x86)\AMD APP\bin\x86;C:\Program Files\Common Files\Microsoft Shared\Windows Live;C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\GTK2-Runtime\bin;C:\Program Files\WIDCOMM\Bluetooth Software\;C:\Program Files\WIDCOMM\Bluetooth Software\syswow64;C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static;C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\;C:\Program Files\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files (x86)\QuickTime\QTSystem\;C:\Program Files (x86)\Common Files\Acronis\SnapAPI\;C:\Program Files (x86)\Java\jre7\bin;C:\Program Files (x86)\Windows Kits\8.1\Windows Performance Toolkit\;C:\Program Files (x86)\Microsoft SDKs\TypeScript\;C:\Program Files (x86)\MySQL\MySQL Utilities 1.3.4\; ;C:\Chocolatey\bin;C:\tools\mysql\current\bin

    I'm being forced to use Windows by my work environment; I don't enjoy the state of affairs.
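
    Two quick checks from a NEW CMD window (PATH changes only reach consoles opened after the edit); these are built-in commands, nothing assumed beyond the paths above:

        rem confirm the edited entries actually made it into the live PATH
        echo %PATH%

        rem 'where' reports which file (if any) each command name resolves to
        where git
        where mysql
        where pip

    Notably, the PATH dump above contains C:\Chocolatey\bin and C:\tools\mysql\current\bin but not the Git, MySQL Server, or Python33\Scripts entries, which is consistent with the console predating the most recent edit.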

    Read the article

  • fedora apache/nginx pylons

    - by microchasm
    I'm trying to wrap my head around Pylons and how it works. So far... it's been confusing. I'm using EC2 with Fedora 8. Everything is working so far (i.e. I have Pylons/Python et al. installed, and after creating a test app and running paster serve I can access the default page via my domain name). As the Pylons docs explain, and as I understand it, the built-in paster serve server is not suited for a production environment. What I am not clear on, then, is what to do next. It seems like nginx is a good option, but I am more familiar with Apache (like .0002%). I plan on having virtual hosts (which nginx says it can accommodate). However, I am totally unclear on how the big picture is supposed to work. In order to serve an app, does paster serve need to be running? Does nginx/Apache then basically just act as a proxy to shuttle connections to the paster server? How do I start it so it doesn't terminate after closing the SSH connection? If running multiple apps, what do I set as the host/port in development.ini to differentiate the apps? Or, if this is not the right way, how do I differentiate between apps? I am more familiar with MySQL, but willing to negotiate PostgreSQL if it's a better fit. Is it? Is virtualenv a prerequisite to running multiple apps on the same machine? Thanks in advance for any tips.
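
    A sketch of the proxy arrangement, assuming paster serve is kept running (e.g. started with nohup or inside a screen session) on 127.0.0.1:5000; each additional app would get its own port and its own server block:

        server {
            listen 80;
            server_name app1.example.com;    # hypothetical vhost

            location / {
                # must match the host/port in that app's development.ini
                proxy_pass http://127.0.0.1:5000;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    So yes: paster (or any WSGI server) keeps running, and nginx/Apache only shuttles connections to it. To survive the SSH logout, something like nohup paster serve development.ini & (or a proper supervisor) keeps the process alive.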

    Read the article

  • Skydrive for desktop not showing icon in notification area

    - by jeruntime
    Is this normal behavior? I'm concerned that some files will not be transferred. For example, I need to be sure it's safe to put my laptop to sleep, so that later, when I'm on my desktop, I won't have to open up my laptop and sync again because I closed it too early. The icon doesn't even appear when I THINK it is syncing. Thanks for any help in advance.

    Edit 1: Yeah, I know the app isn't required or even offered anymore, but what I was wondering was whether there is some way to tell if it's syncing or not. The old desktop app had an icon in the notification area with an animation for when it was syncing, but now there isn't anything there, although when I try to make it appear it says the application isn't active and the icon will show the next time it is. Its version is 17.0.2015.0811. Is it possible to force the icon to show in the notification area if it isn't designed to show up there at all? I basically just want the same functionality as SkyDrive on Windows 8.

    Read the article

  • Is there a way to use something similar to a capture group for apache2 server name

    - by Zipper
    I have a server that sits behind an AWS load balancer. The LB can't do automatic redirects from HTTP to HTTPS, and the LB is doing my SSL. So I need to set up Apache on my servers to redirect any request on port 80 to https://FOOBAR, where FOOBAR is the domain that came in. I haven't been able to find a way of doing that so far; I'm an Apache newb, though. What I'm trying to do is something like this (using regex as an example):

        <VirtualHost *:80>
            ServerName (.*)
            Redirect / https://\1
        </VirtualHost>

    If there's a better way to do this, please let me know.

    EDIT: Sorry, I should have explained why this is happening. I actually have a Tomcat server running my app on port 8080, and the LB points to that. From what I can tell so far, my requests come in on HTTP (which is expected), but when my app server sends redirects (for login purposes) it tries to redirect to HTTP instead of HTTPS. I haven't had a chance to fully investigate this, but I wanted to work around it for now by pointing the LB at the Apache server and having any port-80 requests redirect to 443.

    EDIT 2: The other reason I'm interested in doing this is that, since the LB can't do the redirect, I need another redirect mechanism in place to tell the browser to go to https://FOOBAR.
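
    ServerName doesn't accept a regex, but mod_rewrite can express the same intent; a sketch of a catch-all port-80 vhost (assuming mod_rewrite is enabled):

        <VirtualHost *:80>
            RewriteEngine On
            # redirect to HTTPS on whatever Host header the request arrived with
            RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R=301,L]
        </VirtualHost>

    With no ServerName set, this vhost matches any hostname the LB forwards, and %{HTTP_HOST} plays the role of the captured domain.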

    Read the article

  • Widespread misinterpretation of DNS rules in resolving wildcards

    - by Dominic Sayers
    [EDITED to add: This problem has gone away on its own. I believe Cloudflare's name resolution may have been to blame. See my own answer below.]

    Here is a snippet of my zone file:

        *.example.com.      300 IN CNAME proxy.herokuapp.com.
        foo.example.com.    300 IN A     111.111.111.111

    If I dig @8.8.8.8 foo.example.com I get the answer I expect:

        ;; ANSWER SECTION:
        foo.example.com.    30  IN  A   111.111.111.111

    The same is true of all other public DNS servers I've tried. However, when I try to set up a check with Pingdom against a URL on foo.example.com, it instead sends the traffic to my Heroku app referenced by the *.example.com RR. The same is true of checks set up on New Relic, Errplane, and traffic generated by the Heroku app itself. So on the one side, all public DNS servers interpret the zone file one way; yet four service providers all interpret it a different way, one that differs from the standard suggested by RFC 4592. My question is: are these reputable, mature service providers all wrong? Or is it little me?

    Read the article

  • How do I keep a table in sync across multiple SQL Databases?

    - by Refracted Paladin
    I have a WinForms data-entry application that uses four separate databases. It is an occasionally-connected app that uses merge replication (SQL 2005) to stay in sync, and that part is working just fine. The next hurdle I am trying to tackle is adding filters to my publications. Right now we are replicating 70 MB, compressed, to each of our 150 subscribers when, truthfully, they only need a tiny fraction of that. Using filters I am able to accomplish this (see code below), but I had to make a mapping table in order to do so. The mapping table consists of three columns: a PrimaryID (guid), WorkerName (varchar), and ClientID (int). The problem is that I need this table present in all FOUR databases in order to use it for the filter, since, to my knowledge, views or cross-db queries are not allowed in a filter statement.

    What are my options? It seems like I would set it up to be maintained in one database and then use triggers to keep it updated in the other three databases. In order to be part of the filter, I have to include that table in the replication set, so how do I flag it appropriately? Is there a better way altogether?

        SELECT <published_columns> FROM [dbo].[tblPlan]
        WHERE [ClientID] IN
            (SELECT ClientID FROM [dbo].[tblWorkerOwnership]
             WHERE WorkerID = SUSER_SNAME())

    This allows you to chain filters together; the next one sits below the first, so it only pulls from the first's filtered set:

        SELECT <published_columns> FROM [dbo].[tblPlan]
        INNER JOIN [dbo].[tblHealthAssessmentReview]
            ON [tblPlan].[PlanID] = [tblHealthAssessmentReview].[PlanID]

    P.S. - I know how illogical the DB structure sounds. I didn't make it; I inherited it and was then told to make it a "disconnected app." Go figure!
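
    As a sketch of the trigger idea (table names per the filter above; "OtherDb" and the trigger name are hypothetical, and SQL 2005 predates MERGE, hence the two-statement sync):

        -- maintained copy lives in this DB; repeat the pair per target database
        CREATE TRIGGER trg_SyncWorkerOwnership
        ON dbo.tblWorkerOwnership
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;

            -- remove rows that no longer exist in the source
            DELETE t
            FROM OtherDb.dbo.tblWorkerOwnership AS t
            WHERE NOT EXISTS (SELECT 1 FROM dbo.tblWorkerOwnership AS s
                              WHERE s.PrimaryID = t.PrimaryID);

            -- add rows that are new in the source
            INSERT INTO OtherDb.dbo.tblWorkerOwnership (PrimaryID, WorkerName, ClientID)
            SELECT s.PrimaryID, s.WorkerName, s.ClientID
            FROM dbo.tblWorkerOwnership AS s
            WHERE NOT EXISTS (SELECT 1 FROM OtherDb.dbo.tblWorkerOwnership AS t
                              WHERE t.PrimaryID = s.PrimaryID);

            -- a real version would also UPDATE rows whose WorkerName/ClientID changed
        END;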

    Read the article

  • One server running Django (with Nginx and Apache) and Wordpress Blog

    - by JCWong
    I have nginx listening on port 80 for my primary site, foo.com. It proxies to port 8080, which is where the Django app lives:

        server {
            listen 80;
            server_name www.foo.com foo.com;

            access_log /home/jeffrey/www/ddt/logs/nginx_access.log;
            error_log /home/jeffrey/www/ddt/logs/nginx_error.log;

            location / {
                proxy_pass http://127.0.0.1:8080;
                include /etc/nginx/proxy.conf;
            }

            location /media/ { root /home/jeffrey/www/ddt/; }
            location /static/ { root /home/jeffrey/www/ddt/; }
            location /public/ { root /home/jeffrey/www/ddt/; }
        }

    I'd like to have a Wordpress blog run on the same server. Apache is listening on port 8080 with this http.conf file:

        NameVirtualHost *:8080

        WSGIScriptAlias / /home/jeffrey/www/ddt/apache/ddt.wsgi
        WSGIPythonPath /home/jeffrey/www/ddt

        <Directory /home/jeffrey/www/ddt/apache/>
            <Files ddt.wsgi>
                Order deny,allow
                Allow from all
            </Files>
        </Directory>

    I added my Wordpress site using a virtual host:

        <VirtualHost *:8080>
            ServerName www.bar.com
            ServerAlias bar.com
            DocumentRoot /home/jeffrey/www/jeffrey_wp
        </VirtualHost>

    When I go to bar.com I still see my Django app. Is it possible for these two sites to run on the same server?
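
    Since nginx owns port 80, requests for bar.com hit the foo.com server block (the only one defined) and never reach Apache's name-based vhosts with the right intent. A sketch of a second nginx server block that forwards bar.com through:

        server {
            listen 80;
            server_name www.bar.com bar.com;

            location / {
                proxy_pass http://127.0.0.1:8080;  # Apache picks the vhost by Host header
                include /etc/nginx/proxy.conf;     # assuming this passes the Host header
            }
        }

    If proxy.conf doesn't already set it, adding proxy_set_header Host $host; inside the location block is what lets NameVirtualHost *:8080 route the request to the Wordpress vhost instead of the WSGI default.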

    Read the article

  • Haproxy not properly passing on X-Forwarded-For header

    - by JesseP
    I have backend web servers that receive requests by way of haproxy -> nginx -> fastcgi. The web app used to see multiple IPs coming through in the X-Forwarded-For header, chained together with commas (most original IP on the left). At some point in the recent past (just noticed, so not sure what caused it) something changed, and now I'm only seeing a single IP passed in the header to my web application. I've tried haproxy 1.4.21 and 1.4.22 (recent upgrade) with the same behavior. Haproxy has the forwardfor option set:

        option forwardfor

    Nginx's fastcgi_params config defines this header to be passed to the app:

        fastcgi_param HTTP_X_FORWARDED_FOR $http_x_forwarded_for;

    Anyone have any ideas on what might be going wrong here?

    EDIT: I just started logging the $http_x_forwarded_for variable in nginx logs, and nginx is only ever seeing a single IP, which shouldn't ever be the case, as we should always see our haproxy IP added in there, right? So the issue must be either in nginx's handling of the variable coming in, or haproxy not building it properly. I'll keep digging...

    EDIT #2: I enabled request and response header logging in HAProxy, and it is not spitting anything out for X-Forwarded-For, which seems very odd:

        Oct 10 10:49:01 newark-lb1 haproxy[19989]: 66.87.95.74:47497 [10/Oct/2012:10:49:01.467] http service/newark2 0/0/0/16/40 301 574 - - ---- 4/4/3/0/0 0/0 {} {} "GET /2zi HTTP/1.1"

    Here are the options I set for this in my frontend:

        mode http
        option httplog
        capture request header X-Forwarded-For len 25
        capture response header X-Forwarded-For len 25
        option httpclose
        option forwardfor

    EDIT #3: It really seems like haproxy is munging the header and just passing a single one on to the backend. This has a fair impact on our production service, so if anyone has any ideas it would be greatly appreciated. I'm stumped... :(
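
    Two observations, offered as guesses rather than a confirmed diagnosis. First, the empty {} captures are expected: header captures record what the client sent, and haproxy appends its own X-Forwarded-For on the way out to the server, so an empty capture doesn't prove the header is missing. Second, in haproxy 1.4 only requests haproxy actually processes get the header annotated, so connection-mode options are worth a look; a sketch of frontend options that force per-request processing:

        mode http
        option httplog
        option forwardfor           # append the client IP on every processed request
        option http-server-close    # process and annotate each request, not just
                                    # the first one on a keep-alive connection

    Also worth checking: len 25 truncates a chain of more than one or two IPv4 addresses, so a longer capture length (e.g. len 64) gives a truer picture while debugging.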

    Read the article

  • Can't set screen brightness in Gentoo system

    - by Real Yang
    My system:

        Linux gentoo 3.10.7-gentoo-r1
        VGA compatible controller: NVIDIA Corporation GT216M [GeForce GT 240M] (rev a2)

    Output of xbacklight:

        No outputs have backlight property

    Output of xrandr:

        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 640 x 480, current 1280 x 720, maximum 1280 x 768
        default connected 1280x720+0+0 0mm x 0mm
           1280x720        0.0*
           1024x768       61.0
           800x600        61.0
           640x480        60.0
           1280x768        0.0

    Output of ls /proc/acpi:

        button/  event

    When I was on kernel 3.8.13, I could change the brightness using xbacklight. I compiled 3.10.7-r1 using genkernel all. Before the upgrade I did get a notice about "compatibility issues for Nvidia users" from emerge, but I still don't know the details. Is there any way for me to set the brightness?

    Then I found the ebuild app-laptop/nvidiabl-0.81 and tried to emerge nvidiabl, and got this message:

        Your kernel does not support FB_BACKLIGHT. To enable it you can enable any
        frame buffer with backlight control or nouveau. Note that you cannot use
        FB_NVIDIA with nvidia's proprietary driver.
        Please check to make sure these options are set correctly. Failure to do
        so may cause unexpected problems. Once you have satisfied these options,
        please try merging this package again.
        ERROR: app-laptop/nvidiabl-0.81::gentoo failed (pretend phase):
          Incorrect kernel configuration options
        Call stack:
          ebuild.sh, line 93: Called pkg_pretend
          nvidiabl-0.81.ebuild, line 31: Called linux-mod_pkg_setup
          linux-mod.eclass, line 559: Called linux-info_pkg_setup
          linux-info.eclass, line 911: Called check_extra_config
          linux-info.eclass, line 805: Called die
        The specific snippet of code:
          die "Incorrect kernel configuration options"

    [SOLVED] I entered menuconfig again, checked Device Drivers -> Graphics support -> Support for frame buffer devices, and found this:

        <*> nVidia Framebuffer Support
        [*]   Support for backlight control (NEW)

    What can I say. Recompiling...

    Read the article

  • Web application or web portal?

    - by klo
    As the title says: the differences between those two. I read all the definitions and some articles, but I need information about some other aspects. Here is the thing. We want to build a web site that will contain: a site, a database, uploads, and numerous background services that would have to collect information from the uploads and from some other sites, parse it, etc. I doubt that there are portlets that fit our specific needs, so we will have to make them ourselves. So, questions:

    1. Deployment (and difference in cost, if possible): is deploying portals much easier than web apps (Java or .NET)?
    2. Server load: does a portal consume much of the server's power (and can you strip a portal of the things you do not use)?
    3. Implementation and development of portlets: can you make all the things you could have done in Java or .NET?
    4. General thoughts on when to use portals and when to use classic web apps.

    Thanks all, in advance...

    Read the article

  • Why doesn't the Windows 7 volume mixer remember per-application levels for all applications?

    - by mdives
    If I have the device's master level set to 50 and then lower an application's level to 25, then once I close that application and reopen it, the volume levels should persist: the master level should remain at 50 and the application's at 25. This does happen for most applications. However, for one in particular, Napster, it does not.

    I subscribe to Napster's streaming service and use the Napster desktop application to connect to it. Every time I open the Napster app, I have to adjust the application's volume level down in the volume mixer. When I open the app again after closing it, I have to do the same thing; the volume mixer is not remembering the set level. In fact, the level is reset back to 50, the same level as the device's master level. Has anyone else experienced this, with Napster or any other application? Is there a solution, or is this a known issue?

    Read the article

  • RemoteApp shows no certificate available but RD Session host finds it fine

    - by Scott Chamberlain
    I am trying to set up RemoteApp for an internal domain. I have a root CA that is trusted by all of the end computers; that cert has signed a wildcard cert I am trying to use for the server. I added the PFX of the wildcard cert to the local machine Personal store. From there I can use it fine for signing the RD Session Host session. However, when I try to set up the signature for RemoteApp, the certificate does not show up. What do I need to do to make my certificate available for use?

    UPDATE: The certificate was generated with the following commands:

        makecert -pe -n "CN=*.vw.local" -a sha1 -sky signature -ic VetWebCA.cer -iv VetWebCA.pvk -sv VetWebComputerWildcard.pvk VetWebComputerWildcard.cer
        pvk2pfx -pvk VetWebComputerWildcard.pvk -spc VetWebComputerWildcard.cer -pfx VetWebComputerWildcard.pfx

    The resulting PFX was added to the machine local store via MMC. Oddly, going into PowerShell, if I add the -CodeSigningCert flag to find the wildcard certificate, it is excluded from the search results for Get-ChildItem in my Cert:\LocalMachine\My path, but if I don't include it, the cert is there.
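
    The Get-ChildItem -CodeSigningCert result hints the certificate lacks a Code Signing EKU, and the RemoteApp signing picker may be filtering on the same capability; that's an educated guess, not a confirmed requirement. A sketch of regenerating the cert with that EKU added (everything else as in the original command):

        makecert -pe -n "CN=*.vw.local" -a sha1 -sky signature ^
                 -eku 1.3.6.1.5.5.7.3.3 ^
                 -ic VetWebCA.cer -iv VetWebCA.pvk ^
                 -sv VetWebComputerWildcard.pvk VetWebComputerWildcard.cer

    (1.3.6.1.5.5.7.3.3 is the Code Signing OID; -eku is a standard makecert switch.)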

    Read the article

  • Securely executing system commands as sudo from PHP

    - by Aydin Hassan
    Is it possible? I have written a command-line tool in PHP for creating new environments for our company. It creates system users, directories, databases, vhosts and restarts Apache, amongst other things. These commands require sudo privileges.

    I thought it might be a nice idea to have a web interface for it, to make it easier for other non-developers to use. The web app would be behind authentication. When running from the command line, I just run sudo tool.php; obviously I can't do this from a web app. How could I do this securely? Giving the Apache user sudo access seems silly, as this would mean all sites hosted on the box (e.g. all our environments) would have sudo access. Is it possible to make this tool run under a different user? Could this user have sudo privileges for only the commands I need? How do things like Plesk and cPanel do this? Any thoughts?
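
    One common pattern, sketched with hypothetical names and paths: grant the web server user passwordless sudo for exactly one whitelisted script, and nothing else.

        # /etc/sudoers.d/envtool  -- edit with: visudo -f /etc/sudoers.d/envtool
        # apache may run only this script, as root, with no password
        apache ALL=(root) NOPASSWD: /usr/local/bin/tool.php

    The web app then invokes it with a fixed path and escaped arguments:

        <?php
        // hypothetical call; never interpolate raw user input into the command
        $name = escapeshellarg($_POST['env_name']);
        $output = shell_exec("sudo /usr/local/bin/tool.php create $name");

    Keeping the whitelisted script itself responsible for validating its arguments limits the damage if the web layer is ever compromised.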

    Read the article
