Search Results

Search found 15209 results on 609 pages for 'configuration'.

Page 472/609

  • Nvidia Linux Driver Huge Resolution

    - by darxsys
    I'm trying to set up a working CUDA SDK on my Linux Mint machine. I'm new to Linux and everything connected with it, so I tried following some steps on how to install CUDA. First, I downloaded the Linux driver (version 295.41) from http://developer.nvidia.com/cuda/cuda-downloads. After that, I barely found a way to run it. I did it like this:

        1. typed sudo init 1 in a terminal and switched to root
        2. typed service mdm stop
        3. ran the *.run file downloaded from the link above

    Then it started installing the driver. It gave some warning messages, but I ignored them. After installation, I typed init 5 and it came back to the GUI, BUT everything is huge. I restarted; still huge. My screen resolution is 640x480 on a 17-inch laptop monitor. I tried running Nvidia X Server Settings, but it says: "You do not appear to be using the Nvidia X driver. Please edit your X configuration file." I tried that; nothing happened. I can't change the resolution because that Nvidia Settings tool offers no options. Then I googled around and installed some packages - nothing. The biggest problem is that I don't understand what's really going on. My laptop is a Samsung with an i7 and an Nvidia GT 650M with Optimus. I can't even install Bumblebee, but that is something I will try if I manage to get my resolution back to normal. Please, help!
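
    One thing that sometimes helps here - offered only as a sketch, and assuming the 295.41 driver actually installed and nouveau is out of the way, neither of which the post confirms - is to let nvidia-xconfig write a fresh X configuration and then pin the driver and modes by hand in /etc/X11/xorg.conf:

        sudo nvidia-xconfig    # writes a basic /etc/X11/xorg.conf for the nvidia driver

        # relevant sections of the generated file, edited by hand:
        Section "Device"
            Identifier "Device0"
            Driver     "nvidia"                  # must be "nvidia", not "nouveau" or "vesa"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device     "Device0"
            SubSection "Display"
                Modes  "1920x1080" "1366x768"    # preferred resolutions, highest first
            EndSubSection
        EndSection

    One caveat worth noting: on Optimus laptops the Intel GPU usually drives the panel, so the proprietary Nvidia X driver may never bind to the internal display without Bumblebee or similar, in which case no amount of xorg.conf tweaking will fix the resolution.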

    Read the article

  • Installing Trac on Windows under Apache 2.2?

    - by Warren P
    Trac is a Python-powered bug-tracking and project-management app. According to Trac's wiki, there are several options for installing Trac: a standalone server (tracd), or under a dedicated web server using one of these options:

        FastCGI    - not available on Windows.
        mod_wsgi   - no version of mod_wsgi available for Apache 2.2.22 and Python 2.7.3-amd64 that actually runs on my system!
        mod_python - no longer recommended, as mod_python is not actively maintained anymore.
        CGI        - should not be used, as the performance is far from optimal.

    That leaves me with zero ways to run Trac on Windows. Apache 2.2.22 with mod_wsgi loading crashes the Apache 2.2 service on startup without any error logs; disabling the line in the Apache configuration that loads mod_wsgi restores sanity. I just want an installation of Trac on Windows with authentication enabled. I am unable to get authentication to work using basic tracd like this:

        tracd -p 8000 --basic-auth="c:\tmp,c:\tmp\Passwords.md5.txt,mycompany" c:\tmp\RootFolder

    And I am unable to get mod_wsgi installed. I'm going to keep trying to figure out a combination that works; I suspect I should have installed 32-bit Python instead of 64-bit Python to start with. Did I do wrong to install 64-bit Python 2.7.3? I tried again with all 32-bit components, and still can't get mod_wsgi to work with Apache 2.2.22. I'm going to try to compile mod_wsgi myself with Visual C++ Express 2010, but it seems to me that it ought to be easier than this to get Trac running on Windows, with authentication. Is there a way to run Trac on Windows, under Apache, with authentication? The last "Trac on Windows" article died in 2008, leaving only this Internet Archive link for "Trac on Windows" setup.
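
    For what it's worth, a minimal tracd invocation that has worked elsewhere looks like the sketch below. Two assumptions to note: the password file must be in Apache htpasswd format (htpasswd.exe ships with the Apache binaries), and the first field of --basic-auth is the environment's directory name (or *), not a filesystem path - which may be the problem with the command quoted above.

        :: create an htpasswd-style password file
        htpasswd -c c:\tmp\trac.htpasswd someuser

        :: serve the environment with basic auth; "RootFolder" must match the environment directory name
        tracd -p 8000 --basic-auth="RootFolder,c:\tmp\trac.htpasswd,mycompany" c:\tmp\RootFolder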

    Read the article

  • How to remap IPs visible from local machine to IPs visible from a machine I have SSH access to?

    - by gooli
    I'm so far out of my depth I don't even know what to google for. There's a server I can connect to via SSH. Via that server I can access other servers on its subnet via SSH. What I want to do is be able to access the machines that server has access to directly. Say the server IP is 192.168.7.7 and it is the only one in the 192.168.x.x range I have access to. I'd like to configure things in such a way that when I try to access, say, 192.168.7.100 on my machine, the connection will go through an SSH tunnel I open to 192.168.7.7 and out to 192.168.7.100. I would like this to work for any port if at all possible. I know I can set up an HTTP proxy and even a SOCKS proxy, but I'm wondering if there is a way to actually remap some of the IPs my machine sees to IPs only visible from the remote machine. What would this configuration be called? Is this NAT, VPN, IP2IP or something else? How can I set this up on a Windows client box that connects via SSH to a Linux box? Sounds to me like I need to set up some kind of filtering on the network driver or possibly a virtual NIC, but I'm not sure where to go next.
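
    For a single host and port, this can usually be approximated with a plain SSH local forward; the sketch below uses OpenSSH syntax (plink on Windows accepts the same -L flag), and the port numbers and RDP example are chosen purely for illustration:

        # forward local port 13389 through 192.168.7.7 to 192.168.7.100 port 3389
        ssh -L 13389:192.168.7.100:3389 user@192.168.7.7
        # then point the client at localhost:13389

    Transparently remapping every port of a whole remote subnet is a different problem - closer to a VPN (an SSH tun/tap tunnel, or a tool such as sshuttle on Unix clients) - which is what the question is really asking for.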

    Read the article

  • Monitoring / metric collection for system collectives that change a lot in time (a.k.a. cloud)

    - by Florin Andrei
    When your server fleet doesn't change much over time, as with bare-metal hosting, classic monitoring and metric-collection solutions (Nagios, Munin) work well. But if the number of systems varies a lot over time, and may in fact vary rapidly, classic software is more difficult to set up and use. E.g., trying to make Nagios (monitoring) keep up with a rapidly evolving cloud infrastructure can be cumbersome; the same goes for Munin (metric collection). It's not just the configuration: the way the information is conveyed to the user, or displayed, is inadequate for the cloud. What are some possible alternatives that work well with the cloud? The goals are to collect and display metrics (analogous to Munin), and to generate alerts when certain metrics go out of bounds or when certain services are unavailable (analogous to Nagios), and to do everything in a cloud-friendly manner. Some cloud providers offer monitoring / metric collection as a service, but not all of them do, and if you use more than one provider you don't want to become too dependent on a single vendor. So provider-independent solutions are required. EDIT: I am asking this question in a general fashion - not limited to any given cloud infrastructure (like OpenStack), but for the general case of using arbitrary cloud providers.

    Read the article

  • Bluehost Emails Getting Blocked

    - by colithium
    A site for my client has the run-of-the-mill "website with users" email pattern: create an account, get an activation email; get an email when a subscription is expiring; and so on. The site is hosted on Bluehost and currently it uses PHP's mail() function. There isn't much configuration that is allowed (as far as I know). The trouble is, about a third of these emails disappear into the void. They aren't in spam or junk folders, there's no bounce message; they just cease to exist. I've read about Bluehost email troubles but I can't figure out what my options are for fixing it. These aren't marketing emails, i.e. they have user-specific information contained within them. I suppose if a solution offers a good templating system that would be fine. What are my options? Excerpt of the headers when delivered to a Gmail address:

        Received-SPF: neutral (google.com: 00.000.000.000 is neither permitted nor denied by best guess record for domain of domain@box###.bluehost.com) client-ip=00.000.000.000;
        DomainKey-Status: good
        Authentication-Results: mx.google.com; spf=neutral (google.com: 00.000.000.000 is neither permitted nor denied by best guess record for domain of domain@box###.bluehost.com) smtp.mail=domain@box###.bluehost.com; domainkeys=pass [email protected]
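
    One route that often improves deliverability in this kind of setup - sketched below with PHPMailer, and hedged accordingly: the SMTP host, port and credentials are placeholders, and it assumes the hosting account's authenticated SMTP is usable - is to send through authenticated SMTP on the site's own domain instead of mail():

        <?php
        require 'PHPMailerAutoload.php';           // path is an assumption; adjust to the install

        $mail = new PHPMailer(true);               // true = throw exceptions on failure
        $mail->isSMTP();
        $mail->Host     = 'mail.example.com';      // placeholder: the account's SMTP host
        $mail->SMTPAuth = true;
        $mail->Username = 'noreply@example.com';   // placeholder mailbox on the site's domain
        $mail->Password = 'secret';
        $mail->Port     = 587;

        $mail->setFrom('noreply@example.com', 'Example Site');
        $mail->addAddress('user@example.org');
        $mail->Subject = 'Activate your account';
        $mail->Body    = 'Activation link goes here...';
        $mail->send();

    Sending from an address on the site's own domain (with a matching SPF record) rather than from box###.bluehost.com is usually the part that matters for the neutral SPF result visible in the headers above.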

    Read the article

  • DD-WRT (WRT54G) and Thomson TG782: how to put them together?

    - by FeRtoll
    OK, so let me explain. I bought a WRT54G and successfully installed DD-WRT v24-sp1 (07/26/08) mini-special - build 9994. That's all fine, no problems with it, everything works normally. (Just to add: I don't need wireless; wireless is always turned off.)

    What I want: the cable from the ISP's router (TG782) - the one that was previously plugged into my PC - is connected to the WRT54G's INTERNET port, and then WRT54G LAN port 1 goes to my PC.

    The problem: how do I connect and set everything up? I have tried many times in many different ways but can't get it to work IF THE CABLE FROM THE TG782 IS CONNECTED TO THE WRT54G'S INTERNET PORT. If I connect the TG782 to LAN port 1 on the WRT54G and my PC to LAN port 2, then everything works fine after I set up the gateway and so on. But I want to connect the TG782 to the Internet port of the WRT54G because I need "Access Restrictions", and those only apply to traffic going through the WAN, right? Please correct me if I am wrong.

    What I have tried: this is how I tried to set it all up. The TG782's IP is 192.168.1.1 and the WRT54G's IP is 192.168.1.30, so in the WRT54G control panel I have set up:

        ---- WAN Connection Type ----
        Connection Type: Automatic Configuration - DHCP
        STP: Disabled

        ---- Router IP ----
        Local IP Address: 192.168.1.30
        Subnet Mask: 255.255.255.0
        Gateway: 192.168.1.1 (the TG782)

        ---- Network Address Server Settings (DHCP) ----
        DHCP Type: DHCP Server
        Start IP Address: 192.168.1.100
        Maximum DHCP Users: 6

    This won't work, and I am probably missing something more; if anyone can help I would be thankful. I should also note that I tried setting the network adapter on my PC to use the WRT54G as gateway and the IP 192.168.1.102. In short: I can't get it to work normally, only as a switch! Thanks for any help!

    EDIT: Here is an image which may help: http://img27.imageshack.us/img27/4227/allin1w.jpg

    Read the article

  • Accounting setup in freeradius with mikrotik and the "always" module

    - by Matt
    I have a freeradius setup that is being used to provide authentication for users on a wireless network. The access points are all Mikrotik hardware and the users are connected 24/7. We've been using Daloradius with MySQL and freeradius 2. The boss wants to use the accounting information, and while this is all set up and appears to be working, I've found that not all of the accounting information is present. So he started poking around at this link:

        http://wiki.mikrotik.com/wiki/RouterOs_MySql_Freeradius#Configuring_RouterOs_for_Radius_.26_PPP.2A_AAA

    and was looking specifically at the following section ("Since our users may be connected for more than 24 hours at a time we keep this in here, it will reset some attributes daily so that the accounting packets work correctly"):

        always fail {
            rcode = fail
        }
        always reject {
            rcode = reject
        }
        always ok {
            rcode = ok
            simulcount = 0
            mpp = no
        }

    However, that link references freeradius 1 and I can't find this in the radiusd.conf file for freeradius 2. What does it do, and could it be a reason I'm missing data?

    EDIT: I have found one issue. We have a backup freeradius server that is also receiving the accounting packets. Although the databases are replicating, it's only a master/slave configuration: if the slave receives accounting packets it won't replicate them back to the master. Although I suspect this might solve it, the boss is not convinced because of the always module. Is there anything special I need to configure in the Mikrotik APs or in freeradius 2 for clients connected 24/7?

    Read the article

  • Can't make Dovecot communicate with Postfix using SASL (warning: SASL: Connect to private/auth failed: No such file or directory)

    - by Fred Rocha
    Solved. I will leave this here as a reference for other people, as I have seen this error reported often enough online. I had to change the path in smtpd_sasl_path = private/auth in my /etc/postfix/main.cf to a relative one instead of an absolute one. This is because on Debian, Postfix runs chrooted (and how does this affect the path structure?! Anyone?)

    I am trying to get Dovecot to communicate with Postfix for SMTP support via SASL. The master plan is to be able to host multiple e-mail accounts on my (Debian Lenny 64-bit) server, using virtual users. Whenever I test my current configuration by running telnet server-IP smtp, I get the following error in mail.log:

        warning: SASL: Connect to /var/spool/postfix/private/auth failed: No such file or directory

    Now, Dovecot is supposed to create the auth socket file, yet it doesn't. I have given the right privileges to the directory private, and even tried creating an auth file manually. The output of postconf -a is:

        cyrus
        dovecot

    Am I correct in assuming from this that the package was compiled with SASL support? My dovecot.conf also holds:

        client {
            path = /var/spool/postfix/private/auth
            mode = 0660
            user = postfix
            group = postfix
        }

    I have tried every solution out there, and am pretty much desperate after a full day of struggling with the issue. Can anybody help me, pretty please?
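
    For reference, the pieces that have to line up in this kind of setup are the socket definition on the Dovecot side and the relative path on the chrooted Postfix side. A sketch for Dovecot 1.x as shipped with Lenny (the enclosing section name, "auth default", is an assumption about this particular dovecot.conf):

        # /etc/dovecot/dovecot.conf, inside auth default { ... }
        socket listen {
            client {
                path  = /var/spool/postfix/private/auth   # absolute; Dovecot itself is not chrooted
                mode  = 0660
                user  = postfix
                group = postfix
            }
        }

        # /etc/postfix/main.cf
        smtpd_sasl_type = dovecot
        smtpd_sasl_path = private/auth    # relative, because smtpd runs chrooted in /var/spool/postfix

    The chroot is also the answer to the parenthetical question above: once smtpd is chrooted to /var/spool/postfix, the socket it sees at private/auth is the same file Dovecot creates at the absolute path /var/spool/postfix/private/auth.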

    Read the article

  • Why is Apache serving the default vhost?

    - by Matt
    I keep adding more vhosts and enabling them, but all requests always go to the default vhost in sites-available. Here is roughly what the default looks like, with only the IP changed for security reasons:

        <VirtualHost 167.889.88.88:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    And here is the other one, which I named some-site.net:

        <VirtualHost *:80>
            ServerName some-site.net
            DocumentRoot "/var/www/vhosts/somesite.com/http/"
            <Directory "/var/www/vhosts/somesite.com/http/">
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    I enabled it with this command:

        sudo a2ensite some-site.net
        Enabling site some-site.net.
        Run '/etc/init.d/apache2 reload' to activate new configuration!

    Then I reloaded Apache:

        /etc/init.d/apache2 reload
        * Reloading web server config apache2 ...done.

    But when I visit the URL some-site.net I get the index page of the default vhost. What am I doing wrong?
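
    One detail that commonly produces exactly this symptom (offered as a hedged observation, since only fragments of the configuration are shown): name-based vhosts only compete with each other when they are defined for the same address set and that address is declared with NameVirtualHost. With the default bound to an explicit IP and the new site bound to *:80, Apache 2.2 treats them as separate sets. A sketch of a consistent name-based layout:

        # in ports.conf or the main config
        NameVirtualHost *:80

        # default site
        <VirtualHost *:80>
            ServerName default.example            # placeholder name
            DocumentRoot /var/www
        </VirtualHost>

        # additional site
        <VirtualHost *:80>
            ServerName some-site.net
            DocumentRoot /var/www/vhosts/somesite.com/http/
        </VirtualHost>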

    Read the article

  • DNS nameserver A and CNAME records

    - by David
    Hi - I am inexperienced in the configuration of DNS and have an issue with domain hosting setup. I have two domains, 'www.mydomain1.com' and 'www.mydomain2.com', with mydomain2 pointed at the same place as mydomain1. The domains were passed to me recently by the person who previously controlled them. I have an account with Fasthosts in the UK. When I accepted the domains I could not access the DNS settings, and I enquired with Fasthosts as to why. They replied saying: 'The delegate hosting option for both domains was enabled and this is the reason why you were unable to find the option to edit the advanced DNS records. I have now disabled the delegate hosting option so you can now edit the advanced DNS records for both domains in your account.' When I log into the Fasthosts control panel now I can access the DNS controls, but neither domain has an A record or CNAME record set up. I am concerned that Fasthosts have blatted the previous nameserver entries and set me up on theirs without adding any records. 'www.mydomain1.com' currently still works but 'www.mydomain2.com' no longer finds the site. I am worried I will lose mydomain1 too as the DNS changes filter through the system. My web hosting is at 'xxx.xxx.xxx.xxx/mydomain1.com/' and this is where I want both domains to point. Any advice would be much appreciated. One thing which is confusing me is that, because I am on a shared server, I have to put 'xxx.xxx.xxx.xxx/mydomain1.com/' to get to my site rather than just 'xxx.xxx.xxx.xxx'. The form on Fasthosts for the A record only allows an IP to be entered - does it add the mydomain1.com/ onto the end itself? Thanks for any help given - I'm quite worried about this. David

    Read the article

  • What is the fastest way to resize a large partition?

    - by Jook
    Due to a new HDD configuration I am currently handling larger backup/resize tasks, with partitions of around 900GB which are 70-90% full.

    Some background: the first thing I noticed was that Acronis/Western Digital TrueImage was extremely slow while running under Windows 7, even on high priority. To create a normal backup of 650GB of data (a 900GB partition), it would have taken 3 days! The same task done with the boot-CD version of this Acronis release took about 2 hours (SATA3 copy from one disk to another, both around 110MB/s). Now, after I have done all my backups, I wanted to remove some obsolete partitions and resize the leftovers to the full HDD size. Of course, this usually takes quite some time - in this case, for this 900GB partition, extending it to 931GB (30GB+ from the front, 1GB+ from the end) will take around 6 hours (using GParted)! Had I known that earlier, I would have just restored the image. But no - first it showed a reasonable time of 1:45h and 0 of 1 operations, but after finishing the 1:45h it started again, only this time with 4h to go, still 0 of 1 operations, but now it was copying instead of moving.

    Question: why does it have to be this slow to resize a partition? I am asking for a good explanation. This has bugged me since I started partitioning - why does it have to copy all the data around, can't it just stay in place?!

    Read the article

  • Virtualized Screen Resolution

    - by Jim R
    I have a 64-bit Ubuntu 9.10 workstation with two virtualized guest OSes using KVM/QEMU, also both 64-bit. One is Fedora 12, the other is a beta of Ubuntu 10.04. The problem is that I would like to use a larger display size than is configured by default. Both guest OSes have a maximum screen resolution of 1024x768. I would like to increase this to something like 1280x900 or 1440x900. The resolution of the host system is 1920x1080. This configuration appears to be a result of the installer detecting the resolution reported by the virtual screen during installation. The only information I have found on the subject suggests modifying the xorg.conf file in the /etc/X11 directory. Neither guest system has this file. I tried creating one by hand in the Fedora system and managed to render it completely unusable. Not a big deal, as this was recently installed and can be reinstalled easily. Is what I want to do possible? If so, how do I accomplish it?
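
    If the emulated display adapter supports the mode (an assumption that depends on the QEMU video device in use), a resolution can sometimes be added at runtime with xrandr inside the guest rather than by hand-writing xorg.conf; a sketch:

        # generate a modeline for 1440x900 (copy the Modeline that cvt prints on the guest)
        cvt 1440 900

        # the numbers below are cvt's output on one system; the output name (VGA-0) is a guess,
        # check `xrandr -q` for the real one
        xrandr --newmode "1440x900_60.00"  106.50  1440 1528 1672 1904  900 903 909 934 -hsync +vsync
        xrandr --addmode VGA-0 "1440x900_60.00"
        xrandr --output VGA-0 --mode "1440x900_60.00"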

    Read the article

  • Outlook 2007/2010 autodiscovering old Exchange info

    - by Dan
    I currently have an Exchange setup as follows: two Exchange 2003 servers clustered together set up as the current mailbox stores, one Exchange 2003 server set up as a frontend, one Exchange 2007 server set up as a frontend (set up for testing by my predecessor, never really used intentionally), and now four Exchange 2010 servers - two mailbox servers in a DAG and two with the Hub/CAS roles. Everything seems to be working fine with one exception: Outlook 2007/2010 clients are still autodiscovering the test 2007 frontend and not the 2010 CAS array. I know this because there's an expired certificate on the 2007 box, so the client displays a certificate error when it attempts to autocreate the Outlook profile. From what I've read, there is an SCP (Service Connection Point) in AD that is pointing to the old server, and it is getting returned first, causing Outlook to try it first. How can I prevent Outlook from even attempting to connect to this 2007 box from now on?

        http://www.msexchange.org/articles_tutorials/exchange-server-2010/management-administration/exchange-autodiscover.html

    "When Outlook 2007 is installed on a domain joined workstation then the Outlook client will query Active Directory for the Autodiscover information. Active Directory will return a list of SCP's and the Outlook client will automatically select the first SCP in this list. Using the information found in the SCP the Outlook client will contact the Client Access Server for its configuration information and the Outlook client will be configured automatically."
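
    The usual fix for a stale SCP (a sketch from the Exchange Management Shell; the server name and URL below are placeholders, and repointing an SCP is worth verifying in a lab first) is to repoint the old CAS's autodiscover record at the address the 2010 array answers on, so that even if Outlook picks that SCP first it still lands on the right endpoint:

        Set-ClientAccessServer -Identity "OLD2007CAS" `
            -AutoDiscoverServiceInternalUri "https://autodiscover.example.com/Autodiscover/Autodiscover.xml"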

    Read the article

  • DNS-Based Environment Determination

    - by zvolkov
    Found the following here. The question is: where can I find more details on how exactly to implement this on Windows? Any guide or how-to, anybody? Or maybe you can provide your invaluable suggestions? Specifically, how do I make it so that "all QA servers would first resolve entries in qa.example.com first and then if that lookup failed they would try example.com"? (I'm a dev, not a DNS specialist, but our IT support has refused to help with this.)

    "Use DNS Based Environment Determination for your servers. Do this by initially splitting your top level domain into a number of sub domains depending on their function, and then creating DNS Service Names in each of the sub domains pointing to the relevant server for that service. Based on the list above we would then have:

        clientdb.prod.example.com for Production
        clientdb.perf.example.com for Performance Testing
        clientdb.qa.example.com   for QA
        clientdb.dev.example.com  for Development

    Servers then resolve entries in their relevant sub domain by function. That is, all QA servers would first resolve entries in qa.example.com first and then if that lookup failed they would try example.com. This allows you to have a single configuration entry for your client database hostname (clientdb) that would resolve correctly in all environments. This technique has the added advantage of still having global services defined in a common top level domain."

    This seems to be related to providing "split horizon" DNS service. Reading about that, I see that I will probably need a separate DNS server for each environment. Is this true, or does Windows support some form of "tagging" the records to be visible depending on the requestor's IP?
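
    On Windows, the "resolve in qa.example.com first, then fall back to example.com" behaviour is normally achieved with the client-side DNS suffix search list rather than with separate DNS servers; a sketch (the cmdlet requires Windows 8 / Server 2012 or later, older systems use the registry value or the per-adapter GUI instead):

        # PowerShell: set the global suffix search list
        Set-DnsClientGlobalSetting -SuffixSearchList @("qa.example.com", "example.com")

        # Older Windows: the same list lives as a comma-separated string in
        #   HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\SearchList
        # or per connection under Advanced TCP/IP Settings -> DNS.

    With that in place, an unqualified lookup for clientdb tries clientdb.qa.example.com before clientdb.example.com, which is exactly the fallback described in the quoted text.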

    Read the article

  • vDS - vCenter Problem

    - by rbmadison
    We are implementing a vSphere farm and are using a distributed switch. The vCenter server (VC) is a VM within the farm, connected to the distributed switch. We had a SAN issue and all of our VMs went down. When the SAN recovered and we restarted the ESX host containing the VC, the VC couldn't connect to the network through the vDS. We had to remove a NIC from the vDS on that host, create a regular vSwitch, and connect the VC to that before the VC would connect to the network. Is this typical behavior? If the VC goes down, does all vDS networking stop on all the hosts? That seems like a very bad thing. I thought networking would keep working even though the VC is down, because the hosts have the vDS configuration cached. Is there a better way to configure this to prevent it from happening? We want to keep the VC as a VM for HA and recoverability purposes. Can anyone offer suggestions or explanations? I appreciate the help. Thanks, Rick

    Read the article

  • How can I tell SELinux to give vsftpd write access in a specific directory?

    - by Arcturus
    Hello. I've set up vsftpd on my Fedora 12 server, and I'd like to have the following configuration. Each user should have access to:

        his home directory (/home/USER);
        the web directory I created for him (/web/USER).

    To achieve this, I first configured vsftpd to chroot each user to his home directory. Then I created /web/USER with the correct permissions, and used mount --bind /web/USER /home/USER/Web so that the user has access to /web/USER through /home/USER/Web. I also turned on the SELinux boolean ftp_home_dir so that vsftpd is allowed to write in users' home directories. This works very well, except that when a user tries to upload or rename a file in /home/USER/Web, SELinux forbids it, because the change must also be made in /web/USER, and SELinux doesn't give vsftpd permission to write anything to that directory. I know that I could solve the problem by turning on the SELinux boolean allow_ftpd_full_access, or ftpd_disable_trans. I also tried to use audit2allow to generate a policy, but what it does is generate a policy that gives ftpd write access to directories of type public_content_t; this is equivalent to turning on allow_ftpd_full_access, if I understood it correctly. I'd like to know if it's possible to configure SELinux to allow FTP write access to the specific directory /web/USER and its contents, instead of disabling SELinux's FTP controls entirely.
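
    A commonly suggested middle ground - hedged, because the boolean below lets vsftpd write to any directory carrying the type, not only /web, so whether it fits the security goals here is a judgment call - is to relabel just the web tree and enable the matching boolean:

        # persistent labeling rule for /web and everything under it, then apply it
        semanage fcontext -a -t public_content_rw_t "/web(/.*)?"
        restorecon -R -v /web

        # allow ftpd to write to public_content_rw_t locations
        setsebool -P allow_ftpd_anon_write on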

    Read the article

  • SuPHP custom php.ini doesn't get read

    - by Mathieu Dumoulin
    It took me about 4 hours to get FastCGI + SuPHP running on Ubuntu 11.10, and I'm now happy that it works mighty fine except for ONE big problem: custom php.ini files don't seem to load. I tried changing some options and then firing off a phpinfo(), and nothing changes in the phpinfo() output, which leads me to think that there is definitely a problem with the loading of the configuration file.

        <IfModule mod_suphp.c>
            AddHandler x-httpd-php .php
            <Location />
                SuPHP_AddHandler x-httpd-php
            </Location>
            suPHP_ConfigPath /home/mdumoulin/Documents/tests/tests
            suPHP_Engine on
        </IfModule>

    As you can see, I took great care in making sure I wasn't referencing the php.ini file itself but the directory of the vhost. The php.ini located at /home/mdumoulin/Documents/tests/tests/php.ini contains:

        [PHP]
        error_reporting = E_ALL & ~E_DEPRECATED & ~E_NOTICE
        display_errors = Off

    And the log in /var/log/suphp/suphp.log doesn't contain anything relevant (only old errors that occurred before this post, while I was testing SuPHP). So I'm stumped; I don't know what more I can do. Anyone got an idea?

    EDIT: I finally got time to work on this. I disabled FastCGI and enabled only SuPHP, but after restarting I still see "Server API: CGI/FastCGI". Is this what I should be getting or not? I believe it's normal that I get CGI since SuPHP works through a CGI... but I'm not too sure anymore...

    Read the article

  • remove tasksel lamp-server

    - by RickyA
    I was tricked into running "sudo tasksel install lamp-server" on the wrong server (Ubuntu 10.04). Now I am stuck with a system where Apache won't start because of an "Address already in use: make_sock: could not bind to address 0.0.0.0:80" error. I now want to remove this task, but the documentation on that crappy tasksel says you can't use it to uninstall stuff (!!!???). My question is: where can I see what packages it installed, and how can I get rid of a selection of them (apt-get?)? I want to keep Apache, but MySQL, PHP and the other stuff can go...

    EDIT: I managed to get rid of most of the LAMP stack (/var/log/dpkg.log is useful for recently installed packages). However, it did something in a configuration somewhere, and now two Apache instances start at boot time. Killing the first one and starting a new one gets rid of the "could not bind to address..." error. Does anyone know where the startup of the first one is configured?
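
    For the "what did it install" half of the question, the dpkg log can be mined directly; a sketch (the date pattern is a placeholder for whenever tasksel was run, and the purge list should be reviewed by hand before running it):

        # list packages installed on the day tasksel ran (adjust the date)
        grep "^2012-06-01.* install " /var/log/dpkg.log | awk '{print $4}' | sort -u

        # then remove the unwanted ones explicitly, e.g.
        sudo apt-get purge mysql-server mysql-server-5.1 php5 libapache2-mod-php5
        sudo apt-get autoremove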

    Read the article

  • Can I set up a test server and then transfer everything to a different production server?

    - by Justin
    Hello, I am going to be setting up a "real" server, but it's not being shipped for another week. I was planning on setting up most of the server's functionality using an extra workstation I have. I wanted to set up Windows Server 2003 or 2008, IIS, Terminal Services, a firewall, and antivirus on this regular machine. I'd also be installing software like WinZip and VMware that will be used on the server. I can't ghost the machine, as I've done in the past, because the motherboard/CPU/etc. will all be different. Is there any way to export all of the "server settings" or something like that, so I can move everything from test to production? Is there any software out there that does something similar to this? Some things will have to wait, such as setting up the file server completely in its RAID configuration, but I'd like to get the simple server stuff and network setup out of the way. Has anyone done this before? Do I need software, open-source or not, to do this? Or maybe there's a way to export all the server settings in some way? Thanks in advance! Justin

    Read the article

  • How to set JS source directory in apache2?

    - by highBandWidth
    I am trying to run a very basic web server for development/debugging. The static HTML seems to be delivered correctly, but it seems that the JavaScript libraries are not being delivered to the browser. The page HTML says something like:

        <html>
        <head>
        <script type='text/javascript' src="/lib/json.js"></script>
        ...

    Now, I have set up an alias for /lib/ in my httpd.conf as:

        ScriptAlias /lib/ "/SomeFolder/lib/"

    When I do this, it can't fetch the files, because this is what I see in my Apache error log:

        ... [error] [client ::1] client denied by server configuration: /SomeFolder/lib/json.js, referer: http://localhost/SomeSite

    It seems that Apache is not allowing access to the folder, so I add this to httpd.conf:

        <Directory "/SomeFolder/lib/">
            Allow from all
        </Directory>

    After this, browsing the page still does not run the JS; instead I see the following error in my Apache error log:

        [error] [client ::1] (13)Permission denied: exec of '/SomeFolder/lib/json.js' failed, referer: http://localhost/SomeSite

    So now it seems that Apache is trying to run the JS files on the server like a CGI script or something. But I have not made that folder a cgi-bin folder. The only lines where SomeFolder is mentioned by name are these lines in httpd.conf:

        ScriptAlias /lib/ "/SomeFolder/lib/"
        <Directory "/SomeFolder/lib/">
            Allow from all
        </Directory>
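
    Since the goal is to serve the .js files as static content rather than execute them, the directive that usually fits here is Alias rather than ScriptAlias (ScriptAlias marks the target directory as containing CGI scripts, which matches the "exec of ... failed" error above). A sketch with the same placeholder paths as the question:

        Alias /lib/ "/SomeFolder/lib/"
        <Directory "/SomeFolder/lib/">
            Options None
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>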

    Read the article

  • CryptSvc not matched by Windows 7 Firewall rule

    - by theultramage
    I am using Windows Firewall in conjunction with a third-party tool to get notified about new outbound connection attempts (Windows Firewall Notifier or Windows Firewall Control). The way these tools do it is by setting the firewall to deny by default, and by adding an auditing policy to log blocked connections to the Security event log. Then they watch the log and display a notification about newly added entries.

        netsh advfirewall set allprofiles firewallpolicy blockinbound,blockoutbound
        auditpol /set /subcategory:{0CCE9226-69AE-11D9-BED3-505054503030} /failure:enable

    With this configuration in place, I now need to craft outbound allow rules for applications and system services. Here is the rule for CryptSvc, the service frequently used for certificate validation and revocation checking:

        netsh advfirewall firewall add rule name="Windows Cryptographic Services" action=allow enable=yes profile=any program="%SystemRoot%\system32\svchost.exe" service="CryptSvc" dir=out protocol=tcp remoteport=80,443

    The problem is, this rule does not work. Unless I change the scope to "all programs and services" (which is really unhealthy), connection denied events like the following keep appearing in the Security log:

        Event 5157, Microsoft Windows security auditing.
        The Windows Filtering Platform has blocked a connection.
        Application Information:
            Process ID:       1476  (<- svchost.exe with CryptSvc and nothing else)
            Application Name: \device\harddiskvolume1\windows\system32\svchost.exe
        Network Information:
            Direction:           Outbound
            Source Address:      192.168.0.1
            Source Port:         49616
            Destination Address: 2.16.52.16
            Destination Port:    80
            Protocol:            6  (<- TCP)

    To make sure it's CryptSvc, I have let the connection through and reviewed its traffic; I also configured CryptSvc to run in its own svchost instance to make it more obvious:

        ;sc config CryptSvc type= share
        sc config CryptSvc type= own

    So... why is it not matching the firewall rule, and how do I fix that?

    Read the article

  • Deploying concrete5 on nginx

    - by Nithin
    I have a concrete5 site that works 'out of the box' on an Apache server. However, I am having a lot of trouble running it on nginx. The following is the nginx configuration I am using:

        server {
            root /home/test/public;
            index index.php;

            access_log /home/test/logs/access.log;
            error_log  /home/test/logs/error.log;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to index.html
                try_files $uri $uri/ index.php;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
            }

            # pass the PHP scripts to FastCGI server listening on unix socket
            #
            location ~ \.php($|/) {
                fastcgi_pass unix:/tmp/phpfpm.sock;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_path_info;
                include fastcgi_params;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    I am able to get the homepage, but I am having problems with the inner pages. The inner pages display an "Access denied". Possibly the rewrite is not working; in effect, I think it is querying and trying to execute PHP files directly instead of going through the concrete5 dispatcher. I am totally lost here. Thank you for your help, in advance.

    Read the article

  • LDAP authentication issue with Kerio Connect

    - by djk
    Hi. We have Kerio Connect (a mail server) running on a Windows Server 2003 machine on a domain. In the webmail client, users are able to change their domain password. This functionality used to work fine, until a user tried to change their password a few days ago: every password they tried resulted in the webmail client claiming their password was "invalid". I spoke to Kerio about this and they claim that this error is returned by the domain controller, which supports my initial investigation. The error that the DC logs when an attempt is made to change the password is:

        80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 52e, vece

    The "data 52e" part indicates that this is an "invalid credentials" error. I don't see how this can be, as I've tried (in the Kerio Connect configuration) various accounts that have privileges to modify accounts, including my own, as I am a domain admin. I have run 'dcdiag' (all tests) on the DC and it came back passing every single one of them. I've searched high and low for an answer to this and came up empty. Does anyone have any idea why this may have suddenly started happening? Thanks!

    Edit: I should mention that the passwords we are changing to do comply with the complexity policy.

    Read the article

  • Public static IP for a Vagrant box

    - by Numbata
    I have a server (Debian Squeeze) with one Ethernet card and two public static IPs (188.120.245.4 and 188.120.244.5).

    What I want: to set up a virtual box (Ubuntu) reachable via the static IP 188.120.244.5.

    What I have tried:

        config.vm.forward_port - good idea: set up interface "eth1:1" with 188.120.244.5 on the host machine, and add to the Vagrantfile "config.vm.forward_port = hmm..?"
        config.vm.network :hostonly, "188.120.244.5" - not working. A new interface was created on the host machine with the IP 188.120.244.1. Of course 188.120.244.1 isn't mine and I can't access my server via this IP.
        config.vm.network :bridged - I'm confused about how this works :)

    What I have now: a non-working configuration.

        Debian-host-machine# cat Vagrantfile
        Vagrant::Config.run do |config|
          config.vm.define :gitlab do |box_config|
            box_config.vm.box = "ubuntu"
            box_config.vm.host_name = "ubuntu"
            box_config.vm.network :bridged
            box_config.vm.network :hostonly, "188.120.244.5", :auto_config => false
          end
        end

        Debian-host-machine# ifconfig
        eth1      Link encap:Ethernet  HWaddr 00:15:17:69:71:bb
                  inet addr:188.120.245.4  Bcast:188.120.247.255  Mask:255.255.248.0
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
        vboxnet0  Link encap:Ethernet  HWaddr 0a:00:27:00:00:00
                  inet addr:188.120.244.1  Bcast:188.120.246.255  Mask:255.255.255.0

        Ubuntu-virtual-machine# ifconfig
        eth0      Link encap:Ethernet  HWaddr 08:00:27:ee:8d:0c
                  inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
        eth1      Link encap:Ethernet  HWaddr 08:00:27:45:71:87
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0

    How can I access the virtual box via the public static IP from the network? I'm using Oracle VM VirtualBox Manager 4.1.18 and Vagrant version 1.0.3. Thanks in advance for your feedback.

    Read the article
