Search Results

Search found 22356 results on 895 pages for 'var dump'.

Page 110/895 | < Previous Page | 106 107 108 109 110 111 112 113 114 115 116 117  | Next Page >

  • OSB, Service Callouts and OQL - Part 1

    - by Sabha
    Oracle Fusion Middleware customers use Oracle Service Bus (OSB) for virtualizing service endpoints and implementing stateless service orchestrations. Behind OSB's performance and speed are a couple of key design decisions that can affect application performance and behavior under heavy load. One of the most heavily used features in OSB is the Service Callout pipeline action, used for message enrichment and for invoking multiple services within a single orchestration. Overusing this feature without understanding its internal implementation can lead to serious problems. This post delves into OSB internals, the problem associated with Service Callout usage under high load, diagnosing it via thread dump and heap dump analysis using tools like ThreadLogic and OQL (Object Query Language), and resolving it. The first part of the series mainly covers the threading model OSB uses internally to implement Route vs. Service Callout actions. Please refer to the full blog post for more details.
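
    A minimal sketch of the kind of heap-dump query the series builds towards, run from VisualVM's OQL console (the thread-name filter below is a placeholder, not one of OSB's real internal names):

        select t.name.toString()
        from java.lang.Thread t
        where /WorkManager/.test(t.name.toString())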

    Read the article

  • Getting the general log to work in MySQL 5.6.8

    - by Benjamin
    I can't get the general log to work in this version of MySQL. I added the following lines to /usr/my.cnf:

        general_log = 1
        general_log_file = "/var/log/mysql.log"

    Then restarted the server:

        [root@localhost ~]# service mysql restart
        Shutting down MySQL.. SUCCESS!
        Starting MySQL. SUCCESS!

    The settings seem to be taken into account:

        mysql> SHOW VARIABLES LIKE 'general_log%';
        +------------------+--------------------+
        | Variable_name    | Value              |
        +------------------+--------------------+
        | general_log      | ON                 |
        | general_log_file | /var/log/mysql.log |
        +------------------+--------------------+
        2 rows in set (0.01 sec)

    But the log is never created:

        [root@localhost ~]# mysqladmin flush-logs
        [root@localhost ~]# ls -al /var/log/mysql.log
        ls: cannot access /var/log/mysql.log: No such file or directory

    Any idea why?
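
    A plausible first check, not from the original thread: mysqld runs as the mysql user, which typically cannot create new files in the root-owned /var/log. Pre-creating the file and toggling the log would confirm it:

        touch /var/log/mysql.log
        chown mysql:mysql /var/log/mysql.log
        mysql -e "SET GLOBAL general_log = OFF; SET GLOBAL general_log = ON;"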

    Read the article

  • How to connect with MySQL server if it won't connect via the socket?

    - by cwd
    I have an account on a shared server. I have jailshell access and also phpMyAdmin. I want to run mysql commands via SSH but I'm getting an error:

        $ mysql -u mySqlUser -p mySqlPw
        Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock'

    I can connect with PHP and phpMyAdmin, so would it be possible to call mysql from the shell and have it connect via an IP and port instead of the socket? The file /var/lib/mysql/mysql.sock does not exist - maybe that is intentional - and the only thing in /etc/my.cnf is:

        [mysqld]
        skip-innodb

    More info: I don't have access to change system settings. I did a search in /var for mysql.sock but found nothing. However, phpMyAdmin might be connecting via a socket somehow. Really it would just be great if I could connect via IP. Also tried these two syntaxes:

        $ mysql -u mySqlUser -p mySqlPw -h localhost
        $ mysql -u mySqlUser -p mySqlPw -h localhost -P 3306

    Both with the same result:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
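
    Two details worth noting about the commands above, assuming a stock mysql client: writing "-p mySqlPw" with a space makes the client prompt for a password and treat mySqlPw as a database name (an inline password is written as -pmySqlPw), and "-h localhost" still uses the socket - only an IP address forces a TCP connection:

        mysql -u mySqlUser -p -h 127.0.0.1 -P 3306 --protocol=TCP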

    Read the article

  • How can I use wildcards in an Nginx map directive?

    - by Ian Clelland
    I am trying to use Nginx to serve cached files produced by a web application, and have spotted a potential problem: the URL space is wide, and will exceed the ext3 limit of 32000 subdirectories. I would like to break up the subdirectories, making, say, a two-level filesystem cache. So, where I currently cache a file at

        /var/cache/www/arbitrary_directory_name/index.html

    I would store it instead at something like

        /var/cache/www/a/r/arbitrary_directory_name/index.html

    My trouble is that I can't get try_files, or even rewrite, to make that mapping. My searching on the subject leads me to believe that I need to do something like this (heavily abbreviated):

        http {
            map $request_uri $prefix {
                /aa*  a/a;
                /ab*  a/b;
                /ac*  a/c;
                ...
                /zz*  z/z;
            }
            location / {
                try_files /var/cache/www/$prefix/$request_uri/index.html @fallback;
                # or
                # if (-f /var/cache/www/$prefix/$request_uri/index.html) {
                #     rewrite ^(.*)$ /var/cache/www/$prefix/$1/index.html;
                # }
            }
        }

    But I can't get the /aa* pattern to match the incoming URI. Without the *, it will match an exact URI, but I can't get it to match just the first two characters. The Nginx documentation suggests that wildcards should be allowed, but I can't see a way to get them to work. Is there a way to do this? Am I missing something simple? Or am I going about this the wrong way?
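
    One likely explanation: map's prefix/suffix masks only apply to hostnames (they require the "hostnames" parameter), so they never match an arbitrary string like $request_uri. For that, map accepts regular expressions prefixed with ~, and named captures can feed the value - which also avoids enumerating all 676 pairs. A sketch, untested:

        map $request_uri $prefix {
            default                        "_/_";
            "~^/(?<c1>[a-z])(?<c2>[a-z])"  "$c1/$c2";
        }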

    Read the article

  • mysql server overloading without error

    - by beny
    Hi, I have a serious problem on my server with the MySQL server: it overloads itself without any error in /var/log/mysqld. What steps should I take to find out the problem? my.cnf is:

        [mysqld]
        set-variable=local-infile=0
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        old_passwords=1
        skip-bdb
        set-variable = innodb_buffer_pool_size=256M
        set-variable = innodb_additional_mem_pool_size=20M
        set-variable = innodb_log_file_size=128M
        set-variable = innodb_log_buffer_size=8M
        innodb_data_file_path = ibdata1:1000M:autoextend

    Please help, thx
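
    Typical first diagnostics for this kind of load spike (general advice, not from the original thread): look at what the server is actually doing while it is overloaded, and enable the slow query log.

        mysql -e "SHOW FULL PROCESSLIST;"
        mysql -e "SHOW GLOBAL STATUS LIKE 'Threads_%';"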

    Read the article

  • nginx doesn't find the directory but apache does

    - by Jack Spairow
    I use apache as the backend server and nginx on the frontend. Apache listens on port 8080 and nginx on port 80. What I do is have the root point to the public folder for each virtualhost:

        <VirtualHost *:8080>
            ServerAdmin webmaster@localhost
            ServerName site.com
            ServerAlias site.com *.site.com
            DocumentRoot /var/www/site.com/public
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/site.com/public/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    And here's the nginx config:

        server {
            listen 80;
            access_log /var/log/nginx.access.log;
            error_log /var/log/nginx.error.log;
            root /var/www/site.com/public;
            index index.php index.html;
            server_name site.com *.site.com;
            location / {
                location / {
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $host;
                    proxy_pass http://127.0.0.1:8080;
                    proxy_cache one;
                    proxy_cache_use_stale error timeout invalid_header updating;
                    proxy_cache_key $scheme$host$request_uri;
                    proxy_cache_valid 200 301 302 20m;
                    proxy_cache_valid 404 1m;
                    proxy_cache_valid any 15m;
                }
            }
            location ~ /\.(ht|git) {
                deny all;
            }
        }

    The problem is Apache resolves the domain just fine (site.com:8080), but nginx shows instead a 502 Bad Gateway (site.com:80). I tried looking at the error_log and access_log but I can't find any hint as to why nginx won't work. EDIT: The problem was I wasn't able to include that isolated config for nginx.
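
    Consistent with the poster's own EDIT: proxy_cache one refers to a cache zone that has to be declared at the http level, so if the file carrying that declaration is not included, the configuration is incomplete. A sketch of the missing piece, with placeholder sizes:

        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=one:10m max_size=1g;

    Running nginx -t before reloading flags this kind of problem immediately.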

    Read the article

  • including files in a symlink directory when backing up with duplicity

    - by Rob
    I'm backing up using Duplicity, great tool. I'm unable to include files in the backup that are within a directory that is a symlink. Using the following:

        duplicity <dup args> --include /var/www/**/current --exclude '**'

    duplicity will only back up the symlink itself. I've tried:

        duplicity <dup args> --include /var/www/**/current/* --exclude '**'
        # and
        duplicity <dup args> --include /var/www/**/current/** --exclude '**'

    Not even the symlink is backed up then. The "current" directory links to a directory like:

        /var/www/host.com/de9f2c7fd25e1b3afad3e85a0bd17d9b100db4b3

    That directory contains a few static HTML and CSS files. I want those files to be backed up, regardless of which sha'd directory "current" points to. Any help appreciated.
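
    Duplicity stores symlinks as symlinks rather than following them, so one workaround (a sketch, assuming the targets always live under /var/www and GNU readlink is available) is to resolve the link first and include the real path:

        target="$(readlink -f /var/www/host.com/current)"
        duplicity <dup args> --include "$target" --exclude '**' /var/www <target_url>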

    Read the article

  • Existing laravel 4 project gives 404 in browser

    - by Richard A
    I'm trying to set up a development environment on a virtual machine running Ubuntu 14.04 LTS using Nginx and HHVM. To do this, I followed the tutorial here. This goes well with a new installation of Laravel. But when I import an existing Laravel 4 project and try to open it on my actual machine (which will serve as the client, running Windows 7), I'm getting a 404 File Not Found error on the screen while connecting to http://sav.savrichard.dev. I did add this to the hosts file with the correct IP address. The virtual machine receives the request and responds with a 404 error. How do I solve this error? I'm pretty new to Ubuntu so I'm not exactly sure what's wrong. The project is located at /var/www/sav.savrichard.net and the server configuration is as follows:

        server {
            listen 80 default_server;
            root /var/www/sav.savrichard.net/public;
            index index.html index.htm index.php;
            server_name sav.savrichard.dev;
            access_log /var/log/nginx/localhost.sav.savrichard.dev-access.log;
            error_log /var/log/nginx/localhost.sav.savrichard.dev-error.log error;
            charset utf-8;
            location / {
                try_files \$uri \$uri/ /index.php?\$query_string;
            }
            location = /favicon.ico { log_not_found off; access_log off; }
            location = /robots.txt { log_not_found off; access_log off; }
            error_page 404 /index.php;
            include hhvm.conf;
            # Deny .htaccess file access
            location ~ /\.ht {
                deny all;
            }
        }

    And the hhvm.conf file is:

        location ~ \.(hh|php)$ {
            fastcgi_keep_conn on;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
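
    One thing that stands out, assuming the file really reads this way on disk: the backslash-escaped \$uri variables look like leftovers from the tutorial's shell heredoc. In the actual config, nginx should see the bare variables:

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }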

    Read the article

  • What is a good partitioning design/scheme for a multi-boot *nix system?

    - by static
    I'm planning to install Debian on my server. I would like to design the partitioning scheme in such a way that I could later install one or more other *nix distributions alongside it. Reading many articles, I think this scheme could be a good starting point for multi-boot:

        /grub
        /swap
        /LVM VG1 (for OS1) -> /boot    (LV1)
                              /        (LV2)
                              /tmp     (LV3)
                              /var
                              /var/log
                              /home
        /LVM VG2 (for OS2) -> /boot
                              /
                              /tmp
                              /var
                              /var/log
                              /home
        ... (other distros)
        /LVM VG0 (for data) -> /data (LV1)

    But I'm confused a little bit now: what should the labels for these partitions be (unique or not), and what should the mount points look like (/home of OS1 mounted to /home, as well as /home of OS2...)?
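
    For the volume-group half of that scheme, creation is straightforward; a sketch with placeholder device names and sizes (one VG per OS plus the shared data VG):

        pvcreate /dev/sda2 /dev/sda3 /dev/sda4
        vgcreate vg_os1  /dev/sda2
        vgcreate vg_os2  /dev/sda3
        vgcreate vg_data /dev/sda4
        lvcreate -L 512M -n boot vg_os1
        lvcreate -L 20G  -n root vg_os1
        lvcreate -L 4G   -n var  vg_os1

    On the label question: filesystem labels must be unique across everything a single running OS can see, otherwise mounting by LABEL= is ambiguous; mounting by /dev/vgname/lvname paths avoids the problem entirely.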

    Read the article

  • dns server bind is not working [closed]

    - by user1742080
    I just installed bind on RHEL 6 and pointed a domain to that server, but when I ping the domain it returns error 1214. Here is my named.conf:

        //
        // named.conf
        //
        // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
        // server as a caching only nameserver (as a localhost DNS resolver only).
        //
        // See /usr/share/doc/bind*/sample/ for example named configuration files.
        //
        options {
            listen-on port 53 { any; };
            listen-on-v6 port 53 { ::1; };
            directory "/var/named";
            dump-file "/var/named/data/cache_dump.db";
            statistics-file "/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/data/named_mem_stats.txt";
            allow-query { any; };
            recursion yes;
            dnssec-enable yes;
            dnssec-validation yes;
            dnssec-lookaside auto;
            /* Path to ISC DLV key */
            bindkeys-file "/etc/named.iscdlv.key";
            managed-keys-directory "/var/named/dynamic";
        };
        logging {
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };
        zone "." IN {
            type hint;
            file "named.ca";
        };
        include "/etc/named.rfc1912.zones";
        include "/etc/named.root.key";
        zone "mydomain.com" {
            type master;
            file "/var/named/data/named.mydomain.com";
            allow-update { none; };
        };

    And the content of /var/named/data/named.mydomain.com:

        $TTL 38400

        mydomain.com. IN SOA ns1.mydomain.com. milad.yahoo.com. (
                2012101201 ; serial number YYMMDDNN
                28800      ; Refresh
                7200       ; Retry
                864000     ; Expire
                38400      ; Min TTL
        )

        mydomain.com.     IN A  1.2.3.4
        www               IN A  1.2.3.4
        ns1.mydomain.com. IN A  1.2.3.4
        ns2.mydomain.com. IN A  1.2.3.4
        mydomain.com.     IN NS ns1.mydomain.com.
        mydomain.com.     IN NS ns2.mydomain.com.

    And I'm sure the named service is running:

        [root@server ~]# service named status
        version: 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.3
        CPUs found: 8
        worker threads: 8
        number of zones: 20
        debug level: 0
        xfers running: 0
        xfers deferred: 0
        soa queries in progress: 0
        query logging is OFF
        recursive clients: 0/0/1000
        tcp clients: 0/100
        server is up and running
        named (pid 26299) is running...
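
    A quick way to check what the server itself answers, bypassing whatever resolver ping uses (1.2.3.4 stands in for the server's real address, as in the zone file):

        dig @1.2.3.4 mydomain.com A
        dig @1.2.3.4 mydomain.com NS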

    Read the article

  • Lighttpd + django on gentoo 10 seconds to answer

    - by plaetzchen
    I want to run a Django site on lighttpd with fastcgi on a gentoo machine. Every time I try to access the site, I get a response after more or less exactly 10 seconds. I'm using a socket to let lighttpd communicate with my Django site, but a TCP port doesn't help either. Could this be a lighttpd problem? I tried both from a server on the internet and from localhost; this is what lighttpd gives me in the error.log:

        2012-07-10 14:36:36: (response.c.300) -- splitting Request-URI
        2012-07-10 14:36:36: (response.c.301) Request-URI : /
        2012-07-10 14:36:36: (response.c.302) URI-scheme : http
        2012-07-10 14:36:36: (response.c.303) URI-authority: owntube
        2012-07-10 14:36:36: (response.c.304) URI-path : /
        2012-07-10 14:36:36: (response.c.305) URI-query :
        2012-07-10 14:36:36: (response.c.300) -- splitting Request-URI
        2012-07-10 14:36:36: (response.c.301) Request-URI : /owntube.fcgi/
        2012-07-10 14:36:36: (response.c.302) URI-scheme : http
        2012-07-10 14:36:36: (response.c.303) URI-authority: owntube
        2012-07-10 14:36:36: (response.c.304) URI-path : /owntube.fcgi/
        2012-07-10 14:36:36: (response.c.305) URI-query :
        2012-07-10 14:36:36: (response.c.349) -- sanatising URI
        2012-07-10 14:36:36: (response.c.350) URI-path : /owntube.fcgi/
        2012-07-10 14:36:36: (mod_access.c.135) -- mod_access_uri_handler called
        2012-07-10 14:36:36: (mod_fastcgi.c.3632) handling it in mod_fastcgi
        2012-07-10 14:36:36: (response.c.470) -- before doc_root
        2012-07-10 14:36:36: (response.c.471) Doc-Root : /var/www/owntube
        2012-07-10 14:36:36: (response.c.472) Rel-Path : /owntube.fcgi
        2012-07-10 14:36:36: (response.c.473) Path :
        2012-07-10 14:36:36: (response.c.521) -- after doc_root
        2012-07-10 14:36:36: (response.c.522) Doc-Root : /var/www/owntube
        2012-07-10 14:36:36: (response.c.523) Rel-Path : /owntube.fcgi
        2012-07-10 14:36:36: (response.c.524) Path : /var/www/owntube/owntube.fcgi
        2012-07-10 14:36:36: (response.c.541) -- logical -> physical
        2012-07-10 14:36:36: (response.c.542) Doc-Root : /var/www/owntube
        2012-07-10 14:36:36: (response.c.543) Rel-Path : /owntube.fcgi
        2012-07-10 14:36:36: (response.c.544) Path : /var/www/owntube/owntube.fcgi
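
    A delay of almost exactly 10 seconds usually smells like a timeout being hit somewhere rather than slow code. For comparison, a minimal socket-based lighttpd wiring looks roughly like this (paths are placeholders; this assumes the flup-style fcgi setup common for Django at the time):

        fastcgi.server = ( "/owntube.fcgi" =>
            (( "socket" => "/tmp/owntube.sock",
               "check-local" => "disable",
            ))
        )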

    Read the article

  • Mounted NFS directory not writable by Apache / PHP

    - by phpfour
    Need some help here with NFS. Here's what I have (all servers running CentOS 5.6 with SELinux):

        172.17.20.1 - Primary server with static IP. Varnish redirects requests to the web servers.
        172.17.20.2 - Web server 1
        172.17.20.3 - Web server 2

    The application residing on the web servers is running Drupal and I need both of them to share the same files directory. I have created a folder on 172.17.20.1 called /var/nfs with the root user. Here is my /etc/exports content:

        /var/nfs 172.17.20.2(rw,sync,no_root_squash) 172.17.20.3(rw,sync,no_root_squash)

    On both web servers (172.17.20.2/3), I have it mounted like below:

        [root@web2 ~]# mount
        ...
        172.17.20.1:/var/nfs on /mnt/nfs/var/nfs type nfs (rw,sync,hard,intr,addr=172.17.20.1)

    On all the servers, I've added the user apache to the root group to get the desired write access:

        [root@main ~]# cat /etc/group
        root:x:0:root,apache
        ....
        apache:x:48:

        [root@web1 ~]# cat /etc/group
        root:x:0:root,apache
        ....
        apache:x:48:

    Despite all this, when I try to write files into the /mnt/nfs/var/nfs folder from Drupal/PHP, it cannot write to it. I even tried with a simple PHP upload script but it doesn't work, so the problem is not with Drupal. Any help you guys can do is much appreciated. I've spent hours and hours with it, without any success :( Thanks in advance.
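
    Two common culprits in this setup, offered as assumptions rather than a confirmed diagnosis: group membership only helps if the directory's group write bit is actually set, and on CentOS with SELinux enforcing, Apache additionally needs the boolean that permits writing to NFS mounts:

        chmod 775 /var/nfs
        setsebool -P httpd_use_nfs 1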

    Read the article

  • How do functional languages handle a mocking situation when using Interface based design?

    - by Programmin Tool
    Typically in C# I use dependency injection to help with mocking:

        public class UserService
        {
            public UserService(IUserQuery userQuery,
                               IUserCommunicator userCommunicator,
                               IUserValidator userValidator)
            {
                UserQuery = userQuery;
                UserValidator = userValidator;
                UserCommunicator = userCommunicator;
            }
            ...
            public UserResponseModel UpdateAUserName(int userId, string userName)
            {
                var result = UserValidator.ValidateUserName(userName);
                if (result.Success)
                {
                    var user = UserQuery.GetUserById(userId);
                    if (user == null) { throw new ArgumentException(); }
                    user.UserName = userName;
                    UserCommunicator.UpdateUser(user);
                }
                ...
            }
            ...
        }

        public class WhenGettingAUser
        {
            public void AndTheUserDoesNotExistThrowAnException()
            {
                var userQuery = Substitute.For<IUserQuery>();
                userQuery.GetUserById(Arg.Any<int>()).Returns(null);
                var userService = new UserService(userQuery);
                AssertionExtensions.ShouldThrow<ArgumentException>(
                    () => userService.GetUserById(-121));
            }
        }

    Now in something like F#: if I don't go down the hybrid path, how would I test workflow situations like the above that normally touch the persistence layer, without using interfaces/mocks? I realize that every step above would be tested on its own and kept as atomic as possible. The problem is that at some point they all have to be called in line, and I'll want to make sure everything is called correctly.
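
    For contrast, a sketch of the usual functional answer (not from the original thread): dependencies become plain function parameters, and a "mock" is just a lambda passed at the call site.

        type User = { Id: int; UserName: string }

        let updateUserName (validate: string -> bool)
                           (getUserById: int -> User option)
                           (updateUser: User -> unit)
                           userId userName =
            if not (validate userName) then Error "invalid user name"
            else
                match getUserById userId with
                | None -> Error "no such user"
                | Some user ->
                    updateUser { user with UserName = userName }
                    Ok ()

        // in a test, the "mocks" are inline functions:
        let result =
            updateUserName (fun _ -> true) (fun _ -> None) ignore (-121) "newName"
        // result = Error "no such user"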

    Read the article

  • Subversion error: Repository moved permanently to please relocate

    - by Bart S.
    I've set up Subversion and Apache on my server. If I browse to it through my web browser it works fine (http://svn.host.com/reposname). However, if I do a checkout on my machine I get the following error:

        Command: Checkout from http://svn.host.com/reposname, revision HEAD, Fully recursive, Externals included
        Error: Repository moved permanently to 'http://svn.host.com/reposname/'; please relocate

    I checked Apache's error log, but it doesn't say anything. (It does now - see edit.) My repositories are stored under /var/www/svn/repos/ and my website is stored under /var/www/vhosts/x/... Here's the conf file for the subdomain:

        <Location />
            DAV svn
            SVNParentPath /var/www/svn/repos/
            AuthType Basic
            AuthName "Authorization Realm"
            AuthUserFile /var/www/svn/auth/svn.htpasswd
            Require valid-user
        </Location>

    Authentication works fine. Does anyone know what might be causing this?

    -- Edit

    So I restarted Apache (again) and tried it again, and now it gives me an error message, but it doesn't really help. Anyone have an idea what it means?

        [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] Could not fetch resource information. [403, #0]
        [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] (2)No such file or directory: The URI does not contain the name of a repository. [403, #190001]

    -- Edit 2

    If I do svn info it doesn't give anything useful:

        [root@eduro eduro.nl]# svn info http://svn.domain.com/repos/
        Username: username
        Password for 'username':
        svn: Repository moved permanently to 'http://svn.domain.com/repos/'; please relocate

    I also tried doing a local checkout (svn checkout file:///var/www/svn/repos/reposname) and that works fine (adding / committing works fine too). So it seems it has something to do with Apache. Some other information:

        - I'm running CentOs 5.3
        - Plesk 9.3
        - Subversion, version 1.6.9 (r901367)

    -- Edit 3

    I tried moving the repositories, but it didn't make any difference. selinux is disabled so that isn't it either.

    -- Edit 4

    Really? Nobody :(?
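
    One observation, offered as an assumption since the thread never resolves it: the "moved permanently to ...'/'" message is the signature of the request being answered by a plain directory handler (mod_dir's trailing-slash redirect) instead of mod_dav_svn - and on a Plesk box it is easy for the subdomain to end up served by Plesk's own vhost rather than the one carrying the <Location /> block. (The Edit 1 errors are also consistent with pointing at the SVNParentPath root rather than a named repository.) A quick way to see which vhost actually answers:

        httpd -S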

    Read the article

  • PhpMyAdmin 500 Internal Server Error on Nginx/php5-fpm/Debian

    - by ThrownAway
    I downloaded phpMyAdmin a while ago and am having a hard time getting it to work. Requesting localhost/phpmyadmin gives a 500 Internal Server Error response, but there's nothing in the error log. These are the steps I did:

        1. Downloaded the newest phpMyAdmin and unzipped all the files to /var/vhosts/phpmyadmin/www/
        2. Created a new php5-fpm pool and a server block in nginx
        3. Changed the owner of all the files inside phpmyadmin/
        4. Tried requesting localhost/phpmyadmin and localhost/phpmyadmin/setup

    The phpMyAdmin is running inside a chroot, and all the files are owned by www-data, so it shouldn't be a permission error. I made a new PHP file in the same directory to produce an error and it logs just fine, so it has to be just phpMyAdmin. Here's my php5-fpm pool:

        [phpmyadmin]
        listen = /var/vhosts/phpmyadmin/tmp/.php.sock;
        user = www-data
        group = www-data
        chroot = /var/vhosts/phpmyadmin/
        chdir = /
        php_admin_value[error_reporting] = E_ALL
        php_admin_value[error_log] = error.log
        php_admin_flag[log_errors] = on
        php_admin_flag[display_errors] = on
        php_value[session.save_handler] = files
        php_value[session.save_path] = /tmp

    And the nginx server block:

        server {
            listen 80;
            root /var/vhosts/phpmyadmin/www;
            server_name pma.domain;
            location / {
                try_files $uri $uri/ /index.html;
                autoindex on;
            }
            location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_pass unix:/var/vhosts/phpmyadmin/tmp/.php.sock;
                fastcgi_param SCRIPT_FILENAME /www$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_param DOCUMENT_ROOT /www;
            }
            index index.html index.htm index.php;
            try_files $uri $uri/ =404;
        }

    Any ideas what could be wrong? Why is it not producing any errors even though I've forced them to be on?
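
    Two chroot-specific suspects, both assumptions rather than a confirmed diagnosis: error_log = error.log is a relative path, so errors may land somewhere unexpected inside the chroot (or nowhere), and phpMyAdmin cannot reach MySQL's Unix socket from inside /var/vhosts/phpmyadmin/, so config.inc.php has to use TCP:

        $cfg['Servers'][$i]['host'] = '127.0.0.1';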

    Read the article

  • How to suppress "Not collecting exported resources without storeconfigs"?

    - by Andy Shinn
    I'm getting the following in my Puppet master syslog over and over:

        Sep 27 11:52:05 puppet1 puppet-master: Not collecting exported resources without storeconfigs
        Sep 27 11:52:06 puppet1 puppet-master: Not collecting exported resources without storeconfigs
        Sep 27 11:52:06 puppet1 puppet-master: Not collecting exported resources without storeconfigs

    I'm not actually using storeconfigs:

        [ashinn@puppet1 ~]$ cat /etc/puppet/puppet.conf
        [agent]
        server = puppet.mydomain.com
        environment = production
        report = true

        [main]
        logdir = /var/log/puppet
        vardir = /var/lib/puppet
        ssldir = /var/lib/puppet/ssl
        rundir = /var/run/puppet
        factpath = $vardir/lib/facter
        pluginsync = true
        certname = puppet1.mydomain.com

        [master]
        modulepath = $confdir/environments/$environment/modules
        manifest = $confdir/environments/$environment/manifests/site.pp
        templatedir = $confdir/templates
        autosign = $confdir/autosign.conf
        ssl_client_header = SSL_CLIENT_S_DN
        ssl_client_verify_header = SSL_CLIENT_VERIFY
        report = true
        reports = hipchat

    Any way I can suppress these messages? What do they actually come from?
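
    The message is logged whenever a compiled catalog tries to collect exported resources while storeconfigs is off, so some module in the modulepath almost certainly contains a spaceship collector, something like this in a manifest (Nagios_service is just an example type):

        Nagios_service <<| |>>

    Finding the offender is a one-liner (paths per the puppet.conf above):

        grep -R '<<|' /etc/puppet/environments/production/modules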

    Read the article

  • Apache - Restrict to IP not working.

    - by Probocop
    Hi, I have a subdomain that I only want to be accessible internally; I'm trying to achieve this in Apache by editing the VirtualHost block for that domain. Can anybody see where I'm going wrong? Note: my internal IP addresses here are 192.168.10.xxx. My config is as follows:

        <VirtualHost *:80>
            ServerName test.epiphanydev2.co.uk
            DocumentRoot /var/www/test
            ErrorLog /var/log/apache2/error_test_co_uk.log
            LogLevel warn
            CustomLog /var/log/apache2/access_test_co_uk.log combined
            <Directory /var/www/test>
                Order allow,deny
                Allow from 192.168.10.0/24
                Allow from 127
            </Directory>
        </VirtualHost>

    Thanks
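
    The 2.2-style syntax above looks right as far as it goes, so one assumption worth checking is the Apache version: on 2.4 the Order/Allow directives are only honoured with mod_access_compat loaded, and the native form would be:

        <Directory /var/www/test>
            Require ip 192.168.10.0/24 127.0.0.1
        </Directory>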

    Read the article

  • Issue in nginx proxying to apache

    - by Luis Masuelli
    My current nginx configuration is as follows. Specific configuration for (currently two) domains:

        server {
            listen 443 ssl;
            server_name studiotv.service.tebusco.lan phpmyadmin.service.tebusco.lan;
            ssl_certificate /home/administrador/nginx-confs/ssl/service.tebusco.lan.crt;
            ssl_certificate_key /home/administrador/nginx-confs/ssl/service.tebusco.lan.key;

            location / {
                proxy_pass http://127.0.0.1:8180;
                proxy_set_header Host $http_host:8180;
            }
        }

    Default configuration for unmatched ssl connections:

        server {
            listen 443 default ssl;
            ssl_certificate /home/administrador/nginx-confs/ssl/service.tebusco.lan.crt;
            ssl_certificate_key /home/administrador/nginx-confs/ssl/service.tebusco.lan.key;

            location / {
                return 403;
            }
        }

    http configuration:

        server {
            listen 80;
            rewrite ^ https://$host$request_uri? permanent;
        }

    The intention is clear:

        1. Redirect http traffic to https.
        2. Proxy each https:// call from phpmyadmin.service.tebusco.lan and studiotv.service.tebusco.lan
           to apache2. This includes passing a host header, which is detected.
        3. Each unmatched ssl connection must return a 403 in nginx. It does not even reach apache2.

    On the apache2 side of life, I have a default site and a non-default site which should match studiotv.service.tebusco.lan. The 000-default.conf file (available and enabled):

        <VirtualHost 127.0.0.1:8180>
            # The ServerName directive sets the request scheme, hostname and port that
            # the server uses to identify itself. This is used when creating
            # redirection URLs. In the context of virtual hosts, the ServerName
            # specifies what hostname must appear in the request's Host: header to
            # match this virtual host. For the default virtual host (this file) this
            # value is not decisive as it is used as a last resort host regardless.
            # However, you must set it for any further virtual host explicitly.
            ServerName localhost

            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/html

            <Directory /var/www/html>
                Order deny,allow
                Require all granted
            </Directory>
        </VirtualHost>

        # vim: syntax=apache ts=4 sw=4 sts=4 sr noet

    The studiotv.conf file (available and enabled):

        <VirtualHost *:8180>
            ServerName studiotv.service.tebusco.lan
            ServerAdmin [email protected]
            DocumentRoot /var/www/studiotv

            <Directory /var/www/studiotv/>
                Options -Indexes +FollowSymLinks
                AllowOverride None
                Order deny,allow
                Allow from all
                Require all granted
            </Directory>

            # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
            # error, crit, alert, emerg.
            # It is also possible to configure the loglevel for particular
            # modules, e.g.
            #LogLevel info ssl:warn

            # We don't use ${APACHE_LOG_DIR}; /var/log/<host> instead
            ErrorLog /var/log/apache2/studiotv/error.log
            CustomLog /var/log/apache2/studiotv/access.log combined
        </VirtualHost>

        # vim: syntax=apache ts=4 sw=4 sts=4 sr noet

    However, when I hit the browser with http://studiotv.service.tebusco.lan, the default PHP page is shown instead. Question: what am I missing? (apache 2.4.7, nginx 1.6.0, ubuntu server 14.04).
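
    A likely culprit, offered as an assumption (the thread does not confirm it): the two vhosts bind different addresses - 000-default to 127.0.0.1:8180, studiotv to *:8180. Apache selects the vhost set by best address match first and only then compares ServerName within that set; since nginx connects to 127.0.0.1, the 127.0.0.1:8180 set wins and the *:8180 vhost is never consulted. Binding both the same way removes the mismatch:

        <VirtualHost 127.0.0.1:8180>   # in studiotv.conf, matching 000-default.conf
            ServerName studiotv.service.tebusco.lan
            ...
        </VirtualHost>

    Running apache2ctl -S would show how the vhosts are being grouped.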

    Read the article

  • Some SVN Repositories not working - 405

    - by Webnet
    I have 2 groups of repositories, web and engineering. I set up web about 3 months ago and it works great. I'm now trying to move engineering over to this same SVN server, and I'm getting a

        PROPFIND of /svn/engineering/main: 405 Method Not Allowed

    error when I try to do a checkout. I can checkout/commit for /svn/web just fine.

    dav_svn.conf - this is the only thing uncommented in this file:

        <Location /svn/web>
            DAV svn
            SVNParentPath /var/svn-repos/web
            AuthType Basic
            AuthName "SVN Repository"
            AuthUserFile /etc/svn-auth-file
            Require valid-user
        </Location>

        <Location /svn/engineering>
            DAV svn
            SVNParentPath /var/svn-repos/engineering
            AuthType Basic
            AuthName "SVN Repository"
            AuthUserFile /etc/svn-auth-file
            Require valid-user
        </Location>

    /var/svn-repos/:

        drwxrwx--- 3 www-data subversion 4096 2010-06-11 11:57 engineering
        drwxrwx--- 5 www-data subversion 4096 2010-04-07 15:41 web

    /var/svn-repos/web - WORKING:

        drwxrwx--- 7 www-data subversion 4096 2010-04-07 16:50 site1.com
        drwxrwx--- 7 www-data subversion 4096 2010-03-29 16:42 site2.com
        drwxrwx--- 7 www-data subversion 4096 2010-03-31 12:52 site3.com

    /var/svn-repos/engineering - NOT WORKING:

        drwxrwx--- 6 www-data subversion 4096 2010-06-11 11:56 main

    I get to the bottom and now realize that there's a 6 on that last one, not a 7... what does that number mean?
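
    To answer the last question directly: that column in ls -l is the hard-link count, which for a directory is its subdirectory count plus two - so main has one subdirectory fewer than the working repositories. That hints main may not be a complete repository (an assumption; perhaps copied rather than created in place). Worth checking:

        ls /var/svn-repos/engineering/main     # expect conf/ db/ hooks/ locks/ format ...
        svnadmin verify /var/svn-repos/engineering/main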

    Read the article

  • Configure exim in debian 6

    - by blakcaps
    I am trying to configure exim with Gmail on my Debian 6 system as per this tutorial: http://www.manu-j.com/blog/wordpress-exim4-ubuntu-gmail-smtp/75/. After configuring, when I run update-exim4.conf I get this message:

        Exim configuration error:
        two client authenticators (gmail_login and login) have the same public name (LOGIN)
        Invalid new configfile /var/lib/exim4/config.autogenerated.tmp,
        not installing /var/lib/exim4/config.autogenerated.tmp to /var/lib/exim4/config.autogenerated

    Any pointers to solve this?
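
    The error is self-describing, for what it's worth: Exim refuses two client authenticators that both advertise AUTH LOGIN. Presumably (an assumption about how the tutorial was applied) the gmail_login block was pasted in while Debian's stock login client authenticator is still enabled, so commenting out one of the two in /etc/exim4/exim4.conf.template and regenerating should clear it:

        update-exim4.conf && /etc/init.d/exim4 restart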

    Read the article

  • How to fix SCRIPT_NAME with PHP-FPM and Apache's mod_fastcgi?

    - by Kyle MacFarlane
    I have the following in my Apache conf to get PHP-FPM working:

        FastCgiExternalServer /srv/www/fast-cgi-fake-handler -host 127.0.0.1:9000
        AddHandler php-fastcgi .php
        AddType text/html .php
        Action php-fastcgi /var/www/cgi-bin
        Alias /var/www/cgi-bin /srv/www/fast-cgi-fake-handler
        DirectoryIndex index.php

    This works fine except that SCRIPT_NAME is always /var/www/cgi-bin, and some scripts use SCRIPT_NAME to work out the location of the current script (vBulletin). Google has plenty of solutions for Nginx but not a word for Apache.
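
    A sketch of one circulating workaround, not an official mod_fastcgi fix (assumption: with the Action indirection above, the originally requested script arrives in PATH_INFO): repair the variable in PHP before anything reads it, e.g. via an auto_prepend_file.

        <?php
        // fix-script-name.php - loaded through auto_prepend_file
        if (!empty($_SERVER['PATH_INFO'])) {
            $_SERVER['SCRIPT_NAME'] = $_SERVER['PATH_INFO'];
        }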

    Read the article

  • 404 not found error for virtual host

    - by qubit
    Hello. In my /etc/apache2/sites-enabled, I have a file site2.com.conf which defines a virtual host as follows:

        <VirtualHost *:80>
            ServerAdmin hostmaster@wharfage
            ServerName site2.com
            ServerAlias www.site2.com site2.com
            DirectoryIndex index.html index.htm index.php
            DocumentRoot /var/www
            LogLevel debug
            ErrorLog /var/log/apache2/site2_error.log
            CustomLog /var/log/apache2/site2_access.log combined
            ServerSignature Off

            <Location />
                Options -Indexes
            </Location>

            Alias /favicon.ico /srv/site2/static/favicon.ico
            Alias /static /srv/site2/static
            # Alias /media /usr/local/lib/python2.5/site-packages/django/contrib/admin/media
            Alias /admin/media /var/lib/python-support/python2.5/django/contrib/admin/media

            WSGIScriptAlias / /srv/site2/wsgi/django.wsgi
            WSGIDaemonProcess site2 user=samj group=samj processes=1 threads=10
            WSGIProcessGroup site2
        </VirtualHost>

    I do the following to enable the site:

        1. In /etc/apache2/sites-enabled, I run the command a2ensite site2.com.conf
        2. I then get a message that the site was successfully enabled, and then I run the
           command /etc/init.d/apache2 reload.

    But if I navigate to www.site2.com, I get 404 Not Found. I do have an index.html in /var/www (permissions 777, ownership www-data:www-data), and I have also verified that a symlink was created for site2.com.conf in /etc/apache2/sites-enabled. Any way to fix this? Thank you.
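
    One detail that may explain it, offered as an assumption: WSGIScriptAlias / hands every request to the Django application, so /var/www/index.html is never consulted and the 404 is most likely Django's (no matching URL pattern), not Apache's. The vhost's error log would confirm which side is answering:

        tail -f /var/log/apache2/site2_error.log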

    Read the article

  • Can't get php+sqlite working

    - by facha
    Hi everyone, I've been struggling all morning to make PHP work with an SQLite database. Here is the PHP code that I'm trying to execute:

        # less /var/www/html/test.php
        <?php
        $db = new PDO("sqlite:/var/www/test.sql");
        $sql = "insert into test (login,pass) values ('login','pass');";
        $db->exec($sql);
        ?>

    Here is how I've done my tests:

        # sqlite3 /var/www/test.sql
        sqlite> create table test (login varchar,pass varchar);

        # chown apache:apache /var/www/test.sql
        # chmod 644 /var/www/test.sql

    Here is the part that drives me mad: when I execute the script from the command line (# php test.php), everything goes well - the SQL is executed and I can see a new row appear in the database. When I execute the same script from a browser, the SQL is not executed and I don't get a new row in the database. There are no errors in the Apache log file. Please, help.
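
    Two standard suspects here, stated as assumptions: PDO's default error mode is silent, and SQLite needs write access to the directory containing the database (for its journal file), not just to the file itself - which matters because the browser path runs as apache while the CLI test ran as root. Surfacing the real error first:

        $db = new PDO("sqlite:/var/www/test.sql");
        $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    If it is the directory, making it apache-writable (or moving the database into a dedicated apache-owned directory) would confirm it; SELinux is a third possibility on RHEL-family systems.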

    Read the article

  • returning null vs returning zero, which would be better?

    - by Dark Star1
    I inherited a project that I am managing and having to maintain, pending the redevelopment of the code base. At the moment I am tasked with adding little features all over the place, and I have gotten into the habit of returning null instead of zero in the parts of the code where I am working. The problem is that we have a client using this code, and the parts of the code that require data from my implemented features receive a null and dump the stack trace in the UI. I would like to avoid this entirely on my end, but without the NullPointerExceptions there's the potential for errors to be introduced into the client's data which may go unnoticed. Usually I would come up with my own error notification system, but I have never inherited a project before, so I am unsure whether to continue down this path. I still believe that the stack dump is preferable to unnoticed data corruption/inaccuracies.
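
    A middle ground exists between the two failure modes, sketched here in Java since the NullPointerException wording suggests it (the class and field names are invented for illustration):

        import java.util.Optional;

        class FeatureTotals {
            private final Integer raw; // may be null while the new feature has no data

            FeatureTotals(Integer raw) { this.raw = raw; }

            // callers must consciously decide what "no value" means for them,
            // instead of getting an NPE or a silently wrong zero
            Optional<Integer> total() {
                return Optional.ofNullable(raw);
            }
        }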

    Read the article

  • Unable to view 2 local sites over network

    - by gentrobot
    I have 2 websites running on my local machine that I'd like to view from other machines on the same network.

    /etc/apache2/sites-available/site1.com:

        <VirtualHost *:80>
            ServerName site1.com
            DocumentRoot /var/www/answers/app/webroot
            DirectoryIndex index.php
            <Directory "/var/www/answers/app/webroot">
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    /etc/apache2/sites-available/site2.com:

        <VirtualHost *:80>
            ServerName site2.com
            DocumentRoot /var/www/answers2/app/webroot
            DirectoryIndex index.php
            <Directory "/var/www/answers2/app/webroot">
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    I have added 2 entries to /etc/hosts:

        127.0.0.1 site1.com
        127.0.0.1 site2.com

    Now, when I point the browser on my machine to site1.com, it shows me the first site, and pointing the browser to site2.com shows me the second site. However, when I type my machine's local IP into the browser, it always shows site2. How can I change it to switch between site1 and site2? Is there a way that I can view both sites from another machine (especially mobile devices over the wireless network)?
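
    Name-based vhosts are chosen by the Host header, which is why a bare IP falls through to one of them and why other machines only see the right site if the names resolve for them too. The usual fix is a hosts entry on each client (the LAN IP below is a placeholder):

        192.168.1.50 site1.com
        192.168.1.50 site2.com

    For mobile devices without an editable hosts file, a wildcard DNS helper such as nip.io (e.g. site1.192.168.1.50.nip.io) plus a matching ServerAlias is a common workaround.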

    Read the article
