Search Results

Search found 137428 results on 5498 pages for 'using statement'.


  • pam_unix(sshd:session) session opened for user NOT ROOT by (uid=0), then closes immediately when using TortoiseSVN

    - by codewaggle
    I'm having problems accessing an SVN repository using TortoiseSVN 1.7.8. The SVN repository is on a CentOS 6.3 box and appears to be functioning correctly:

        # svnadmin --version
        svnadmin, version 1.6.11 (r934486)

    I can access the repository from another CentOS box with this command:

        svn list svn+ssh://[email protected]/var/svn/joetest

    But when I attempt to browse the repository using TortoiseSVN from a Win 7 workstation, I'm unable to do so using the following path:

        svn+ssh://[email protected]/var/svn/joetest

    I'm able to log in via SSH from the workstation using PuTTY, and the results are the same if I attempt access as root. I've given ownership of the repository to USER:USER and run chmod 2700 -R /var/svn/. Because I can access the repository via SSH from another Linux box, permissions don't appear to be the problem. When I watch the log file using tail -fn 2000 /var/log/secure, I see the following each time TortoiseSVN asks for the password:

        Sep 26 17:34:31 dev sshd[30361]: Accepted password for USER from xx.xxx.xx.xxx port 59101 ssh2
        Sep 26 17:34:31 dev sshd[30361]: pam_unix(sshd:session): session opened for user USER by (uid=0)
        Sep 26 17:34:31 dev sshd[30361]: pam_unix(sshd:session): session closed for user USER

    So I'm actually able to log in, but the session is then closed immediately. It caught my eye that the session is being opened for USER by root (uid=0), which may be correct, but I'll mention it in case it has something to do with the problem. I looked into modifying svnserve.conf, but as far as I can tell it's not used when accessing the repository via svn+ssh; a private svnserve instance is created for each login via this method. From the manual:

        There's still a third way to invoke svnserve, and that's in "tunnel mode", with the -t option. This mode assumes that a remote-service program such as RSH or SSH has successfully authenticated a user and is now invoking a private svnserve process as that user. The svnserve program behaves normally (communicating via stdin and stdout), and assumes that the traffic is being automatically redirected over some sort of tunnel back to the client. When svnserve is invoked by a tunnel agent like this, be sure that the authenticated user has full read and write access to the repository database files. (See Servers and Permissions: A Word of Warning.) It's essentially the same as a local user accessing the repository via file:/// URLs.

    The only non-default settings in sshd_config are:

        Protocol 2 # to disable Protocol 1
        SyslogFacility AUTHPRIV
        ChallengeResponseAuthentication no
        GSSAPIAuthentication yes
        GSSAPICleanupCredentials yes
        UsePAM yes
        AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
        AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
        AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
        AcceptEnv XMODIFIERS
        X11Forwarding no
        Subsystem sftp /usr/libexec/openssh/sftp-server

    Any thoughts?
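    A quick way to narrow this down is to run by hand the same tunnel command an svn+ssh client runs, since an SSH session that opens and then closes immediately often means the remote command failed. A minimal sketch from the workstation using PuTTY's plink (the host name is a placeholder for the redacted address above):

        plink.exe USER@dev.example.com svnserve -t

    If that prints something like "svnserve: command not found", svnserve isn't on the PATH for non-interactive SSH sessions, which would produce exactly the open-then-close pattern in the log.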

  • Having trouble redirecting frevvo using mod_proxy

    - by user38859
    This question is similar to this: http://serverfault.com/questions/102868/how-to-access-webservers-running-on-ports-blocked-on-companys-network Basically, I'm using Confluence and a plugin called frevvo. Confluence sits on port 8080 while frevvo sits on port 8082. I want to front both of them on port 80 via the Apache HTTP web server so that they don't get blocked by company proxies. I've been using the document on Atlassian that shows how to run Confluence behind Apache (I can't post a second URL due to being a newbie here) and have successfully proxied Confluence from port 8080 to port 80, so I can now access Confluence using www.example.com/confluence. I then tried doing the same thing for frevvo with the following configuration in Apache httpd:

        ProxyRequests Off
        ProxyPreserveHost On
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass /confluence http://localhost:8080/confluence
        ProxyPassReverse /confluence http://localhost:8080/confluence
        <Location /confluence>
            Order allow,deny
            Allow from all
        </Location>
        ProxyPass /frevvo http://localhost:8082/
        ProxyPassReverse /frevvo http://localhost:8082/
        <Location /forms>
            Order allow,deny
            Allow from all
        </Location>

    And in server.xml for the frevvo Tomcat instance, I added the following within the <Host> tag:

        <Context path=" " docBase="" debug="0" reloadable="false">
            <!-- Logger is deprecated in Tomcat 5.5. Logging configuration for Confluence is
                 specified in confluence/WEB-INF/classes/log4j.properties -->
            <Manager pathname="" />
        </Context>

    When accessed directly through the browser at http://localhost:8082, frevvo usually redirects to http://localhost:8082/frevvo/web. With the above configuration, accessing www.example.com.au/frevvo redirects to www.example.com/frevvo/web/static/login, which doesn't work. I hope the above details are clear, and I'd appreciate anyone who could give us some insight.
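    One thing that stands out is that the <Location> block guards /forms while the proxy mounts /frevvo, and the backend is proxied at /, so its redirects lose the /frevvo prefix mapping. A sketch of a more consistent arrangement, assuming the Tomcat context can be mounted at /frevvo (path="/frevvo" in server.xml) so the application generates URLs under the same prefix Apache exposes:

        ProxyPass /frevvo http://localhost:8082/frevvo
        ProxyPassReverse /frevvo http://localhost:8082/frevvo
        <Location /frevvo>
            Order allow,deny
            Allow from all
        </Location>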

  • Using %v in Apache LogFormat definition matches ServerName instead of specific vhost requested

    - by Graeme Donaldson
    We have an application which uses a DNS wildcard, i.e. *.app.example.com. We're using Apache 2.2 on Ubuntu Hardy. The relevant parts of the Apache config are as follows. In /etc/apache2/httpd.conf:

        LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog

    In /etc/apache2/sites-enabled/app.example.com:

        ServerName app.example.com
        ServerAlias *.app.example.com
        ...
        CustomLog "|/usr/sbin/vlogger -s access.log /var/log/apache2/vlogger" vlog

    Clients access this application using their own URL, e.g. company1.app.example.com, company2.app.example.com, etc. Previously, the %v in the LogFormat directive would match the hostname of the client request, and we'd get several subdirectories under /var/log/apache2/vlogger corresponding to the various client URLs in use. Now, %v appears to be matching the ServerName value, so we only get one log under /var/log/apache2/vlogger/app.example.com. This breaks our logfile analysis because the log file has no indication of which client the log relates to. I can fix this easily by changing the LogFormat to this:

        LogFormat "%{Host}i %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog

    This will use the HTTP Host: header to tell vlogger which subdirectory to create the logs in and everything will be fine. The only concern I have is that this has worked in the past and I can't find any indication that this has changed recently. Is anyone else using a similar config, i.e. wildcard + vlogger and using %v? Is it working fine?
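    For reference, Apache 2.2 distinguishes %v, the canonical ServerName of the serving vhost, from %V, the server name according to the UseCanonicalName setting. So an alternative to the %{Host}i change, assuming UseCanonicalName is Off (the Apache 2.2 default), is:

        LogFormat "%V %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog

    With UseCanonicalName Off, %V reflects the hostname the client supplied, which is the old per-client behaviour described above.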

  • ip-up does not trigger when using built-in cisco vpn on mac osx lion

    - by Yasser Sobhdel
    I am using the Cisco VPN client on Lion and I want to make the ip-up and ip-down scripts work. There is no sign of any action being taken when I connect or disconnect this VPN connection, and I really doubt whether the syntax has changed or even whether this kind of connection triggers ip-up at all. Logically it should be set up over ppp, but when using the following guides and the code in them, there is no sign of any output in the log file:

        http://www.macfreek.nl/mindmaster/Modify_PPTP_Routing_Table
        http://www.aidanfindlater.com/use-vpn-for-specific-sites-on-mac-os-x

    Looking for errors (there is no sign of any) using the following page, I couldn't find the /var/log/ppp/vpnd.log log file:

        http://hints.macworld.com/article.php?story=20060616150640529

    The scripts have also been given full permissions, 0755 or a+x or even 777, using the following command:

        sudo chmod a+x /etc/ppp/ip-up

    Any clue on how to debug this would be appreciated. I am totally confused: netstat -rn -f inet doesn't show the routes, and even when the routes are added manually, closing the VPN connection does not run ip-down, so the routes must be deleted manually.
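    One way to check whether the hook fires at all is a minimal /etc/ppp/ip-up that does nothing but log its invocation; a sketch (with the caveat, worth verifying, that the built-in Cisco IPSec client may not go through pppd at all, in which case no /etc/ppp/ script will ever run for it):

        #!/bin/sh
        # Append every invocation so we can tell whether pppd runs this hook.
        echo "$(date): ip-up called with: $*" >> /tmp/ip-up.log

    If the log stays empty for the Cisco connection but fills for a PPTP or L2TP one, the problem is the connection type rather than the script.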

  • Using the RST3 plugin in the Leo Outliner

    - by T-Boy
    I'm currently trying out the Leo Outliner, and I've heard quite a bit about the RST3 plugin that it has. I'm not planning to use Leo to program just yet -- at this point I'm wondering if it might be useful for generating HTML and PDF documents, as I'm quite enamored with RST and how it works. I'm using my Ubuntu Netbook Remix netbook (running 9.10, I believe). I think I've got it down pat, more or less -- I've installed docutils using the Synaptic Package Manager, and I think I've got SilverCity installed, as per the requirements -- I've downloaded the archive and then run "sudo python setup.py install" with no errors. The thing is, I'm not exactly sure how to invoke the rst3 plugin itself. It doesn't appear in the Plugins menu for Leo right now, and the documentation I've managed to find doesn't clearly explain how to use the plugin. Has anyone had any experience using the rst3 plugin in Leo? It's a little confusing right now, and searches on Google don't seem to be helping any more. I'm using the latest 4.7.1 final version of Leo from the Synaptic Package Manager (I was informed that this would offer the best integration with UNR, so I figured, what the heck). Have I missed any steps here, and are there any useful tutorials on how to use the rst3 plugin?
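    For what it's worth, Leo plugins are generally enabled by listing them in the @enabled-plugins node of leoSettings.leo (or, better, myLeoSettings.leo, which survives upgrades); a sketch, assuming the plugin file is named rst3.py:

        @enabled-plugins
            rst3.py

    After restarting Leo, an enabled plugin should show up in the Plugins menu, which makes that menu a quick test of whether the setting took effect.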

  • Managing Many External Hosts Using EC2 and Route 53

    - by futureal
    Looking for a "best practice" answer to managing externally-addressable hosts using the combination of Amazon EC2 and Amazon Route 53, without using Elastic IPs for each host. In my scenario I will have 30+ hosts that need to be accessible from outside EC2, so directly using internal DNS will not work. In the past, I have addressed hosts by assigning an Elastic IP to that host (let's say, 55.55.55.55) and then creating an associated A record. For example, let's say I want to create "ec2-corp01.mydomain.com". I might do:

        ec2-corp01.mydomain.com. A 55.55.55.55 300

    Then on that EC2 instance, I would assign the Elastic IP of 55.55.55.55, and everything works fine. Of course, to make this work, I need to have one Elastic IP per instance, which is something I'd like to avoid if possible; I'd like the infrastructure to be more dynamic. So my thought is to try something like:

        1. Create a script that queries the internal EC2 tools to determine an instance's private hostname
        2. On instance boot, call that script to determine its hostname, and then use the command-line Route 53 interface to find and update that hostname to its current internal hostname
        3. Since the host will have a relatively low TTL (let's say 300 as above, or 5 minutes), it should take effect pretty quickly

    Is this a good idea? Is there a better or more widely accepted way to handle it? If it IS a good idea, what type of record should I be creating? A CNAME that points to the internal host, like ec2-55-55-55-55.compute-1.amazonaws.com? Is an A record better or worse? Thanks!
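    A CNAME to the instance's public DNS name is the commonly recommended shape here, because that name resolves to the private IP from inside EC2 and to the public IP from outside, which a static A record can't do. As a sketch of the boot-time update using the modern AWS CLI (zone ID and names are placeholders, and the tooling available when this was asked differed):

        aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE --change-batch '{
            "Changes": [{"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "ec2-corp01.mydomain.com.", "Type": "CNAME", "TTL": 300,
                "ResourceRecords": [{"Value": "ec2-55-55-55-55.compute-1.amazonaws.com"}]}}]}'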

  • Interpreting and using the Asterisk "timing test" command

    - by zigg
    Timing is very important for certain kinds of applications in Asterisk. If DAHDI is the timing source, the dahdi_test command can be used to check the timing provided by the DAHDI kernel module. If dahdi_test returns exclusively measurements above 99.975%, the DAHDI timing source is generally considered good. Since Asterisk 1.6, new timing sources have become available, such as pthread and timerfd. The accuracy of these timing sources seems to be measurable with the Asterisk CLI timing test command:

        localhost*CLI> timing test
        Attempting to test a timer with 50 ticks per second.
        Using the 'timerfd' timing module for this test.
        It has been 1000 milliseconds, and we got 50 timer ticks

    My concern is that timing 50 ticks seems to be a considerably less stressful test than dahdi_test's 8192 samples in 8000 ms, particularly since just about every system I've tried it on, virtual or otherwise, can handle it. I can ask timing test to ramp it up to what I think are dahdi_test's standards:

        localhost*CLI> timing test 1024
        Attempting to test a timer with 1024 ticks per second.
        Using the 'timerfd' timing module for this test.
        It has been 1000 milliseconds, and we got 1024 timer ticks

    This will indeed break down a bit depending on the system I'm using, usually with a decrease in timer ticks. But I'm not sure whether it is useful to stress it to this level. Is there authoritative guidance on using and interpreting the timing test command to ensure that a given Asterisk system has a timing source that will work well?
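    For comparing rates without retyping at the CLI, asterisk -rx hands a single command to the running daemon, so a quick sweep can be scripted; a sketch:

        for rate in 50 512 1024; do
            asterisk -rx "timing test $rate"
        done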

  • Git push over HTTP (using git-http-backend) through Apache is not working

    - by Ole_Brun
    I have desperately been trying to get push working for git through the "smart HTTP" mode using git-http-backend. However, after many hours of testing and troubleshooting, I am still left with:

        error: Cannot access URL http://localhost/git/hello.git/, return code 22
        fatal: git-http-push failed

    I am using the latest versions of Ubuntu (12.04), Apache2 (2.2.22) and Git (1.7.9.5) and have followed different tutorials found on the Internet, like this one: http://www.parallelsymmetry.com/howto/git.jsp. My vhost file currently looks like this:

        <VirtualHost *:80>
            SetEnv GIT_PROJECT_ROOT /var/www/git
            SetEnv GIT_HTTP_EXPORT_ALL
            SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER
            DocumentRoot /var/www/git
            ScriptAliasMatch \
                "(?x)^/(.*?)\.git/(HEAD | \
                info/refs | \
                objects/info/[^/]+ | \
                git-(upload|receive)-pack)$" \
                /usr/lib/git-core/git-http-backend/$1/$2
            <Directory /var/www/git>
                Options +ExecCGI +SymLinksIfOwnerMatch -MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    I have changed the ownership of the /var/www/git folder to root.www-data, and for my test repositories I have enabled anonymous push by doing git config http.receivepack true. I have also tried with authenticated users but with the same outcome. The repositories were created using:

        sudo git init --bare --shared [repo-name]

    Looking at the Apache2 access.log, it appears to me that WebDAV is being used and that git-http-backend is never fired:

        127.0.0.1 - - [20/May/2012:23:04:53 +0200] "GET /git/hello.git/info/refs?service=git-receive-pack HTTP/1.1" 200 207 "-" "git/1.7.9.5"
        127.0.0.1 - - [20/May/2012:23:04:53 +0200] "GET /git/hello.git/HEAD HTTP/1.1" 200 232 "-" "git/1.7.9.5"
        127.0.0.1 - - [20/May/2012:23:04:53 +0200] "PROPFIND /git/hello.git/ HTTP/1.1" 405 563 "-" "git/1.7.9.5"

    What am I doing wrong? Is it an issue with the versions of git and/or Apache that I am using, perhaps? BTW: I have read all the git-over-HTTP related questions on Server Fault and Stack Overflow, and none of them provided me with a solution, so please don't mark this as a duplicate.
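    The GET of info/refs followed by a PROPFIND is the signature of the client falling back to the dumb/DAV protocol, which happens when the request was served as a static file instead of by git-http-backend. A simpler configuration to test with, adapted from the git-http-backend documentation (same paths as above; it takes the guesswork out of the ScriptAliasMatch regex):

        SetEnv GIT_PROJECT_ROOT /var/www/git
        SetEnv GIT_HTTP_EXPORT_ALL
        ScriptAlias /git/ /usr/lib/git-core/git-http-backend/

    If pushing works with the plain ScriptAlias, the regex (or the DocumentRoot shadowing it) is the culprit rather than the git or Apache versions.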

  • gpg symmetric encryption using pipes

    - by Thomas
    I'm trying to generate keys to lock my drive (using dm-crypt with LUKS) by pulling data from /dev/random and then encrypting it with GPG. The guide I'm using suggests the following command:

        dd if=/dev/random count=1 | gpg --symmetric -a > ./[drive]_key.gpg

    If you do it without a pipe and feed it a file, it pops up a curses prompt for you to type in a password. However, when I pipe in the data, it repeats the following message four times and sits there frozen:

        pinentry-curses: no LC_CTYPE known assuming UTF-8

    It also says:

        can't connect to '/root/.gnupg/S.gpg-agent': File or directory doesn't exist

    However, I am assuming that this has nothing to do with it, since it shows up even when the input is from a file. So I guess my question boils down to this: is there a way to force gpg to accept the passphrase from the command line, or in some other way get this to work, or will I have to write the data from /dev/random to a temporary file and then encrypt that file? (Which as far as I know should be all right, because I'm doing this on the LiveCD and haven't yet created the swap, so there should be no way for it to be written to disk.)
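    The underlying problem is that the pipe takes over stdin, so pinentry has no terminal to prompt on. gpg can read the passphrase from a separate file descriptor instead; a sketch, assuming the passphrase sits in a file (with the obvious trade-off that it then exists on the LiveCD's filesystem):

        dd if=/dev/random count=1 | gpg --symmetric -a --batch --passphrase-fd 3 3< /path/to/passfile > ./[drive]_key.gpg

    Newer gpg2 versions may additionally need --pinentry-mode loopback for --passphrase-fd to be honoured.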

  • debugging connection to mysql from python script using MySQLdb

    - by timpone
    I am a Python newbie and have a Python 2.5 script that is using MySQLdb to connect on OS X 10.5.8. I haven't been able to successfully connect to the database of interest with this. However, I am able to connect using PHP's mysqli and also via the mysql CLI interface. I get the error:

        File "build/bdist.macosx-10.5-i386/egg/MySQLdb/connections.py", line 188, in __init__
        _mysql_exceptions.OperationalError: (1045, "Access denied for user 'arc_development'@'localhost' (using password: YES)")

    On my Linux box, which has the same MySQL perms, the script works fine logging in. On my OS X laptop, I am able to create a database named test_python, which bypasses MySQL's authentication scheme. This makes me think that issues like 32-bit/64-bit incompatibilities aren't occurring. If I turn on the query log, I get access denied:

        100610 20:56:55 4 Connect Access denied for user 'arc_development'@'localhost' (using password: YES)

    I'm a little at a loss as to what to do next. Is there any way I can specify in the general log or binary log the actual password set on the connection string? How about writing out the value from the connections.py file (although I'm not sure how I'd do that)? Thanks
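    One variable worth eliminating is the socket-versus-TCP distinction: to MySQL, host 'localhost' means the Unix socket while '127.0.0.1' forces TCP, and PHP and the CLI may be connecting through a different path (or even a different server instance) than MySQLdb. A sketch with every parameter spelled out (credentials are placeholders):

        import MySQLdb

        # Force TCP; compare against host='localhost' (Unix socket) to see
        # whether the two paths hit different grant entries.
        conn = MySQLdb.connect(host='127.0.0.1', port=3306,
                               user='arc_development', passwd='secret',
                               db='test_python')
        print conn.get_server_info()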

  • Changing a set-cookie header using mod_rewrite/mod_proxy

    - by olrehm
    I have a bunch of CGI scripts which are served using HTTPS. They can only be reached on the intranet, not from the outside. They set a cookie with the attribute 'Secure', so that it can only be sent via HTTPS. There is also a reverse proxy to one of these scripts, unfortunately using plain HTTP. When a response comes in from my CGI script with a secure cookie, it is not passed on via HTTP (after all, that is what the attribute is for). I need, however, an exception to this rule. Is it possible to use mod_rewrite/mod_proxy or something similar to change the Set-Cookie header in the response coming from my CGI script and remove the Secure attribute, so that the cookie can be passed back to the user over the unsafe HTTP connection? I understand that this defeats the purpose of Secure in the first place, but I need this as a temporary workaround. I have searched the web and found how to add a Set-Cookie header using mod_rewrite, and I have also found how to retrieve the value of a cookie coming from the client in a Cookie header. What I have not yet found is how to modify the Set-Cookie header received in the response of a script I am proxying for. Is that possible? How would I do that? Ole
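    Rewriting response headers is mod_headers territory rather than mod_rewrite: Header edit (available from Apache 2.2.4) applies a regex substitution to a response header on its way back through the proxy. A sketch that strips the Secure attribute (the pattern is an assumption about how the attribute appears in the header):

        Header edit Set-Cookie "(?i);\s*Secure" ""

    Placed inside the <Location> or <VirtualHost> doing the plain-HTTP proxying, it only affects that one path, which keeps the exception narrow.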

  • Secure data from a server to a workstation using jumper hosts

    - by apalsson
    Hello. I have a WWW server, and my problem is that the content is sensitive and should not be accessible to people without proper credentials. How can I improve the ease of use but still maintain security in the following scenario? The server is accessed through a "jumper host": the client connects to the jumper using a VPN connection and uses Remote Desktop to access the jumper. From the jumper he uses Remote Desktop again to access the server. Finally, on the server, the user can access content using a WWW browser. All the way from the VPN client to the WWW browser, authentication using a SmartCard token is required. This seems quite secure to me. Content only gets mirrored on the Remote Desktop session between server and jumper, so there are no cached files to worry about. The connection between jumper and client is protected using VPN (SSL), so no eavesdropping. But it is quite cumbersome for the clients, with many steps and connections to open. :( So, how can I improve the user experience accessing my server without compromising security? Thanks.

  • Separate external and intranet portals using the same functions via .htaccess

    - by jezzipin
    We are currently struggling with setting up rules for a .htaccess file for a website built on our company product. The product is built using PL/SQL, and procedures can be accessed using URLs. We use this functionality to present different options to our users. These options can be injected into HTML pages using replacement tags. So, the tag [user_menu] is always replaced with:

        /wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    for external sites and:

        /intranet/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    for internal sites. The issue we are having is twofold. We need to write our .htaccess rules so that the user can access the functionality whether they are internal or external. So, the links should work as follows:

        http://www.example.com/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}
        http://www.example.com/internal/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    This is the other problem: as you can see for the internal link above, the procedure needs to be prefixed with internal instead of intranet. We cannot change this in our standard tags, as that would affect other sites, so we need to achieve this using .htaccess as well. Could anyone assist with this issue? I apologise if this is brief or confusing, but it's something I've never done before and have been given the task of doing. I also apologise for the lack of code above; I am a front-end developer with no prior experience of .htaccess, so please bear with me.
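    A sketch of the internal-to-intranet mapping in .htaccess (the pattern is an assumption based on the URL layout described above; the query string is passed through automatically):

        RewriteEngine On
        # Serve /internal/... requests from the real /intranet/... procedure path.
        RewriteRule ^internal/(.*)$ /intranet/$1 [L]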

  • Diagnosing PC crashes (most often while using shared folders or torrents)

    - by Dyppl
    For the last few weeks my PC (a pretty old P4 with WinXP SP3) has been crashing randomly: it just suddenly reboots instantaneously. It feels hardware-related, but I haven't been able to determine whether it's software or hardware that causes it. I did notice a pattern, though: it's more likely to crash when I copy a lot of files over the network or have uTorrent running, but sometimes it crashes when I am not doing anything with it. Copying files from it over the network causes it to crash in 1 to 10 minutes almost every time. Using torrents causes it to crash every 1-3 hours. With neither of those, it crashes every 24 hours or so. I have ruled out the following probable causes:

        - PSU (I bought a new one and turned off most of the drives, so the power is sufficient 100%)
        - Bad HDD or interface cable on my SATA disk from which I was originally copying the data over the network (bought a new SATA cable and later yanked out the HDD completely; the PC still crashes without it)
        - Video adapter (the AGP slot is now empty; I'm using the onboard VGA at the moment)
        - Network adapter (removed it from PCI; using the onboard LAN)
        - CPU (I think: I changed the thermal paste and its temperature is below 50C)
        - RAM (I think: I ran Memtest86 and it didn't show any errors)

    At the moment I only have one system HDD, the DVD drives, a mouse and a keyboard plugged in. The fact that it crashes most often when I use the network extensively makes me think that maybe it's software-related (I removed the network adapter from PCI and am now using an onboard one, so network hardware is unlikely to cause problems). I am now pondering a system reinstall, but it's not a pleasant solution, so I decided to ask whether there are better ideas first. If someone can share a good diagnostic tool, that would be great, because I didn't find anything good. Thanks in advance; I hope that "help to diagnose" questions aren't entirely banned here. EDIT: The motherboard is actually ~4 years old, as I replaced it back in 2007.

  • "The name 'WithTable' does not exist in the current context" using Fluent NHibernate

    - by Byron Sommardahl
    This might be a really easy problem to fix, but the solution is eluding me! I'm using Fluent NHibernate 1.0 RTM (and the NHibernate binaries that ship with it). I'm trying to map my entities and cannot use the WithTable() method. It's not available in IntelliSense, and VS doesn't suggest any namespaces to reference. Here's my code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using GeoCodeThe.Net.Domain.Entities;
        using FluentNHibernate.Mapping;

        namespace GeoCodeThe.Net.Domain.Mappings
        {
            class CategoryMap : ClassMap<ICategory>
            {
                public CategoryMap()
                {
                    WithTable("Categories"); // <----- Compile error: The name 'WithTable' does not exist in the current context
                    Id(x => x.Id);
                    Map(x => x.Name);
                    Map(x => x.Tags);
                }
            }
        }

    My bin folder has: Antlr3.Runtime.dll, Castle.Core.dll, Castle.DynamicProxy2.dll, FluentNHibernate.dll, Iesi.Collections.dll, log4net.dll, NHibernate.ByteCode.Castle.dll, NHibernate.dll, FluentNHibernate.pdb, Castle.Core.xml, Castle.DynamicProxy2.xml, FluentNHibernate.xml, Iesi.Collections.xml, log4net.xml, NHibernate.ByteCode.Castle.xml, NHibernate.xml. Any clue what I'm missing? Let me know if you need any more clarification.
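    If memory serves, the 1.0 RTM release dropped the With* prefixes from the fluent API, so the table mapping method is now Table() rather than WithTable(); a sketch of the constructor under that assumption:

        public CategoryMap()
        {
            Table("Categories"); // WithTable() was renamed to Table() in the 1.0 RTM fluent API
            Id(x => x.Id);
            Map(x => x.Name);
            Map(x => x.Tags);
        }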

  • Slow MySQL query using filesort despite indexes

    - by Canadaka
    I have a query on my homepage that is getting slower and slower as my database table grows larger. Table name: tweets_cache; rows: 572,327. This is the query I'm currently using; it is slow, over 5 seconds:

        SELECT * FROM tweets_cache t WHERE t.province='' AND t.mp='0' ORDER BY t.published DESC LIMIT 50;

    If I take out either the WHERE or the ORDER BY, then the query is super fast, 0.016 seconds. I have the following indexes on the tweets_cache table: PRIMARY, published, mp, category, province, author. So I'm not sure why it's not using the indexes, since mp, province and published all have indexes. A profile of the query shows that it's not using an index to sort and is using filesort, which is really slow:

        possible_keys = mp,province
        Extra = Using where; Using filesort

    I tried adding a new multi-column index on "province & mp". EXPLAIN shows this new index listed under "possible_keys" and "key", but the query time is unchanged, still over 5 seconds. Here is a screenshot of the profiler info on the query: http://i355.photobucket.com/albums/r469/canadaka_bucket/slow_query_profile.png Something weird: I made a dump of my database to test on my local desktop so I don't screw up the live site. The same query on my local machine runs super fast, in milliseconds. So I copied all the MySQL startup variables from the server to my local machine to make sure there wasn't some setting that might be causing this. But even after that, the local query runs super fast while the one on the live server takes over 5 seconds. My database server is only using around 800MB of the 4GB it has available. Here are the related my.ini settings I'm using:

        default-storage-engine = MYISAM
        max_connections = 800
        skip-locking
        key_buffer = 512M
        max_allowed_packet = 1M
        table_cache = 512
        sort_buffer_size = 4M
        read_buffer_size = 4M
        read_rnd_buffer_size = 16M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 8
        query_cache_size = 128M
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 8
        # Disable Federated by default
        skip-federated
        key_buffer = 512M
        sort_buffer_size = 256M
        read_buffer = 2M
        write_buffer = 2M
        key_buffer = 512M
        sort_buffer_size = 256M
        read_buffer = 2M
        write_buffer = 2M
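    MySQL can satisfy both the WHERE and the ORDER BY from one index only when the equality columns come first and the sort column comes last, which neither the single-column indexes nor a (province, mp) index provides. A sketch of the index that matches this query shape (the name is arbitrary):

        ALTER TABLE tweets_cache
            ADD INDEX idx_province_mp_published (province, mp, published);

    With that in place, EXPLAIN should stop reporting "Using filesort" for this query, since rows matching the two equality conditions are already stored in published order within the index.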

  • Why does this SQL statement keep saying it is a boolean and not a parameter? (PHP/MySQL)

    - by ggfan
    In this statement, I am trying to see if the latest posting in the database has the exact same title, price, city, state and detail. If there is such a post, it tells the user that the exact post has already been made; if not, it inserts the posting into the database. (This is one type of check so that users can't accidentally post twice. It may not be the best check, but this statement error is annoying me, so I want it to work :)) Why won't this SQL work? I think it's not letting me set title=$title and not getting the value in $title...

        ERROR: mysqli_num_rows() expects parameter 1 to be mysqli_result, boolean given in postad.php on line 365

        //there is a form that users fill out that has title, price, city, etc
        <form> blah blah </form>

        //if users click submit, then it does all the checks and, if all okay, inserts into dbc
        if (isset($_POST['submit'])) {
            // Grab the posting data from the POST and get rid of any funny stuff
            $title = mysqli_real_escape_string($dbc, trim($_POST['title']));
            $price = mysqli_real_escape_string($dbc, trim($_POST['price']));
            $city = mysqli_real_escape_string($dbc, trim($_POST['city']));
            $state = mysqli_real_escape_string($dbc, trim($_POST['state']));
            $detail = mysqli_real_escape_string($dbc, trim($_POST['detail']));
            if (!is_numeric($price) && !empty($price)) {
                echo "<p class='error'>The price can only be numbers. No special characters, etc</p>";
            }
            //Error problem...won't let me set title=$title, detail=$detail, etc.
            //this statement comes after all the checks, so that none of the variables are empty
            $query = "SELECT * FROM posting WHERE user_id={$_SESSION['user_id']} AND title=$title AND price=$price AND city=$city AND state=$state AND detail=$detail";
            $data = mysqli_query($dbc, $query);
            if (mysqli_num_rows($data) == 1) {
                echo "You already posted this ad. Most likely caused by refreshing too many times.";
            }
        }
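    mysqli_query() returns false (a boolean) whenever the statement fails to parse, and that false is what reaches mysqli_num_rows(). Here the string values are interpolated without quotes, so MySQL reads them as column names. A sketch of the quoted query plus an error check to surface the real message:

        $query = "SELECT * FROM posting WHERE user_id={$_SESSION['user_id']}
                  AND title='$title' AND price='$price' AND city='$city'
                  AND state='$state' AND detail='$detail'";
        $data = mysqli_query($dbc, $query);
        if ($data === false) {
            echo mysqli_error($dbc); // shows the actual SQL error instead of the boolean warning
        }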

  • Setting Proxy Server for IE 10 on Windows 8 using pac file and Group Policy

    - by Greg Bray
    We currently use Group Policy to configure a proxy server PAC file for Windows XP and Windows 7 computers on our network. We are now starting to get requests for Windows 8, but have noticed that our current GPO does not set the proxy server on Windows 8 clients or Server 2012. Is it possible to do this using a 2008 R2 domain controller, or would we need to upgrade our domain to a 2012 server? I found a reference to creating new GPO settings for "Internet Explorer 10 and 11" and vague references to using RSAT on Windows 8 to set IE10 settings via preferences, but nothing that talks about using Group Policy to manage proxy settings.
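    IE10 dropped support for the old Internet Explorer Maintenance (IEM) policy settings, which is why a GPO that works for XP and 7 stops applying on Windows 8. One common workaround is a Group Policy Preferences registry item targeting the auto-config value directly; a sketch of the equivalent registry change (the URL is a placeholder):

        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v AutoConfigURL /t REG_SZ /d "http://config.example.com/proxy.pac"

    GPP registry items can be authored from a 2008 R2 domain controller, so a domain upgrade to 2012 shouldn't be required for this route.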

  • Logging a user in on Windows 2008 Server using LogonUser fails with LogonType LOGON32_LOGON_SERVICE

    - by Ofiris
    I am using the LogonUser function to log an account on to a Windows 2008 R2 server on a domain with clustering. When using LOGON32_LOGON_INTERACTIVE as the LogonType, I log in successfully. When using LOGON32_LOGON_SERVICE as the LogonType, the login fails and Event Viewer (EventID 4625) says:

        An account failed to log on.
        Logon Type: 5
        Account For Which Logon Failed:
            Security ID: NULL SID
            Account Name: thename
            Account Domain: thedomain
        Logon ID: 0x1009371c
        Failure Information:
            Failure Reason: The user has not been granted the requested logon type at this machine.
            Status: 0xc000015b
            Sub Status: 0x0

    I wasn't sure if this is for Super User or Stack Overflow (I'm calling LogonUser from C# code), but I guess it's a Windows Server issue. Edit: Found that 0xc000015b means "The user has not been granted the requested logon type (aka logon right) at this machine". Edit: This should probably be a Server Fault question...
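    Logon type 5 requires the "Log on as a service" right (SeServiceLogonRight), which no account holds by default. It can be granted under Local Security Policy (Local Policies > User Rights Assignment) or via GPO; a command-line sketch using ntrights from the Windows Server resource kit (account name as redacted above):

        ntrights.exe +r SeServiceLogonRight -u thedomain\thename

    After granting the right (and a gpupdate or reboot), LogonUser with LOGON32_LOGON_SERVICE should succeed for that account.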

  • Allowing creation/modification of virtual aliases using web.config

    - by user25018
    Hi, I've been given a problem to fix, and I initially thought of .htaccess files, except for one thing: I quickly realized it's an IIS server. Is it possible to give a webmaster the ability to modify virtual directories using web.config files, the same way you can with .htaccess files? If so, any ideas on where I can find details on how this is done that I can pass on to the end client? We want to do this without having to give the webmaster access to the IIS console. An example of the desired change: given

        http://FQDN/Careers/Careers.aspx?locale=en-ca&uid=Careers

    we want

        http://FQDN/careers

    to point to the above, but be modifiable/addable/removable by the end user using web.config.
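    Assuming the IIS7+ URL Rewrite module is installed and the configuration sections are delegated to the site level, rewrite rules can live in a web.config that the webmaster edits directly; a sketch for the Careers example:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Careers alias">
                  <match url="^careers$" />
                  <action type="Rewrite" url="Careers/Careers.aspx?locale=en-ca&amp;uid=Careers" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>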

  • Uninstall Apache using make

    - by Jeff
    I installed httpd using make, but I can't find any remove/uninstall method in the README or INSTALL files. I also searched the http://apache.org docs, and there is no mention of uninstalling using make there either. Actually, I didn't pass the PREFIX value, and it got installed into /usr/local/apache/. Is that fine? Where would it be installed by sudo apt-get install apache2? Thanks, Jeff p.s. I am using Ubuntu 9.10
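    For reference, the httpd source tree ships no "make uninstall" target, so removing a source build generally means deleting its prefix; a sketch, assuming nothing else lives under that directory:

        sudo rm -rf /usr/local/apache    # the source build is self-contained under its prefix

    By contrast, sudo apt-get install apache2 on Ubuntu puts the binaries under /usr/sbin and the configuration under /etc/apache2, and is removed with apt-get remove rather than by hand.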

  • Using .htaccess to serve files from Amazon S3/CloudFront

    - by Adrian A.
    My ideal setup would be to take a current client's site and upload a .htaccess with a regex inside that would match the URI, and if it finds a certain file extension, use the same path but with an altered domain. For example, normal paths:

        http://www.domain.com/something/images/someimage.jpeg
        http://www.domain.com/assets/js/jquery.js

    The .htaccess translation would turn the above into:

        http://mycdn.other.com/something/images/someimage.jpeg
        http://mycdn.other.com/assets/js/jquery.js

    I googled this for hours in a row with no luck. Again, this is for actually making use of Amazon's CloudFront. S3 is already mounted to the website for backups and storing files using s3fs, but this doesn't solve the issue, since that uses S3 directly, not CloudFront.
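    A sketch of the extension-based redirect in .htaccess (the CDN host is the one named above; the extension list is an assumption):

        RewriteEngine On
        # Send static assets to the CDN host, keeping the original path.
        RewriteRule \.(jpe?g|png|gif|css|js)$ http://mycdn.other.com%{REQUEST_URI} [R=301,L]

    Worth noting as a design choice: a redirect costs the client an extra round trip per asset, so rewriting the URLs in the generated HTML to point at the CDN directly is usually preferred when the templates allow it.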

  • Diagnosing a multicast issue using Wireshark

    - by Abruzzo Forte e Gentile
    I have a network that is set up for multicast traffic. My setup is the following:

        - Machine A: a server generating multicast traffic
        - Machine A: a few clients subscribing to that multicast traffic
        - Machine B: a few clients subscribing to that multicast traffic

    The address I am using:

        IP: 239.193.0.21
        PORT: 20401

    The clients on machine A, even though they join the group (I can see the IGMP messages in Wireshark), don't receive any data, while (and this is the funny part) machines B, C and D receive everything. I sorted the issue out by completely disabling the Linux firewall. Before doing that, I enabled multicast on the firewall (which was set to 'reject all'):

        iptables -A INPUT -m addrtype --src-type MULTICAST -j ACCEPT

    My question is the following: what can I check in Wireshark that would help me spot such firewall issues in the future? For TCP/IP I can tell by using ping and looking at the rejected ICMP packets. What can I check/monitor for multicast? I am using Linux/Red Hat Enterprise 6.2
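    For this kind of problem it helps to watch the group traffic and the membership reports side by side, and to compare what the wire sees with what the application sees; a sketch using the group and port above:

        # on the affected host: is the group traffic reaching the interface at all?
        tcpdump -i eth0 host 239.193.0.21 and udp port 20401

        # equivalent Wireshark display filter: data plus IGMP membership messages
        udp.port == 20401 || igmp

    If tcpdump (which taps the interface before netfilter's INPUT chain) shows the packets but the application never receives them, the drop is local, which is exactly the signature of a host firewall eating multicast.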
