Search Results

Search found 19788 results on 792 pages for 'remote host'.

Page 308/792 | < Previous Page | 304 305 306 307 308 309 310 311 312 313 314 315  | Next Page >

  • Problems using wxWidgets (wxMSW) within multiple DLL instances

    Preface

    I'm developing VST-plugins, which are DLL-based software modules loaded by VST-supporting host applications. To open a VST-plugin, the host application loads the VST DLL and calls an appropriate function of the plugin while providing a native window handle, which the plugin can use to draw its GUI. I managed to port my original VSTGUI code to the wxWidgets framework, and now all my plugins run under wxMSW and wxMac, but under wxMSW I still have problems finding a correct way to open and close the plugins, and I am not sure if this is a wxMSW-only issue.

    Problem

    If I use any VST-host application, I can open and close multiple instances of one of my VST-plugins without any problems. As soon as I open another of my VST-plugins besides the first one and then close all instances of the first plugin, the application crashes after a short amount of time inside wxEvtHandler::ProcessEvent, telling me that the wxTheApp object is no longer valid during execution of wxTheApp->FilterEvent (see below). So it seems that the wxTheApp object was deleted after closing all instances of the first plugin and is no longer available for the second plugin.

        bool wxEvtHandler::ProcessEvent(wxEvent& event)
        {
            // allow the application to hook into event processing
            if ( wxTheApp )
            {
                int rc = wxTheApp->FilterEvent(event);
                if ( rc != -1 )
                {
                    wxASSERT_MSG( rc == 1 || rc == 0,
                                  _T("unexpected wxApp::FilterEvent return value") );
                    return rc != 0;
                }
                //else: proceed normally
            }
            ....
        }

    Preconditions

    1.) All my VST-plugins are dynamically linked against the C runtime and the wxWidgets libraries. According to the wxWidgets forum, this seemed to be the best way to run multiple instances of the software side by side.

    2.) The DllMain of each VST-plugin is defined as follows:

        // WXW
        #include "wx/app.h"
        #include "wx/defs.h"
        #include "wx/gdicmn.h"
        #include "wx/image.h"

        #ifdef __WXMSW__
        #include <windows.h>
        #include "wx/msw/winundef.h"

        BOOL APIENTRY DllMain ( HANDLE hModule,
                                DWORD  ul_reason_for_call,
                                LPVOID lpReserved )
        {
            switch (ul_reason_for_call)
            {
                case DLL_PROCESS_ATTACH:
                {
                    wxInitialize();
                    ::wxInitAllImageHandlers();
                    break;
                }
                case DLL_THREAD_ATTACH:  break;
                case DLL_THREAD_DETACH:  break;
                case DLL_PROCESS_DETACH:
                    wxUninitialize();
                    break;
            }
            return TRUE;
        }
        #endif // __WXMSW__

        class Application : public wxApp {};
        IMPLEMENT_APP_NO_MAIN(Application)

    Question

    How can I prevent this behavior, or rather, how can I properly handle the wxTheApp object when I have multiple instances of different VST-plugins (DLL modules) that are dynamically linked against the C runtime and the wxWidgets libraries?

    Best regards, Steffen

    Read the article

  • django sending emails

    - by dotty
    Hey, I can't seem to send emails using send_mail(), and I'm not sure why. Here are my details.

    settings.py:

        EMAIL_HOST = 'localhost',
        EMAIL_PORT = 25

    My view:

        from django.core.mail import send_mail
        send_mail('Subject here', 'Here is the message.', '[email protected]',
                  ['[email protected]'], fail_silently=False)

    This fails with the error: getaddrinfo() argument 1 must be string or None. Anyone have any ideas? I'm developing on OS X Leopard. Here's the last traceback frame:

        /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/smtplib.py in connect
            for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM): ...

        Local vars:
        Variable   Value
        host       ('localhost',)
        msg        'getaddrinfo returns an empty list'
        port       25
        self       <smtplib.SMTP instance at 0x153b1e8>
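
    The "Local vars" table already points at the cause: host arrives as the tuple ('localhost',) rather than a string, which is exactly what the trailing comma after EMAIL_HOST produces. A minimal corrected settings sketch (assuming a local SMTP server listening on port 25):

    ```python
    # settings.py -- note: no trailing comma, otherwise EMAIL_HOST becomes a tuple
    EMAIL_HOST = 'localhost'
    EMAIL_PORT = 25
    ```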

    Read the article

  • Hosting WCF services

    - by Jayesh
    Hi, I have been working on a Silverlight app that consumes a WCF service. For simplicity I created the WCF service in the Visual Studio project itself (that is, I didn't host it in IIS, but let the built-in web development server in VS do it for me). It works well. Now I want to deploy it on IIS 7.0. Can you tell me whether I would need to host the service independently and then deploy the rest, or whether, if I just publish the website, the service would be hosted too and the Silverlight client would be able to communicate with it? Please help! Thanks
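
    Publishing the web project to IIS normally hosts the .svc endpoint alongside the site, but the Silverlight client keeps whatever address it was generated with, so the piece that usually needs attention is the endpoint URL in ServiceReferences.ClientConfig. A hedged sketch; the server, path and contract names below are placeholders, not taken from the question:

    ```xml
    <!-- ServiceReferences.ClientConfig inside the Silverlight .xap -->
    <configuration>
      <system.serviceModel>
        <client>
          <!-- Point at the IIS URL instead of the VS dev-server port -->
          <endpoint address="http://myserver/MyApp/MyService.svc"
                    binding="basicHttpBinding"
                    bindingConfiguration="BasicHttpBinding_MyService"
                    contract="MyServiceReference.MyService"
                    name="BasicHttpBinding_MyService" />
        </client>
      </system.serviceModel>
    </configuration>
    ```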

    Read the article

  • Which can handle a huge surge of queries: SQL Server 2008 Fulltext or Lucene

    - by Luke101
    I am creating a widget that will be installed on several websites and blogs. The widget will analyse the remote webpage's title and content, then return relevant articles/links from my website. We expect very heavy traffic, roughly 500K queries a day and growing from there. I need the queries to be returned very quickly, so I need the candidate to be high performance, similar to Google AdSense. The remote title can be from 5 to 50 words, and the description we will use is no more than 3000 words. Which of these two do you think can handle the load?

    Read the article

  • Action Mailer Confirmation Emails - Ruby on Rails...

    - by bgadoci
    I have successfully installed the Clearance gem from thoughtbot. Clearance sends a confirmation email upon a new sign-up and suggests adding:

        config.action_mailer.default_url_options = { :host => 'localhost:3000' }

    to your /environments/test.rb and development.rb. I have tried this, and also:

        config.action_mailer.default_url_options = { :host => '127.0.0.1', :port => 3000 }

    But I can't seem to get Rails to send the email. As I am new to both Ruby and Rails, I am wondering if there is some step/config that thoughtbot is assuming I have already done to send emails. What am I doing wrong/missing?

    UPDATE: Just added notifier.rb:

        class Notifier < ActionMailer::Base
            recipients recipient.email_address_with_name
            bcc ["[email protected]"]
            from "[email protected]"
            subject "New account information"
            body :account => recipient
          end
        end
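
    The notifier snippet appears to have lost its method-definition line in the excerpt, but the more common missing step with Clearance is that default_url_options only builds URLs; Rails still needs a delivery method configured before anything leaves the machine. A hedged sketch of the usual environment configuration (server, credentials and addresses are placeholders):

    ```ruby
    # config/environments/development.rb -- sketch only; adjust to your mail server
    config.action_mailer.raise_delivery_errors = true          # surface failures
    config.action_mailer.delivery_method = :smtp
    config.action_mailer.smtp_settings = {
      :address        => "smtp.example.com",
      :port           => 587,
      :user_name      => "me@example.com",
      :password       => "secret",
      :authentication => :plain
    }
    config.action_mailer.default_url_options = { :host => 'localhost:3000' }
    ```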

    Read the article

  • VC7.1 C1204 internal compiler error

    - by Nathan Ernst
    I'm working on modifying Firaxis' Civilization 4 core game DLL. The host application is built using VC7, hence the constraint (source is not provided for the host EXE). I've been rewriting a large chunk of the code (focusing on low-hanging performance issues and memory leaks). I recently ran into an internal compiler error when trying to modify the code to use an array class instead of dynamically allocated 2-D arrays; I was going to use matrices from the Boost library (Civ4 already uses Boost, so why not?). Basically, the issue comes down to this: if I include "boost/numeric/ublas/matrix.hpp", I run into internal compiler error C1204. MSDN has this to say: MSDN C1204. The KB has this to say: KB 883655. So, I'm curious: is it possible to solve this error without applying a KB/SP and without dramatically reducing the complexity of the code? Additionally, as VC7 is no longer supported, does anyone have a valid (supported) link for a VC7 service pack?

    Read the article

  • How to use AIR 2.0 NativeProcess API with Java?

    - by dede
    How do you use this great new API in connection with Java? Do you use just the pure NativeProcess API, like nativeProcess.standardInput.write() and nativeProcess.standardOutput.read(), with which you can neither debug the Java side nor invoke remote Java methods? Or are you using some library that leverages remote method invocation, such as the Flerry lib, but with which you also cannot debug the Java side? Or maybe you are using Merapi, with which you can debug but cannot remotely invoke Java methods? I'm asking because this is maybe the most important question regarding this API and its ease of use.

    Read the article

  • jquery-ui from a bookmarklet

    - by Mala
    Hi, I'm trying to create a bookmarklet which will use jquery-ui. What steps must I take to do this? My bookmarklet loads a remote script and appends it to the page. This script includes jquery and jquery-ui, but getting the stylesheets and images to load is turning into a nightmare. Do I have to edit all the CSS to point images to their locations on the remote server? Effectively, what I'm trying to do is have a bookmarklet which will pop up some information in a nice, draggable jquery-ui dialog. Is there an easier way? Thanks, Mala
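
    One approach worth noting (an assumption on my part, not something stated in the question): if the theme CSS is attached with a <link> element whose href points at the remote server, the relative url(images/...) paths inside that stylesheet resolve against the stylesheet's own URL, so nothing in the CSS needs editing. A sketch of the bookmarklet loader with placeholder URLs:

    ```javascript
    // Bookmarklet body: pull jquery-ui's theme CSS and the remote script from
    // your own server; CSS image paths then resolve against that server.
    (function () {
      var head = document.getElementsByTagName("head")[0];

      var css = document.createElement("link");
      css.rel = "stylesheet";
      css.href = "https://example.com/bookmarklet/jquery-ui.min.css"; // placeholder
      head.appendChild(css);

      var js = document.createElement("script");
      js.src = "https://example.com/bookmarklet/loader.js";           // placeholder
      head.appendChild(js);
    })();
    ```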

    Read the article

  • code to send email

    - by Alexander
    What am I doing wrong here?

        private void SendMail(string from, string body)
        {
            string mailServerName = "plus.pop.mail.yahoo.com";
            MailMessage message = new MailMessage(from, "[email protected]", "feedback", body);
            SmtpClient mailClient = new SmtpClient();
            mailClient.Host = mailServerName;
            mailClient.Send(message);
            message.Dispose();
        }

    I got the following error:

        A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 209.191.108.191:25
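
    Two things stand out (my reading, not stated in the question): the configured host is Yahoo's POP3 server rather than an SMTP server, and unauthenticated port 25 is widely blocked. A hedged sketch of the usual shape of the fix; the SMTP host name, port and credentials are placeholders to check against your provider's documentation:

    ```csharp
    using System.Net;
    using System.Net.Mail;

    class MailSender
    {
        private static void SendMail(string from, string body)
        {
            var message = new MailMessage(from, "feedback@example.com", "feedback", body);
            // Use the provider's SMTP host (not the POP3 host) and authenticate.
            var client = new SmtpClient("smtp.mail.yahoo.com", 587);
            client.EnableSsl = true;   // STARTTLS on the submission port
            client.Credentials = new NetworkCredential("user@example.com", "password");
            client.Send(message);
            message.Dispose();
        }
    }
    ```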

    Read the article

  • What is good server performance monitoring software for Windows?

    - by Luke
    I'm looking for some software to monitor a single server for performance alerts, preferably free and with a reasonable default configuration. Edit: To clarify, I would like to run this software on a Windows machine and monitor a remote Windows server for CPU/memory/etc. usage alerts (not a single application). Edit: I suppose it's not necessary that this software be run remotely; I would also settle for something that ran on the server and emailed me if there was an alert. It seems like Windows performance logs and alerts might be used for this purpose somehow, but it was not immediately obvious to me. Edit: Found a neat tool on the Coding Horror blog, not as useful for remote monitoring but very useful for things you would worry about as a server admin: http://www.winsupersite.com/showcase/winvista_ff_rmon.asp

    Read the article

  • Why is Tokyo Tyrant so slow

    - by Tantra
    I have the following situation: a Tyrant server launched on a FreeBSD host, like this:

        ttserver -uas -log /data/tyrant/1.log -sid 1 -thnum 8 -tout 5 /data/tyrant/data/1.tct

    And I try to talk to this server from Python on Windows with pyrant 0.3.5, like this:

        import pyrant
        import time

        t = pyrant.Tyrant(host="192.168.0.220", port=1978)
        tbegin = time.time()
        for i in xrange(4000000):
            if i and ((i % 10000) == 0):
                print time.time() - tbegin
                tbegin = time.time()
            t[i] = {"text": "ruslan text", "value": i}

    and performance is, I think, very slow: about 5-6 seconds per 10,000 records. But if I run this code on the same machine as the server (ttserver), performance is good: about 0.5 sec per 10,000 records. What must I do to work around this problem?
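
    A likely explanation (my inference, not from the question): each t[i] = ... assignment is a separate request/response over the network, so throughput is bounded by round-trip latency rather than by ttserver itself; at roughly 0.5 ms per round trip, 10,000 single puts take about 5 seconds, which matches the numbers above. A quick way to sanity-check the latency from the Windows client, using TCP connect time as a rough proxy:

    ```python
    import socket
    import time

    def avg_connect_rtt(host="192.168.0.220", port=1978, samples=20):
        """Average TCP connect time to the Tyrant port, as a rough latency proxy."""
        total = 0.0
        for _ in range(samples):
            start = time.time()
            s = socket.create_connection((host, port), timeout=5)
            s.close()
            total += time.time() - start
        return total / samples

    print("average connect round trip: %.4f s" % avg_connect_rtt())
    ```

    If that number is in the millisecond range, batching many records per request (or running the writer closer to the server) is the direction to look in, rather than tuning ttserver.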

    Read the article

  • How can I fast-forward a single git commit, programmatically?

    - by Norman Ramsey
    I periodically get messages from git that look like this:

        Your branch is behind the tracked remote branch 'local-master/master'
        by 3 commits, and can be fast-forwarded.

    I would like to be able to write commands in a shell script that can do the following:

    1. How can I tell if my current branch can be fast-forwarded from the remote branch it is tracking?
    2. How can I tell how many commits "behind" my branch is?
    3. How can I fast-forward by just one commit, so that, for example, my local branch would go from "behind by 3 commits" to "behind by 2 commits"?

    (For those who are interested, I am trying to put together a quality git/darcs mirror.)
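
    A sketch of one way to script all three checks (my own suggestion, assuming the upstream is origin/master and the remote history is linear; adjust the ref name to the branch you actually track):

    ```sh
    #!/bin/sh
    upstream=origin/master

    # 1) Fast-forward is possible iff HEAD is an ancestor of the upstream,
    #    i.e. the merge base of the two is HEAD itself.
    if [ "$(git merge-base HEAD "$upstream")" = "$(git rev-parse HEAD)" ]; then
        echo "fast-forward possible"
    fi

    # 2) Number of commits we are behind.
    behind=$(git rev-list HEAD.."$upstream" | wc -l)
    echo "behind by $behind commits"

    # 3) Fast-forward by exactly one commit: merge the oldest commit in the range.
    next=$(git rev-list --reverse HEAD.."$upstream" | head -n 1)
    [ -n "$next" ] && git merge --ff-only "$next"
    ```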

    Read the article

  • How do you diagnose a 500 error on Heroku when there is no error message in the logs?

    - by lala
    I have a Rails app on Heroku that is serving 500 errors at random intervals. Web pages will display "Internal server error" in plain text, instead of the usual "We're sorry. Something went wrong." page. When I refresh the page, it works fine. The logs don't show me an error message, just:

        » 14:20:34.107 2013-10-11 12:20:33.763690+00:00 heroku router - - at=info method=HEAD path=/ host=www.mydomain.com fwd="184.73.237.85/ec2-184-73-237-85.compute-1.amazonaws.com" dyno=web.1 connect=1ms service=63ms status=200 bytes=0
        » 14:21:03.957 2013-10-11 12:21:03.561867+00:00 heroku router - - at=info method=GET path=/ host=www.mydomain.com fwd="50.112.95.211/ec2-50-112-95-211.us-west-2.compute.amazonaws.com" dyno=web.1 connect=0ms service=1ms status=500 bytes=21

    Support has told me to look at request queuing in New Relic, but New Relic only shows a big red mark saying the server is down (even though the site works fine when refreshed). With no error messages, I'm at a loss for how to diagnose this issue.
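
    Worth noting (my reading of the log lines, not something Heroku support said): both entries are router lines, and the failing one shows the dyno itself answering with status=500 after only 1 ms, so the exception is being raised and swallowed inside the app rather than at the routing layer. On the 2013-era stack the usual next step was to make Rails write its logs to STDOUT (for Rails 4 typically via the rails_12factor gem in the production group) and then watch the application-sourced lines rather than the router ones:

    ```sh
    # Router lines rarely carry the exception; filter to the app's own output.
    heroku logs --tail --source app
    ```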

    Read the article

  • Returning a DOM element with Webdriver in Javascript

    - by ehmicky
    How do I return a DOM Element with Webdriver in Javascript? I am using the wd.js Javascript bindings:

        require("wd")
          .remote("promiseChain")
          .init()
          .get("http://www.google.com")
          .elementById("mngb")
          .then(function(element) {
            console.log(element);
          });

    I am getting this weird object that is not a standard DOM Element (for example, I cannot get the HTML code out of it):

        { value: '0',
          browser:
           { domain: null,
             _events: {},
             _maxListeners: 10,
             configUrl:
              { protocol: 'http:', slashes: true, auth: null, host: '127.0.0.1:4444',
                port: '4444', hostname: '127.0.0.1', hash: null, search: '', query: {},
                pathname: '/wd/hub', path: '/wd/hub', href: 'http://127.0.0.1:4444/wd/hub' },
             sauceRestRoot: 'https://saucelabs.com/rest/v1',
             noAuthConfigUrl:
              { protocol: 'http:', slashes: true, host: '127.0.0.1:4444', port: '4444',
                hostname: '127.0.0.1', hash: null, search: null, query: null,
                pathname: '/wd/hub', path: '/wd/hub', href: 'http://127.0.0.1:4444/wd/hub' },
             defaultCapabilities:
              { browserName: 'firefox', version: '', javascriptEnabled: true, platform: 'ANY' },
             _httpConfig:
              { timeout: undefined, retries: 3, retryDelay: 15, baseUrl: undefined },
             sampleElement: { value: 1, browser: [Circular] },
             sessionID: '238c9837-3d82-4d90-9594-cefb4ba8e6b9' } }
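
    That object is a remote element handle rather than a live DOM node, so the markup has to be requested from the browser session instead of read off the wrapper. A sketch of one way to do that (method names follow the wd promise-chain API as I understand it; verify against the wd version in use):

    ```javascript
    require("wd")
      .remote("promiseChain")
      .init()
      .get("http://www.google.com")
      .elementById("mngb")
      .getAttribute("outerHTML")   // ask the browser for the element's markup
      .then(function (html) {
        console.log(html);
      });
    ```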

    Read the article

  • Multi-level clones with Git?

    - by Chad Johnson
    So, I'm thinking of having the following centralized setup with Git (each of these is a clone):

        stable
        development
        developer1
        developer2
        developer3

    So, I created my stable repository:

        git --bare init

    made the 'development' clone:

        git clone ssh://host.name//path/to/stable/project.git development

    and made a 'developer' clone:

        git clone ssh://host.name//path/to/development/project.git developer

    Now I make a change, commit, and then push from my developer account:

        git commit --all
        git push

    and the change goes to the development clone. But now, when I ssh to the server, go to the development clone directory, and run "git fetch" or "git pull", I don't see the changes. So what do I do? Am I totally misunderstanding things and doing things wrong? How can I see the changes in the 'development' clone that I pushed from my 'developer' clone? This worked fine in Mercurial.
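
    One common explanation (my reading, not confirmed by the question): the 'development' repository is a non-bare clone, and pushing into a non-bare repository updates its branch ref but not its checked-out work tree, so fetch/pull inside it show nothing new even though the commits are there. Two hedged ways to arrange it, with placeholder paths:

    ```sh
    # Option 1: keep intermediate repositories bare, so there is no work tree
    # to fall out of sync.
    git clone --bare ssh://host.name//path/to/stable/project.git \
        /path/to/development/project.git

    # Option 2: keep the existing non-bare development clone, but after each push
    # bring its work tree up to the branch that was just pushed (run on the server,
    # inside the development clone; this discards local edits there).
    git reset --hard master
    ```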

    Read the article

  • What does "warning: unable to unlink website: Operation not permitted" mean when checking out a Git

    - by James A. Rosen
    I'm trying to create a local branch that tracks a remote branch. Here's what I get:

        > git checkout master
        > git push origin origin:refs/heads/myBranch
        Total 0 (delta 0), reused 0 (delta 0)
        To [email protected]:myrepo/myproject.git
         * [new branch]      origin/HEAD -> myBranch
        > git fetch origin
        > git checkout --track -b myBranch origin/myBranch
        warning: unable to unlink website: Operation not permitted
        Branch myBranch set up to track remote branch myBranch from origin.
        Switched to a new branch 'myBranch'

    What does "warning: unable to unlink website: Operation not permitted" mean? Did everything work fine?
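
    A reading of the warning (my interpretation, not an authoritative answer): while switching branches git needed to remove or replace a path named "website" that differs between the two branches, and the operating system refused, so the branch setup itself succeeded but that one path in the work tree may be stale. Some quick checks, with the path taken from the warning:

    ```sh
    ls -ld website                    # who owns it, and is it writable / in use?
    git status                        # does git now see "website" as modified or missing?
    git checkout myBranch -- website  # re-check out just that path once it is writable
    ```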

    Read the article

  • msmq binding wcf

    - by pdiddy
    I have some messages in my queue. Now I notice that after 3 tries the service host faults. Is this normal behavior? Where do the 3 tries come from? I thought it came from receiveRetryCount, but I set that one to 1. I have 20 messages in my queue waiting to be processed. The WCF operation that is responsible for processing the message supports transactions, so if it can't process the message it will throw, so that the message stays in the queue. I didn't think that it would fault the ServiceHost after a number of retries; is this documented somewhere? I'm running the MSMQ service on my Windows XP machine. I'm mostly interested in documentation indicating that the service host will fault after a number of retries. Is this true?
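
    For context (from my understanding of netMsmqBinding's poison-message handling, so verify against the WCF documentation for your framework version): once the retry settings are exhausted, the default receiveErrorHandling value of "Fault" does fault the listener and the ServiceHost, which matches what you are seeing; on MSMQ 3.0 (Windows XP) only "Fault" and "Drop" are available, while "Move" and "Reject" need MSMQ 4.0. A sketch of the relevant binding attributes, with illustrative values:

    ```xml
    <bindings>
      <netMsmqBinding>
        <binding name="msmq"
                 receiveRetryCount="1"
                 retryCycleDelay="00:10:00"
                 maxRetryCycles="0"
                 receiveErrorHandling="Drop" />
      </netMsmqBinding>
    </bindings>
    ```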

    Read the article

  • CruiseControl / NANT <copy> Task

    - by Striker
    We have a website with all the media (CSS/images) stored in a media folder. The media folder and its 95 subdirectories contain about 400 files in total. We have a CruiseControl project that monitors just the media directory for changes and, when triggered, copies those files to our integration server. Unfortunately, our integration server is at a remote location, so even when copying 2-3 files the NAnt task takes 4+ minutes. I believe the combination of the sheer number of directories/files and our network latency is causing the NAnt task to run slowly; it appears to be comparing the modified dates of both the local and remote copy of every file. I really want to speed this up. My initial thought was: instead of trying to copy the whole media folder, can I get the list of file modifications from CruiseControl and copy just those files, saving the NAnt task the work of having to compare them all for changes? Is there a way to do what I am asking, or is there a better way to accomplish the same performance gains?
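
    One alternative to per-file date comparison in the copy task (a suggestion of mine, not from the question) is to hand the mirroring to robocopy through an exec task; it keeps a single connection open and only transfers files whose size or timestamp changed. A hedged NAnt sketch with placeholder paths:

    ```xml
    <target name="deploy-media">
      <exec program="robocopy" failonerror="false" resultproperty="robocopy.rc">
        <arg value="media" />
        <arg value="\\integration-server\site\media" />
        <arg value="/MIR" />   <!-- mirror: copy changes, delete removed files -->
        <arg value="/R:2" />
        <arg value="/W:5" />
      </exec>
      <!-- robocopy exit codes below 8 indicate success -->
      <fail if="${int::parse(robocopy.rc) >= 8}" message="robocopy reported errors" />
    </target>
    ```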

    Read the article

  • Amazon EC2 Load Balancer: Defending against DoS attack?

    - by netvope
    We usually blacklist IP addresses with iptables. But in Amazon EC2, if a connection goes through the Elastic Load Balancer, the remote address is replaced by the load balancer's address, rendering iptables useless. In the case of HTTP, apparently the only way to find out the real remote address is to look at the HTTP header HTTP_X_FORWARDED_FOR. To me, blocking IPs at the web application level is not an effective approach. What is the best practice to defend against a DoS attack in this scenario? In this article, someone suggested that we can replace the Elastic Load Balancer with HAProxy. However, there are certain disadvantages in doing this, and I'm trying to see if there are any better alternatives.
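
    A middle ground between the application layer and iptables (my suggestion, not from the question) is to let the web server in front of the app recover the client address from X-Forwarded-For and apply its own rate limiting or denials there. A sketch for nginx; the trusted CIDR, backend address and limits are placeholders:

    ```nginx
    # Trust the load balancer and take the client address from X-Forwarded-For.
    set_real_ip_from  10.0.0.0/8;
    real_ip_header    X-Forwarded-For;

    # Per-client request rate limit keyed on that recovered address.
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        listen 80;
        location / {
            limit_req zone=perip burst=20 nodelay;
            # deny 203.0.113.7;   # example: drop a single abusive client
            proxy_pass http://127.0.0.1:8080;
        }
    }
    ```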

    Read the article

  • Deploying a Rails App to Multiple Servers using Capistrano - Best Practices

    - by Louise
    I have a Rails application that I need to deploy to 3 servers: machine1.com, machine2.com and machine3.com. I want to be able to deploy it to all machines at once and to each machine individually. Can someone help me out with a skeleton Capistrano config file / recipe? Should it all be in deploy.rb, or should I break it out into machine1.rb, etc.? I thought I was on the right track getting Capistrano to take command line arguments, but it choked when I tried to set the roles within the namespaces. I'd pass in 'hosts=1,2,3' as an argument and set the :app/:web/:db roles to "machine#{host}.com" after splitting on the comma and going into an each do |host| block... Anyway, other than creating 4 different deploy.rb files and renaming them before running cap deploy each time, I'm stumped. I'd like to be able to do the following:

        cap deploy:machine1:latest_version_from_svn
        cap deploy:all_machines:latest_version_from_svn

    I just don't know if it should all be in deploy.rb, split up with namespaces, or if it should be broken into multiple deploy*.rb files.
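
    One way to get both behaviours from a single deploy.rb in Capistrano 2.x (a sketch of my own; application name, repository and paths are placeholders) is to declare all three machines in the roles and use the HOSTS environment variable to narrow a run to one box:

    ```ruby
    # config/deploy.rb
    set :application, "myapp"
    set :repository,  "svn+ssh://svn.example.com/myapp/trunk"
    set :deploy_to,   "/var/www/#{application}"

    ALL_MACHINES = %w[machine1.com machine2.com machine3.com]

    role :app, *ALL_MACHINES
    role :web, *ALL_MACHINES
    role :db,  "machine1.com", :primary => true

    # Deploy everywhere:       cap deploy
    # Deploy to one machine:   cap HOSTS=machine2.com deploy
    ```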

    Read the article

  • Example code of libssh2 being used for port forwarding

    - by flxkid
    I'm looking for an example of how to use libssh2 to set up SSH port forwarding. I've looked at the API, but there is very little in the way of documentation in the area of port forwarding. For instance, using PuTTY plink, there is the remote port to listen on, but also the local port that traffic should be sent to. Is it the developer's responsibility to do this part? Can an example of this be provided? Also, for the opposite direction, where a remote port is brought to a local port, do I use libssh2_channel_direct_tcpip_ex? What about an example of that? I really need to do this exact thing on a project right now. How hard would it be to develop a couple of samples? I'm willing to put up a bounty if need be to get a couple of working examples.
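
    For what it's worth, and hedged accordingly: yes, shuttling the bytes between the TCP socket and the SSH channel is the caller's job in libssh2. libssh2_channel_direct_tcpip_ex() covers the plink -L style (a local listener whose traffic is carried to a host:port as seen from the SSH server), while the -R style (a remote listener brought back to a local port) uses libssh2_channel_forward_listen_ex() plus libssh2_channel_forward_accept() with the same relay loop. Below is a condensed sketch of the -L direction, modeled on the structure of libssh2's own direct_tcpip example; host names, ports and credentials are placeholders and error handling is trimmed:

    ```c
    #include <libssh2.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/select.h>
    #include <sys/time.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>
    #include <string.h>

    int main(void)
    {
        const char *ssh_host = "203.0.113.10";   /* placeholder SSH server          */
        const char *dest_host = "127.0.0.1";     /* target as seen from that server */
        int dest_port = 3306, local_port = 13306;

        libssh2_init(0);

        /* 1. TCP connection to the SSH server, then handshake and auth. */
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sin; memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET; sin.sin_port = htons(22);
        sin.sin_addr.s_addr = inet_addr(ssh_host);
        connect(sock, (struct sockaddr *)&sin, sizeof(sin));

        LIBSSH2_SESSION *session = libssh2_session_init();
        libssh2_session_handshake(session, sock);
        libssh2_userauth_password(session, "user", "password");

        /* 2. Local listener that clients connect to. */
        int listensock = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in lsin; memset(&lsin, 0, sizeof(lsin));
        lsin.sin_family = AF_INET; lsin.sin_port = htons(local_port);
        lsin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        bind(listensock, (struct sockaddr *)&lsin, sizeof(lsin));
        listen(listensock, 1);
        int fwdsock = accept(listensock, NULL, NULL);

        /* 3. direct-tcpip channel to dest_host:dest_port via the server. */
        LIBSSH2_CHANNEL *channel = libssh2_channel_direct_tcpip_ex(
            session, dest_host, dest_port, "127.0.0.1", local_port);
        libssh2_session_set_blocking(session, 0);

        /* 4. Relay loop: local socket <-> SSH channel. */
        char buf[16384];
        for (;;) {
            fd_set fds; struct timeval tv = {0, 100000};
            FD_ZERO(&fds); FD_SET(fwdsock, &fds);
            select(fwdsock + 1, &fds, NULL, NULL, &tv);

            if (FD_ISSET(fwdsock, &fds)) {
                ssize_t n = recv(fwdsock, buf, sizeof(buf), 0);
                if (n <= 0) break;                       /* client closed      */
                ssize_t off = 0;
                while (off < n) {                        /* local -> remote    */
                    ssize_t w = libssh2_channel_write(channel, buf + off, n - off);
                    if (w == LIBSSH2_ERROR_EAGAIN) continue;
                    if (w < 0) goto done;
                    off += w;
                }
            }
            for (;;) {                                   /* remote -> local    */
                ssize_t n = libssh2_channel_read(channel, buf, sizeof(buf));
                if (n == LIBSSH2_ERROR_EAGAIN) break;
                if (n < 0) goto done;
                if (n == 0) { if (libssh2_channel_eof(channel)) goto done; break; }
                send(fwdsock, buf, n, 0);
            }
        }
    done:
        libssh2_channel_free(channel);
        libssh2_session_disconnect(session, "done");
        libssh2_session_free(session);
        close(fwdsock); close(listensock); close(sock);
        libssh2_exit();
        return 0;
    }
    ```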

    Read the article

  • Optimize php-fpm and varnish for a powerful server

    - by Jim
    My setup is: Intel® Core™ i7-2600 and RAM 16 GB DDR3 RAM varnish+nginx+php-fpm+apc for a not very heavy WordPress blog with W3 Total Cache and CDN My problem is that after 55 hits per second according to blitz.io varnish starts giving out timeouts. CPU usage at this time is hardly 1%. Free memory at all time remains 10GB+. I tried benchmarking php-fpm directly with result of 150hits/s without any timeouts. But after that the CPU usage goes 100% and it stops responding. Can you help me optimize it to handle more? As i understand nginx has nothing to do over here so i dont put its config. php-fpm config listen = /tmp/php5-fpm.sock listen.allowed_clients = 127.0.0.1 user = nginx group = nginx pm = dynamic pm.max_children = 150 pm.start_servers = 7 pm.min_spare_servers = 2 pm.max_spare_servers = 15 pm.max_requests = 500 slowlog = /var/log/php-fpm/www-slow.log php_admin_value[error_log] = /var/log/php-fpm/www-error.log php_admin_flag[log_errors] = on apc extension = apc.so apc.enabled=1 apc.shm_size=512MB apc.num_files_hint=0 apc.user_entries_hint=0 apc.ttl=7200 apc.use_request_time=1 apc.user_ttl=7200 apc.gc_ttl=3600 apc.cache_by_default=1 apc.filters apc.mmap_file_mask=/tmp/apc.XXXXXX apc.file_update_protection=2 apc.enable_cli=0 apc.max_file_size=1M apc.stat=1 apc.stat_ctime=0 apc.canonicalize=0 apc.write_lock=1 apc.report_autofilter=0 apc.rfc1867=0 apc.rfc1867_prefix =upload_ apc.rfc1867_name=APC_UPLOAD_PROGRESS apc.rfc1867_freq=0 apc.rfc1867_ttl=3600 apc.include_once_override=0 apc.lazy_classes=0 apc.lazy_functions=0 apc.coredump_unmap=0 apc.file_md5=0 apc.preload_path Varnish VCL backend default { .host = "127.0.0.1"; .port = "8080"; .connect_timeout = 6s; .first_byte_timeout = 6s; .between_bytes_timeout = 60s; } acl purgehosts { "localhost"; "127.0.0.1"; } # Called after a document has been successfully retrieved from the backend. sub vcl_fetch { # Uncomment to make the default cache "time to live" is 5 minutes, handy # but it may cache stale pages unless purged. (TODO) # By default Varnish will use the headers sent to it by Apache (the backend server) # to figure out the correct TTL. # WP Super Cache sends a TTL of 3 seconds, set in wp-content/cache/.htaccess set beresp.ttl = 24h; # Strip cookies for static files and set a long cache expiry time. 
if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset beresp.http.set-cookie; set beresp.ttl = 24h; } # If WordPress cookies found then page is not cacheable if (req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)") { # set beresp.cacheable = false;#versions less than 3 #beresp.ttl>0 is cacheable so 0 will not be cached set beresp.ttl = 0s; } else { #set beresp.cacheable = true; set beresp.ttl=24h;#cache for 24hrs } # Varnish determined the object was not cacheable #if ttl is not > 0 seconds then it is cachebale if (!beresp.ttl > 0s) { # set beresp.http.X-Cacheable = "NO:Not Cacheable"; } else if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { # You don't wish to cache content for logged in users set beresp.http.X-Cacheable = "NO:Got Session"; return(hit_for_pass); #previously just pass but changed in v3+ } else if ( beresp.http.Cache-Control ~ "private") { # You are respecting the Cache-Control=private header from the backend set beresp.http.X-Cacheable = "NO:Cache-Control=private"; return(hit_for_pass); } else if ( beresp.ttl < 1s ) { # You are extending the lifetime of the object artificially set beresp.ttl = 300s; set beresp.grace = 300s; set beresp.http.X-Cacheable = "YES:Forced"; } else { # Varnish determined the object was cacheable set beresp.http.X-Cacheable = "YES"; if (beresp.status == 404 || beresp.status >= 500) { set beresp.ttl = 0s; } # Deliver the content return(deliver); } sub vcl_hash { # Each cached page has to be identified by a key that unlocks it. # Add the browser cookie only if a WordPress cookie found. if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { #set req.hash += req.http.Cookie; hash_data(req.http.Cookie); } } # vcl_recv is called whenever a request is received sub vcl_recv { # remove ?ver=xxxxx strings from urls so css and js files are cached. # Watch out when upgrading WordPress, need to restart Varnish or flush cache. set req.url = regsub(req.url, "\?ver=.*$", ""); # Remove "replytocom" from requests to make caching better. set req.url = regsub(req.url, "\?replytocom=.*$", ""); remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; # Exclude this site because it breaks if cached if ( req.http.host == "sr.ituts.gr" ) { return( pass ); } # Serve objects up to 2 minutes past their expiry if the backend is slow to respond. set req.grace = 120s; # Strip cookies for static files: if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset req.http.Cookie; return(lookup); } # Remove has_js and Google Analytics __* cookies. set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", ""); # Remove a ";" prefix, if present. set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", ""); # Remove empty cookies. if (req.http.Cookie ~ "^\s*$") { unset req.http.Cookie; } if (req.request == "PURGE") { if (!client.ip ~ purgehosts) { error 405 "Not allowed."; } #previous version ban() was purge() ban("req.url ~ " + req.url + " && req.http.host == " + req.http.host); error 200 "Purged."; } # Pass anything other than GET and HEAD directly. if (req.request != "GET" && req.request != "HEAD") { return( pass ); } /* We only deal with GET and HEAD by default */ # remove cookies for comments cookie to make caching better. 
set req.http.cookie = regsub(req.http.cookie, "1231111111111111122222222333333=[^;]+(; )?", ""); # never cache the admin pages, or the server-status page, or your feed? you may want to..i don't if (req.request == "GET" && (req.url ~ "(wp-admin|bb-admin|server-status|feed)")) { return(pipe); } # don't cache authenticated sessions if (req.http.Cookie && req.http.Cookie ~ "(wordpress_|PHPSESSID)") { return(lookup); } # don't cache ajax requests if(req.http.X-Requested-With == "XMLHttpRequest" || req.url ~ "nocache" || req.url ~ "(control.php|wp-comments-post.php|wp-login.php|bb-login.php|bb-reset-password.php|register.php)") { return (pass); } return( lookup ); } Varnish Daemon options DAEMON_OPTS="-a :80 \ -T 127.0.0.1:6082 \ -f /etc/varnish/ituts.vcl \ -u varnish -g varnish \ -S /etc/varnish/secret \ -p thread_pool_add_delay=2 \ -p thread_pools=8 \ -p thread_pool_min=100 \ -p thread_pool_max=1000 \ -p session_linger=50 \ -p session_max=150000 \ -p sess_workspace=262144 \ -s malloc,5G" Im not sure where to start, should i for start optimize php-fpm and then go to varnish or php-fpm is at its max right now so i should start looking for the problem in varnish?

    Read the article

  • Setting up a pc bluetooth server for android

    - by Del
    Alright, I've been reading a lot of topics the past two or three days and nothing seems to have asked this. I am writing a PC side server for my andriod device, this is for exchanging some information and general debugging. Eventually I will be connecting to a SPP device to control a microcontroller. I have managed, using the following (Android to pc) to connect to rfcomm channel 11 and exchange data between my android device and my pc. Method m = device.getClass().getMethod("createRfcommSocket", new Class[] { int.class }); tmp = (BluetoothSocket) m.invoke(device, Integer.valueOf(11)); I have attempted the createRfcommSocketToServiceRecord(UUID) method, with absolutely no luck. For the PC side, I have been using the C Bluez stack for linux. I have the following code which registers the service and opens a server socket: int main(int argc, char **argv) { struct sockaddr_rc loc_addr = { 0 }, rem_addr = { 0 }; char buf[1024] = { 0 }; char str[1024] = { 0 }; int s, client, bytes_read; sdp_session_t *session; socklen_t opt = sizeof(rem_addr); session = register_service(); s = socket(AF_BLUETOOTH, SOCK_STREAM, BTPROTO_RFCOMM); loc_addr.rc_family = AF_BLUETOOTH; loc_addr.rc_bdaddr = *BDADDR_ANY; loc_addr.rc_channel = (uint8_t) 11; bind(s, (struct sockaddr *)&loc_addr, sizeof(loc_addr)); listen(s, 1); client = accept(s, (struct sockaddr *)&rem_addr, &opt); ba2str( &rem_addr.rc_bdaddr, buf ); fprintf(stderr, "accepted connection from %s\n", buf); memset(buf, 0, sizeof(buf)); bytes_read = read(client, buf, sizeof(buf)); if( bytes_read 0 ) { printf("received [%s]\n", buf); } sprintf(str,"to Android."); printf("sent [%s]\n",str); write(client, str, sizeof(str)); close(client); close(s); sdp_close( session ); return 0; } sdp_session_t *register_service() { uint32_t svc_uuid_int[] = { 0x00000000,0x00000000,0x00000000,0x00000000 }; uint8_t rfcomm_channel = 11; const char *service_name = "Remote Host"; const char *service_dsc = "What the remote should be connecting to."; const char *service_prov = "Your mother"; uuid_t root_uuid, l2cap_uuid, rfcomm_uuid, svc_uuid; sdp_list_t *l2cap_list = 0, *rfcomm_list = 0, *root_list = 0, *proto_list = 0, *access_proto_list = 0; sdp_data_t *channel = 0, *psm = 0; sdp_record_t *record = sdp_record_alloc(); // set the general service ID sdp_uuid128_create( &svc_uuid, &svc_uuid_int ); sdp_set_service_id( record, svc_uuid ); // make the service record publicly browsable sdp_uuid16_create(&root_uuid, PUBLIC_BROWSE_GROUP); root_list = sdp_list_append(0, &root_uuid); sdp_set_browse_groups( record, root_list ); // set l2cap information sdp_uuid16_create(&l2cap_uuid, L2CAP_UUID); l2cap_list = sdp_list_append( 0, &l2cap_uuid ); proto_list = sdp_list_append( 0, l2cap_list ); // set rfcomm information sdp_uuid16_create(&rfcomm_uuid, RFCOMM_UUID); channel = sdp_data_alloc(SDP_UINT8, &rfcomm_channel); rfcomm_list = sdp_list_append( 0, &rfcomm_uuid ); sdp_list_append( rfcomm_list, channel ); sdp_list_append( proto_list, rfcomm_list ); // attach protocol information to service record access_proto_list = sdp_list_append( 0, proto_list ); sdp_set_access_protos( record, access_proto_list ); // set the name, provider, and description sdp_set_info_attr(record, service_name, service_prov, service_dsc); int err = 0; sdp_session_t *session = 0; // connect to the local SDP server, register the service record, and // disconnect session = sdp_connect( BDADDR_ANY, BDADDR_LOCAL, SDP_RETRY_IF_BUSY ); err = sdp_record_register(session, record, 0); // cleanup //sdp_data_free( channel ); 
sdp_list_free( l2cap_list, 0 ); sdp_list_free( rfcomm_list, 0 ); sdp_list_free( root_list, 0 ); sdp_list_free( access_proto_list, 0 ); return session; } And another piece of code, in addition to 'sdptool browse local' which can verifty that the service record is running on the pc: int main(int argc, char **argv) { uuid_t svc_uuid; uint32_t svc_uuid_int[] = { 0x00000000,0x00000000,0x00000000,0x00000000 }; int err; bdaddr_t target; sdp_list_t *response_list = NULL, *search_list, *attrid_list; sdp_session_t *session = 0; str2ba( "01:23:45:67:89:AB", &target ); // connect to the SDP server running on the remote machine session = sdp_connect( BDADDR_ANY, BDADDR_LOCAL, SDP_RETRY_IF_BUSY ); // specify the UUID of the application we're searching for sdp_uuid128_create( &svc_uuid, &svc_uuid_int ); search_list = sdp_list_append( NULL, &svc_uuid ); // specify that we want a list of all the matching applications' attributes uint32_t range = 0x0000ffff; attrid_list = sdp_list_append( NULL, &range ); // get a list of service records that have UUID 0xabcd err = sdp_service_search_attr_req( session, search_list, \ SDP_ATTR_REQ_RANGE, attrid_list, &response_list); sdp_list_t *r = response_list; // go through each of the service records for (; r; r = r-next ) { sdp_record_t *rec = (sdp_record_t*) r-data; sdp_list_t *proto_list; // get a list of the protocol sequences if( sdp_get_access_protos( rec, &proto_list ) == 0 ) { sdp_list_t *p = proto_list; // go through each protocol sequence for( ; p ; p = p-next ) { sdp_list_t *pds = (sdp_list_t*)p-data; // go through each protocol list of the protocol sequence for( ; pds ; pds = pds-next ) { // check the protocol attributes sdp_data_t *d = (sdp_data_t*)pds-data; int proto = 0; for( ; d; d = d-next ) { switch( d-dtd ) { case SDP_UUID16: case SDP_UUID32: case SDP_UUID128: proto = sdp_uuid_to_proto( &d-val.uuid ); break; case SDP_UINT8: if( proto == RFCOMM_UUID ) { printf("rfcomm channel: %d\n",d-val.int8); } break; } } } sdp_list_free( (sdp_list_t*)p-data, 0 ); } sdp_list_free( proto_list, 0 ); } printf("found service record 0x%x\n", rec-handle); sdp_record_free( rec ); } sdp_close(session); } Output: $ ./search rfcomm channel: 11 found service record 0x10008 sdptool: Service Name: Remote Host Service Description: What the remote should be connecting to. 
Service Provider: Your mother Service RecHandle: 0x10008 Protocol Descriptor List: "L2CAP" (0x0100) "RFCOMM" (0x0003) Channel: 11 And for logcat I'm getting this: 07-22 15:57:06.087: ERROR/BTLD(215): ****************search UUID = 0000*********** 07-22 15:57:06.087: INFO//system/bin/btld(209): btapp_dm_GetRemoteServiceChannel() 07-22 15:57:06.087: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Wake, 0x8003 #### 07-22 15:57:06.097: INFO/ActivityManager(88): Displayed activity com.example.socktest/.socktest: 79 ms (total 79 ms) 07-22 15:57:06.697: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Sleep, 0x8004 #### 07-22 15:57:07.517: WARN/BTLD(215): ccb timer ticks: 2147483648 07-22 15:57:07.517: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Wake, 0x8003 #### 07-22 15:57:07.547: WARN/BTLD(215): info:x10 07-22 15:57:07.547: INFO/BTL-IFS(215): send_ctrl_msg: [BTL_IFS CTRL] send BTLIF_DTUN_SIGNAL_EVT (CTRL) 10 pbytes (hdl 14) 07-22 15:57:07.547: DEBUG/DTUN_HCID_BZ4(253): dtun_dm_sig_link_up() 07-22 15:57:07.547: INFO/DTUN_HCID_BZ4(253): dtun_dm_sig_link_up: dummy_handle = 342 07-22 15:57:07.547: DEBUG/ADAPTER(253): adapter_get_device(00:02:72:AB:7C:EE) 07-22 15:57:07.547: ERROR/BluetoothEventLoop.cpp(88): pollData[0] is revented, check next one 07-22 15:57:07.547: ERROR/BluetoothEventLoop.cpp(88): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/253/hci0/dev_00_02_72_AB_7C_EE 07-22 15:57:07.777: WARN/BTLD(215): process_service_search_attr_rsp 07-22 15:57:07.787: INFO/BTL-IFS(215): send_ctrl_msg: [BTL_IFS CTRL] send BTLIF_DTUN_SIGNAL_EVT (CTRL) 13 pbytes (hdl 14) 07-22 15:57:07.787: INFO/DTUN_HCID_BZ4(253): dtun_dm_sig_rmt_service_channel: success=0, service=00000000 07-22 15:57:07.787: ERROR/DTUN_HCID_BZ4(253): discovery unsuccessful! 
07-22 15:57:08.497: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Sleep, 0x8004 #### 07-22 15:57:09.507: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Wake, 0x8003 #### 07-22 15:57:09.597: INFO/BTL-IFS(215): send_ctrl_msg: [BTL_IFS CTRL] send BTLIF_DTUN_SIGNAL_EVT (CTRL) 11 pbytes (hdl 14) 07-22 15:57:09.597: DEBUG/DTUN_HCID_BZ4(253): dtun_dm_sig_link_down() 07-22 15:57:09.597: INFO/DTUN_HCID_BZ4(253): dtun_dm_sig_link_down device = 0xf7a0 handle = 342 reason = 22 07-22 15:57:09.597: ERROR/BluetoothEventLoop.cpp(88): pollData[0] is revented, check next one 07-22 15:57:09.597: ERROR/BluetoothEventLoop.cpp(88): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/253/hci0/dev_00_02_72_AB_7C_EE 07-22 15:57:09.597: DEBUG/BluetoothA2dpService(88): Received intent Intent { act=android.bluetooth.device.action.ACL_DISCONNECTED (has extras) } 07-22 15:57:10.107: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Sleep, 0x8004 #### 07-22 15:57:12.107: DEBUG/BluetoothService(88): Cleaning up failed UUID channel lookup: 00:02:72:AB:7C:EE 00000000-0000-0000-0000-000000000000 07-22 15:57:12.107: ERROR/Socket Test(5234): connect() failed 07-22 15:57:12.107: DEBUG/ASOCKWRP(5234): asocket_abort [31,32,33] 07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): blz20_wrp_shutdown: s 31, how 2 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_shutdown: fd (-1:31), bta -1, rc 0, wflags 0x0 07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): __close_prot_rfcomm: fd 31 07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): __close_prot_rfcomm: bind not completed on this socket 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): btlif_signal_event: fd (-1:31), bta -1, rc 0, wflags 0x0 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): btlif_signal_event: event BTLIF_BTS_EVT_ABORT matched 07-22 15:57:12.107: DEBUG/BTL_IFC_WRP(5234): wrp_close_s_only: wrp_close_s_only [31] (31:-1) [] 07-22 15:57:12.107: DEBUG/BTL_IFC_WRP(5234): wrp_close_s_only: data socket closed 07-22 15:57:12.107: DEBUG/BTL_IFC_WRP(5234): wsactive_del: delete wsock 31 from active list [ad3e1494] 07-22 15:57:12.107: DEBUG/BTL_IFC_WRP(5234): wrp_close_s_only: wsock fully closed, return to pool 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): btsk_free: success 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_write: wrote 1 bytes out of 1 on fd 33 07-22 15:57:12.107: DEBUG/ASOCKWRP(5234): asocket_destroy 07-22 15:57:12.107: DEBUG/ASOCKWRP(5234): asocket_abort [31,32,33] 07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): blz20_wrp_shutdown: s 31, how 2 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_shutdown: btsk not found, normal close (31) 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_write: wrote 1 bytes out of 1 on fd 33 07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): blz20_wrp_close: s 33 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_close: btsk not found, normal close (33) 07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): blz20_wrp_close: s 32 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_close: btsk not found, normal close (32) 07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): blz20_wrp_close: s 31 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_close: btsk not found, normal close (31) 07-22 15:57:12.157: DEBUG/Sensors(88): close_akm, fd=151 07-22 15:57:12.167: ERROR/CachedBluetoothDevice(477): onUuidChanged: Time since last connect14970690 07-22 15:57:12.237: DEBUG/Socket Test(5234): -On Stop- Sorry for bombarding you guys with what seems like a difficult question and a lot to read, but 
I've been working on this problem for a while and I've tried a lot of different things to get this working. Let me reiterate, I can get it to work, but not using service discovery protocol. I've tried a several different UUIDs and on two different computers, although I only have my HTC Incredible to test with. I've also heard some rumors that the BT stack wasn't working on the HTC Droid, but that isn't the case, at least, for PC interaction.

    Read the article

  • Using TortoiseHg to push to an authenticated git repository

    - by Nathan Palmer
    I'm trying to push a changeset from a local Mercurial repository created with TortoiseHg to a remote Git repository. I have hg-git installed and configured, and it will pull just fine. But when I run the push it gives me this:

        Command: hg push git+ssh://git@dummyrepo:username/repo.git
        Result:  pushing to git+ssh://git@dummyrepo:username/repo.git
                 importing Hg objects into Git
                 creating and sending data
                 abort: the remote end hung up unexpectedly

    There are several things I've done to get to this point, but I'm hoping to resolve this last issue because I find TortoiseHg much easier to work with than any of the Git tools out there (for Windows):

    1. Installed TortoiseHg
    2. Pulled down hg-git from http://bitbucket.org/durin42/hg-git/
    3. Configured mercurial.ini to point to the hg-git library
    4. Pulled down the dulwich source from git://git.samba.org/jelmer/dulwich.git
    5. Compiled dulwich and put it into library.zip for TortoiseHg
    6. Configured TortoiseHg to use TortoisePlink.exe for ssh
    7. Added my private key to Pageant

    Any ideas what I could be missing?
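
    One detail worth checking (a guess on my part, since the question does not confirm it): the push URL uses the scp-style colon form (git@dummyrepo:username/repo.git) inside a git+ssh:// URL, and hg-git generally expects the URL form with a slash after the host instead. A hedged hgrc sketch, keeping the host and repo names as the placeholders from the question:

    ```ini
    ; Mercurial.ini or the repository's .hg/hgrc
    [ui]
    ssh = "C:\Program Files\TortoiseHg\TortoisePlink.exe" -ssh -2

    [paths]
    default = git+ssh://git@dummyrepo/username/repo.git
    ```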

    Read the article
