Search Results

Search found 13262 results on 531 pages for 'complete validation'.

  • Get email address from OpenID provider (Janrain openid library)

    - by Moak
    When signing in to stackoverflow with google I get this message Stackoverflow.com is asking for some information from your Google Account [email protected] • Email address: [email protected] However on my site I can log in with openid but I can't ask for the email address. I get this message You are signing in to example.com with your Google Account [email protected] Also I'm finding it hard to know at what step I need to ask for it, here's some code where I think that step should be built into. /** * Authenticates the given OpenId identity. * Defined by Zend_Auth_Adapter_Interface. * * @throws Zend_Auth_Adapter_Exception If answering the authentication query is impossible * @return Zend_Auth_Result */ public function authenticate() { $id = $this->_id; $consumer = new Auth_OpenID_Consumer($this->_storage); if (!empty($id)) { $authRequest = $consumer->begin($id); if (is_null($authRequest)) { return new Zend_Auth_Result( Zend_Auth_Result::FAILURE, $id, array("Authentication failed", 'Unknown error')); } if (Auth_OpenID::isFailure($authRequest)) { return new Zend_Auth_Result( Zend_Auth_Result::FAILURE, $id, array("Authentication failed", "Could not redirect to server: " . $authRequest->message)); } $redirectUrl = $authRequest->redirectUrl($this->_root, $this->_returnTo); if (Auth_OpenID::isFailure($redirectUrl)) { return new Zend_Auth_Result( Zend_Auth_Result::FAILURE, $id, array("Authentication failed", $redirectUrl->message)); } Zend_OpenId::redirect($redirectUrl); } else { $response = $consumer->complete(Zend_OpenId::selfUrl()); switch($response->status) { case Auth_OpenID_CANCEL: case Auth_OpenID_FAILURE: return new Zend_Auth_Result( Zend_Auth_Result::FAILURE, null, array("Authentication failed. " . @$response->message)); break; case Auth_OpenID_SUCCESS: return $this->_constructSuccessfulResult($response); break; } } } It seems like such an obvious thing but I'm having a hard time googling and combing through the code just to find this. Thanks!
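
    The step being asked about sits between begin() and redirectUrl(): an attribute request (Simple Registration or Attribute Exchange) has to be attached to the auth request before building the redirect, and read back out of the response in the complete() branch. Here is a minimal sketch of that flow using python-openid, the Python variant of the same JanRain library (the PHP port exposes analogous Auth_OpenID_SRegRequest / Auth_OpenID_SRegResponse classes, so treat the exact PHP names as something to verify). Note also that Google's endpoint has historically answered Attribute Exchange (openid.extensions.ax) rather than SReg, so the same addExtension step may need an AX FetchRequest instead.

        from openid.consumer import consumer
        from openid.extensions import sreg

        def begin_with_email(store, session, identity_url, realm, return_to):
            c = consumer.Consumer(session, store)
            auth_request = c.begin(identity_url)
            # The missing step: ask the provider for the email field
            # before building the redirect URL.
            auth_request.addExtension(sreg.SRegRequest(required=['email']))
            return auth_request.redirectURL(realm, return_to)

        def finish_with_email(store, session, current_url, query):
            c = consumer.Consumer(session, store)
            response = c.complete(query, current_url)
            if response.status == consumer.SUCCESS:
                # Pull the Simple Registration data out of the positive response.
                sreg_response = sreg.SRegResponse.fromSuccessResponse(response)
                return sreg_response.get('email') if sreg_response else None
            return None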

  • Exception from HRESULT: 0x80020009 (DISP_E_EXCEPTION)) in SharePoint Part 2

    - by BeraCim
    Hi all: Following this post which I posted some time ago, I now get the same error every time I try to rewire two webs' URLs. Basically, this is the code. It runs in a LongRunningOperationJob: SPWeb existingWeb = null; using (existingWeb = site.OpenWeb(wedId)) { SPWeb destinationWeb = createNewSite(existingWeb); existingWeb.AllowUnsafeUpdates = true; existingWeb.Name = existingWeb.Name + "_old"; existingWeb.Title = existingWeb.Title + "_old"; existingWeb.Description = existingWeb.Description + "_old"; existingWeb.Update(); existingWeb.AllowUnsafeUpdates = false; destinationWeb.AllowUnsafeUpdates = true; destinationWeb.Name = existingWeb.Name; destinationWeb.Title = existingWeb.Title; destinationWeb.Description = existingWeb.Description; destinationWeb.Update(); destinationWeb.AllowUnsafeUpdates = false; // null this for what it's worth existingWeb = null; destinationWeb = null; } // <---- Exception raised here Basically, the code is trying to rename the existing site's URL to something else and have the destination web's URL point to the old site's URL. When I ran this for the first time, I received the exception mentioned in the subject. However, on every run after that, I do not see the exception anymore. The webs DO get rewired... but at the cost of the app dying an unnecessary and terrible death. I'm at a complete loss as to what is going on and need urgent help. Does SharePoint keep some hidden table from me, or does the logic above have a fatal problem? Thanks.

  • FileReference.download() not working

    - by Lowgain
    I'm building a Flex app which requires me to download files. I have the following code: public function execute(event:CairngormEvent) : void { var evt:StemDownloadEvent = event as StemDownloadEvent; var req:URLRequest = new URLRequest(evt.data.file_path); var localRef:FileReference = new FileReference(); localRef.addEventListener(Event.OPEN, _open); localRef.addEventListener(ProgressEvent.PROGRESS, _progress); localRef.addEventListener(Event.COMPLETE, _complete); localRef.addEventListener(Event.CANCEL, _cancel); localRef.addEventListener(Event.SELECT, _select); localRef.addEventListener(SecurityErrorEvent.SECURITY_ERROR, _securityError); localRef.addEventListener(IOErrorEvent.IO_ERROR, _ioError); try { localRef.download(req); } catch (e:Error) { SoundRoom.logger.log(e); } } As you can see, I hooked up every possible event listener as well. When this executes, I get the browse window, and am able to select a location, and click save. After that, nothing happens. I have each event handler hooked up to my logger, and not a single one is being called! Is there something missing here?

  • GIT clone repo across local file system

    - by Jon
    Hi all, I am a complete noob when it comes to Git. I have just been taking my first steps over the last few days. I set up a repo on my laptop and pulled down the trunk from an SVN project (had some issues with branches and haven't got them working), but all seems OK there. I now want to be able to pull or push between the laptop and my main desktop. The reason is that the laptop is handy on the train, as I spend 2 hours a day travelling and can get some good work done, but my main machine at home is great for development. So I want to be able to push / pull from the laptop to the main computer when I get home. I thought the simplest way of doing this would be to just have the code folder shared out across the LAN and do: git clone file://192.168.10.51/code Unfortunately this doesn't seem to be working for me. I open a Git Bash prompt in C:\code (the shared folder on both machines) and type the above command, and this is what I get back: Initialized empty Git repository in C:/code/code/.git/ fatal: 'C:/Program Files (x86)/Git/code' does not appear to be a git repository fatal: The remote end hung up unexpectedly How can I share the repository between the two machines in the simplest way possible? There will be other locations that will be official storage points and places where the other devs, CI server etc. will pull from; this is just so that I can work on the same repo across two machines. Thanks

  • Silverlight Async Design Pattern Issue

    - by Mike Mengell
    I'm in the middle of a Silverlight application and I have a function which needs to call a webservice and using the result complete the rest of the function. My issue is that I would have normally done a synchronous web service call got the result and using that carried on with the function. As Silverlight doesn't support synchronous web service calls without additional custom classes to mimic it, I figure it would be best to go with the flow of async rather than fight it. So my question relates around whats the best design pattern for working with async calls in program flow. In the following example I want to use the myFunction TypeId parameter depending on the return value of the web service call. But I don't want to call the web service until this function is called. How can I alter my code design to allow for the async call? string _myPath; bool myFunction(Guid TypeId) { WS_WebService1.WS_WebService1SoapClient proxy = new WS_WebService1.WS_WebService1SoapClient(); proxy.GetPathByTypeIdCompleted += new System.EventHandler<WS_WebService1.GetPathByTypeIdCompleted>(proxy_GetPathByTypeIdCompleted); proxy.GetPathByTypeIdAsync(TypeId); // Get return value if (myPath == "\\Server1") { //Use the TypeId parameter in here } } void proxy_GetPathByTypeIdCompleted(object sender, WS_WebService1.GetPathByTypeIdCompletedEventArgs e) { string server = e.Result.Server; myPath = '\\' + server; } Thanks in advance, Mike

  • How to implement a progress bar (to show progress) using a worker thread in Win32?

    - by Rakesh
    I am trying to show a progress bar while my process is running. In my application there is a situation where I have to read files and manipulate them (it will take some time to complete), and I want to display a progress bar during this operation. The particular function I am calling is Win32. As you can see in my code below, I have got as far as creating the progress bar in a dialog window and creating a thread. Now I don't know how to post the message, or where to receive the message and handle it. Please help me; thanks in advance. //my function int Myfunction(....) { HWND dialog = CreateWindowEx(0,WC_DIALOG,L"Proccessing...",WS_OVERLAPPEDWINDOW|WS_VISIBLE, 600,300,280,120,NULL,NULL,NULL,NULL); HWND pBar = CreateWindowEx(NULL,PROGRESS_CLASS,NULL,WS_CHILD|WS_VISIBLE,40,20,200, 20, dialog,(HMENU)IDD_PROGRESS,NULL,NULL); HANDLE getHandle = CreateThread(NULL,NULL,(LPTHREAD_START_ROUTINE)SetFilesForOperation(...), NULL,NULL,0); } LPARAM SetFilesForOperation(...) { for(int index = 0;index < noOfFiles; index++) { *checkstate = *(checkState + index); if(*checkstate == -1) { *(getFiles+i) = new TCHAR[MAX_PATH]; wcscpy(*(getFiles+i),*(dataFiles +index)); i++; } else { (*tempDataFiles)->Add(*(dataFiles+index)); *(checkState + localIndex) = *(checkState + index); localIndex++; } //SendMessage(pBar,PBM_SETSTEP,1,0); } }

  • Should I expect Comet to be this slow?

    - by Chad Johnson
    I have the following in a Rails controller: def poll records = [] start_time = Time.now.to_i while records.length == 0 do records = Something.uncached{Something.find(:all, :conditions => { :some_condition => false})} if records.length > 0 break end sleep 1 if Time.now.to_i - start_time >= 20 break end end responseData = [] records.each do |record| responseData << { 'something' => record.some_value } # Flag message as received. record.some_condition = true record.save end render :text => responseData.to_json end and then I have Javascript performing an AJAX request. The request sits there for 20 seconds or until the controller method finds a record in the database, waiting. That works. function poll() { $.ajax({ url: '/my_controller/poll', type: 'GET', dataType: 'json', cache: false, data: 'time=' + new Date().getTime(), success: function(response) { // show response here }, complete: function() { poll(); }, error: function() { alert('error'); poll(); } }); } When I have 5 - 10 tabs open in my browser, my web application becomes super slow. Is this to be expected? Or is there some obvious improvement(s) I can make?

  • Shell script task status monitoring

    - by Bikram Agarwal
    I'm running an ANT task in background and checking in 60 second intervals whether that task is complete or not. If it is not, every 60 seconds, a message should be displayed on screen - "Deploy process is still running. $slept seconds since deploy started", where $slept is 60, 120, 180 n so on. There's a limit of 1200 seconds, after which the script will show the log via 'ant log' command and ask the user whether to continue. If the user chooses to continue, 300 seconds are added to the time limit and the process repeats. The code that I am using for this task is - ant deploy & limit=1200 deploy_check() { while [ ${slept:-0} -le $limit ]; do sleep 60 && slept=`expr ${slept:-0} + 60` if [ $$ = "`ps -o ppid= -p $!`" ]; then echo "Deploy process is still running. $slept seconds since deploy started." else wait $! && echo "Application ${New_App_Name} deployed successfully" || echo "Deployment of ${New_App_Name} failed" break fi done } deploy_check if [ $$ = "`ps -o ppid= -p $!`" ]; then echo "Deploy process did not finish in $slept seconds. Here's the log." ant log echo "Do you want to kill the process? Press Ctrl+C to kill. Press Enter to continue." read log limit=`expr ${limit} + 300` deploy_check fi Now, the problem is - this code is not working. This looks like a perfectly good code and yet, this is not working. Can anyone point out what is wrong with this code, please.

  • Visual Studio 2008 Setup Project - Include .NET Framework 3.5

    - by rajadiga
    I built a Visual Studio 2008 setup project which depends on .NET 3.5. I added prerequisites such as .NET 3.5, Microsoft Office interoperability, VS Tools for Office System 3.0 Runtime, etc. After that I selected "Download Prerequisite from Same location as my application" under "Specify install location for Prerequisite". I built the setup and found mysetup.msi in the Release directory. On a new machine I started a fresh installation of my application. A dialog shows up like this: "This Setup Requires .NET framework 3.5 , Please install .NET setup then run this setup, .NET Framework can be obtained from web Do you want to do that now?" It gives "Yes" and "No" options; if I press Yes it goes to the Microsoft website. How can I avoid this? I want the setup to install the .NET Framework from the same location where I put all the setup files, including mysetup.msi. In the case of a quiet installation, cmd /c "msiexec /package mysetup.msi /quiet /log install.log", the log shows it only gets halfway through the installation and then errors: Property(S): HideFatalErrorForm = TRUE MSI (s) (D0:24) [00:07:08:015]: Product: my product-- Installation failed. === Logging stopped: 3/23/2010 0:07:08 === How can I complete the installation without user intervention and without error using a VS2008 setup project? Thanks in advance for any input.

  • jQuery addClass() not running before jQuery.ajax()

    - by Josh
    I'm trying to have a button that onclick will apply a class loading (sets cursor to "wait") to the body, before making a series of ajax requests. Code is: $('#addSelected').click(function(){ $('body').addClass('loading'); var products = $(':checkbox[id^=add_]:checked'); products.each(function(){ var prodID = $(this).attr('id').replace('add_', ''); var qty = $('#qty_' + prodID).val(); if($('#prep_' + prodID).val()) { prodID += ':' + $('#prep_' + prodID).val(); } // Have to use .ajax rather than .get so we can use async:false, otherwise // product adds happen in parallel, causing data to be lost. $.ajax({ url: '<?=base_url()?>basket/update/' + prodID + '/' + qty, async: false, beforeSend: function(){ $('body').addClass('loading'); } }); }); }); I've tried doing $('body').addClass('loading'); both before the requests, and as a beforeSend callback, but there is no difference. In firebug I can see that body doesn't get the loading class until after the requests are complete. Any ideas?

  • Problem running a script on a different operating system

    - by Praveen kalal
    I use JavaScript code to get browser information. It runs fine on Microsoft Windows XP, but it does not work on Microsoft Windows Server 2003. My code is the following; please help. <html> <head> <script type="text/javascript" src="zeroclipboard/ZeroClipboard.js"></script> <script type="text/javascript"> window.onload = function F() { var today = new Date(); var the_date = new Date("December 31, 2012"); var the_cookie_date = the_date.toGMTString(); var the_cookie = screen.width +"x"+ screen.height; var the_cookie = "Screen Resolution:"+the_cookie + ";\nExpires:" + the_cookie_date+";\n Browser CodeName:"+navigator.appCodeName+";\n Browser Name: " + navigator.appName+";\n Browser Version: " + navigator.appVersion+";\n Browser Version: " + navigator.appVersion+"; \n Cookies Enabled: " + navigator.cookieEnabled +";\n Platform: " + navigator.platform+";\n User-agent header: " + navigator.userAgent; / document.getElementById('box-content').value=the_cookie; } </script> </head> <body> <textarea name="box-content" id="box-content" rows="10" cols="70"> </textarea> <br /><br /> <p><input type="button" id="copy" name="copy" value="Copy to Clipboard" /></p> </body> </html> <script type="text/javascript"> //set path ZeroClipboard.setMoviePath('http://192.168.101.135:471/browserinfo/zeroclipboard/ZeroClipboard.swf'); //create client var clip = new ZeroClipboard.Client(); //event clip.addEventListener('mousedown',function() { clip.setText(document.getElementById('box-content').value); }); clip.addEventListener('complete',function(client,text) { alert('text is copied'); }); //glue it to the button clip.glue('copy'); </script>

  • MongoDB in Go (golang) with mgo: How do I update a record, find out if update was successful and get the data in a single atomic operation?

    - by Sebastián Grignoli
    I am using the mgo driver for MongoDB under Go. My application asks for a task (with just a record select in Mongo from a collection called "jobs") and then registers itself as an assignee to complete that task (an update to that same "job" record, setting itself as assignee). The program will be running on several machines, all talking to the same Mongo. When my program lists the available tasks and then picks one, other instances might have already obtained that assignment, and the current assignment would have failed. How can I make sure that the record I read and then update does or does not have a certain value (in this case, an assignee) at the time of being updated? I am trying to get one assignment, no matter which one, so I think I should first select a pending task and try to assign it, keeping it only if the update was successful. So, my query should be something like: "From all records in collection 'jobs', update just one that has assignee=null, setting my ID as the assignee. Then, give me that record so I can run the job." How can I express that with the mgo driver for Go?
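
    What is being described is a single atomic find-and-modify: match a job with no assignee, set the assignee, and get the resulting document back in one round trip, so two workers can never claim the same job. Below is a sketch of that pattern using pymongo purely for illustration (the mgo driver wraps the same findAndModify command; its Query.Apply / Change API is the place to look, but verify those names against the mgo docs rather than taking them from here). The database name and worker id are placeholders.

        from pymongo import MongoClient, ReturnDocument

        client = MongoClient('mongodb://localhost:27017')
        jobs = client.mydb.jobs  # placeholder database/collection

        def claim_one_job(worker_id):
            # Atomically pick any job nobody owns yet and stamp it with our id.
            # Returns the claimed document, or None if every job is taken.
            return jobs.find_one_and_update(
                {'assignee': None},
                {'$set': {'assignee': worker_id}},
                return_document=ReturnDocument.AFTER,
            )

        job = claim_one_job('worker-42')
        if job is not None:
            print('claimed job %s' % job['_id'])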

  • Why can't I save a long text in my MySQL database?

    - by DomingoSL
    I'm trying to save to my database a long text (about 2500 chars) entered by my users in a web form and passed to the server using PHP. When I look in phpMyAdmin, the text gets cropped. How can I configure my table in order to keep the complete text? This is my table config: CREATE TABLE `extra_879` ( `id` bigint(20) NOT NULL auto_increment, `id_user` bigint(20) NOT NULL, `title` varchar(300) NOT NULL, `content` varchar(3000) NOT NULL, PRIMARY KEY (`id`), UNIQUE KEY `id_user` (`id_user`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=4 ; Note the field content, which has a limit of 3000 chars, yet the text always gets cropped at 690 chars. Thanks for any help! EDIT: I found the problem but I don't know how to solve it. The query always gets cropped at the same character, a special char: ù EDIT 2: This is the cropped query: INSERT INTO extra_879 (id,id_user,title,content) VALUES (NULL,'1','Informazione Extra',' Riconoscimenti Laurea di ingegneria presa a le 22 anni e in il terso posto della promozione Diploma analista di sistemi ottenuto il rating massimo 20/20, primo posto della promozione. Borsa di Studio (offerta dal Ministero Esteri Italiano) vinta nel 2010 (Valutazione del territorio attraverso le nueve tecnologie) Pubblicazione di paper; Stima del RCS della nave CCGS radar sulla base dei risultati di H. Leong e H. Wilson. http://www.ing.uc.edu.vek-azozayalarchivospdf/PAPER-Sarmiento.pdf Tesi di laurea: PROGETTAZIONE E REALIZZAZIONE DI UN SIS-TEMA DI TELEMETRIA GSM PER IL CONTROLLO DELLO STATO DI TRANSITO VEICOLARE E CLIMA (ottenuto il punteggio pi') It gets cropped right at the (ottenuto il punteggio più alto) phrase, exactly where ù appears... EDIT 3: I am using jQuery + AJAX to send the query $.ajax({type: "POST", url: "handler.php", data: "e_text="+ $('#e_text').val() + "&e_title="+ $('#extra_title').val(),
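
    A 3000-character column limit would not cut the text at 690 characters; truncation exactly at the first non-ASCII character (ù) is the classic symptom of a character-set mismatch between what the page sends and what the connection and column are declared as (the table above is DEFAULT CHARSET=latin1). A hedged sketch of the usual fix, which is to make the column and the connection both utf8, shown with Python's MySQLdb only to keep the example self-contained; the same two steps apply from PHP (ALTER the column and set the connection charset). Host, credentials and database name are placeholders.

        import MySQLdb

        # Align the column and the connection on utf8 so multi-byte characters
        # such as 'ù' are stored instead of the value being cut short.
        conn = MySQLdb.connect(host='localhost', user='app', passwd='secret',
                               db='mydb', charset='utf8', use_unicode=True)
        cur = conn.cursor()

        # One-time schema change: TEXT in utf8 also removes the varchar ceiling.
        cur.execute("ALTER TABLE extra_879 MODIFY content TEXT "
                    "CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL")

        cur.execute("INSERT INTO extra_879 (id_user, title, content) VALUES (%s, %s, %s)",
                    (1, 'Informazione Extra', u'ottenuto il punteggio pi\u00f9 alto'))
        conn.commit()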

  • Save PyML.classifiers.multi.OneAgainstRest(SVM()) object?

    - by Michael Aaron Safyan
    I'm using PYML to construct a multiclass linear support vector machine (SVM). After training the SVM, I would like to be able to save the classifier, so that on subsequent runs I can use the classifier right away without retraining. Unfortunately, the .save() function is not implemented for that classifier, and attempting to pickle it (both with standard pickle and cPickle) yield the following error message: pickle.PicklingError: Can't pickle : it's not found as __builtin__.PySwigObject Does anyone know of a way around this or of an alternative library without this problem? Thanks. Edit/Update I am now training and attempting to save the classifier with the following code: mc = multi.OneAgainstRest(SVM()); mc.train(dataset_pyml,saveSpace=False); for i, classifier in enumerate(mc.classifiers): filename=os.path.join(prefix,labels[i]+".svm"); classifier.save(filename); Notice that I am now saving with the PyML save mechanism rather than with pickling, and that I have passed "saveSpace=False" to the training function. However, I am still gettting an error: ValueError: in order to save a dataset you need to train as: s.train(data, saveSpace = False) However, I am passing saveSpace=False... so, how do I save the classifier(s)? P.S. The project I am using this in is pyimgattr, in case you would like a complete testable example... the program is run with "./pyimgattr.py train"... that will get you this error. Also, a note on version information: [michaelsafyan@codemage /Volumes/Storage/classes/cse559/pyimgattr]$ python Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Type "help", "copyright", "credits" or "license" for more information. import PyML print PyML.__version__ 0.7.0

  • pluto or jetspeed on google app engine?

    - by Patrick Cornelissen
    I am trying to build something "portlet server"-ish on the google app engine. (as open source) I'd like to use the JSR168/286 standards, but I think that the restrictions of the app engine will make it somewhere between tricky and impossible. Has anyone tried to run jetspeed or an application that uses pluto internally on the google app engine? Based on my current knowledge of portlets and the google app engine I'm anticipating these problems: A war file with portlets is from the deployment standpoint more or less a complete webapp (yes, I know that it doesn't really work without a portal server). The war file may contain it's own web.xml etc. This makes deployment on the app engine rather difficult, because the apps are not visible to each other, so all portlet containing archives need to be included in the war file of the deployed "app engine based portal server". The "portlets" are (at least in liferay) started as permanent servlet processes, based on their portlet.xmls and web.xmls which is located in the same spot for every portlet archive that is loaded. I think this may be problematic in the app engine, because everything is in one big "web app", so it may be tricky to access the portlet.xmls from each archive. This prevents a 100% compatibility in my opinion. Is here anyone who has any experience with the combination of portlets and the app engine? Do you think it's feasible to modify jetspeed, pluto or any other portlet container to be able to run it on the app engine?

  • .NET: How to know when serialization is completed?

    - by Ian Boyd
    When I construct my control (which inherits DataGrid), I add specific rows and columns. This works great at design time. Unfortunately, at runtime I add my rows and columns in the same constructor, but then the DataGrid is serialized (after the constructor runs) adding more rows and columns. After serialization is complete, I need to clear everything and re-initialize the rows and columns. Is there a protected method that I can override to know when the control is done serializing? Of course, I'd prefer to not have to do the work in the constructor, throw it away, and do it again after (potential) serialization. Is there a preferred event that is the equivalent of "set yourself up now", so that it is called once whether I'm serialized or not? The serialization i speak of comes from the InitializeComponent() method in the form's code-behind file. #region Windows Form Designer generated code /// <summary> /// Required method for Designer support - do not modify /// the contents of this method with the code editor. /// </summary> private void InitializeComponent() { ... } It would have been perfect if InitializeComponent was a virtual method defined by Control, then i could just override it and then perform my processing after i call base: protected override void InitializeComponent() { base.InitializeComponent(); InitializeMe(); } But it's not an ancestor method, it's declared only in the code-behind file. i notice that InitializeComponent calls SuspendLayout and ResumeLayout on various Controls. i thought it could override ResumeLayout, and perform my initialization then: public override void ResumeLayout() { base.ResumeLayout(); InitializeMe(); } But ResumeLayout is not virtual, so that's out. Anymore ideas? i can't be the first person to create a custom control.

  • Synchronizing screencasting (ffmpeg) and capturing from the webcam (OpenCV)

    - by lyuba
    As in my previous questions, I am trying to build a simple eye tracker. I decided to start with a Linux version (running Ubuntu). To complete this task one should organize screencasting and webcam capturing in such a way that frames from both streams exactly match each other and there is the same total number of frames in each of them. Screencasting fps fully depends on the camera's fps, so each time we get an image from the webcam we can potentially grab a screen frame and stay happy. However, all the tools for fast screencasting, like ffmpeg for example, return an .avi file as the result and require the fps to be known before they are started. On the other hand, tools like Java+Robot or ImageMagick seem to require around 20 ms to return a .jpg screenshot, which is pretty slow for the task. But they may be invoked right after each webcam frame is grabbed and so provide the needed synchronization. So the sub-questions are: 1. Does the USB camera's frame rate vary during a single session? 2. Are there any tools which provide fast screencasting frame by frame? 3. Is there any way to make ffmpeg push a new frame to the .avi file only when the program initiates the request? For my task I may use either C++ or Java. I am, actually, an interface designer, not a driver programmer, and this task seems to be pretty low-level. I would be grateful for any suggestions and tips!
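
    On the "grab a screenshot right after each webcam frame" idea, here is a rough sketch of the pairing loop, written in Python for brevity (OpenCV for the webcam, ImageMagick's import called once per frame for the screen). The roughly 20 ms per screenshot cost mentioned above still applies; what is illustrated is the frame/timestamp bookkeeping that keeps the two streams matched, not the speed.

        import subprocess
        import time
        import cv2

        cap = cv2.VideoCapture(0)   # webcam stream
        pairs = []                  # (frame_index, webcam_ts, screen_ts)

        for i in range(300):        # capture ~300 paired frames
            ok, frame = cap.read()  # blocks until the camera delivers a frame
            if not ok:
                break
            webcam_ts = time.time()
            cv2.imwrite('cam_%04d.png' % i, frame)

            # Grab the whole screen with ImageMagick immediately afterwards, so
            # the pair stays as close in time as the screenshot tool allows.
            subprocess.call(['import', '-window', 'root', 'screen_%04d.png' % i])
            pairs.append((i, webcam_ts, time.time()))

        cap.release()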

  • I'm trying to build a query to search against a fulltext index in mysql

    - by Rockinelle
    The table's schema is pretty simple. I have a child table that stores a customer's information like address and phone number. The columns are user_id, fieldname, fieldvalue and fieldname. So each row will hold one item like phone number, address or email. This is to allow an unlimited number of each type of information for each customer. The people on the phones need to look up these customers quickly as they call into our call center. I have experimented with using LIKE% and I'm working with a FULLTEXT index now. My queries work, but I want to make them more useful because if someone searches for a telephone area code like 805 that will bring up many people, and then they add the name Bill to narrow it down, '805 Bill'. It will show EVERY customer that has 805 OR Bill. I want it to do AND searches across multiple rows within each customer. Currently I'm using the query below to grab the user_ids and later I do another query to fetch all the details for each user to build their complete record. SELECT DISTINCT `user_id` FROM `user_details` WHERE MATCH (`fieldvalue`) AGAINST ('805 Bill') Again, I want to do the above query against groups of rows that belong to a single user, but those users have to match the search keywords. What should I do?

  • SQL to CodeIgniter Array Missing Data Issue

    - by SamD
    $query = $this->db->query("SELECT t1.numberofbets, t1.profit, t2.seven_profit, t3.28profit, user.user_id, username, password, email, balance, user.date_added, activation_code, activated FROM user LEFT JOIN (SELECT user_id, SUM(amount_won) AS profit, count(tip_id) AS numberofbets FROM tip GROUP BY user_id) as t1 ON user.user_id = t1.user_id LEFT JOIN (SELECT user_id, SUM(amount_won) AS seven_profit FROM tip WHERE date_settled > '$seven_daystime' GROUP BY user_id) as t2 ON user.user_id = t2.user_id LEFT JOIN (SELECT user_id, SUM(amount_won) AS 28profit FROM tip WHERE date_settled > '$twoeight_daystime' GROUP BY user_id) as t3 ON user.user_id = t3.user_id where activated = 1 GROUP BY user.user_id ORDER BY user.date_added DESC"); return $query->result_array(); The query works fine running it in phpMyAdmin and returns complete results (in image attached). However, printing the array in CodeIgniter, it has no value for one field ,seven_profit, where it is there in the SQL query ran in phpMyAdmin, just the discrepancy in this one field, from sql to php array... I just can’t see why, when printing the array, that one field, which should have value of 26, contains nothing? Any ideas? I changed the field name from starting with a number in attempt to fix it, but no difference. I know this is complex and looks horrible, any help or just people coming across something similar would be great to know about, thanks. Sam

  • Python: How can I use Twisted as the transport for SUDS?

    - by jathanism
    I have a project that is based on Twisted used to communicate with network devices and I am adding support for a new vendor (Citrix NetScaler) whose API is SOAP. Unfortunately the support for SOAP in Twisted still relies on SOAPpy, which is badly out of date. In fact as of this question (I just checked), twisted.web.soap itself hasn't even been updated in 21 months! I would like to ask if anyone has any experience they would be willing to share with utilizing Twisted's superb asynchronous transport functionality with SUDS. It seems like plugging in a custom Twisted transport would be a natural fit in SUDS' Client.options.transport, I'm just having a hard time wrapping my head around it. I did come up with a way to call the SOAP method with SUDS asynchronously by utilizing twisted.internet.threads.deferToThread(), but this feels like a hack to me. Here is an example of what I've done, to give you an idea: # netscaler is a module I wrote using suds to interface with NetScaler SOAP # Source: http://bitbucket.org/jathanism/netscaler-api/src import netscaler import os import sys from twisted.internet import reactor, defer, threads # netscaler.API is the class that sets up the suds.client.Client object host = 'netscaler.local' username = password = 'nsroot' wsdl_url = 'file://' + os.path.join(os.getcwd(), 'NSUserAdmin.wsdl') api = netscaler.API(host, username=username, password=password, wsdl_url=wsdl_url) results = [] errors = [] def handleResult(result): print '\tgot result: %s' % (result,) results.append(result) def handleError(err): sys.stderr.write('\tgot failure: %s' % (err,)) errors.append(err) # this converts the api.login() call to a Twisted thread. # api.login() should return True and is is equivalent to: # api.service.login(username=self.username, password=self.password) deferred = threads.deferToThread(api.login) deferred.addCallbacks(handleResult, handleError) reactor.run() This works as expected and defers return of the api.login() call until it is complete, instead of blocking. But as I said, it doesn't feel right. Thanks in advance for any help, guidance, feedback, criticism, insults, or total solutions.
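
    If deferToThread remains the pragmatic route, the "hack" can at least be centralized so calling code never sees the threading detail. A small sketch of a wrapper that exposes every suds service method as a Deferred-returning call, offered only as an illustration of the pattern and not as a substitute for a real Twisted transport plugged into Client.options.transport:

        from twisted.internet import threads

        class AsyncService(object):
            """Wrap a suds service proxy so each method returns a Deferred."""

            def __init__(self, service):
                self._service = service

            def __getattr__(self, name):
                method = getattr(self._service, name)

                def call(*args, **kwargs):
                    # Run the blocking SOAP call in Twisted's thread pool and
                    # hand back a Deferred that fires with the SOAP result.
                    return threads.deferToThread(method, *args, **kwargs)

                return call

        # usage with the netscaler wrapper above (names as in the question):
        # async_api = AsyncService(api.service)
        # d = async_api.login(username='nsroot', password='nsroot')
        # d.addCallbacks(handleResult, handleError)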

  • Delphi Performance: Case Versus If

    - by Andreas Rejbrand
    I guess there might be some overlap with previous SO questions, but I could not find a Delphi-specific question on this topic. Suppose that you want to check if an unsigned 32-bit integer variable "MyAction" is equal to any of the constants ACTION1, ACTION2, ... ACTIONn, where n is, say, 1000. I guess that, besides being more elegant, case MyAction of ACTION1: {code}; ACTION2: {code}; ... ACTIONn: {code}; end; is much faster than if MyAction = ACTION1 then // code else if MyAction = ACTION2 then // code ... else if MyAction = ACTIONn then // code; I guess that the if variant takes time O(n) to complete (i.e. to find the right action) if the right action ACTIONi has a high value of i, whereas the case variant takes a lot less time (O(1)?). Am I correct that the case statement is much faster? Am I correct that the time required to find the right action in the case statement is actually independent of n? I.e. is it true that it does not really take any longer to check a million cases than to check 10 cases? How, exactly, does this work?

  • Slow Python HTTP server on localhost

    - by Abiel
    I am experiencing some performance problems when creating a very simple Python HTTP server. The key issue is that performance is varying depending on which client I use to access it, where the server and all clients are being run on the local machine. For instance, a GET request issued from a Python script (urllib2.urlopen('http://localhost/').read()) takes just over a second to complete, which seems slow considering that the server is under no load. Running the GET request from Excel using MSXML2.ServerXMLHTTP also feels slow. However, requesting the data Google Chrome or from RCurl, the curl add-in for R, yields an essentially instantaneous response, which is what I would expect. Adding further to my confusion is that I do not experience any performance problems for any client when I am on my computer at work (the performance problems are on my home computer). Both systems run Python 2.6, although the work computer runs Windows XP instead of 7. Below is my very simple server example, which simply returns 'Hello world' for any get request. from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer class MyHandler(BaseHTTPRequestHandler): def do_GET(self): print("Just received a GET request") self.send_response(200) self.send_header("Content-type", "text/html") self.end_headers() self.wfile.write('Hello world') return def log_request(self, code=None, size=None): print('Request') def log_message(self, format, *args): print('Message') if __name__ == "__main__": try: server = HTTPServer(('localhost', 80), MyHandler) print('Started http server') server.serve_forever() except KeyboardInterrupt: print('^C received, shutting down server') server.socket.close() Note that in MyHandler I override the log_request() and log_message() functions. The reason is that I read that a fully-qualified domain name lookup performed by one of these functions might be a reason for a slow server. Unfortunately setting them to just print a static message did not solve my problem. Also, notice that I have put in a print() statement as the first line of the do_GET() routine in MyHandler. The slowness occurs prior to this message being printed, meaning that none of the stuff that comes after it is causing a delay.
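
    Two things are worth ruling out before blaming BaseHTTPServer itself: some clients resolve 'localhost' to the IPv6 address first and only fall back to 127.0.0.1 after a delay (which would explain why one client on the same machine is instant and another takes about a second), and the stock HTTPServer handles requests strictly one at a time. A hedged sketch of both tweaks, binding to the IPv4 loopback explicitly and mixing in threading, is below; clients would then request http://127.0.0.1/ instead of http://localhost/.

        from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
        from SocketServer import ThreadingMixIn

        class MyHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                self.send_response(200)
                self.send_header("Content-type", "text/html")
                self.end_headers()
                self.wfile.write('Hello world')

        class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
            """Serve each request on its own thread instead of serially."""

        if __name__ == "__main__":
            # Bind to the IPv4 loopback address explicitly to sidestep any slow
            # 'localhost' name resolution on the client side.
            server = ThreadedHTTPServer(('127.0.0.1', 80), MyHandler)
            server.serve_forever()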

  • Combining cache methods - memcache/disk based

    - by Industrial
    Hi! Here's the deal. We would have taken the complete static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most used data. Here are the two approaches we have thought of right now: Using memcache on all major queries and leaving it alone to do what it does best. Using memcache for the most commonly retrieved data, and combining it with a standard hard-drive-stored cache for further usage. The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even with the theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to add nodes. What approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcache only, and upgrade the memory as the load increases with the number of users? Thanks a lot!
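
    For the second option, the two layers compose naturally as a read-through chain: check memcached, fall back to the disk cache, fall back to the database, and repopulate the faster layers on the way out. A minimal sketch of that combined lookup in Python (python-memcached client; the directory, TTL and loader are placeholders), regardless of which language the site itself runs:

        import os
        import pickle
        import memcache

        mc = memcache.Client(['127.0.0.1:11211'])
        DISK_DIR = '/var/cache/app'       # placeholder path

        def cache_get(key, loader, ttl=300):
            # 1) hot layer: memcached
            value = mc.get(key)
            if value is not None:
                return value

            # 2) warm layer: hard-drive cache
            path = os.path.join(DISK_DIR, key)
            if os.path.exists(path):
                with open(path, 'rb') as f:
                    value = pickle.load(f)
                mc.set(key, value, ttl)   # promote back into memory
                return value

            # 3) cold path: loader() runs the real database query
            value = loader()
            with open(path, 'wb') as f:
                pickle.dump(value, f)
            mc.set(key, value, ttl)
            return value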

  • How can I work out what events are being waited for with WinDBG in a kernel debug session

    - by Benj
    I'm a complete WinDbg newbie and I've been trying to debug a WindowsXP problem that a customer has sent me where our software and some third party software prevent windows from logging off. I've reproduced the problem and have verified that only when our software and the customers software are both installed (although not necessarily running at logoff) does the log off problem occur. I've observed that WM_ENDSESSION messages are not reaching the running windows when the user tries to log off and I know that the third party software uses a kernel driver. I've been looking at the processes in WinDbg and I know that csrss.exe would normally send all the windows a WM_ENDSESSION message. When I ran: !process 82356020 6 To look at csrss.exe's stack I can see: WARNING: Frame IP not in any known module. Following frames may be wrong. 00000000 00000000 00000000 00000000 00000000 0x7c90e514 THREAD 8246d998 Cid 0248.02a0 Teb: 7ffd7000 Win32Thread: e1627008 WAIT: (WrUserRequest) UserMode Non-Alertable 8243d9f0 SynchronizationEvent 81fe0390 SynchronizationEvent Not impersonating DeviceMap e1004450 Owning Process 82356020 Image: csrss.exe Attached Process N/A Image: N/A Wait Start TickCount 1813 Ticks: 20748 (0:00:05:24.187) Context Switch Count 3 LargeStack UserTime 00:00:00.000 KernelTime 00:00:00.000 Start Address 0x75b67cdf Stack Init f80bd000 Current f80bc9c8 Base f80bd000 Limit f80ba000 Call 0 Priority 14 BasePriority 13 PriorityDecrement 0 DecrementCount 0 Kernel stack not resident. ChildEBP RetAddr Args to Child f80bc9e0 80500ce6 00000000 8246d998 804f9af2 nt!KiSwapContext+0x2e (FPO: [Uses EBP] [0,0,4]) f80bc9ec 804f9af2 804f986e e1627008 00000000 nt!KiSwapThread+0x46 (FPO: [0,0,0]) f80bca24 bf80a4a3 00000002 82475218 00000001 nt!KeWaitForMultipleObjects+0x284 (FPO: [Non-Fpo]) f80bca5c bf88c0a6 00000001 82475218 00000000 win32k!xxxMsgWaitForMultipleObjects+0xb0 (FPO: [Non-Fpo]) f80bcd30 bf87507d bf9ac0a0 00000001 f80bcd54 win32k!xxxDesktopThread+0x339 (FPO: [Non-Fpo]) f80bcd40 bf8010fd bf9ac0a0 f80bcd64 00bcfff4 win32k!xxxCreateSystemThreads+0x6a (FPO: [Non-Fpo]) f80bcd54 8053d648 00000000 00000022 00000000 win32k!NtUserCallOneParam+0x23 (FPO: [Non-Fpo]) f80bcd54 7c90e514 00000000 00000022 00000000 nt!KiFastCallEntry+0xf8 (FPO: [0,0] TrapFrame @ f80bcd64) This waitForMultipleObjects looks interesting because I'm wondering if csrss.exe is waiting on some event which isn't arriving to allow the logoff. Can anyone tell me how I might find out what event it's waiting for anything else I might do to further investigate the problem?

  • complex web forms and javascript

    - by Casey
    I need to create a few data heavy complicated forms. Currently, the information is being entered into a spread sheet, but the users will need to enter the information into the online form where it will be saved to a database. The problem is that the business users currently using the spread sheet aren't going to want to use the online application if it isn't as easy as entering the information into the spread sheet. This is further complicated in that the information they are entering into the spread sheet is represented by three different DB tables where one "object" is composed of two of the others. I would prefer to not have them have to go through multiple forms. Some of what I have been thinking is: Use of auto complete where possible Hiding/removing form fields dynamically possible wizard style page flow?? I've been googling for other data heavy web forms but can't seem to really find any good examples. I am familiar with jQuery and prototypejs and have also tried googling for frameworks designed for data heavy applications but didn't come up with anything. Any thoughts? Thanks.
