Search Results

Search found 19690 results on 788 pages for 'result partitioning'.


  • LINQ aggregate left join on SQL CE

    - by P Daddy
    What I need is such a simple, easy query, it blows me away how much work I've done just trying to do it in LINQ. In T-SQL, it would be: SELECT I.InvoiceID, I.CustomerID, I.Amount AS AmountInvoiced, I.Date AS InvoiceDate, ISNULL(SUM(P.Amount), 0) AS AmountPaid, I.Amount - ISNULL(SUM(P.Amount), 0) AS AmountDue FROM Invoices I LEFT JOIN Payments P ON I.InvoiceID = P.InvoiceID WHERE I.Date between @start and @end GROUP BY I.InvoiceID, I.CustomerID, I.Amount, I.Date ORDER BY AmountDue DESC The best equivalent LINQ expression I've come up with, took me much longer to do: var invoices = ( from I in Invoices where I.Date >= start && I.Date <= end join P in Payments on I.InvoiceID equals P.InvoiceID into payments select new{ I.InvoiceID, I.CustomerID, AmountInvoiced = I.Amount, InvoiceDate = I.Date, AmountPaid = ((decimal?)payments.Select(P=>P.Amount).Sum()).GetValueOrDefault(), AmountDue = I.Amount - ((decimal?)payments.Select(P=>P.Amount).Sum()).GetValueOrDefault() } ).OrderByDescending(row=>row.AmountDue); This gets an equivalent result set when run against SQL Server. Using a SQL CE database, however, changes things. The T-SQL stays almost the same. I only have to change ISNULL to COALESCE. Using the same LINQ expression, however, results in an error: There was an error parsing the query. [ Token line number = 4, Token line offset = 9,Token in error = SELECT ] So we look at the generated SQL code: SELECT [t3].[InvoiceID], [t3].[CustomerID], [t3].[Amount] AS [AmountInvoiced], [t3].[Date] AS [InvoiceDate], [t3].[value] AS [AmountPaid], [t3].[value2] AS [AmountDue] FROM ( SELECT [t0].[InvoiceID], [t0].[CustomerID], [t0].[Amount], [t0].[Date], COALESCE(( SELECT SUM([t1].[Amount]) FROM [Payments] AS [t1] WHERE [t0].[InvoiceID] = [t1].[InvoiceID] ),0) AS [value], [t0].[Amount] - (COALESCE(( SELECT SUM([t2].[Amount]) FROM [Payments] AS [t2] WHERE [t0].[InvoiceID] = [t2].[InvoiceID] ),0)) AS [value2] FROM [Invoices] AS [t0] ) AS [t3] WHERE ([t3].[Date] >= @p0) AND ([t3].[Date] <= @p1) ORDER BY [t3].[value2] DESC Ugh! Okay, so it's ugly and inefficient when run against SQL Server, but we're not supposed to care, since it's supposed to be quicker to write, and the performance difference shouldn't be that large. But it just doesn't work against SQL CE, which apparently doesn't support subqueries within the SELECT list. In fact, I've tried several different left join queries in LINQ, and they all seem to have the same problem. Even: from I in Invoices join P in Payments on I.InvoiceID equals P.InvoiceID into payments select new{I, payments} generates: SELECT [t0].[InvoiceID], [t0].[CustomerID], [t0].[Amount], [t0].[Date], [t1].[InvoiceID] AS [InvoiceID2], [t1].[Amount] AS [Amount2], [t1].[Date] AS [Date2], ( SELECT COUNT(*) FROM [Payments] AS [t2] WHERE [t0].[InvoiceID] = [t2].[InvoiceID] ) AS [value] FROM [Invoices] AS [t0] LEFT OUTER JOIN [Payments] AS [t1] ON [t0].[InvoiceID] = [t1].[InvoiceID] ORDER BY [t0].[InvoiceID] which also results in the error: There was an error parsing the query. [ Token line number = 2, Token line offset = 5,Token in error = SELECT ] So how can I do a simple left join on a SQL CE database using LINQ? Am I wasting my time?
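
    One possible workaround, sketched below with assumed names (db is the LINQ to SQL DataContext, Amount is assumed to be decimal, start/end as in the question): run two flat queries that SQL CE can parse, then finish the left join and the sums in memory with LINQ to Objects, so the provider never emits a subquery inside a SELECT list. Whether the grouped payments query translates cleanly on SQL CE still needs verifying, but it avoids the construct the parser rejects.

        // Sketch only: two simple queries, then the join/aggregation in memory.
        var invoiceRows = (from i in db.Invoices
                           where i.Date >= start && i.Date <= end
                           select i).ToList();

        var paidByInvoice = (from p in db.Payments
                             group p by p.InvoiceID into g
                             select new { InvoiceID = g.Key, Paid = g.Sum(p => p.Amount) })
                            .ToDictionary(x => x.InvoiceID, x => x.Paid);

        var report = invoiceRows
            .Select(i => new
            {
                i.InvoiceID,
                i.CustomerID,
                AmountInvoiced = i.Amount,
                InvoiceDate = i.Date,
                AmountPaid = paidByInvoice.ContainsKey(i.InvoiceID) ? paidByInvoice[i.InvoiceID] : 0m,
                AmountDue = i.Amount - (paidByInvoice.ContainsKey(i.InvoiceID) ? paidByInvoice[i.InvoiceID] : 0m)
            })
            .OrderByDescending(r => r.AmountDue)
            .ToList();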

    Read the article

  • jquery anchor to html extract

    - by Benjamin Ortuzar
    I would like to implement something similar to the Google quick scroll extension with jquery for the extracts of a search result, so when the full document is opened (within the same website) it gives the user the opportunity to go straight to the extract location. Here is a sample of what I get returned from the search engine when I search for 'food'. <doc> <docid>129305</docid> <title><span class='highlighted'>Food</span></title> <summary> <summarytext>Papers subject to Negative Resolution: 4 <span class='highlighted'>Food</span> <span class='highlighted'>Food</span> Irradiation (England) Regulations 2009 (S.I., 2009, No. 1584), dated 24 June 2009 (by Act), </summarytext> </summary> <paras> <paraitemcount>2</paraitemcount> <para> <paraitem>1</paraitem> <paraid>42</paraid> <pararelevance>100</pararelevance> <paraweights>50</paraweights> <paratext>4 <span class='highlighted'>Food</span></paratext> </para> <para> <paraitem>2</paraitem> <paraid>54</paraid> <pararelevance>100</pararelevance> <paraweights>50</paraweights> <paratext><span class='highlighted'>Food</span> Irradiation (England) Regulations 2009 (S.I., 2009, No. 1584), dated 24 June 2009 (by Act), with an Explanatory Memorandum and an Impact Assessment (</paratext> </para> </paras> </doc> As you see the search engine has returned a document that contains one summary and two extracts. So let's say the user clicks on the second extract in the search resutls page, the browser would open the detailed document in the same website, and would offer the user the possibility to go to the extract as the Google quick scroll extension does. Is there an existing jquery script for this? If not, can you suggest any jquery/javascript code that would simplify my task to implement this. Notes: I can access the extracts from the document details page. I'm aware that the HTML in some cases could be slightly different in the extract than in the details page, finding no match. The search engine does not return where the extract was located. At the moment I'm trying to understand the JS code that the extension uses.
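
    No ready-made jQuery plugin for this comes to mind, but a minimal hand-rolled sketch is below. The selector and the way the extract text reaches the details page (for example via the query string or the URL hash) are assumptions; the idea is to strip the highlighting markup from the chosen paratext, find the first leaf element on the details page whose text contains it, and animate the scroll to it. Where the detail HTML differs slightly from the extract, a shorter substring or a fuzzier match would be needed.

        // Sketch: scroll the details page to the first element containing the extract text.
        function scrollToExtract(extractText) {
            var needle = $.trim(extractText);          // paratext with the <span> markup removed
            var $hit = $('#document-body *').filter(function () {
                return $(this).children().length === 0 &&
                       $(this).text().indexOf(needle) !== -1;
            }).first();
            if ($hit.length) {
                $('html, body').animate({ scrollTop: $hit.offset().top - 20 }, 400);
                $hit.addClass('highlighted');
            }
        }

        // e.g. if the results page passes the extract in the hash:
        // scrollToExtract(decodeURIComponent(location.hash.slice(1)));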

    Read the article

  • List of Django model instance foreign keys losing consistency during state changes.

    - by Joshua
    I have model, Match, with two foreign keys: class Match(model.Model): winner = models.ForeignKey(Player) loser = models.ForeignKey(Player) When I loop over Match I find that each model instance uses a unique object for the foreign key. This ends up biting me because it introduces inconsistency, here is an example: >>> def print_elo(match_list): ... for match in match_list: ... print match.winner.id, match.winner.elo ... print match.loser.id, match.loser.elo ... >>> print_elo(teacher_match_list) 4 1192.0000000000 2 1192.0000000000 5 1208.0000000000 2 1192.0000000000 5 1208.0000000000 4 1192.0000000000 >>> teacher_match_list[0].winner.elo = 3000 >>> print_elo(teacher_match_list) 4 3000 # Object 4 2 1192.0000000000 5 1208.0000000000 2 1192.0000000000 5 1208.0000000000 4 1192.0000000000 # Object 4 >>> I solved this problem like so: def unify_refrences(match_list): """Makes each unique refrence to a model instance non-unique. In cases where multiple model instances are being used django creates a new object for each model instance, even if it that means creating the same instance twice. If one of these objects has its state changed any other object refrencing the same model instance will not be updated. This method ensure that state changes are seen. It makes sure that variables which hold objects pointing to the same model all hold the same object. Visually this means that a list of [var1, var2] whose internals look like so: var1 --> object1 --> model1 var2 --> object2 --> model1 Will result in the internals being changed so that: var1 --> object1 --> model1 var2 ------^ """ match_dict = {} for match in match_list: try: match.winner = match_dict[match.winner.id] except KeyError: match_dict[match.winner.id] = match.winner try: match.loser = match_dict[match.loser.id] except KeyError: match_dict[match.loser.id] = match.loser My question: Is there a way to solve the problem more elegantly through the use of QuerySets without needing to call save at any point? If not, I'd like to make the solution more generic: how can you get a list of the foreign keys on a model instance or do you have a better generic solution to my problem? Please correct me if you think I don't understand why this is happening.
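
    For the generic part of the question, a sketch that walks an instance's foreign-key fields through _meta is below (attribute usage is from the Django 1.x era and may differ in later versions; reading the attribute can hit the database if the related object is not already cached). It shares one object per (model class, primary key) across the whole list without saving anything.

        # Sketch: share one object per (model, pk) across a list of instances.
        from django.db import models

        def unify_references(instances):
            cache = {}
            for obj in instances:
                for field in obj._meta.fields:
                    if not isinstance(field, models.ForeignKey):
                        continue
                    related = getattr(obj, field.name)   # may query the DB if not cached
                    if related is None:
                        continue
                    key = (related.__class__, related.pk)
                    setattr(obj, field.name, cache.setdefault(key, related))
            return instances

    Calling unify_references(teacher_match_list) would then de-duplicate the winner/loser objects in place, for any model.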

    Read the article

  • How to produce precisely-timed tone and silence?

    - by Bob Denny
    I have a C# project that plays Morse code for RSS feeds. I write it using Managed DirectX, only to discover that Managed DirectX is old and deprecated. The task I have is to play pure sine wave bursts interspersed with silence periods (the code) which are precisely timed as to their duration. I need to be able to call a function which plays a pure tone for so many milliseconds, then Thread.Sleep() then play another, etc. At its fastest, the tones and spaces can be as short as 40ms. It's working quite well in Managed DirectX. To get the precisely timed tone I create 1 sec. of sine wave into a secondary buffer, then to play a tone of a certain duration I seek forward to within x milliseconds of the end of the buffer then play. I've tried System.Media.SoundPlayer. It's a loser because you have to Play(), Sleep(), then Stop() for arbitrary tone lengths. The result is a tone that is too long, variable by CPU load. It takes an indeterminate amount of time to actually stop the tone. I then embarked on a lengthy attempt to use NAudio 1.3. I ended up with a memory resident stream providing the tone data, and again seeking forward leaving the desired length of tone remaining in the stream, then playing. This worked OK on the DirectSoundOut class for a while (see below) but the WaveOut class quickly dies with an internal assert saying that buffers are still on the queue despite PlayerStopped = true. This is odd since I play to the end then put a wait of the same duration between the end of the tone and the start of the next. You'd think that 80ms after starting Play of a 40 ms tone that it wouldn't have buffers on the queue. DirectSoundOut works well for a while, but its problem is that for every tone burst Play() it spins off a separate thread. Eventually (5 min or so) it just stops working. You can see thread after thread after thread exiting in the Output window while running the project in VS2008 IDE. I don't create new objects during playing, I just Seek() the tone stream then call Play() over and over, so I don't think it's a problem with orphaned buffers/whatever piling up till it's choked. I'm out of patience on this one, so I'm asking in the hopes that someone here has faced a similar requirement and can steer me in a direction with a likely solution.
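
    One idea worth noting, sketched in plain C# and not tied to any particular audio library: render the silence as samples too, so each element's length is fixed by its sample count rather than by Sleep()/Stop() timing, and hand the output device one continuous buffer. The helper below builds a single tone-plus-gap element as 16-bit mono PCM; concatenating these per Morse symbol and streaming the result is left to whichever API ends up being used.

        // Sketch: one tone-plus-gap element as 16-bit mono PCM; durations are sample-exact.
        // (Assumes "using System;" for Math.)
        static short[] RenderElement(double freqHz, int toneMs, int gapMs, int sampleRate)
        {
            int toneSamples = sampleRate * toneMs / 1000;
            int gapSamples  = sampleRate * gapMs  / 1000;
            var buffer = new short[toneSamples + gapSamples];
            for (int n = 0; n < toneSamples; n++)
                buffer[n] = (short)(0.8 * short.MaxValue *
                                    Math.Sin(2 * Math.PI * freqHz * n / sampleRate));
            // the remaining gapSamples entries stay zero, i.e. exactly gapMs of silence
            return buffer;
        }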

    Read the article

  • Debugging site written mainly in JScript with AJAX code injection

    - by blumidoo
    Hello, I have a legacy code to maintain and while trying to understand the logic behind the code, I have run into lots of annoying issues. The application is written mainly in Java Script, with extensive usage of jQuery + different plugins, especially Accordion. It creates a wizard-like flow, where client code for the next step is downloaded in the background by injecting a result of a remote AJAX request. It also uses callbacks a lot and pretty complicated "by convention" programming style (lots of events handlers are created on the fly based on certain object names - e.g. current page name, current step name). Adding to that, the code is very messy and there is no obvious inner structure - the functions are scattered in the code, file names do not reflect the business role of the code, lots of functions and code snippets are most likely not used at all etc. PROBLEM: How to approach this code base, so that the inner flow of the code can be sort-of "reverse engineered" using a suite of smart debugging tools. Ideally, I would like to be able to attach to the running application and step through the code, breaking on each new function call. Also, it would be nice to be able to create a "diagram of calls" in the application (i.e. in order to run a particular page logic, this particular flow of function calls was executed in a particular order). Not to mention to be able to run a coverage analysis, identifying potentially orphaned code fragments. I would like to stress out once more, that it is impossible to understand the inner logic of the application just by looking at the code itself, unless you have LOTS of spare time and beer crates, which I unfortunately do not have :/ (shame...) An IDE of some sort that would aid in extending that code would be also great, but I am currently looking into possibility to use Visual Studio 2010 to do the job, as the site itself is a mix of Classic ASP and ASP.NET (I'd say - 70% Java Script with jQuery, 30% ASP). I have obviously tried FireBug, but I was unable to find a way to define a breakpoint or step into the code, which is "injected" into the client JS using AJAX calls (i.e. the application retrieves the code by invoking an URL and injects it to the client local code). Venkman debugger had similar issues. Any hints would be welcome. Feel free to ask additional questions.
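
    One small trick that may help with the injected code specifically (jQuery's $.get and the file names below are placeholders): append a sourceURL directive to each AJAX-delivered script before it is eval()'d, so Firebug lists it as a named script and breakpoints inside it stick. A bare debugger; statement dropped into the downloaded code has a similar effect of forcing a break.

        // Sketch: name dynamically eval()'d code so the debugger can show it.
        $.get('wizardStep2.asp', function (code) {
            eval(code + '\n//@ sourceURL=wizardStep2-injected.js');  // older engines; newer ones use //# sourceURL
        });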

    Read the article

  • Problem with paging ObjectDataSource

    - by funky
    ASPX page code: <asp:ObjectDataSource runat="server" ID="odsResults" OnSelecting="odsResults_Selecting" /> <tr><td> <wssawc:SPGridViewPager ID="sgvpPagerTop" runat="server" GridViewId="sgvConversionResults" /> </td></tr> <tr> <td colspan="2" class="ms-vb"> <wssawc:SPGridView runat="server" ID="sgvConversionResults" AutoGenerateColumns="false" RowStyle-CssClass="" AlternatingRowStyle-CssClass="ms-alternating" /> </td> </tr> Class code: public partial class Convert : System.Web.UI.Page { ... private DataTable resultDataSource = new DataTable(); ... protected void Page_Init(object sender, EventArgs e) { ... resultDataSource.Columns.Add("Column1"); resultDataSource.Columns.Add("Column2"); resultDataSource.Columns.Add("Column3"); resultDataSource.Columns.Add("Column4"); ... odsResults.TypeName = GetType().AssemblyQualifiedName; odsResults.SelectMethod = "SelectData"; odsResults.SelectCountMethod = "GetRecordCount"; odsResults.EnablePaging = true; sgvConversionResults.DataSourceID = odsResults.ID; ConversionResultsCreateColumns(); sgvConversionResults.AllowPaging = true; ... } protected void btnBTN_Click(object sender, EventArgs e) { // add rows into resultDataSource } public DataTable SelectData(DataTable ds,int startRowIndex,int maximumRows) { DataTable dt = new DataTable(); dt.Columns.Add("Column1"); dt.Columns.Add("Column2"); dt.Columns.Add("Column3"); dt.Columns.Add("Column4"); for (int i =startRowIndex; i<startRowIndex+10 ;i++) { if (i<ds.Rows.Count) { dt.Rows.Add(ds.Rows[i][0].ToString(), ds.Rows[i][1].ToString(), ds.Rows[i][2].ToString(), ds.Rows[i][3].ToString()); } } return dt; } public int GetRecordCount(DataTable ds) { return ds.Rows.Count; } protected void odsResults_Selecting(object sender, ObjectDataSourceSelectingEventArgs e) { e.InputParameters["ds"] = resultDataSource; } } When the BTN button is clicked, resultDataSource receives some rows. The page reloads and we can see the result in sgvConversionResults: the first 10 rows. But after clicking the next page in the pager we get the message "There are no items to show in this view". When I debug, I find that after the postback (on clicking next page) the input parameter "ds" is blank, ds.Rows.Count = 0, etc., as though resultDataSource had become empty. What am I doing wrong? Sorry for my English.
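
    The symptom (data right after the button click, nothing on the paging postback) is consistent with resultDataSource simply not surviving the postback, since a page field is rebuilt from scratch on every request. A sketch of one way around that, assuming the result set is small enough to park in Session (ViewState would work too):

        // Sketch: persist the DataTable across postbacks so paging can see it.
        protected void btnBTN_Click(object sender, EventArgs e)
        {
            // ... fill resultDataSource as before ...
            Session["ConversionResults"] = resultDataSource;
        }

        protected void odsResults_Selecting(object sender, ObjectDataSourceSelectingEventArgs e)
        {
            DataTable cached = Session["ConversionResults"] as DataTable;
            e.InputParameters["ds"] = cached ?? resultDataSource;
        }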

    Read the article

  • I am wondering how the Plural-Field generic is to be rendered in the REST OpenSocial 1.0 API specification

    - by DaveGrahamOrg
    In the OpenSocial Data specificaiton 1.0 for a Person object (social profile data) it includes the use of a generic called Plural-Field. The spec can be found at: http://opensocial-resources.googlecode.com/svn/spec/1.0/Social-Data.xml#Person In the 1.0 data specification there is no XSD and no examples showing the use of this generic Plural-Field. After puzzeling the spec for some time I think I might understand the user of this generic. I was hoping that someone could confirm or correct my understanding. For example the accounts field is a generic Plural-Field<Account> while the activities field is Plural-Field <string>. Am I right in assuming that the result XML would look like: <accounts> <Plural-Field> <primary>true</primary> <type>ntlm</type> <value> <Account> <domain>MYDOMAIN</domain> <userid>MYDOMAIN\davegraham</userid> <username>davegraham</username> </Account> </value> </Plural-Field> <Plural-Field> <primary>false</primary> <type>claims</type> <value> <Account> <domain>i:0#.f|claimsDomain</domain> <userid>i:0#.f|claimsDomain|davegraham</userid> <username>davegraham</username> </Account> </value> </Plural-Field> </accounts> <activities> <Plural-Field> <primary>true</primary> <type>ntlm</type> <value>cycling</value> </Plural-Field> <Plural-Field> <primary>false</primary> <type>claims</type> <value>swiming</value> </Plural-Field> </activities> Am I right in my interpretation of the spec?

    Read the article

  • jQuery autocomplete is not working in ASP.NET

    - by Abu Hamzah
    Is there something wrong in the code below? It's not firing at all. <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="HostPage.aspx.cs" Inherits="HostPage" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Untitled Page</title> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js"></script> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/jquery-ui.min.js"></script> <link rel="stylesheet" href="http://dev.jquery.com/view/trunk/plugins/autocomplete/jquery.autocomplete.css" type="text/css" /> <script type="text/javascript" src="http://dev.jquery.com/view/trunk/plugins/autocomplete/jquery.autocomplete.js"></script> </head> <body> <form id="form1" runat="server"> <script type="text/javascript"> $(document).ready(function() { $("#<%=txtHost.UniqueID %>").autocomplete("HostService.asmx/GetHosts", { dataType: 'json' , contentType: "application/json; charset=utf-8" , parse: function(data) { var rows = Array(); debugger for (var i = 0; i < data.length; i++) { rows[i] = { data: data[i], value: data[i].LName, result: data[i].LName }; } return rows; } , formatItem: function(row, i, max) { return data.LName + ", " + data.FName; } }); }); </script> <div> <asp:Label runat="server" ID='Label4' >Host Name:</asp:Label> <asp:TextBox ID="txtHost" runat='server'></asp:TextBox> <p> </div> </form> </body> </html>

    Read the article

  • Structs, strtok, segmentation fault

    - by FILIaS
    I'm trying to make a program with structs and files. The following is just a part of my code (it's not all of it). What I'm trying to do is ask the user to write his command, e.g. "delete John" or "enter John James 5000 ipad purchase". The problem is that I want to split the command in order to save its 'args' for a struct element. That's why I used strtok. BUT I'm facing another problem: how to 'put' these into the struct. #include <stdio.h> #include <stdlib.h> #include <string.h> #define MAX 100 char command[1500]; struct catalogue { char short_name[50]; char surname[50]; signed int amount; char description[1000]; }*catalog[MAX]; int main ( int argc, char *argv[] ) { int i,n; char choice[3]; printf(">sort1: Print savings sorted by surname\n"); printf(">sort2: Print savings sorted by amount\n"); printf(">search+name:Print savings of each name searched\n"); printf(">delete+full_name+amount: Erase saving\n"); printf(">enter+full_name+amount+description: Enter saving \n"); printf(">quit: Update + EXIT program.\n"); printf("Choose your selection:\n>"); gets(command); //it save the whole command /*in choice it;s saved only the first 2 letters(needed for menu choice again)*/ strncpy(choice,command,2); choice[2]='\0'; char** args = (char**)malloc(strlen(command)*sizeof(char*)); memset(args, 0, sizeof(char*)*strlen(command)); char* curToken = strtok(command, " \t"); for (n = 0; curToken != NULL; ++n) { args[n] = strdup(curToken); curToken = strtok(NULL, " \t"); *catalog[n]->short_name=*args[1]; *catalog[n]->surname=args[2]; catalog[n]->amount=atoi(args[3]); *catalog[n]->description=args[4]; } return 0; } I get a warning (warning: assignment makes integer from pointer without a cast) for the lines: *catalog[n]->short_name=*args[1]; *catalog[n]->surname=args[2]; *catalog[n]->description=args[4]; As a result, after running the program I get a Segmentation Fault... Any help? Any ideas?
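
    As a sketch of the two fixes the warnings point at (to be done after tokenizing has finished, so that args[1]..args[4] actually exist): allocate each catalogue entry before using it, and copy the strings with strncpy instead of assigning pointers into char arrays.

        /* Sketch: allocate the entry, then copy the strings instead of assigning. */
        catalog[n] = malloc(sizeof(struct catalogue));
        if (catalog[n] == NULL) { perror("malloc"); exit(EXIT_FAILURE); }

        strncpy(catalog[n]->short_name, args[1], sizeof(catalog[n]->short_name) - 1);
        catalog[n]->short_name[sizeof(catalog[n]->short_name) - 1] = '\0';
        strncpy(catalog[n]->surname, args[2], sizeof(catalog[n]->surname) - 1);
        catalog[n]->surname[sizeof(catalog[n]->surname) - 1] = '\0';
        catalog[n]->amount = atoi(args[3]);
        strncpy(catalog[n]->description, args[4], sizeof(catalog[n]->description) - 1);
        catalog[n]->description[sizeof(catalog[n]->description) - 1] = '\0';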

    Read the article

  • Why does Python's math.factorial not play nice with threads?

    - by W1N9Zr0
    Why does math.factorial act so weird in a thread? Here is an example, it creates three threads: thread that just sleeps for a while thread that increments an int for a while thread that does math.factorial on a large number. It calls start on the threads, then join with a timeout The sleep and spin threads work as expected and return from start right away, and then sit in the join for the timeout. The factorial thread on the other hand does not return from start until it runs to the end! import sys from threading import Thread from time import sleep, time from math import factorial # Helper class that stores a start time to compare to class timed_thread(Thread): def __init__(self, time_start): Thread.__init__(self) self.time_start = time_start # Thread that just executes sleep() class sleep_thread(timed_thread): def run(self): sleep(15) print "st DONE:\t%f" % (time() - time_start) # Thread that increments a number for a while class spin_thread(timed_thread): def run(self): x = 1 while x < 120000000: x += 1 print "sp DONE:\t%f" % (time() - time_start) # Thread that calls math.factorial with a large number class factorial_thread(timed_thread): def run(self): factorial(50000) print "ft DONE:\t%f" % (time() - time_start) # the tests print print "sleep_thread test" time_start = time() st = sleep_thread(time_start) st.start() print "st.start:\t%f" % (time() - time_start) st.join(2) print "st.join:\t%f" % (time() - time_start) print "sleep alive:\t%r" % st.isAlive() print print "spin_thread test" time_start = time() sp = spin_thread(time_start) sp.start() print "sp.start:\t%f" % (time() - time_start) sp.join(2) print "sp.join:\t%f" % (time() - time_start) print "sp alive:\t%r" % sp.isAlive() print print "factorial_thread test" time_start = time() ft = factorial_thread(time_start) ft.start() print "ft.start:\t%f" % (time() - time_start) ft.join(2) print "ft.join:\t%f" % (time() - time_start) print "ft alive:\t%r" % ft.isAlive() And here is the output on Python 2.6.5 on CentOS x64: sleep_thread test st.start: 0.000675 st.join: 2.006963 sleep alive: True spin_thread test sp.start: 0.000595 sp.join: 2.010066 sp alive: True factorial_thread test ft DONE: 4.475453 ft.start: 4.475589 ft.join: 4.475615 ft alive: False st DONE: 10.994519 sp DONE: 12.054668 I've tried this on python 2.6.5 on CentOS x64, 2.7.2 on Windows x86 and the factorial thread does not return from start on either of them until the thread is done executing. I've also tried this with PyPy 1.8.0 on Windows x86, and there result is slightly different. The start does return immediately, but then the join doesn't time out! sleep_thread test st.start: 0.001000 st.join: 2.001000 sleep alive: True spin_thread test sp.start: 0.000000 sp DONE: 0.197000 sp.join: 0.236000 sp alive: False factorial_thread test ft.start: 0.032000 ft DONE: 9.011000 ft.join: 9.012000 ft alive: False st DONE: 12.763000
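
    The most likely explanation is the GIL: math.factorial(50000) is a single C-level call that holds the GIL for its entire run, so the interpreter cannot switch back to the main thread until it returns, whereas sleep() releases the GIL and the pure-Python spin loop can be preempted between bytecodes. If the factorial really has to run in the background, a separate process sidesteps the GIL entirely; a sketch:

        # Sketch: run the heavy C-level call in another process instead of a thread.
        from multiprocessing import Process
        from math import factorial

        def big_factorial():
            factorial(50000)

        if __name__ == "__main__":
            p = Process(target=big_factorial)
            p.start()                      # returns immediately
            p.join(2)                      # times out as expected
            print "ft alive:\t%r" % p.is_alive()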

    Read the article

  • Best Practice: Legitimate Cross-Site Scripting

    - by Ryan
    While cross-site scripting is generally regarded as negative, I've run into several situations where it's necessary. I was recently working within the confines of a very limiting content management system. I needed to include database code within the page, but the hosting server didn't have anything usable available. I set up a couple barebones scripts on my own server, originally thinking that I could use AJAX to import the contents of my scripts directly into the template of the CMS (thus retaining dynamic images, menu items, CSS, etc.). I was wrong. Due to the limitations of XMLHttpRequest objects, it's not possible to grab content from a different domain. So I thought "iFrame" - even though I'm not a fan of frames, I thought that I could create a frame that matched the width and height of the content, so that it would appear native. Again, I was blocked by cross-site scripting "protections." While I could indeed load a remote file into the iFrame, I couldn't execute JavaScript to modify its size on either the host page or inside the loaded page. In this particular scenario, I wasn't able to point a subdomain to my server. I also couldn't create a script on the CMS server that could proxy content from my server, so my last thought was to use a remote JavaScript. A remote JavaScript works. It breaks when the user has JavaScript disabled, which is a downside; but it works. The "problem" I was having with using a remote JavaScript was that I had to use the JS function document.write() to output any content. Any output that isn't JS causes script errors. In addition to using document.write() for every line, you also have to ensure that the content is escaped - or else you end up with more script errors. My solution was as follows: My script received a GET parameter ("page") and then looked for the file ({$page}.php), and read the contents into a variable. However, I had to use awkward buffering techniques in order to actually execute the included scripts (for things like database interaction) then strip the final content of all line break characters ("\n") followed by escaping all required characters. The end result is that my original script (which outputs JavaScript) accesses seemingly "standard" scripts on my server and converts their standard output to JavaScript for displaying within the CMS template. While this solution works, it seems like there may be a better way to accomplish the same thing. What is the best way to make cross-site scripting work specifically for the purpose of including content from a completely different domain?
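
    For what it's worth, the usual answer to this exact constraint is JSONP rather than document.write(): the remote endpoint returns its buffered output JSON-encoded and wrapped in a callback, and the page injects the HTML into the DOM. A sketch (the endpoint, element id and callback name are all made up); it still requires JavaScript, so it shares that caveat with the remote-script approach, but it removes the per-line document.write() calls and the manual escaping.

        // Sketch: pull cross-domain content via a script tag plus callback (JSONP).
        function showRemoteContent(payload) {
            document.getElementById('remote-content').innerHTML = payload.html;
        }
        var s = document.createElement('script');
        s.src = 'http://my-server.example/content.php?page=news&callback=showRemoteContent';
        document.getElementsByTagName('head')[0].appendChild(s);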

    Read the article

  • "Error #1006: getAttributeByQName is not a function." Flex 2.0.1 hotfix 2

    - by Deveti Putnik
    Hi, guys! I am working on some old Flex project (Flex 2.0.1 hotfix 2) and I am rookie in Flex programming. So, I wrote code for accessing some ASP.NET web service: <?xml version="1.0" encoding="utf-8"?> [Bindable] public var users:ArrayOfUser; private function buttonClicked():void { mx.controls.Alert.show(dataService.wsdl); dataService.UserGetAll.send();/ } public function dataHandler(event:ResultEvent):void { Alert.show("alo"); var response:ResponseUsers = event.result as ResponseUsers; if (response.responseCode != ResponseCodes.SUCCESS) { mx.controls.Alert.show("Error: " + response.responseCode.toString()); return; } users = response.users; } ]]> <mx:Button label="Click me!" click="buttonClicked()"/> And this is what I get from debugger: WSDL loaded Invoking SOAP operation UserGetAll Encoding SOAP request envelope Encoding SOAP request body 'A97A2DC1-AEDA-C594-45D2-1BA2B0F3B223' producer sending message '10681130-43E7-3DA7-34DD-1BA2B85545E3' 'direct_http_channel' channel sending message: (mx.messaging.messages::SOAPMessage)#0 body = "<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:s="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <SOAP-ENV:Body> <tns:UserGetAll xmlns:tns="http://tempuri.org/"/> </SOAP-ENV:Body> </SOAP-ENV:Envelope>" clientId = "DirectHTTPChannel0" contentType = "text/xml; charset=utf-8" destination = "DefaultHTTP" headers = (Object)#1 httpHeaders = (Object)#2 SOAPAction = ""http://tempuri.org/UserGetAll"" messageId = "10681130-43E7-3DA7-34DD-1BA2B85545E3" method = "POST" recordHeaders = false timestamp = 0 timeToLive = 0 url = "http://192.168.0.201:8123/Service.asmx" 'A97A2DC1-AEDA-C594-45D2-1BA2B0F3B223' producer acknowledge of '10681130-43E7-3DA7-34DD-1BA2B85545E3'. Decoding SOAP response Encoded SOAP response <?xml version="1.0" encoding="utf-8"?><soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><soap:Body><UserGetAllResponse xmlns="http://tempuri.org/"><UserGetAllResult><ResponseCode>Success</ResponseCode><Users><User><Id>1</Id><Name>test</Name><Key>testKey</Key><IsActive>true</IsActive><Name>Petar i Sofija</Name><Key>123789</Key><IsActive>true</IsActive></User></Users></UserGetAllResult></UserGetAllResponse></soap:Body></soap:Envelope> Decoding SOAP response envelope Decoding SOAP response body And finally I get this error "Error #1006: getAttributeByQName is not a function.". As you can see, I get correct response from web service, but dataHandler function is never got called. Can anyone please help me out? Thanks, Deveti Putnik

    Read the article

  • A Python random function acts differently when assigned to a list or called directly...

    - by Dror Hilman
    I have a python function that randomize a dictionary representing a position specific scoring matrix. for example: mat = { 'A' : [ 0.53, 0.66, 0.67, 0.05, 0.01, 0.86, 0.03, 0.97, 0.33, 0.41, 0.26 ] 'C' : [ 0.14, 0.04, 0.13, 0.92, 0.99, 0.04, 0.94, 0.00, 0.07, 0.23, 0.35 ] 'T' : [ 0.25, 0.07, 0.01, 0.01, 0.00, 0.04, 0.00, 0.03, 0.06, 0.12, 0.14 ] 'G' : [ 0.08, 0.23, 0.20, 0.02, 0.00, 0.06, 0.04, 0.00, 0.54, 0.24, 0.25 ] } The scambling function: def scramble_matrix(matrix, iterations): mat_len = len(matrix["A"]) pos1 = pos2 = 0 for count in range(iterations): pos1,pos2 = random.sample(range(mat_len), 2) #suffle the matrix: for nuc in matrix.keys(): matrix[nuc][pos1],matrix[nuc][pos2] = matrix[nuc][pos2],matrix[nuc][pos1] return matrix def print_matrix(matrix): for nuc in matrix.keys(): print nuc+"[", for count in matrix[nuc]: print "%.2f"%count, print "]" now to the problem... When I try to scramble a matrix directly, It's works fine: print_matrix(mat) print "" print_matrix(scramble_matrix(mat,10)) gives: A[ 0.53 0.66 0.67 0.05 0.01 0.86 0.03 0.97 0.33 0.41 0.26 ] C[ 0.14 0.04 0.13 0.92 0.99 0.04 0.94 0.00 0.07 0.23 0.35 ] T[ 0.25 0.07 0.01 0.01 0.00 0.04 0.00 0.03 0.06 0.12 0.14 ] G[ 0.08 0.23 0.20 0.02 0.00 0.06 0.04 0.00 0.54 0.24 0.25 ] A[ 0.41 0.97 0.03 0.86 0.53 0.66 0.33.05 0.67 0.26 0.01 ] C[ 0.23 0.00 0.94 0.04 0.14 0.04 0.07 0.92 0.13 0.35 0.99 ] T[ 0.12 0.03 0.00 0.04 0.25 0.07 0.06 0.01 0.01 0.14 0.00 ] G[ 0.24 0.00 0.04 0.06 0.08 0.23 0.54 0.02 0.20 0.25 0.00 ] but when I try to assign this scrambling to a list , it does not work!!! ... print_matrix(mat) s=[] for x in range(3): s.append(scramble_matrix(mat,10)) for matrix in s: print "" print_matrix(matrix) result: A[ 0.53 0.66 0.67 0.05 0.01 0.86 0.03 0.97 0.33 0.41 0.26 ] C[ 0.14 0.04 0.13 0.92 0.99 0.04 0.94 0.00 0.07 0.23 0.35 ] T[ 0.25 0.07 0.01 0.01 0.00 0.04 0.00 0.03 0.06 0.12 0.14 ] G[ 0.08 0.23 0.20 0.02 0.00 0.06 0.04 0.00 0.54 0.24 0.25 ] A[ 0.01 0.66 0.97 0.67 0.03 0.05 0.33 0.53 0.26 0.41 0.86 ] C[ 0.99 0.04 0.00 0.13 0.94 0.92 0.07 0.14 0.35 0.23 0.04 ] T[ 0.00 0.07 0.03 0.01 0.00 0.01 0.06 0.25 0.14 0.12 0.04 ] G[ 0.00 0.23 0.00 0.20 0.04 0.02 0.54 0.08 0.25 0.24 0.06 ] A[ 0.01 0.66 0.97 0.67 0.03 0.05 0.33 0.53 0.26 0.41 0.86 ] C[ 0.99 0.04 0.00 0.13 0.94 0.92 0.07 0.14 0.35 0.23 0.04 ] T[ 0.00 0.07 0.03 0.01 0.00 0.01 0.06 0.25 0.14 0.12 0.04 ] G[ 0.00 0.23 0.00 0.20 0.04 0.02 0.54 0.08 0.25 0.24 0.06 ] A[ 0.01 0.66 0.97 0.67 0.03 0.05 0.33 0.53 0.26 0.41 0.86 ] C[ 0.99 0.04 0.00 0.13 0.94 0.92 0.07 0.14 0.35 0.23 0.04 ] T[ 0.00 0.07 0.03 0.01 0.00 0.01 0.06 0.25 0.14 0.12 0.04 ] G[ 0.00 0.23 0.00 0.20 0.04 0.02 0.54 0.08 0.25 0.24 0.06 ] What is the problem??? Why the scrambling do not work after the first time, and all the list filled with the same matrix?!
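
    This looks like a reference problem rather than a randomness one: scramble_matrix shuffles the dict it is given in place and returns that same dict, so the list ends up holding three references to one object (and mat itself is modified as well). Giving each call its own copy keeps the results independent; a sketch:

        # Sketch: give each scramble its own copy of the matrix.
        import copy

        s = []
        for x in range(3):
            s.append(scramble_matrix(copy.deepcopy(mat), 10))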

    Read the article

  • Unable to execute stored Procedure using Java and JDBC

    - by jwmajors81
    I have been trying to execute a MS SQL Server stored procedure via JDBC today and have been unsuccessful thus far. The stored procedure has 1 input and 1 output parameter. With every combination I use when setting up the stored procedure call in code I get an error stating that the stored procedure couldn't be found. I have provided the stored procedure I'm executing below (NOTE: this is vendor code, so I cannot change it). set ANSI_NULLS ON set QUOTED_IDENTIFIER ON GO ALTER PROC [dbo].[spWCoTaskIdGen] @OutIdentifier int OUTPUT AS BEGIN DECLARE @HoldPolicyId int DECLARE @PolicyId char(14) IF NOT EXISTS ( SELECT * FROM UniqueIdentifierGen (UPDLOCK) ) INSERT INTO UniqueIdentifierGen VALUES (0) UPDATE UniqueIdentifierGen SET CurIdentifier = CurIdentifier + 1 SELECT @OutIdentifier = (SELECT CurIdentifier FROM UniqueIdentifierGen) END The code looks like: CallableStatement statement = connection .prepareCall("{call dbo.spWCoTaskIdGen(?)}"); statement.setInt(1, 0); ResultSet result = statement.executeQuery(); I get the following error: SEVERE: Could not find stored procedure 'dbo.spWCoTaskIdGen'. I have also tried CallableStatement statement = connection .prepareCall("{? = call dbo.spWCoTaskIdGen(?)}"); statement.registerOutParameter(1, java.sql.Types.INTEGER); statement.registerOutParameter(2, java.sql.Types.INTEGER); statement.executeQuery(); The above results in: SEVERE: Could not find stored procedure 'dbo.spWCoTaskIdGen'. I have also tried: CallableStatement statement = connection .prepareCall("{? = call spWCoTaskIdGen(?)}"); statement.registerOutParameter(1, java.sql.Types.INTEGER); statement.registerOutParameter(2, java.sql.Types.INTEGER); statement.executeQuery(); The code above resulted in the following error: Could not find stored procedure 'spWCoTaskIdGen'. Finally, I should also point out the following: I have used the MS SQL Server Management Studio tool and have been able to successfully run the stored procedure. The sql generated to execute the stored procedure is provided below: GO DECLARE @return_value int, @OutIdentifier int EXEC @return_value = [dbo].[spWCoTaskIdGen] @OutIdentifier = @OutIdentifier OUTPUT SELECT @OutIdentifier as N'@OutIdentifier ' SELECT 'Return Value' = @return_value GO The code being executed runs with the same user id that was used in point #1 above. In the code that creates the Connection object I log which database I'm connecting to and the code is connecting to the correct database. Any ideas? Thank you very much in advance.
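
    For reference, the OUTPUT parameter on its own would normally be bound as in the sketch below (the procedure's return status could additionally be captured with the {? = call ...} form). The "could not find stored procedure" message itself, though, usually points at which database or default schema the JDBC connection lands in rather than at the call syntax, so comparing the connection URL's database name against the one used in Management Studio is worth a look.

        // Sketch: bind the single OUTPUT parameter and read it back.
        CallableStatement cs = connection.prepareCall("{call dbo.spWCoTaskIdGen(?)}");
        cs.registerOutParameter(1, java.sql.Types.INTEGER);
        cs.execute();
        int newId = cs.getInt(1);   // the generated identifier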

    Read the article

  • Selecting MediaTray in Java printing

    - by Rocket Surgeon
    I am trying to programmatically select a different media tray using Java Printing API. However, my document always gets printed to the default (TOP) media tray. I checked if the MediaTray attributes are supported using "isAttributeValueSupported()" method on javax.print.PrintService interface and I am getting the result as "true" for each MediaTray I pass. Here is my code: public void print( String printerName, com.company.services.document.transferobject.MediaTray tray, byte[] document) { String methodName = "print: "; logger.sendEvent(CLASS_NAME + methodName + "Start", EventType.INFO, this); if (printerName == null || "none".equals(printerName) || "?".equals(printerName) || "null".equals(printerName)) { logger.sendEvent("Please supply printer name, currently printerName is "+printerName, EventType.INFO, this); return; } DocFlavor flavor = DocFlavor.BYTE_ARRAY.AUTOSENSE; AttributeSet attributeSet = new HashAttributeSet(); attributeSet.add(new PrinterName(printerName, null)); javax.print.PrintService service = getService(printerName); if (service.isAttributeValueSupported(MediaTray.TOP, flavor, null)) { System.out.println("---------->>>>>>>>>Yes TOP" + " : Value : " + MediaTray.TOP.getValue()); } else { System.out.println("---------->>>>>>>>>Nope"); } if (service.isAttributeValueSupported(MediaTray.BOTTOM, flavor, null)) { System.out.println("---------->>>>>>>>>Yes BOTTOM" + " : Value : " + MediaTray.BOTTOM.getValue()); } else { System.out.println("---------->>>>>>>>>Nope"); } if (service.isAttributeValueSupported(MediaTray.MIDDLE, flavor, null)) { System.out.println("---------->>>>>>>>>Yes MIDDLE" + " : Value : " + MediaTray.MIDDLE.getValue()); } else { System.out.println("---------->>>>>>>>>Nope"); } if (service.isAttributeValueSupported(MediaTray.MANUAL, flavor, null)) { System.out.println("---------->>>>>>>>>Yes MANUAL" + " : Value : " + MediaTray.MANUAL.getValue()); } else { System.out.println("---------->>>>>>>>>Nope"); } if (service.isAttributeValueSupported(MediaTray.SIDE, flavor, null)) { System.out.println("---------->>>>>>>>>Yes SIDE" + " : Value : " + MediaTray.SIDE.getValue()); } else { System.out.println("---------->>>>>>>>>Nope"); } DocPrintJob printJob = service.createPrintJob(); try { byte[] textStream = document; PrintRequestAttributeSet pras = new HashPrintRequestAttributeSet(); pras.add(DocumentServiceConstant. DEFAULT_ONE_PRINT_COPY); pras.add(Sides.ONE_SIDED); Media standardTray= toStandardTray(tray); if (null != standardTray) { pras.add(standardTray); } Doc myDoc = new SimpleDoc(textStream, flavor, null); printJob.print(myDoc, pras); logger.sendEvent( " successfully printed ............ ", EventType.INFO, this); } catch (Throwable th) { logger.sendEvent(" Throwable : "+th.getLocalizedMessage(), EventType.INFO, this); ExceptionUtility .determineExceptionForServiceClient(th); } logger.sendEvent(CLASS_NAME + methodName + "END: ", EventType.INFO, this); } Any help will be greatly appreciated!

    Read the article

  • What's the best way to do base36 arithmetic in Perl?

    - by DVK
    What's the best way to do base36 arithmetic in Perl? To be more specific, I need to be able to do the following: Operate on positive N-digit numbers in base 36 (e.g. digits are 0-9 A-Z) N is finite, say 9 Provide basic arithmetic, at the very least the following 3: Addition (A+B) Subtraction (A-B) Whole division, e.g. floor(A/B). Strictly speaking, I don't really need a base10 conversion ability - the numbers will 100% of time be in base36. So I'm quite OK if the solution does NOT implement conversion from base36 back to base10 and vice versa. I don't much care whether the solution is brute-force "convert to base 10 and back" or converting to binary, or some more elegant approach "natively" performing baseN operations (as stated above, to/from base10 conversion is not a requirement). My only 3 considerations are: It fits the minimum specifications above It's "standard". Currently we're using and old homegrown module based on base10 conversion done by hand that is buggy and sucks. I'd much rather replace that with some commonly used CPAN solution instead of re-writing my own bicycle from scratch, but I'm perfectly capable of building it if no better standard possibility exists. It must be fast-ish (though not lightning fast). Something that takes 1 second to sum up 2 9-digit base36 numbers is worse than anything I can roll on my own :) P.S. Just to provide some context in case people decide to solve my XY problem for me in addition to answering the technical question above :) We have a fairly large tree (stored in DB as a bunch of edges), and we need to superimpose order on a subset of that tree. The tree dimentions are big both depth- and breadth- wise. The tree is VERY actively updated (inserts and deletes and branch moves). This is currently done by having a second table with 3 columns: parent_vertex, child_vertex, local_order, where local_order is an 9-character string built of A-Z0-9 (e.g. base 36 number). Additional considerations: It is required that the local order is unique per child (and obviously unique per parent), Any complete re-ordering of a parent is somewhat expensive, and thus the implementation is to try and assign - for a parent with X children - the orders which are somewhat evenly distributed between 0 and 36**10-1, so that almost no tree inserts result in a full re-ordering.
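
    If nothing on CPAN fits (Math::Base36 may be worth a look for the conversions), a self-contained sketch is below: convert to a plain integer, do the arithmetic, convert back. With at most 9 base-36 digits the values stay below 36**9, roughly 1.0e14, which Perl's native numbers handle exactly, so no bigint is needed.

        # Sketch: base-36 add / subtract / integer-divide via plain integers.
        use strict;
        use warnings;

        my @digits = (0 .. 9, 'A' .. 'Z');
        my %value  = map { $digits[$_] => $_ } 0 .. $#digits;

        sub from36 {
            my $n = 0;
            $n = $n * 36 + $value{$_} for split //, uc shift;
            return $n;
        }

        sub to36 {
            my ($n, $width) = @_;
            my $s = '';
            do { $s = $digits[$n % 36] . $s; $n = int($n / 36) } while $n > 0;
            $s = '0' x ($width - length $s) . $s if $width && length($s) < $width;
            return $s;
        }

        my ($x, $y) = (from36('00000A12Z'), from36('000001B07'));
        print to36($x + $y, 9), "\n";          # addition
        print to36($x - $y, 9), "\n";          # subtraction
        print to36(int($x / $y), 9), "\n";     # whole division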

    Read the article

  • C++ type-checking at compile-time

    - by Masterofpsi
    Hi, all. I'm pretty new to C++, and I'm writing a small library (mostly for my own projects) in C++. In the process of designing a type hierarchy, I've run into the problem of defining the assignment operator. I've taken the basic approach that was eventually reached in this article, which is that for every class MyClass in a hierarchy derived from a class Base you define two assignment operators like so: class MyClass: public Base { public: MyClass& operator =(MyClass const& rhs); virtual MyClass& operator =(Base const& rhs); }; // automatically gets defined, so we make it call the virtual function below MyClass& MyClass::operator =(MyClass const& rhs); { return (*this = static_cast<Base const&>(rhs)); } MyClass& MyClass::operator =(Base const& rhs); { assert(typeid(rhs) == typeid(*this)); // assigning to different types is a logical error MyClass const& casted_rhs = dynamic_cast<MyClass const&>(rhs); try { // allocate new variables Base::operator =(rhs); } catch(...) { // delete the allocated variables throw; } // assign to member variables } The part I'm concerned with is the assertion for type equality. Since I'm writing a library, where assertions will presumably be compiled out of the final result, this has led me to go with a scheme that looks more like this: class MyClass: public Base { public: operator =(MyClass const& rhs); // etc virtual inline MyClass& operator =(Base const& rhs) { assert(typeid(rhs) == typeid(*this)); return this->set(static_cast<Base const&>(rhs)); } private: MyClass& set(Base const& rhs); // same basic thing }; But I've been wondering if I could check the types at compile-time. I looked into Boost.TypeTraits, and I came close by doing BOOST_MPL_ASSERT((boost::is_same<BOOST_TYPEOF(*this), BOOST_TYPEOF(rhs)>));, but since rhs is declared as a reference to the parent class and not the derived class, it choked. Now that I think about it, my reasoning seems silly -- I was hoping that since the function was inline, it would be able to check the actual parameters themselves, but of course the preprocessor always gets run before the compiler. But I was wondering if anyone knew of any other way I could enforce this kind of check at compile-time.

    Read the article

  • Bullet indents in PowerPoint 2007 compatibility mode via .NET interop issue

    - by L. Shaydariv
    Hello. I've got a really difficult bug and I can't see the fix. The subject drives me insane for real for a long time. Let's consider the following scenario: 1) There is a PowerPoint 2003 presentation. It contains the only slide and the only shape, but the shape contains a text frame including a bulleted list with a random textual representation structure. 2) There is a requirement to get bullet indents for every bulletted paragraph using PowerPoint 2007. I can satisfy the requirement opening the presentation in the compatibility mode and applying the following VBA script: With ActivePresentation Dim sl As Slide: Set sl = .Slides(1) Dim sh As Shape: Set sh = sl.Shapes(1) Dim i As Integer For i = 1 To sh.TextFrame.TextRange.Paragraphs.Count Dim para As TextRange: Set para = sh.TextFrame.TextRange.Paragraphs(i, 1) Debug.Print para.Text; para.indentLevel, sh.TextFrame.Ruler.Levels(para.indentLevel).FirstMargin Next i End With that produces the following output: A 1 0 B 1 0 C 2 24 D 3 60 E 5 132 Obviously, everything is perfect indeed: it has shown the proper list item text, list item level and its bullet indent. But I can't see the way of how I can reach the same result using C#. Let's add a COM-reference to Microsoft.Office.Interop.PowerPoint 2.9.0.0 (taken from MSPPT.OLB, MS Office 12): // presentation = ...("presentation.ppt")... // a PowerPoint 2003 presentation Slide slide = presentation.Slides[1]; Shape shape = slide.Shapes[1]; for (int i = 1; i<=shape.TextFrame.TextRange.Paragraphs(-1, -1).Count; i++) { TextRange paragraph = shape.TextFrame.TextRange.Paragraphs(i, 1); Console.WriteLine("{0} {1} {2}", paragraph.Text, paragraph.IndentLevel, shape.TextFrame.Ruler.Levels[paragraph.IndentLevel].FirstMargin); } Oh, man... What's it? I've got problems here. First, the paragraph.Text value is trimmed until the '\r' character is found (however paragraph.Text[0] really returns the first character O_o). But it's ok, I can shut my eyes to this. But... But, second, I can't understand why the first margins are always zero and it does not matter which level they belong to. They are always zero in the compatibility mode... It's hard to believe it... :) So is there any way to fix it or just to find a workaround? I'd like to accept any help regarding to the solution of the subject. I can't even find any article related to the issue. :( Probably you have ever been face to face with it... Or is it just a bug with no fix and must it be reported to Microsoft? Thanks you.

    Read the article

  • Python multiprocessing global variable updates not returned to parent

    - by user1459256
    I am trying to return values from subprocesses but these values are unfortunately unpicklable. So I used global variables in threads module with success but have not been able to retrieve updates done in subprocesses when using multiprocessing module. I hope I'm missing something. The results printed at the end are always the same as initial values given the vars dataDV03 and dataDV04. The subprocesses are updating these global variables but these global variables remain unchanged in the parent. import multiprocessing # NOT ABLE to get python to return values in passed variables. ants = ['DV03', 'DV04'] dataDV03 = ['', ''] dataDV04 = {'driver': '', 'status': ''} def getDV03CclDrivers(lib): # call global variable global dataDV03 dataDV03[1] = 1 dataDV03[0] = 0 # eval( 'CCL.' + lib + '.' + lib + '( "DV03" )' ) these are unpicklable instantiations def getDV04CclDrivers(lib, dataDV04): # pass global variable dataDV04['driver'] = 0 # eval( 'CCL.' + lib + '.' + lib + '( "DV04" )' ) if __name__ == "__main__": jobs = [] if 'DV03' in ants: j = multiprocessing.Process(target=getDV03CclDrivers, args=('LORR',)) jobs.append(j) if 'DV04' in ants: j = multiprocessing.Process(target=getDV04CclDrivers, args=('LORR', dataDV04)) jobs.append(j) for j in jobs: j.start() for j in jobs: j.join() print 'Results:\n' print 'DV03', dataDV03 print 'DV04', dataDV04 I cannot post to my question so will try to edit the original. Here is the object that is not picklable: In [1]: from CCL import LORR In [2]: lorr=LORR.LORR('DV20', None) In [3]: lorr Out[3]: <CCL.LORR.LORR instance at 0x94b188c> This is the error returned when I use a multiprocessing.Pool to return the instance back to the parent: Thread getCcl (('DV20', 'LORR'),) Process PoolWorker-1: Traceback (most recent call last): File "/alma/ACS-10.1/casa/lib/python2.6/multiprocessing/process.py", line 232, in _bootstrap self.run() File "/alma/ACS-10.1/casa/lib/python2.6/multiprocessing/process.py", line 88, in run self._target(*self._args, **self._kwargs) File "/alma/ACS-10.1/casa/lib/python2.6/multiprocessing/pool.py", line 71, in worker put((job, i, result)) File "/alma/ACS-10.1/casa/lib/python2.6/multiprocessing/queues.py", line 366, in put return send(obj) UnpickleableError: Cannot pickle <type 'thread.lock'> objects In [5]: dir(lorr) Out[5]: ['GET_AMBIENT_TEMPERATURE', 'GET_CAN_ERROR', 'GET_CAN_ERROR_COUNT', 'GET_CHANNEL_NUMBER', 'GET_COUNT_PER_C_OP', 'GET_COUNT_REMAINING_OP', 'GET_DCM_LOCKED', 'GET_EFC_125_MHZ', 'GET_EFC_COMB_LINE_PLL', 'GET_ERROR_CODE_LAST_CAN_ERROR', 'GET_INTERNAL_SLAVE_ERROR_CODE', 'GET_MAGNITUDE_CELSIUS_OP', 'GET_MAJOR_REV_LEVEL', 'GET_MINOR_REV_LEVEL', 'GET_MODULE_CODES_CDAY', 'GET_MODULE_CODES_CMONTH', 'GET_MODULE_CODES_DIG1', 'GET_MODULE_CODES_DIG2', 'GET_MODULE_CODES_DIG4', 'GET_MODULE_CODES_DIG6', 'GET_MODULE_CODES_SERIAL', 'GET_MODULE_CODES_VERSION_MAJOR', 'GET_MODULE_CODES_VERSION_MINOR', 'GET_MODULE_CODES_YEAR', 'GET_NODE_ADDRESS', 'GET_OPTICAL_POWER_OFF', 'GET_OUTPUT_125MHZ_LOCKED', 'GET_OUTPUT_2GHZ_LOCKED', 'GET_PATCH_LEVEL', 'GET_POWER_SUPPLY_12V_NOT_OK', 'GET_POWER_SUPPLY_15V_NOT_OK', 'GET_PROTOCOL_MAJOR_REV_LEVEL', 'GET_PROTOCOL_MINOR_REV_LEVEL', 'GET_PROTOCOL_PATCH_LEVEL', 'GET_PROTOCOL_REV_LEVEL', 'GET_PWR_125_MHZ', 'GET_PWR_25_MHZ', 'GET_PWR_2_GHZ', 'GET_READ_MODULE_CODES', 'GET_RX_OPT_PWR', 'GET_SERIAL_NUMBER', 'GET_SIGN_OP', 'GET_STATUS', 'GET_SW_REV_LEVEL', 'GET_TE_LENGTH', 'GET_TE_LONG_FLAG_SET', 'GET_TE_OFFSET_COUNTER', 'GET_TE_SHORT_FLAG_SET', 'GET_TRANS_NUM', 'GET_VDC_12', 'GET_VDC_15', 'GET_VDC_7', 
'GET_VDC_MINUS_7', 'SET_CLEAR_FLAGS', 'SET_FPGA_LOGIC_RESET', 'SET_RESET_AMBSI', 'SET_RESET_DEVICE', 'SET_RESYNC_TE', 'STATUS', '_HardwareDevice__componentName', '_HardwareDevice__hw', '_HardwareDevice__stickyFlag', '_LORRBase__logger', '__del__', '__doc__', '__init__', '__module__', '_devices', 'clearDeviceCommunicationErrorAlarm', 'getControlList', 'getDeviceCommunicationErrorCounter', 'getErrorMessage', 'getHwState', 'getInternalSlaveCanErrorMsg', 'getLastCanErrorMsg', 'getMonitorList', 'hwConfigure', 'hwDiagnostic', 'hwInitialize', 'hwOperational', 'hwSimulation', 'hwStart', 'hwStop', 'inErrorState', 'isMonitoring', 'isSimulated'] In [6]:
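
    On the global-variable side the behaviour is expected: each Process gets its own copy of the parent's globals, so mutations never travel back. Manager-backed proxies do propagate, as in the sketch below, though anything stored in them must itself be picklable, so this will not carry the CCL driver objects either; for those, the usual pattern is to keep the object inside the worker and send back only plain status data.

        # Sketch: share mutable state with the parent via Manager proxies.
        import multiprocessing

        def get_dv03(data):
            data[0], data[1] = 0, 1

        def get_dv04(data):
            data['driver'] = 0

        if __name__ == "__main__":
            mgr = multiprocessing.Manager()
            dataDV03 = mgr.list(['', ''])
            dataDV04 = mgr.dict({'driver': '', 'status': ''})
            jobs = [multiprocessing.Process(target=get_dv03, args=(dataDV03,)),
                    multiprocessing.Process(target=get_dv04, args=(dataDV04,))]
            for j in jobs:
                j.start()
            for j in jobs:
                j.join()
            print 'DV03', list(dataDV03)
            print 'DV04', dict(dataDV04)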

    Read the article

  • How to send hashed data in SOAP request body?

    - by understack
    I want to imitate following request using Zend_Soap_Client for this wsdl. <SOAP-ENV:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:clr="http://schemas.microsoft.com/soap/encoding/clr/1.0" SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"> <SOAP-ENV:Header> <h3:__MethodSignature xsi:type="SOAP-ENC:methodSignature" xmlns:h3="http://schemas.microsoft.com/clr/soap/messageProperties" SOAP-ENC:root="1" xmlns:a2="http://schemas.microsoft.com/clr/ns/System.Collections">xsd:string a2:Hashtable</h3:__MethodSignature> </SOAP-ENV:Header> <SOAP-ENV:Body> <i4:ReturnDataSet id="ref-1" xmlns:i4="http://schemas.microsoft.com/clr/nsassem/Interface.IRptSchedule/Interface"> <sProc id="ref-5">BU</sProc> <ht href="#ref-6"/> </i4:ReturnDataSet><br/> <a2:Hashtable id="ref-6" xmlns:a2="http://schemas.microsoft.com/clr/ns/System.Collections"> <LoadFactor>0.72</LoadFactor> <Version>1</Version> <Comparer xsi:null="1"/> <HashCodeProvider xsi:null="1"/> <HashSize>11</HashSize> <Keys href="#ref-7"/> <Values href="#ref-8"/> </a2:Hashtable> <SOAP-ENC:Array id="ref-7" SOAP-ENC:arrayType="xsd:anyType[1]"> <item id="ref-9" xsi:type="SOAP-ENC:string">@AppName</item> </SOAP-ENC:Array><br/> <SOAP-ENC:Array id="ref-8" SOAP-ENC:arrayType="xsd:anyType[1]"> <item id="ref-10" xsi:type="SOAP-ENC:string">AAGENT</item> </SOAP-ENC:Array> </SOAP-ENV:Body> </SOAP-ENV:Envelope> It seems somehow I've to send hashed "ref-7" and "ref-8" array embedded inside body? How can I do this? Function ReturnDataSet takes two parameters, how can I send additional "ref-7" and "ref-8" array data? $client = new SoapClient($wsdl_url, array('soap_version' => SOAP_1_1)); $result = $client->ReturnDataset("BU", $ht); I don't know how to set $ht, so that hashed data is sent as different body entry. Thanks.

    Read the article

  • Why do I get rows of zeros in my 2D fft?

    - by Nicholas Pringle
    I am trying to replicate the results from a paper. "Two-dimensional Fourier Transform (2D-FT) in space and time along sections of constant latitude (east-west) and longitude (north-south) were used to characterize the spectrum of the simulated flux variability south of 40degS." - Lenton et al(2006) The figures published show "the log of the variance of the 2D-FT". I have tried to create an array consisting of the seasonal cycle of similar data as well as the noise. I have defined the noise as the original array minus the signal array. Here is the code that I used to plot the 2D-FT of the signal array averaged in latitude: import numpy as np from numpy import ma from matplotlib import pyplot as plt from Scientific.IO.NetCDF import NetCDFFile ### input directory indir = '/home/nicholas/data/' ### get the flux data which is in ### [time(5day ave for 10 years),latitude,longitude] nc = NetCDFFile(indir + 'CFLX_2000_2009.nc','r') cflux_southern_ocean = nc.variables['Cflx'][:,10:50,:] cflux_southern_ocean = ma.masked_values(cflux_southern_ocean,1e+20) # mask land nc.close() cflux = cflux_southern_ocean*1e08 # change units of data from mmol/m^2/s ### create an array that consists of the seasonal signal fro each pixel year_stack = np.split(cflux, 10, axis=0) year_stack = np.array(year_stack) signal_array = np.tile(np.mean(year_stack, axis=0), (10, 1, 1)) signal_array = ma.masked_where(signal_array > 1e20, signal_array) # need to mask ### average the array over latitude(or longitude) signal_time_lon = ma.mean(signal_array, axis=1) ### do a 2D Fourier Transform of the time/space image ft = np.fft.fft2(signal_time_lon) mgft = np.abs(ft) ps = mgft**2 log_ps = np.log(mgft) log_mgft= np.log(mgft) Every second row of the ft consists completely of zeros. Why is this? Would it be acceptable to add a randomly small number to the signal to avoid this. signal_time_lon = signal_time_lon + np.random.randint(0,9,size=(730, 182))*1e-05 EDIT: Adding images and clarify meaning The output of rfft2 still appears to be a complex array. Using fftshift shifts the edges of the image to the centre; I still have a power spectrum regardless. I expect that the reason that I get rows of zeros is that I have re-created the timeseries for each pixel. The ft[0, 0] pixel contains the mean of the signal. So the ft[1, 0] corresponds to a sinusoid with one cycle over the entire signal in the rows of the starting image. Here are is the starting image using following code: plt.pcolormesh(signal_time_lon); plt.colorbar(); plt.axis('tight') Here is result using following code: ft = np.fft.rfft2(signal_time_lon) mgft = np.abs(ft) ps = mgft**2 log_ps = np.log1p(mgft) plt.pcolormesh(log_ps); plt.colorbar(); plt.axis('tight') It may not be clear in the image but it is only every second row that contains completely zeros. Every tenth pixel (log_ps[10, 0]) is a high value. The other pixels (log_ps[2, 0], log_ps[4, 0] etc) have very low values.
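
    Assuming the tiling runs along the time axis, the zero rows are expected rather than a bug: signal_array repeats one 73-step seasonal cycle exactly 10 times, and the DFT of a block repeated N times is non-zero only at every N-th frequency bin, with exact zeros in between. Adding tiny random noise only papers over that; masking the zeros or sticking with log1p before plotting is cleaner. A one-dimensional demonstration of the comb structure:

        # Sketch: a block repeated N times has energy only at every N-th bin.
        import numpy as np

        block = np.random.rand(73)
        tiled = np.tile(block, 10)                 # same construction as signal_array
        spec = np.abs(np.fft.fft(tiled))
        print np.allclose(spec[1:10], 0.0)         # bins between harmonics are (numerically) zero
        print spec[:31:10]                         # every 10th bin carries the energy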

    Read the article

  • Any way to speed up this hierarchical query?

    - by RenderIn
    I've got a serious performance problem with a hierarchical query that I can't seem to fix. I am modeling several organization charts in my database, each representing a virtual organization within our company. For example, we have several temporary committees that are created from time to time and there may be a Committee Organizer role at the top of this virtual hierarchy, with several people assigned to the Committee Member role beneath the organizer. Some of our virtual organizations have many levels and several branches at each level. I have a single table in which I represent all the role assignments. i.e. a ROLE_ID column and a PARENT_ROLE_ID column which is a foreign key to the ROLE_ID column. For each assignment we also store as a column the location in the company where this person has the assignment. For example, the Committee Organizer would have a company-level/ CEO assignment, while the committee members would have department-level assignments such as ACCOUNTING, MARKETING, etc. So to model the organizer/member relationship for two individuals we would have: ROLE_ID = 4 PARENT_ROLE_ID = NULL EMPLOYEE_NUMBER = 213423 COMPANY_LOCATION = CEO ROLE_ID = 5 PARENT_ROLE_ID = 4 EMPLOYEE_NUMBER = 838221 COMPANY_LOCATION = ACCOUNTING Here's where things get tricky. I have an application that every person in the organization can log in to. When they log in they should be able to view all the virtual organizations in our company. e.g. the committee members should be able to see the committee organizer and vice-versa. However, only the committee organizer should be able to edit the committee members. The difficulty is in determining whether an individual (who can have multiple role assignments) has edit access for each other assignment. While this seems simple in the example, consider a virtual organization in which we have President at the top, 5 departments directly beneath him, 2 subdepartments below each department. We only want people in the Accounting department to be able to edit individuals in the subdepartments belonging to the Accounting department. They should not have edit access to anybody in the Marketing department or its subdepartments. To determine edit access when a user views a virtual organization in our company I run a query that executes two inline views: A) Hierarchically query for all assignments in this virtual organization and using SYS_CONNECT_BY_PATH to store the entire path to each user/role/company_location and B) Hierarchically retrieve all the assignments the individual logged in has and using the SYS_CONNECT_BY_PATH to store the entire path to each of these assignments. The result of the query is all the records from A) plus a boolean determined by joining with B) which flags whether the logged in user has edit access for each record. Indexes don't seem to be helping... it simply appears that there is too much processing going on to separate all the records and then determine edit access. One issue is that I can't store the SYS_CONNECT_BY_PATH and index it... determining whether an individual record has edit access consists of comparing if: test_record_sys_path LIKE individual_record_sys_path || '%' Is a materialized view the answer?

    Read the article

  • Scala actors: receive vs react

    - by jqno
    Let me first say that I have quite a lot of Java experience, but have only recently become interested in functional languages. Recently I've started looking at Scala, which seems like a very nice language. However, I've been reading about Scala's Actor framework in Programming in Scala, and there's one thing I don't understand. In chapter 30.4 it says that using react instead of receive makes it possible to re-use threads, which is good for performance, since threads are expensive in the JVM.
    Does this mean that, as long as I remember to call react instead of receive, I can start as many Actors as I like? Before discovering Scala I had been playing with Erlang, and the author of Programming Erlang boasts about spawning over 200,000 processes without breaking a sweat. I'd hate to do that with Java threads. What kind of limits am I looking at in Scala as compared to Erlang (and Java)?
    Also, how does this thread re-use work in Scala? Let's assume, for simplicity, that I have only one thread. Will all the actors that I start run sequentially in this thread, or will some sort of task-switching take place? For example, if I start two actors that ping-pong messages to each other, will I risk deadlock if they're started in the same thread?
    According to Programming in Scala, writing actors to use react is more difficult than with receive. This sounds plausible, since react doesn't return. However, the book goes on to show how you can put a react inside a loop using Actor.loop. As a result, you get
    loop { react { ... } }
    which, to me, seems pretty similar to
    while (true) { receive { ... } }
    which is used earlier in the book. Still, the book says that "in practice, programs will need at least a few receive's". So what am I missing here? What can receive do that react cannot, besides return? And why do I care?
    Finally, coming to the core of what I don't understand: the book keeps mentioning how using react makes it possible to discard the call stack to re-use the thread. How does that work? Why is it necessary to discard the call stack? And why can the call stack be discarded when a function terminates by throwing an exception (react), but not when it terminates by returning (receive)? I have the impression that Programming in Scala has been glossing over some of the key issues here, which is a shame, because otherwise it's a truly excellent book.
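    To illustrate the thread re-use question, here is a deliberately over-simplified sketch (in Python, purely illustrative, and nothing like Scala's actual scheduler): with react-style, event-based actors, an actor does not park a blocked thread waiting for mail; it registers a handler (a continuation) and hands its thread back to a scheduler, so one thread can serve many actors, and in this model two actors ping-ponging on the same thread simply alternate handler invocations rather than blocking each other.
    # Single-threaded "actor" scheduler sketch: handlers run as callbacks,
    # so no actor ever holds a blocked thread while waiting for a message.
    from collections import deque

    class Scheduler:
        def __init__(self):
            self.runnable = deque()          # actors with pending messages

        def run(self):                       # the single shared "thread"
            while self.runnable:
                actor = self.runnable.popleft()
                msg = actor.mailbox.popleft()
                actor.handler(msg)           # run the continuation, then return

    class Actor:
        def __init__(self, scheduler, handler):
            self.scheduler, self.handler = scheduler, handler
            self.mailbox = deque()

        def send(self, msg):
            self.mailbox.append(msg)
            self.scheduler.runnable.append(self)

    # two actors ping-ponging a counter on one thread, without deadlock
    sched = Scheduler()
    ping = Actor(sched, lambda n: pong.send(n - 1) if n > 0 else None)
    pong = Actor(sched, lambda n: ping.send(n - 1) if n > 0 else None)
    ping.send(10)
    sched.run()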

    Read the article

  • Cocoa NSOutputStream send to a connection

    - by Chuck
    Hi, I am new to Cocoa, but managed to get a connection (to an FTP server) up and running, and I've set up an event handler for the NSInputStream iStream to alert on every response (which also works). All I manage to get is the hello message and then "connection timeout 60 sec, closing control connection". After searching stackoverflow and finding a lot of NSOutputStream write problems (e.g. http://stackoverflow.com/questions/703729/how-to-use-nsoutputstreams-write-message) and a lot of confusion in my google hits, I figured I'd try to ask my own question. I've tried reading the developer.apple.com doc on OutputStream, but it seems almost impossible for me to send some data (in this case just a string) to the "connection" via the NSOutputStream oStream.
    - (IBAction) send_something: sender {
        const char *send_command_char = [@"USER foo" UTF8String];
        send_command_buffer = [NSMutableData dataWithBytes:send_command_char length:strlen(send_command_char) + 1];
        uint8_t *readBytes = (uint8_t *)[send_command_buffer mutableBytes];
        NSInteger byteIndex = 0;
        readBytes += byteIndex;
        int data_len = [send_command_buffer length];
        unsigned int len = ((data_len - byteIndex >= 1024) ? 1024 : (data_len - byteIndex));
        uint8_t buf[len];
        (void)memcpy(buf, readBytes, len);
        len = [oStream write:(const uint8_t *)buf maxLength:len];
        byteIndex += len;
    }
    The above does not seem to result in any usable events. Trying it under NSStreamEventHasSpaceAvailable sometimes gives a response if I spam the FTP server by repeatedly creating new connection instances and sending some command whenever oStream has free space. In other words, nothing "right", and so I'm still unclear how to properly send a command to the connection. Should I open - write - close every time I want to write to oStream (and thus to the FTP server), and can I then expect a reply (hasBytesAvailable event on iStream)? For some reason I find it very difficult to find any clear tutorials on this matter. Seems like there are more than a few people in the same position as me: unclear how to use oStream write? Any little bit that can help clear this up is greatly appreciated! If needed I can write the rest of the code. Chuck
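    One detail that may be relevant, offered as an observation rather than a verified fix: FTP control-connection commands are plain text terminated by CRLF, while the snippet above writes strlen(...) + 1 bytes, i.e. it includes the trailing NUL and omits the line ending, so the server may simply keep waiting for the rest of the command until it times out. For comparison, a minimal sketch of the expected wire format (Python standard socket module, hypothetical host and credentials):
    # Sketch of the bytes an FTP control connection expects: ASCII commands
    # terminated by CRLF, no trailing NUL.
    import socket

    with socket.create_connection(('ftp.example.com', 21), timeout=30) as s:
        print(s.recv(4096).decode())            # 220 hello banner
        s.sendall(b'USER foo\r\n')              # CRLF terminates the command
        print(s.recv(4096).decode())            # 331 password required
        s.sendall(b'PASS secret\r\n')
        print(s.recv(4096).decode())            # 230 logged in (or 530 on failure)
        s.sendall(b'QUIT\r\n')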

    Read the article

  • How to delete multiple files with msbuild/web deployment project?

    - by Alex
    I have an odd issue with how msbuild is behaving with a VS2008 Web Deployment Project and would like to know why it seems to randomly misbehave. I need to remove a number of files from a deployment folder that should only exist in my development environment. The files have been generated by the web application during dev/testing and are not included in my Visual Studio project/solution. The configuration I am using is as follows:
    <!-- Partial extract from Microsoft Visual Studio 2008 Web Deployment Project -->
    <ItemGroup>
      <DeleteAfterBuild Include="$(OutputPath)data\errors\*.xml" /> <!-- Folder 1: 36 files -->
      <DeleteAfterBuild Include="$(OutputPath)data\logos\*.*" />    <!-- Folder 2: 2 files -->
      <DeleteAfterBuild Include="$(OutputPath)banners\*.*" />       <!-- Folder 3: 1 file -->
    </ItemGroup>
    <Target Name="AfterBuild">
      <Message Text="------ AfterBuild process starting ------" Importance="high" />
      <Delete Files="@(DeleteAfterBuild)">
        <Output TaskParameter="DeletedFiles" PropertyName="deleted" />
      </Delete>
      <Message Text="DELETED FILES: $(deleted)" Importance="high" />
      <Message Text="------ AfterBuild process complete ------" Importance="high" />
    </Target>
    The problem I have is that when I do a build/rebuild of the Web Deployment Project it "sometimes" removes all the files, but at other times it will not remove anything! Or it will remove only one or two of the three folders in the DeleteAfterBuild item group. There seems to be no consistency in when the build process decides to remove the files or not. When I've edited the configuration to include only Folder 1 (for example), it removes all the files correctly. Then, after adding Folders 2 and 3, it starts removing all the files as I want. Then, seemingly at random, I'll rebuild the project and it won't remove any of the files! I have tried moving these items to the ExcludeFromBuild item group (which is probably where they should be), but it gives me the same unpredictable result. Has anyone experienced this? Am I doing something wrong? Why does this happen?

    Read the article
