Search Results

Search found 63386 results on 2536 pages for 'data structure'.


  • 'NoneType' object has no attribute 'data'

    - by Bill Jordan
    I am sending a SOAP request to my server and getting a response back. A sample of the response string is shown below:

        <?xml version = '1.0' ?>
        <env:Envelope xmlns:env=http:////www.w3.org/2003/05/soap-envelop
        . .. ..
        <env:Body>
        <epas:get-all-config-resp xmlns:epas="urn:organization:epas:soap"> ^M
        ...
        <epas:property name="Tom">12</epas:property>
        <epas:property name="Alice">34</epas:property>
        <epas:property name="John">56</epas:property>
        <epas:property name="Danial">78</epas:property>
        <epas:property name="George">90</epas:property>
        <epas:property name="Luise">11</epas:property>
        ... ^M
        </env:Body?
        </env:Envelop>

    What I noticed in the response is that there is an extra character shown in the body, "^M" (a carriage return). I am not sure if this could be the issue. When I try to parse the string returned from the server to get the names and values with this code:

        elements = minidom.parseString(xmldoc).getElementsByTagName("property")
        myDict = {}
        for element in elements:
            myDict[element.getAttribute('name')] = element.firstChild.data

    I get this error: 'NoneType' object has no attribute 'data'. Maybe it has something to do with the "^M" in the XML response. Any ideas/comments would be appreciated.
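    A minimal sketch of one way to guard against the missing text node, assuming the same minidom approach and the namespaced tags shown above (the helper name is illustrative):

        from xml.dom import minidom

        def extract_properties(xml_text):
            # Normalize Windows line endings (the "^M" characters) before parsing.
            cleaned = xml_text.replace("\r\n", "\n").replace("\r", "\n")
            doc = minidom.parseString(cleaned)
            result = {}
            # The elements are namespaced (epas:property), so match on the local name.
            for element in doc.getElementsByTagNameNS("*", "property"):
                name = element.getAttribute("name")
                # firstChild is None for empty elements such as <epas:property name="x"/>,
                # which is exactly what raises 'NoneType' object has no attribute 'data'.
                if element.firstChild is not None:
                    result[name] = element.firstChild.data
                else:
                    result[name] = None
            return result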


  • Reading data from an open HTTP stream

    - by allenjones
    I am trying to use the .NET WebRequest/WebResponse classes to access the Twitter streaming API at "http://stream.twitter.com/spritzer.json". I need to be able to open the connection and read data incrementally from the open connection. Currently, when I call the WebRequest.GetResponse method, it blocks until the entire response is downloaded. I know there is a BeginGetResponse method, but this will just do the same thing on a background thread. I need to get access to the response stream while the download is still happening. This just does not seem possible to me with these classes. There is a specific comment about this in the Twitter documentation: "Please note that some HTTP client libraries only return the response body after the connection has been closed by the server. These clients will not work for accessing the Streaming API. You must use an HTTP client that will return response data incrementally. Most robust HTTP client libraries will provide this functionality. The Apache HttpClient will handle this use case, for example." They point to the Apache HttpClient, but that doesn't help much because I need to use .NET. Any ideas whether this is possible with WebRequest/WebResponse, or do I have to go for lower-level networking classes? Maybe there are other libraries that will allow me to do this? Thanks, Allen


  • Data from web services in an RSS feed

    - by vymz
    I only have the .wsdl, and I want to put the data that the web service returns into an RSS feed. In the SAP web page I am using, I can only upload RSS, so I need to put the web service information into the RSS. For example, right now I put the information (name and total value) manually into the <title> and <description> fields; these data are extracted from the web service. But sometimes I don't know how much information the web service will bring back. I also know that RSS is not meant to store information such as web service output.

        <?xml version="1.0" encoding="UTF-8" ?>
        <rss version="2.0">
          <channel>
            <title>Test RSS</title>
            <link>http://solutions.com</link>
            <description>RSS</description>
            <item>
              <title>Luiz</title>
              <link>http://www.solutions.com/prueba1</link>
              <description>10</description>
            </item>
            <item>
              <title>Clodoaldo</title>
              <link>http://www.solutions.com/prueba2</link>
              <description>5</description>
            </item>
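    A minimal sketch of generating the <item> entries from whatever the web service returns, shown in Python with the standard library (the result structure and URLs are assumptions; adapt them to the real WSDL response):

        import xml.etree.ElementTree as ET

        def build_rss(results, channel_title="Test RSS", base_link="http://solutions.com"):
            # 'results' is assumed to be a list of (name, total_value) pairs
            # obtained from the web service call.
            rss = ET.Element("rss", version="2.0")
            channel = ET.SubElement(rss, "channel")
            ET.SubElement(channel, "title").text = channel_title
            ET.SubElement(channel, "link").text = base_link
            ET.SubElement(channel, "description").text = "RSS"
            for name, total in results:
                item = ET.SubElement(channel, "item")
                ET.SubElement(item, "title").text = name
                ET.SubElement(item, "link").text = base_link
                ET.SubElement(item, "description").text = str(total)
            return ET.tostring(rss, encoding="unicode")

        # Example with the two entries from the sample feed above.
        print(build_rss([("Luiz", 10), ("Clodoaldo", 5)]))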


  • Using data binding on value which is a FrameworkElement

    - by JaredPar
    One of my data sources produces a collection of values which are typed to the following interface:

        public interface IData {
            string Name { get; }
            FrameworkElement VisualElement { get; }
        }

    I'd like to use data binding in WPF to display a collection of IData instances in a TabControl where the Name value becomes the header of the tab and the VisualElement value is displayed as the content of the corresponding tab. Binding the header is straightforward. I'm stuck though on how to define a template which allows me to display the VisualElement value. I've tried a number of solutions with little success. My best attempt is as follows.

        <TabControl ItemsSource="{Binding}">
          <TabControl.ItemTemplate>
            <DataTemplate>
              <Label Content="{Binding Name}"/>
            </DataTemplate>
          </TabControl.ItemTemplate>
          <TabControl.ContentTemplate>
            <DataTemplate>
              How do I display VisualElement here?
            </DataTemplate>
          </TabControl.ContentTemplate>
        </TabControl>

    I'm still very new to WPF so I could be missing the obvious here.


  • .NET IHttpHandler streaming SQL binary data

    - by Yisman
    I am trying to implement an IHttpHandler for streaming files. The files may be tiny thumbnails or gigantic movies, and the binaries are stored in SQL Server. I have looked at a lot of code online, but something does not make sense: isn't streaming supposed to read the data piece by piece and move it over the wire? Most of the code seems to first read the whole field from SQL Server into memory and then use streaming only for the output writing. Wouldn't it be more efficient to actually stream from the database directly to HTTP, byte by byte (or in buffered chunks)? Here is my code so far, but I can't figure out the correct combination of the SqlDataReader mode, the stream object, and the writing system:

        Public Sub ProcessRequest(ByVal context As HttpContext) Implements IHttpHandler.ProcessRequest
            context.Response.BufferOutput = False
            Dim FileField = safeparam(context.Request.QueryString("FileField"))
            Dim FileTable = safeparam(context.Request.QueryString("FileTable"))
            Dim KeyField = safeparam(context.Request.QueryString("KeyField"))
            Dim FileKey = safeparam(context.Request.QueryString("FileKey"))
            Using connection As New SqlConnection(ConfigurationManager.ConnectionStrings("Main").ConnectionString)
                Using command As New SqlCommand("SELECT " & FileField & "Bytes," & FileField & "Type FROM " & FileTable & " WHERE " & KeyField & "=" & FileKey, connection)
                    command.CommandType = Data.CommandType.Text
                End Using
            End Using
        End Sub

    Please be aware that this SQL command also returns the file extension (pdf, jpg, doc, ...) in the second field of the query. Thank you all very much.


  • Handling data update/freshness issue in web-app in general (or GWT specifically)

    - by edwin.nathaniel
    In general, how do you handle data freshness and user updates (the UI side) in web apps? For example, in a multi-user web app (like project management) where people log in to a "virtual" space and can update project names, etc., how do you handle a situation where:

    - user-A and user-B load a project with the title "Project StackOverflow"
    - user-B updates the title to "Project StackExchange"
    - user-A then updates the title to "Project Basecamp", after user-B's update operation

    The question I'm asking is from the user's perspective (UI), not about the transactional operation. What do most people do in this situation? What would you show in user-A's screen/view after user-B updates the title? What happens when user-A tries to update the title after user-B has finished his or her update? Do you inform user-A that the title has changed and that he or she has to reload the page? Do you go ahead and apply the change and let user-B keep looking at stale data? Do you use some sort of application-level "locking" mechanism (if someone is updating, nobody else can)? Or do you fix the application workflow (who has access to change things, etc.)? What would be the simplest solution that at the same time does not annoy the user with more dialog/warning messages? I've encountered this particular problem frequently in a GWT app specifically, where domain models are being passed around and refreshing the whole app/client side isn't the optimal solution in my mind (since it means the whole "loading"/initialization phase must be executed again in this specific environment). Maybe the answer is to stay away from GWT? :) I'd love to hear your options, solutions, and advice. Thanks
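    One common approach to the scenario above is optimistic concurrency: store a version number (or last-modified timestamp) with the record, send it to the client along with the title, and reject a save whose version is stale so the UI can tell user-A that the title changed underneath them instead of silently overwriting it. A minimal, framework-agnostic sketch in Python; all names and the in-memory "db" are illustrative only:

        class StaleUpdateError(Exception):
            """Raised when the client edited an out-of-date copy of the record."""

        def update_project_title(db, project_id, new_title, client_version):
            row = db[project_id]                      # e.g. {"title": ..., "version": 3}
            if row["version"] != client_version:
                # Someone else saved first; let the UI offer "reload / merge?"
                raise StaleUpdateError(row["title"])
            row["title"] = new_title
            row["version"] += 1
            return row["version"]

        # Usage: both users loaded version 3 of the project.
        db = {42: {"title": "Project StackOverflow", "version": 3}}
        update_project_title(db, 42, "Project StackExchange", client_version=3)  # user-B wins
        try:
            update_project_title(db, 42, "Project Basecamp", client_version=3)   # user-A is stale
        except StaleUpdateError as current:
            print("Title was already changed to:", current)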


  • Can't wrap my head around appengine data store persistence

    - by aloo
    I've run into the "can't operate on multiple entity groups in a single transaction" problem when using App Engine for Java with JDO, with the following code:

        PersistenceManager pm = PMF.get().getPersistenceManager();
        Query q = pm.newQuery("SELECT this FROM " + TypeA.class.getName() + " WHERE userId == userIdParam ");
        q.declareParameters("String userIdParam");
        List<TypeA> allTypeAs = (List<TypeA>) q.execute(userIdParam);
        for (TypeA a : allTypeAs) {
            a.setSomeField(someValue);
        }
        pm.close();

    The problem, it seems, is that I can't operate on multiple entities at the same time because they aren't in the same entity group while in a transaction. Even though it doesn't look like I'm in a transaction, App Engine generates one because I have the following set in my jdoconfig.xml:

        <property name="datanucleus.appengine.autoCreateDatastoreTxns" value="true"/>

    Fine. So far I think I understand. But if I replace TypeA in the above code with TypeB, I don't get the error. I don't believe there is anything different between TypeA and TypeB; they both have the same key structure. They do have different fields, but that shouldn't matter, right? My question is: what could possibly be different between TypeA and TypeB that gives this different behavior? And consequently, what do I fundamentally misunderstand that this behavior could even exist? Thanks.


  • Access report not showing data

    - by Brian Smith
    I have two queries that I am using to generate a report. The problem is that when I run the report, three fields do not show any data at all.

    Query 1:

        SELECT ClientSummary.Field3 AS PM,
               ClientSummary.[Client Nickname 2] AS [Project #],
               ClientSummary.[Client Nickname 1] AS Customer,
               ClientSummary.[In Reference To] AS [Job Name],
               ClientSummary.Field10 AS Contract,
               (select sum([Billable Slip Value]) from Util_bydate as U1
                  where U1.[Client Nickname 2] = ClientSummary.[Client Nickname 2]) AS [This Week],
               (select sum([Billable Slip Value]) from Util as U2
                  where U2.[Client Nickname 2] = ClientSummary.[Client Nickname 2]) AS [To Date],
               [To Date]/[Contract] AS [% Spent],
               0 AS Backlog,
               ClientSummary.[Total Slip Fees & Costs] AS Billed,
               ClientSummary.Payments AS Paid,
               ClientSummary.[Total A/R] AS Receivable,
               [Forms]![ReportMenu]![StartDate] AS [Start Date],
               [Forms]![ReportMenu]![EndDate] AS [End Date]
        FROM ClientSummary;

    Query 2:

        SELECT JobManagement_Summary.pm,
               JobManagement_Summary.[project #],
               JobManagement_Summary.Customer,
               JobManagement_Summary.[Job Name],
               JobManagement_Summary.Contract,
               IIf(IsNull([This Week]),0,[This Week]) AS [N_This Week],
               IIf(IsNull([To Date]),0,[To Date]) AS [N_To Date],
               [% Spent],
               JobManagement_Summary.Backlog,
               JobManagement_Summary.Billed,
               JobManagement_Summary.Paid,
               JobManagement_Summary.Receivable,
               JobManagement_Summary.[Start Date],
               JobManagement_Summary.[End Date]
        FROM JobManagement_Summary;

    When I run the report from Query 2, these three fields don't appear: N_This Week, N_To Date and % Spent. All have no data. It isn't the IIf functions, as it doesn't matter whether I have those in there or remove them. Any thoughts? If I connect directly to the first recordset it works fine, but then SQL throws the error message "Multi-level GROUP BY clause not allowed in subquery." Is there any way to get around that message so I can link to it directly, or does anyone have any clue why these fields are coming back blank? I am at wits' end here!


  • How to reuse results with a schema for end of day stock-data

    - by Vishalrix
    I am creating a database schema to be used for technical analysis, like top volume gainers, top price gainers, etc. I have checked answers to questions here, like the design question. Having taken the hint from boe100's answer there, I have a schema modeled pretty much on it:

        Symbol - char(6)        // primary key
        Date   - date           // primary key
        Open   - decimal(18, 4)
        High   - decimal(18, 4)
        Low    - decimal(18, 4)
        Close  - decimal(18, 4)
        Volume - int

    Right now this table, containing end-of-day (EOD) data, will have about 3 million rows for 3 years. Later, when I get/need more data, it could be 20 million rows. The front end will be issuing requests like "give me the top price gainers on date X over Y days". That request is one of the simpler ones, and as such is not too costly time-wise, I assume. But a request like "give me the top volume gainers for the last 10 days, with the previous 100 days acting as baseline" could prove 10-100 times costlier. The result of such a request would be a float which signifies how many times the volume has grown, etc. One option I have is adding a column for each such result. And if the user asks for volume gain in 10 days over 20 days, that would require another table. The number of such tables could easily cross 100, especially if I start using other results as tables, like MACD-10 and MACD-100, each of which will require its own column. Is this a feasible solution? Another option is keeping the results in cached HTML files and presenting those to the user. I don't have much experience in web development, so to me it looks messy, but I could be wrong. Is that an option too? Let me add that I am/will be using mod_perl to present the response to the user, with much of the work on the MySQL database being done using Perl. I would like to have a response time of 1-2 seconds.
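    As a point of comparison before materializing a column or table per indicator: a 3-20 million row EOD table of this shape may be small enough to compute ratios like "last 10 days' volume versus the prior 100-day baseline" on the fly from the raw rows. A rough sketch of that calculation in Python/pandas, shown only to make the computation concrete (column names follow the schema above; this is not a claim about the final mod_perl implementation):

        import pandas as pd

        def volume_gainers(eod, recent=10, baseline=100, top=20):
            # eod: DataFrame with columns Symbol, Date, Open, High, Low, Close, Volume.
            def ratio(group):
                vol = group.sort_values("Date")["Volume"]
                recent_avg = vol.tail(recent).mean()
                base_avg = vol.tail(recent + baseline).head(baseline).mean()
                return recent_avg / base_avg if base_avg else float("nan")
            # One growth factor per symbol, largest first.
            return (eod.groupby("Symbol").apply(ratio)
                       .sort_values(ascending=False).head(top))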


  • How to manipulate *huge* amounts of data

    - by Alejandro
    I'm having the following problem. I need to store huge amounts of information (~32 GB) and be able to manipulate it as fast as possible. I'm wondering what's the best way to do it (combinations of programming language + OS + whatever you think is important). The structure of the information I'm using is a 4D array (NxNxNxN) of double-precision floats (8 bytes). Right now my solution is to slice the 4D array into 2D arrays and store them in separate files on the HDD of my computer. This is really slow and the manipulation of the data is unbearable, so this is no solution at all! I'm thinking of moving to a supercomputing facility in my country and storing all the information in RAM, but I'm not sure how to implement an application to take advantage of it (I'm not a professional programmer, so any book/reference will help me a lot). An alternative solution I'm thinking of is to buy a dedicated server with lots of RAM, but I don't know for sure if that will solve the problem. So right now my ignorance doesn't let me choose the best way to proceed. What would you do if you were in this situation? I'm open to any idea. Thanks in advance!
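    For what it's worth, a ~32 GB 4D array of doubles can often be handled without slicing it by hand, by memory-mapping a single file on disk so the OS pages data in and out as needed. A minimal sketch with NumPy (the file name is made up, and N is kept small here; with N = 256 the file is roughly 34 GB):

        import numpy as np

        N = 64                  # use N = 256 for the full-size case (256**4 * 8 bytes)
        shape = (N, N, N, N)

        # Back the array with a file instead of RAM; touched pages are loaded on demand.
        data = np.memmap("field.dat", dtype=np.float64, mode="w+", shape=shape)

        data[0, 0] = np.random.rand(N, N)   # write one 2D slice with plain NumPy indexing
        block_mean = data[:8].mean()        # reduce over a chunk; only those pages are read
        data.flush()                        # push dirty pages back to the file
        print(block_mean)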


  • How to find/extract data from xml with jQuery

    - by darryl
    I'm trying to extract the StateLongName and StateShortName values from the XML below. I know there has to be a simple, elegant way to do this with jQuery.

        <NewDataSet>
          <Table>
            <StateLongName>Alabama</StateLongName>
            <StateShortName>AL</StateShortName>
          </Table>
          <Table>
            <StateLongName>Alaska</StateLongName>
            <StateShortName>AK</StateShortName>
          </Table>
          ...elements removed for brevity
        </NewDataSet>

    Here's what I've tried. I load the XML from above into a JavaScript variable named xml.

    Try #1:

        $(xml).find("TABLE").each(function() {
            var stateName = $(this).find("StateLongName").innerText;
            var stateCode = $(this).find("StateShortName").innerText;
        });

    Try #1 doesn't find anything and never goes inside to load the stateName and stateCode variables.

    Try #2:

        $(xml).find("StateLongName").each(function() {
            var stateName = $(this).find("StateLongName").innerText;
            var stateCode = $(this).find("StateShortName").innerText;
        });

    Try #2 does find matches; however, stateName and stateCode are left undefined.

    Try #3:

        $(xml).find("StateLongName").each(function() {
            var stateName = $($(xml).find('StateLongName').parent()[0].innerHTML)[1].data;
            var stateCode = $($(xml).find('StateLongName').parent()[0].innerHTML)[5].data;
        });

    Try #3 works, but there has to be a better way. Please enlighten me. Thanks for your time!


  • Question about Reporting and Data Warehousing Software bundled with SQL Server 2005

    - by anonymous user
    We currently use SQL Server 2005 Enterprise for our fairly large application, which has its roots in pre-SQL Server 7.0. The tables are normalized and designed mainly for the application. The developers for the most part have the legacy SQL Server mindset, using only the part of T-SQL that existed back in 7.0 and none of the new features bundled with 2005. We're currently trying to build on-demand reports using some crappy third-party software, and will eventually try to build a data warehouse using more of the same crappy third-party software (name removed to protect the guilty; don't ask, I will not tell). The rationale for this was that we didn't want to spend more money to buy this additional software from Microsoft (this was not my decision and I had no input, but it is my problem now). But from what I can tell, Enterprise includes all of these tools, or am I missing something? What comes bundled with SQL Server 2005 Enterprise as far as reporting and data warehousing go? Will we need to purchase anything else? Is there actually anything else that can be purchased from Microsoft in this regard?


  • Dynamic Control loading at wrong time?

    - by Telos
    This one is a little... odd. Basically I have a form I'm building using ASP.NET Dynamic Data, which is going to use several custom field templates. I've just added another field to the FormView, with its own custom template, and the form is loading that control twice for no apparent reason. Worse yet, the first time it loads the template the Row is not ready yet and I get the error message: {"Databinding methods such as Eval(), XPath(), and Bind() can only be used in the context of a databound control."} I'm accessing the Row variable in a LinqDataSource OnSelected event in order to get the child object. Now for the weird part: if I reorder the fields a little, the one causing the problem no longer gets loaded twice. Any thoughts?

    EDIT: I've noticed that Page_Load gets called on the first load (when Row throws an exception if you try to use it) but does NOT get called the second time around, if that helps any. Right now I'm managing it by just catching and ignoring the exception, but I'm still a little worried that things will break if I don't find the real cause.

    EDIT 2: I've traced the problem to using FindControl recursively to find other controls on the page. Apparently FindControl can cause the page lifecycle events (at least up to Page_Load) to fire, and this occurs before the page "should" be loading, so its Dynamic Data "stuff" isn't ready yet.


  • Access User Meta Data on User Registration in Wordpress

    - by Shadi Almosri
    I am attempting to carry out a few functions when a user registers on a WordPress site. I have created a module for this which carries out the following function:

        add_action( 'user_register', 'tml_new_user_registered' );
        function tml_new_user_registered( $user_id ) {
            //wp_set_auth_cookie( $user_id, false, is_ssl() );
            //wp_redirect( admin_url( 'profile.php' ) );
            $user_info = get_userdata($user_id);
            $subscription_value = get_user_meta( $user_id, "subscribe_to_newsletter", TRUE);
            if($subscription_value == "Yes") {
                //include("Subscriber.Add.php");
            }
            echo "<pre>: ";
            print_r($user_info);
            print_r($subscription_value);
            echo "</pre>";
            exit;
        }

    But it seems that I am not able to access any user meta data, as at the end of this stage none of it has been stored yet. Any ideas how I can execute a function once WordPress has completed the whole registration process, including adding the meta data into the relevant tables? I attempted to use this:

        add_filter('user_register ','tml_new_user_registered',99);

    but with no luck, unfortunately. Thanks in advance!


  • Technical choices in unmarshaling hash-consed data

    - by Pascal Cuoq
    There seems to be quite a bit of folklore knowledge floating about in restricted circles about the pitfalls of hash-consing combined with marshaling-unmarshaling of data. I am looking for citable references to these tidbits. For instance, someone once pointed me to the aterm library and mentioned that the authors had clearly thought about this and that the representation on disk was bottom-up (children of a node come before the node itself in the data stream). This is indeed the right way to do things when you need to re-share each node (with a possibly identical node already in memory). This re-sharing pass needs to be done bottom-up, so the unmarshaling itself might as well be too, so that it's possible to do everything in a single pass. I am in the process of describing difficulties encountered in our own context, and the solutions we found. I would appreciate any citable reference to the kind of aforementioned folklore knowledge. Some people have obviously encountered the problems before (the aterm library is only one example), but I didn't find anything in writing. Even the little piece of information I have about aterm is hearsay. I am not worried that it's unreliable (you can't make this up), but "personal communication" and "look how it's done in the source code" are considered poor form in citations. I have enough references on hash-consing alone. I am only interested in references where it interferes with other aspects of programming, such as marshaling or distribution.
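    To make the bottom-up idea concrete, here is a small illustrative sketch (not taken from aterm or any particular library) of unmarshaling a postorder stream of records while re-sharing each node against a hash-cons table, so that every node's children are already canonical by the time the node itself is interned:

        # Nodes are written bottom-up (postorder): each record is (label, n_children),
        # and its children are the n most recently decoded subterms.
        _table = {}          # hash-cons table: structural key -> canonical node

        def intern(label, children):
            # Children are already canonical, so their identities form a sound key.
            key = (label, tuple(id(c) for c in children))
            node = _table.get(key)
            if node is None:
                node = (label, tuple(children))
                _table[key] = node
            return node

        def unmarshal(records):
            stack = []
            for label, n_children in records:
                if n_children:
                    children = stack[-n_children:]
                    del stack[-n_children:]
                else:
                    children = []
                stack.append(intern(label, children))
            return stack.pop()

        # "f(g(x), g(x))" written postorder: the two g(x) subterms collapse to one object.
        term = unmarshal([("x", 0), ("g", 1), ("x", 0), ("g", 1), ("f", 2)])
        assert term[1][0] is term[1][1]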


  • No idea how to read a data file with python-excel

    - by Protoss Reed
    I am a student and don't have much experience with this kind of work, so here is the problem. I have this part of the code:

        import matplotlib.pyplot as plt
        from pylab import *
        import cmath

        def sf(prompt):
            """ """
            error_message = "Value must be integer and greater or equal than zero"
            while True:
                val = raw_input(prompt)
                try:
                    val = float(val)
                except ValueError:
                    print(error_message)
                    continue
                if val <= 0:
                    print(error_message)
                    continue
                return val

        def petrogen_elements():
            """Input and calculations of the main parameters for petrogen elements"""
            print "Please enter petrogen elements: \r"
            SiO2 = sf("SiO2: ")
            TiO2 = sf("TiO2: ")
            Al2O3 = sf("Al2O3: ")
            Na2O = sf("Na2O: ")
            K2O = sf("K2O: ")
            petro = [SiO2, TiO2, Al2O3]
            Sum = sum(petro)
            Alcal = Na2O + K2O
            TypeAlcal = Na2O / K2O
            Ka = (Na2O + K2O) / Al2O3
            print '-'*20, "\r Alcal: %s \r TypeAlcal: %s \r Ka: %s \r" % (Alcal, TypeAlcal, Ka)

        petrogen_elements()

    The problem is this: I have to load an Excel file and read all the data in it. After that, the program has to calculate values such as the alkalinity, the type of alkalinity, etc. The Excel file has only this structure:

            1       2      3       4      5
        1   name1   SiO2   Al2O3   Na2O   K2O
        2           32     12      0.21   0.1
        3   name2   SiO2   Al2O3   Na2O   K2O
        4           45     8       7.54   5
        5   name3   SiO2   Al2O3   Na2O   K2O
        6   ...     ...    ...     ...    ...

    The Excel file always has 5 columns and an unlimited number of rows. The user should have the choice of entering data by hand or importing an Excel file. I have done the first part, but a big part remains: I need to read the whole file and calculate the values. I would be very grateful for some advice.
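    A minimal sketch of reading that layout with the xlrd package from python-excel (the file name is made up; this assumes the sheet alternates a header row, name plus SiO2/Al2O3/Na2O/K2O labels, with a row of values, as in the table above):

        import xlrd

        def read_samples(path):
            book = xlrd.open_workbook(path)
            sheet = book.sheet_by_index(0)
            samples = []
            # Rows come in pairs: a header row with the sample name, then its values.
            for r in range(0, sheet.nrows - 1, 2):
                name = sheet.cell_value(r, 0)
                sio2, al2o3, na2o, k2o = (sheet.cell_value(r + 1, c) for c in range(1, 5))
                alcal = na2o + k2o
                type_alcal = na2o / k2o if k2o else None
                ka = (na2o + k2o) / al2o3 if al2o3 else None
                samples.append({"name": name, "Alcal": alcal,
                                "TypeAlcal": type_alcal, "Ka": ka})
            return samples

        for s in read_samples("samples.xls"):
            print(s)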


  • Binary Search Tree for specific intent

    - by Luís Guilherme
    We all know there are plenty of self-balancing binary search trees (BSTs), the most famous being the Red-Black and the AVL trees. It might be useful to take a look at AA-trees and scapegoat trees too. I want to do deletions, insertions and searches, like any other BST. However, it will be common to delete all values in a given range, or to delete whole subtrees. So:

    - I want to insert, search and remove values in O(log n) (balanced tree).
    - I would like to delete a subtree, keeping the whole tree balanced, in O(log n) (worst-case or amortized).
    - It might be useful to delete several values in a row before rebalancing the tree.
    - I will most often insert 2 values at once; however, this is not a rule (just a tip in case there is a tree data structure that takes this into account).

    Is there a variant of AVL or RB trees that helps me with this? Scapegoat trees look more like it, but they would also need some changes; can anyone with experience of them share some thoughts? More precisely, which balancing procedure and/or removal procedure would help keep these operations time-efficient?
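    As one concrete illustration of the range-deletion requirement (using a treap, not an AVL or Red-Black variant): any balanced tree with split and join operations makes "delete every key in [lo, hi]" two splits plus one join, in expected O(log n) for a treap. A rough Python sketch, offered only as an example of the split/join idea:

        import random

        class Node:
            __slots__ = ("key", "prio", "left", "right")
            def __init__(self, key):
                self.key, self.prio = key, random.random()
                self.left = self.right = None

        def merge(a, b):                       # every key in a < every key in b
            if a is None or b is None:
                return a or b
            if a.prio < b.prio:
                a.right = merge(a.right, b)
                return a
            b.left = merge(a, b.left)
            return b

        def split(t, key):                     # -> (keys < key, keys >= key)
            if t is None:
                return None, None
            if t.key < key:
                t.right, right = split(t.right, key)
                return t, right
            left, t.left = split(t.left, key)
            return left, t

        def insert(t, key):
            left, right = split(t, key)
            return merge(merge(left, Node(key)), right)

        def delete_range(t, lo, hi):           # drop every key in [lo, hi]
            left, rest = split(t, lo)
            _, right = split(rest, hi + 1)     # assumes integer keys; use a successor otherwise
            return merge(left, right)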


  • How to assign a class property as the display data member in DataGridView

    - by KoolKabin
    I am trying to display my data in a DataGridView. I created a class with different properties and used a list of it as the data source. That worked fine, but I got confused about how to do the same when we have a nested class. My classes are as follows:

        class Category
            property UIN as integer
            property Name as string
        end class

        class Item
            property UIN as integer
            property Name as string
            property MyCategory as Category
        end class

    My data list is as follows:

        dim myDataList as List(of Item) = new List(of Item)
        myDataList.Add(new Item(1,"item1",new Category(1,"cat1")))
        myDataList.Add(new Item(2,"item2",new Category(1,"cat1")))
        myDataList.Add(new Item(3,"item3",new Category(1,"cat1")))
        myDataList.Add(new Item(4,"item4",new Category(2,"cat2")))
        myDataList.Add(new Item(5,"item5",new Category(2,"cat2")))
        myDataList.Add(new Item(6,"item6",new Category(2,"cat2")))

    Now I bound the DataGridView control like this:

        DGVMain.AutoGenerateColumns = False
        DGVMain.ColumnCount = 3
        DGVMain.Columns(0).DataPropertyName = "UIN"
        DGVMain.Columns(0).HeaderText = "ID"
        DGVMain.Columns(1).DataPropertyName = "Name"
        DGVMain.Columns(1).HeaderText = "Name"
        DGVMain.Columns(2).DataPropertyName = ""   'here I want my category name
        DGVMain.Columns(2).HeaderText = "Category"
        DGVMain.DataSource = myDataList
        DGVMain.Refresh()

    I have tried using MyCategory.Name, but it didn't work. What can be done to get the expected result? Is there a better idea than this to accomplish the same task?


  • Importing data from a third-party data source (open architecture design)

    - by mare
    How would you design an application (classes, interfaces in a class library) in .NET when we have a fixed database design on our side and we need to support imports of data from third-party data sources, which will most likely be in XML? For instance, let us say we have a Products table in our DB with the columns

        Id, Title, Description, TaxLevel, Price

    and on the other side we have, for instance, Products:

        ProductId, ProdTitle, Text, BasicPrice, Quantity

    Currently I do it like this: convert the third-party XML to classes and XSDs and then deserialize its contents into strongly typed objects (what we get as a result of this process is classes like ThirdPartyProduct, ThirdPartyClassification, etc.). Then I have methods like this:

        InsertProduct(ThirdPartyProduct newproduct)

    I do not use interfaces at the moment, but I would like to. What I would like is to implement something like

        public class Contoso_ProductSynchronization : ProductSynchronization
            InsertProduct(ContosoProduct p)

    where ProductSynchronization will be an interface or abstract class. There will most likely be many implementations of ProductSynchronization. I cannot hardcode the types; classes like ContosoProduct and NorthwindProduct might be created from the third-party XMLs (so preferably I would continue to use deserialization). Hopefully someone will understand what I'm trying to explain here. Just imagine you are the seller, you have numerous providers, and each one uses its own proprietary XML format. I don't mind the development, which will of course be needed every time a new format appears, because it will only require 10-20 methods to be implemented; I just want the architecture to be open and to support that.


  • Predicate crashing iPhone App!

    - by DVG
    To preface, this is a follow-up to an inquiry made a few days ago: http://stackoverflow.com/questions/2981803/iphone-app-crashes-when-merging-managed-object-contexts

    Short version: EXC_BAD_ACCESS is crashing my app, and zombie mode revealed the culprit to be my predicate, embedded within the fetch request embedded in my fetched results controller. How does an object within an object get released without an explicit command to do so?

    Long version: The application structure is Platforms View Controller - Games View Controller (predicated upon platform selection) - Add Game View Controller. When a row gets clicked on the Platforms view, it sets an instance variable in the Games view for that platform, then the Games fetched results controller builds a fetch request in the normal way:

        - (NSFetchedResultsController *)fetchedResultsController {
            if (fetchedResultsController != nil) {
                return fetchedResultsController;
            }
            // build the fetch request for Games
            NSFetchRequest *request = [[NSFetchRequest alloc] init];
            NSEntityDescription *entity = [NSEntityDescription entityForName:@"Game" inManagedObjectContext:context];
            [request setEntity:entity];
            // predicate
            NSPredicate *predicate = [NSPredicate predicateWithFormat:@"platform == %@", selectedPlatform];
            [request setPredicate:predicate];
            // sort based on name
            NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES];
            NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
            [request setSortDescriptors:sortDescriptors];
            // fetch and build fetched results controller
            NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:request managedObjectContext:context sectionNameKeyPath:nil cacheName:@"Root"];
            aFetchedResultsController.delegate = self;
            self.fetchedResultsController = aFetchedResultsController;
            [sortDescriptor release];
            [sortDescriptors release];
            [predicate release];
            [request release];
            [aFetchedResultsController release];
            return fetchedResultsController;
        }

    At the end of this method, the fetchedResultsController's _fetch_request - _predicate member is set to an NSComparisonPredicate object. All is well in the world. By the time - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section gets called, the _predicate is now a zombie, which will eventually crash the application when the table attempts to update itself. I'm more or less flummoxed. I'm not releasing the fetched results controller or any of its parts, and the only part getting dealloc'd is the predicate. Any ideas?


  • Hide the last children of an XML data provider (not to be displayed in mx:Tree)

    - by Andree
    I have hierarchical XML data, and I want to display it as a tree. The XML data is something like this:

        <services>
          <service name="TestService" endpoint="">
            <method name="sayHi">
              <parameter name="firstName" />
              <parameter name="lastName" />
            </method>
            <method name="sayGoodbye">
              <parameter name="firstName" />
              <parameter name="lastName" />
            </method>
          </service>
          <service name="AnotherTestService">
            <method name="getName" />
            <method name="setName">
              <parameter name="name" />
            </method>
          </service>
        </services>

    However, I don't want to display the last elements (the parameters) in the tree because, well, they are not meant to be displayed, but they still have to be there. Any advice on how I can achieve that? I am thinking of either extending DefaultDataDescriptor or creating a class which implements ITreeDataDescriptor. Which approach will be suitable in this case?


  • Submit data to a datatable from an MVC View

    - by user54197
    I have a view that I would like to populate with data when the Next button is clicked. There are three views, each of which should send its data on every Next click. How do I do this? The code below is just a mock-up, but it should give an idea of what I am looking for.

    Page 1:

        <table>
          <tr>
            <td><b>Name:</b></td>
            <td colspan="2"><input id="txtName" type="text" /></td>
          </tr>
        </table>
        <input type="submit" value="Next" />

    Page 2:

        <table>
          <tr>
            <td><b>Address:</b></td>
            <td colspan="2"><input id="txtAddress" type="text" /></td>
          </tr>
        </table>
        <input type="submit" value="Next" />

    Page 3:

        <table>
          <tr>
            <td><b>Phone:</b></td>
            <td colspan="2"><input id="txtPhone" type="text" /></td>
          </tr>
        </table>
        <input type="submit" value="Next" />


  • Copying a subset of data to an empty database with the same schema

    - by user193655
    I would like to export part of a database full of data to an empty database. Both databases have the same schema, and I want to maintain referential integrity. To simplify, my case is like this:

        MainTable has the following fields:
            1) MainID integer, PK
            2) Description varchar(50)
            3) ForeignKey integer, FK to MainID of SecondaryTable

        SecondaryTable has the following fields:
            4) MainID integer, PK (referenced by (3))
            5) AnotherDescription varchar(50)

    The goal I'm trying to accomplish is "export all records from MainTable using a WHERE condition", for example all records where MainID < 100. To do it manually, I should first export all the data from SecondaryTable contained in this select:

        select * from SecondaryTable ST outer join PrimaryTable PT on ST.MainID = PT.MainID

    and then export the needed records from MainTable:

        select * from MainTable where MainID < 100

    This is manual, OK. Of course my real case is much, much more complex: I have 200+ tables, so doing it manually is painful/impossible, and I have many cascading FKs. Is there a way to force the copy of the main table only while "enforcing referential integrity", so that my query is something like:

        select * from MainTable where MainID < 100 WITH "COPYING ALL FK sources"

    In this case field (5) would also be copied. Is there a syntax or a tool to do this? Table per table, I'd like to insert conditions (for example, MainID < 100 applies only to MainTable, but I have other tables too).


  • Visual Studio + Database Edition + CDC = Deploy Fail

    - by Ben
    I've got a database using change data capture (CDC) that is created from a Visual Studio database project (GDR2). My problem is that I have a stored procedure that analyzes the CDC information and then returns data. How is that a problem, you ask? Well, the order of operations is as follows:

        1. Pre-deployment script
        2. Tables
        3. Indexes, keys, etc.
        4. Procedures
        5. Post-deployment script

    Inside the post-deployment script is where I enable CDC. Herein lies the problem: the procedure that acts on the CDC tables is bombing because they don't exist yet! I've tried to put the call to sys.sp_cdc_enable_table in the script that creates the table, but it doesn't like that:

        Error 102 TSD03070: This statement is not recognized in this context. C:...\Schema Objects\Schemas\dbo\Tables\Foo.table.sql 20 1 Foo

    Is there a better/built-in way to enable CDC such that its references are available when the stored procedures are created? Is there a way to run a script after tables are created but before other objects are created? How about a way to create the procedure, dependencies be damned? Or maybe I'm just doing things that shouldn't be done?! Now, I have a workaround:

        1. Comment out the sproc body
        2. Deploy (CDC is created)
        3. Uncomment the sproc
        4. Deploy

    Everything is great until the next time I update a CDC-tracked table. Then I need to comment out the 'offending' procedure again. Thanks for reading my question and thanks for your help!


  • OPTICS clustering algorithm: how to get the best epsilon

    - by Marco Galassi
    I am implementing a project which needs to cluster geographical points. The OPTICS algorithm seems to be a very nice solution. It needs just two input parameters (MinPts and epsilon), which are, respectively, the minimum number of points needed to consider them a cluster, and the distance value used to decide whether two points can be placed in the same cluster. My problem is that, due to the extreme variety of the points, I can't set a fixed epsilon. Just look at the image below: the same point structure at a different scale gives very different results. Suppose I set MinPts = 2 and epsilon = 1 km. On the left, the algorithm would create two clusters (red and blue), but on the right it would create one single cluster containing all of the points (red), while I would like to obtain two clusters on the right as well. So my question is: is there any way to calculate the epsilon value dynamically to get this result? Thank you very much, and excuse my poor English. Marco
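    For reference, one way around a fixed epsilon is the xi cluster-extraction method on the OPTICS reachability plot, which looks at relative changes in density rather than an absolute distance, so the same structure at different scales tends to come out the same. A small illustrative sketch using scikit-learn's OPTICS implementation (the coordinates are made up, and the exact labels depend on the data and parameters):

        import numpy as np
        from sklearn.cluster import OPTICS

        # Two tight groups of 2D points, well separated from each other.
        group_a = np.random.RandomState(0).normal(loc=(0, 0), scale=0.1, size=(20, 2))
        group_b = np.random.RandomState(1).normal(loc=(5, 5), scale=0.1, size=(20, 2))
        points = np.vstack([group_a, group_b])

        clustering = OPTICS(min_samples=2, xi=0.05)   # no fixed eps needed
        print(clustering.fit_predict(points))

        # The same data scaled by 10 should still split into the same two clusters,
        # since xi works on relative reachability, not an absolute distance.
        print(OPTICS(min_samples=2, xi=0.05).fit_predict(points * 10))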

