Search Results

Search found 19690 results on 788 pages for 'result partitioning'.


  • SQL INSTR() using CSV. Need exact match rather than part

    - by Alastair Pitts
    This is a follow-up issue relating to the answer for http://stackoverflow.com/questions/2445029/sql-placeholder-in-where-in-issue-inserted-strings-fail

    Quick background: we have a SQL query that uses a placeholder value to accept a string representing a unique tag/id. Usually this is only a single tag, but we needed the ability to pass a CSV string of multiple tags and return a combined result. In the answer we received from the vendor, they suggested the use of the INSTR function:

        select * from pitotal
        where tag IN (SELECT tag from pipoint WHERE INSTR(?, tag) <> 0)
          and time between 'y' and 't'

    This works perfectly well 99% of the time. The issue is when a tag is also a substring of one of the items in the CSV string. E.g. the placeholder value is 'northdom,southdom,eastdom,westdom' and possible tags include north or northdom. Because north is a substring of northdom, both tags are returned instead of just northdom, which is what we actually want. I'm not strong on SQL, so I couldn't work out how to make the match exact or how to split the CSV string, so help would be appreciated. Is there a way to split the CSV string or make it look for an exact match?
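
    One common workaround is to wrap both the CSV placeholder and the tag in the delimiter before calling INSTR, so only whole items can match. A hedged sketch — it assumes the vendor's SQL dialect supports || for string concatenation; substitute + or CONCAT() if it does not:

        select *
        from pitotal
        where tag IN (
            SELECT tag
            FROM pipoint
            -- ',northdom,southdom,eastdom,westdom,' contains ',north,' only if
            -- north is a complete item, so 'north' no longer matches 'northdom'
            WHERE INSTR(',' || ? || ',', ',' || tag || ',') <> 0
        )
        and time between 'y' and 't'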

    Read the article

  • Does the COM server have to call SysFreeString() for an [out] parameter?

    - by sharptooth
    We have the following interface:

        [object, uuid("uuidhere"), dual]
        interface IInterface : IDispatch
        {
            [id(1), propget] HRESULT CoolProperty([out, retval] BSTR* result);
        }

    Now there's a minor problem. On one hand the parameter is [out], so any value can be passed as input; the parameter only becomes valid upon successful return. On the other hand, there's an MSDN article, linked to from many pages, which basically says (in its last paragraph) that if any function is passed a BSTR* it must free the string before assigning a new one. That's horrifying. If that article is right, it means all callers must pass valid BSTRs (perhaps null BSTRs), otherwise the BSTR passed in can be leaked; and if the caller passes a random value and the callee tries to call SysFreeString() on it, that is undefined behavior, so the convention is critical. Then what's the point of the [out] attribute? What would be the difference between [in, out] and [out] in this situation? Is that article right? Do I need to free the passed BSTR [out] parameter before assigning a new one?
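
    For reference, a minimal C++ sketch of how such a propget is commonly written (the class name and the returned string are made up for illustration): with a pure [out] parameter the incoming value is treated as garbage — it is simply overwritten, never freed — whereas freeing the old string first is the [in, out] convention the MSDN passage describes.

        STDMETHODIMP CMyObject::get_CoolProperty(BSTR* result)
        {
            if (result == NULL)
                return E_POINTER;

            // [out]: *result is undefined on entry, so do NOT SysFreeString() it.
            // The caller takes ownership of the string allocated here and frees it.
            *result = SysAllocString(L"cool");
            return (*result != NULL) ? S_OK : E_OUTOFMEMORY;
        }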

    Read the article

  • Fulltext search on many tables

    - by Rob
    I have three tables, all of which have a column with a fulltext index. The user will enter search terms into a single text box, and then all three tables will be searched. This is better explained with an example:

        documents
            doc_id
            name           FULLTEXT

        table2
            id
            doc_id
            a_field        FULLTEXT

        table3
            id
            doc_id
            another_field  FULLTEXT

    (I realise this looks stupid, but that's because I've removed all the other fields and tables to simplify it.)

    So basically I want to do a fulltext search on name, a_field and another_field, and then show the results as a list of documents, preferably with what caused each document to be found, e.g. if another_field matched, I would display what another_field is.

    I began working on a system whereby three fulltext search queries are performed and the results inserted into a table with a structure like:

        search_results
            table_name
            row_id
            score

    (This could later be made to cache results for a few days with, e.g., a hash of the search terms.)

    This idea has two problems. The first is that the same document can appear in the search results up to three times with different scores. Instead, if the search term is matched in two tables, there should be one result with a higher score. The second is that parsing the results is difficult: I want to display a list of documents, but I don't immediately know the doc_id without a join of some kind, and the table to join to depends on the table_name column; I'm not sure how to accomplish that.

    Wanting to search multiple related tables like this must be a common thing, so I guess what I'm asking is: am I approaching this in the right way? Can someone tell me the best way of doing it, please?
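
    A hedged sketch (assuming MySQL's MATCH ... AGAINST, since the columns carry FULLTEXT indexes; table and column names are the simplified ones above) that collapses the three searches into one ranked list of documents, summing the scores when several tables match the same document:

        SELECT doc_id, SUM(score) AS total_score
        FROM (
            SELECT doc_id, MATCH(name) AGAINST ('search terms') AS score
            FROM documents
            WHERE MATCH(name) AGAINST ('search terms')
          UNION ALL
            SELECT doc_id, MATCH(a_field) AGAINST ('search terms') AS score
            FROM table2
            WHERE MATCH(a_field) AGAINST ('search terms')
          UNION ALL
            SELECT doc_id, MATCH(another_field) AGAINST ('search terms') AS score
            FROM table3
            WHERE MATCH(another_field) AGAINST ('search terms')
        ) AS hits
        GROUP BY doc_id
        ORDER BY total_score DESC;

    Knowing which field caused the hit can be recovered by adding a literal label column (e.g. 'name', 'a_field', 'another_field') to each branch of the UNION.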

    Read the article

  • Best practice: How do I implement a list that can be rendered both server-side and client-side?

    - by André Pena
    Technologies involved: ASP.NET Web Forms and JavaScript (jQuery, for instance).

    Case: to make it clearer, let's take the Stack Overflow authors list as an example. This list can be manipulated client-side: I can search, page and so forth. So obviously we would need to call jQuery.ajax to retrieve the HTML of each page for a given search. Alright. Now this leaves me with the first question: what is the best way to render the response for the jQuery.ajax call at server-side? I can't use templates, I suppose, so the most obvious solution I can think of is to create the HTML tags as server controls and render them as the result of an ASHX request. Is this the best approach?

    Nice. That solved, we have yet another problem: when the user first enters the authors list, the first page should already come from the server completely rendered, right? Of course we could render the first page with an ajax call as well, but I don't think that's better. This time I CAN use templates to render the list, but that template couldn't be reused in case 1. What do I do?

    Now the final question: we now have two rendering strategies, 1) client and 2) server. How do I reuse code between the two renderings? What are the best practices for solving these problems?

    Read the article

  • Hg: How to do a rebase like git's rebase

    - by jpswain09
    Hey guys,

    In Git I can do this:

        1. Start working on a new feature:
        $ git co -b newfeature-123   # (a local feature development branch)
        # do a few commits (M, N, O)

        master          A---B---C
                             \
        newfeature-123        M---N---O

        2. Pull new changes from upstream master:
        $ git pull
        # (master updated with fast-forward commits)

        master          A---B---C---D---E---F
                             \
        newfeature-123        M---N---O

        3. Rebase off master so that my new feature can be developed against the latest upstream changes:
        # (from newfeature-123)
        $ git rebase master

        master          A---B---C---D---E---F
                                             \
        newfeature-123                        M---N---O

    I want to know how to do the same thing in Mercurial, and I've scoured the web for an answer, but the best I could find was this: http://www.selenic.com/pipermail/mercurial/2007-June/013393.html

    That link provides two examples. I'll admit that the first (replacing the revisions from the example with those from my own example):

        hg up -C F
        hg branch -f newfeature-123
        hg transplant -a -b newfeature-123

    is not too bad, except that it leaves behind the pre-rebase M-N-O as an unmerged head and creates three new commits M', N', O' that represent them branching off the updated mainline. Basically the problem is that I end up with this:

        master          A---B---C---D---E---F
                             \                \
        newfeature-123        \                M'---N'---O'
                               \
        newfeature-123          M---N---O

    This is not good because it leaves behind local, unwanted commits that should be dropped.

    The other option from the same link is:

        hg qimport -r M:O
        hg qpop -a
        hg up F
        hg branch newfeature-123
        hg qpush -a
        hg qdel -r qbase:qtip

    and this does result in the desired graph:

        master          A---B---C---D---E---F
                                             \
        newfeature-123                        M---N---O

    but these commands (all six of them!) seem so much more complicated than

        $ git rebase master

    I want to know if this is the only equivalent in Hg or if there is some other way available that is as simple as Git's. Thanks!!

    Jamie
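
    For what it's worth, a hedged sketch of the short answer that later became standard: Mercurial bundles a rebase extension (from around version 1.1) which, once enabled, is almost a word-for-word equivalent of git rebase master. The commands below assume the mainline is the default branch and that you are on the newfeature-123 head:

        # ~/.hgrc -- switch the bundled extension on
        [extensions]
        rebase =

        # after hg pull has brought in D, E and F:
        $ hg rebase -d default      # or: hg rebase -d <revision of F>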

    Read the article

  • Why does my data not pass into my view correctly?

    - by dmanexe
    I have a model, view and controller not interacting correctly, and I do not know where the error lies.

    First, the controller. According to the CodeIgniter documentation, I'm passing variables correctly here:

        function view()
        {
            $html_head = array(
                'title' => 'Estimate Management'
            );
            $estimates = $this->Estimatemodel->get_estimates();
            $this->load->view('html_head', $html_head);
            $this->load->view('estimates/view', $estimates);
            $this->load->view('html_foot');
        }

    The model (short and sweet):

        function get_estimates()
        {
            $query = $this->db->get('estimates')->result();
            return $query;
        }

    And finally the view, just printing the data for initial development purposes:

        <? print_r($estimates); ?>

    Now $estimates is undefined when I navigate to this page. However, I know that $query is defined, because it works when I run the model code directly in the view.
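
    A hedged guess at the cause, based on how CodeIgniter's view loader works: the second argument to $this->load->view() needs to be an associative array (or object) whose keys become variable names inside the view, so passing the result set directly never creates an $estimates variable there. A minimal sketch of the adjusted controller (the $data wrapper is the only change):

        function view()
        {
            $html_head = array(
                'title' => 'Estimate Management'
            );

            // Each key of $data becomes a variable in the view,
            // so $estimates only exists there if it is a key here.
            $data['estimates'] = $this->Estimatemodel->get_estimates();

            $this->load->view('html_head', $html_head);
            $this->load->view('estimates/view', $data);
            $this->load->view('html_foot');
        }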

    Read the article

  • Trying to return data asynchronously using jQuery AND jSon in MVC 2.0

    - by Calibre2010
    Hi, I am trying to use jQuery's getJSON method to return server-side data to my browser. The URL that getJSON points to does get reached, but the result never gets returned to the browser, for some odd reason. I wasn't sure if this is because I am using MVC 2.0 and jQuery 1.4.1, and whether they behave differently from MVC 1.0 and jQuery 1.3.2.

    These are the code sections.

    Controller:

        public JsonResult StringReturn()
        {
            NameDTO myName = new NameDTO();
            myName.nameID = 1;
            myName.name = "James";
            myName.nameDescription = "Jmaes";
            return Json(myName);
        }

    View with jQuery:

        <script type="text/javascript">
            $(document).ready(function () {
                $("#myButton").click(function () {
                    $.getJSON("Home/StringReturn/", null, function (data) {
                        alert(data.name);
                        $("#show").append($("<div>" + data.name + "</div>"));
                    });
                });
            });
        </script>

    HTML:

        <input type="button" value="clickMe" id="myButton"/>
        <div id="show">d</div>
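
    A hedged sketch of the most likely culprit: ASP.NET MVC 2 refuses to serve JSON to HTTP GET requests unless the action explicitly opts in, which produces exactly this symptom (the action runs, the browser receives nothing) after moving from MVC 1.0:

        public JsonResult StringReturn()
        {
            NameDTO myName = new NameDTO();
            myName.nameID = 1;
            myName.name = "James";
            myName.nameDescription = "Jmaes";

            // MVC 2 blocks JSON on GET by default (JSON-hijacking protection);
            // AllowGet opts this action back in so $.getJSON can receive it.
            return Json(myName, JsonRequestBehavior.AllowGet);
        }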

    Read the article

  • Problem with dropdownbox length

    - by vikitor
    Hello, I'm creating a JavaScript method that populates lists depending on a radio button selected previously. Because it populates with either animals or plants, my problem comes when I have to populate the list after it has already been populated. The plants dropdown list has 88 elements and the animals one has 888; when I try to go back from animals to plants, I get some of the animals. I know that my controller method is working properly because it returns the values I select, so the problem is in the JavaScript method. Here is the code:

        if (selector == "sOrder")
            alert(document.getElementById(selector).options.length);

        for (i = 0; i < document.getElementById(selector).options.length; i++) {
            document.getElementById(selector).remove(i);
        }

        if (selector == "sOrder")
            alert(document.getElementById(selector).options.length);

        document.getElementById(selector).options[0] = new Option("-select-", "0", true, true);
        for (i = 1; i <= data.length; i++) {
            document.getElementById(selector).options[i] = new Option(data[i - 1].taxName, data[i - 1].taxRecID);
        }

    Here is the strange thing: when I enter the method I try to erase all the elements of the dropdown list in order to repopulate it afterwards. As sOrder is the same selector I had previously selected, I get the elements; in the first alert I get the proper result, 888, but in the second alert I should get 0, right? It shows 444. So when I populate it again, it just overwrites the first 88 plants and then keeps showing animals up to 444. What am I doing wrong?

    Thank you all in advance, Victor
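
    A hedged explanation plus sketch: remove(i) re-indexes the options collection on every call, so a forward loop that also re-reads options.length skips every second entry and stops halfway — which is exactly how 888 options become 444. Clearing the collection's length, or looping backwards, empties it completely:

        var list = document.getElementById(selector);

        // Option 1: the options collection is writable, so truncating
        // its length removes every entry in one go.
        list.options.length = 0;

        // Option 2: loop backwards so removals never shift the indexes
        // of entries that have not been visited yet.
        // for (var i = list.options.length - 1; i >= 0; i--) {
        //     list.remove(i);
        // }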

    Read the article

  • PyGTK/GIO: monitor directory for changes recursively

    - by detly
    Take the following demo code (from the GIO answer to this question), which uses a GIO FileMonitor to monitor a directory for changes:

        import gio

        def directory_changed(monitor, file1, file2, evt_type):
            print "Changed:", file1, file2, evt_type

        gfile = gio.File(".")
        monitor = gfile.monitor_directory(gio.FILE_MONITOR_NONE, None)
        monitor.connect("changed", directory_changed)

        import glib
        ml = glib.MainLoop()
        ml.run()

    After running this code, I can then create and modify child nodes and be notified of the changes. However, this only works for immediate children (I am aware that the docs don't say otherwise). The last of the following shell commands will not result in a notification:

        touch one
        mkdir two
        touch two/three

    Is there an easy way to make it recursive? I'd rather not manually code something that looks for directory creation and adds a monitor, removing them on deletion, etc.

    The intended use is a VCS file browser extension, to be able to cache the statuses of files in a working copy and update them individually on changes. So there might be anywhere from tens to thousands (or more) of directories to monitor. I'd like to just find the root of the working copy and add the file monitor there.

    I know about pyinotify, but I'm avoiding it so that this works under non-Linux kernels such as FreeBSD or... others. As far as I'm aware, the GIO FileMonitor uses inotify underneath where available, and I can understand not emphasising the implementation to maintain some degree of abstraction, but it suggested to me that it should be possible.

    (In case it matters, I originally posted this on the PyGTK mailing list.)
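
    GIO has no recursive-monitor flag in this API generation, so here is a hedged sketch of the manual route the question hopes to avoid — walk the tree once and attach one monitor per directory (directories created later would still need a monitor added from inside the callback):

        import os
        import gio

        def monitor_tree(root, callback):
            """Attach a GIO directory monitor to root and every directory below it.

            Minimal sketch: deletions and newly created subdirectories are not
            handled yet; the returned list just keeps the monitors alive.
            """
            monitors = []
            for dirpath, dirnames, filenames in os.walk(root):
                gfile = gio.File(dirpath)
                monitor = gfile.monitor_directory(gio.FILE_MONITOR_NONE, None)
                monitor.connect("changed", callback)
                monitors.append(monitor)
            return monitors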

    Read the article

  • c# Most efficient way to combine two objects

    - by Dested
    I have two objects, each of which can be represented as an int, float, bool, or string. I need to perform an addition on these two objects with the same result that C# itself would produce. For instance, 1 + "Foo" would equal the string "1Foo", 2 + 2.5 would equal the float 4.5, and 3 + 3 would equal the int 6. Currently I am using the code below, but it seems like incredible overkill. Can anyone simplify this or point me to some way to do it efficiently?

        private object Combine(object o, object o1)
        {
            float left = 0; float right = 0;
            bool isInt = false;
            string l = null; string r = null;

            if (o is int) { left = (int)o; isInt = true; }
            else if (o is float) { left = (float)o; }
            else if (o is bool) { l = o.ToString(); }
            else { l = (string)o; }

            if (o1 is int) { right = (int)o1; }
            else if (o is float) { right = (float)o1; isInt = false; }
            else if (o1 is bool) { r = o1.ToString(); isInt = false; }
            else { r = (string)o1; isInt = false; }

            object rr;
            if (l == null)
            {
                if (r == null) { rr = left + right; }
                else { rr = left + r; }
            }
            else
            {
                if (r == null) { rr = l + right; }
                else { rr = l + r; }
            }

            if (isInt) { return Convert.ToInt32(rr); }
            return rr;
        }
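
    A hedged alternative, assuming C# 4.0 / .NET 4 is available: the runtime binder behind dynamic applies the same overload resolution the compiler would, so the whole type ladder collapses to a couple of lines. Booleans are normalised to strings first, because C# has no + operator for two bools (the original code concatenates their text in that case too):

        private static object Combine(object o, object o1)
        {
            if (o is bool) o = o.ToString();
            if (o1 is bool) o1 = o1.ToString();

            // 1 + "Foo" -> "1Foo", 2 + 2.5f -> 4.5f, 3 + 3 -> 6
            return (dynamic)o + (dynamic)o1;
        }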

    Read the article

  • My Zend Framework 'quoting' mess.

    - by tharkun
    I've got a probably very simple issue to which I can't find a satisfactory (subjectively speaking) answer in the Zend Framework manual or elsewhere. There are so many ways to hand my PHP variables over to my SQL queries that I've lost the overview, and probably I lack some understanding about quoting in general.

    Prepared statements:

        $sql = "SELECT this, that FROM table WHERE id = ? AND restriction = ?";
        $stmt = $this->_db->query($sql, array($myId, $myValue));
        $result = $stmt->fetchAll();

    I understand that with this solution I don't need to quote anything because the db handles this for me.

    Querying Zend_Db_Table and _Row objects over the API:

        $users = new Users();

        a) $users->fetchRow('userID = ' . $userID);
        b) $users->fetchRow('userID = ' . $users->getAdapter()->quote($userID, 'INTEGER'));
        c) $users->fetchRow('userID = ?', $userID);
        d) $users->fetchRow('userID = ?', $users->getAdapter()->quote($userID, 'INTEGER'));

    Questions: I understand that a) is not OK because it's not quoted at all. But what about the other versions — which is best? Is c) treated like a prepared statement and automatically quoted, or do I need to use d) when I use the ? placeholder?
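
    For comparison, a hedged sketch of one form that is unambiguously quoted in Zend Framework 1.x: the table's select() object, whose where() treats ? as a placeholder and pushes the value through the adapter's quoting (internally via quoteInto):

        $users = new Users();

        // where('userID = ?', $userID) quotes $userID for you,
        // so no manual getAdapter()->quote() call is needed.
        $row = $users->fetchRow(
            $users->select()->where('userID = ?', $userID)
        );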

    Read the article

  • SQL Query - group by more than one column, but distinct

    - by Ranhiru
    I have a bidding table, as follows:

        SellID  INT FOREIGN KEY REFERENCES SellItem(SellID),
        CusID   INT FOREIGN KEY REFERENCES Customer(CusID),
        Amount  FLOAT NOT NULL,
        BidTime DATETIME DEFAULT getdate()

    Now, in my website I need to show the user the current bids: only the highest bid, but without repeating the same user.

        SELECT CusID, Max(Amount)
        FROM Bid
        WHERE SellID = 10
        GROUP BY CusID
        ORDER BY Max(Amount) DESC

    This is the best I have achieved so far. It gives the CusID of each user with their maximum bid, ordered by amount descending. But I need to get the BidTime for each result as well. When I try to put the BidTime into the query:

        SELECT CusID, Max(Amount), BidTime
        FROM Bid
        WHERE SellID = 10
        GROUP BY CusID
        ORDER BY Max(Amount) DESC

    I am told that "Column 'Bid.BidTime' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause." Thus I tried:

        SELECT CusID, Max(Amount), BidTime
        FROM Bid
        WHERE SellID = 10
        GROUP BY CusID, BidTime
        ORDER BY Max(Amount) DESC

    But this returns all the rows — no distinction. Any suggestions on solving this issue?
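
    The error message quoted is SQL Server's, so a hedged sketch using a ranking function (SQL Server 2005 or later): number the bids per customer, keep only each customer's top-ranked row, and BidTime comes along without having to be grouped or aggregated.

        SELECT CusID, Amount, BidTime
        FROM (
            SELECT CusID, Amount, BidTime,
                   ROW_NUMBER() OVER (PARTITION BY CusID
                                      ORDER BY Amount DESC, BidTime ASC) AS rn
            FROM Bid
            WHERE SellID = 10
        ) AS ranked
        WHERE rn = 1          -- one row per customer: their highest bid
        ORDER BY Amount DESC;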

    Read the article

  • HashMap key problems

    - by Peterdk
    I'm profiling some old Java code and it appears that my caching of values using a static HashMap and an access method does not work.

    Caching code (a bit abstracted):

        static HashMap<Key, Value> cache = new HashMap<Key, Value>();

        public static Value getValue(Key key) {
            System.out.println("cache size=" + cache.size());
            if (cache.containsKey(key)) {
                System.out.println("cache hit");
                return cache.get(key);
            } else {
                System.out.println("no cache hit");
                Value value = calcValue();
                cache.put(key, value);
                return value;
            }
        }

    Profiling code:

        for (int i = 0; i < 100; i++) {
            getValue(new Key());
        }

    Result output:

        cache size=0
        no cache hit
        (..)
        cache size=99
        no cache hit

    It looked like a standard error in Key's hashing or equals code. However:

        new Key().hashcode == new Key().hashcode   // TRUE
        new Key().equals(new Key())                // TRUE

    What's especially weird is that cache.put(key, value) just adds another entry to the HashMap instead of replacing the current one. So I don't really get what's going on here. Am I doing something wrong?
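
    A hedged guess, prompted by the lower-case "hashcode" in the checks above: HashMap only ever calls the exact java.lang.Object signatures, so a method named hashcode() or an overload equals(Key) compiles cleanly and passes manual checks like these, yet is never used by the map — which produces exactly this "every lookup misses, every put adds a new entry" behaviour. A minimal sketch of a Key the map would treat as equal:

        static class Key {
            private final int id;

            Key(int id) { this.id = id; }

            @Override
            public boolean equals(Object other) {   // must be equals(Object), not equals(Key)
                if (!(other instanceof Key)) return false;
                return this.id == ((Key) other).id;
            }

            @Override
            public int hashCode() {                 // capital C; @Override catches misspellings
                return id;
            }
        }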

    Read the article

  • xml declaration not being omitted from page

    - by Mark Schultheiss
    I have an XSLT transform I am using to process an XML file, inserting the output into the body of my aspx page. Reference the following for background information: background on xml/xslt.

    I have the following in my XSLT file:

        <?xml version="1.0" encoding="utf-8"?>
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:msxsl="urn:schemas-microsoft-com:xslt"
            exclude-result-prefixes="msxsl"
            xmlns:myCustomStrings="urn:customStrings">
          <xsl:output method="xml" version="2.0" media-type="text/html"
              omit-xml-declaration="yes" indent="yes" />
        ...unrelated stuff left out here

    Here is the relevant output:

        <div id="example" />
        <?xml version="1.0" encoding="utf-8"?><div xmlns:myCustomStrings="urn:customStrings"><div id="imFormBody" class="imFormBody">

    My question relates to the output, specifically the <?xml version="1.0" encoding="utf-8"?> which gets included anyway. Is the issue related to the custom method I have used? If so, I don't really see the need for the XML declaration, as the namespace is already on the div tag. Is there a way to ensure this extra stuff gets left out, as I asked it to be?
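
    One hedged possibility, common when the transform runs in ASP.NET code-behind: if the output is written through an XmlWriter you create yourself, that writer's XmlWriterSettings override the stylesheet's <xsl:output>, so omit-xml-declaration="yes" is silently ignored. A minimal C# sketch (file names are placeholders) that sets the equivalent flags on the writer instead:

        using System.Xml;
        using System.Xml.Xsl;

        var xslt = new XslCompiledTransform();
        xslt.Load("transform.xslt");

        var settings = new XmlWriterSettings
        {
            OmitXmlDeclaration = true,                     // what <xsl:output> asked for
            ConformanceLevel = ConformanceLevel.Fragment   // the output is a page fragment
        };

        using (var writer = XmlWriter.Create("output.html", settings))
        {
            xslt.Transform("data.xml", writer);
        }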

    Read the article

  • When can a == b be false and a.Equals(b) true?

    - by alastairs
    I ran into this situation today. I have an object which I'm testing for equality; the Create() method returns a subclass implementation of MyObject.

        MyObject a = MyObject.Create();
        MyObject b = MyObject.Create();

        a == b;       // is false
        a.Equals(b);  // is true

    Note I have also overridden Equals() in the subclass implementation, which does a very basic check of whether the passed-in object is non-null and of the subclass's type. If both those conditions are met, the objects are deemed equal.

    The other slightly odd thing is that my unit test suite does some tests similar to

        Assert.AreEqual(MyObject.Create(), MyObject.Create()); // Green bar

    and the expected result is observed. Therefore I guess that NUnit uses a.Equals(b) under the covers, rather than a == b as I had assumed.

    Side note: I program in a mixture of .NET and Java, so I might be mixing up my expectations/assumptions here. I thought, however, that a == b behaved more consistently in .NET than it does in Java, where you often have to use equals() to test equality.
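
    A short sketch of the mechanics (the class below is hypothetical, just to contrast the two paths): for reference types, == compiles to a reference comparison unless the static type declares an operator== overload, whereas Equals is a virtual call dispatched to whatever override the runtime type provides. NUnit's Assert.AreEqual does indeed go through Equals.

        class Thing
        {
            // Overriding Equals changes what a.Equals(b) returns...
            public override bool Equals(object obj) { return obj is Thing; }
            public override int GetHashCode() { return 0; }

            // ...but == stays reference equality because no operator==
            // overload is declared on this type.
        }

        // in some method:
        var a = new Thing();
        var b = new Thing();
        Console.WriteLine(a == b);       // False: two different references
        Console.WriteLine(a.Equals(b));  // True: the override above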

    Read the article

  • Problems with WCF endpoints hosted from Windows Service

    - by Dilip
    I have a managed Windows Service that hosts a couple of WCF endpoints. The service is set to start automatically when the PC is restarted. On reboot I find that this line of code:

        ServiceHost wcfHost1 = new ServiceHost(typeof(WCFHost1));

    in the OnStart() method of the service takes somewhere between 15 and 20 seconds to execute. Actually I have two such statements, but the second one executes in a flash; it is the first one that takes so long. Does anyone know what could be causing the bottleneck? Because of this, the call sometimes exceeds 30 seconds, and as a result the SCM thinks my service timed out while trying to initialize.

    Now, I know it would be easy to just spin off a thread to do this and return from OnStart() right away, but I'd like to know what could cause this delay. It happens only when the service starts up on PC reboot; if the PC is already up and running, the service starts and stops in less than a second.
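
    One frequently reported cause of exactly this first-ServiceHost delay at boot — offered as a hedged guess, not a diagnosis: the CLR generates Authenticode publisher evidence for signed assemblies when the first AppDomain spins up, and the certificate-revocation lookup involved stalls until it times out because the network stack is not yet ready right after a reboot. Disabling the check in the service's .exe.config is the usual way to test the theory:

        <configuration>
          <runtime>
            <!-- Skip Authenticode publisher-evidence generation; its CRL
                 lookup can block for 15-20 seconds while the network comes up. -->
            <generatePublisherEvidence enabled="false"/>
          </runtime>
        </configuration>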

    Read the article

  • How can I print WPF TreeView items over multiple pages?

    - by RAJKISHOR
    Hello friend, I want to print the tree structure shown in a WPF TreeView control across multiple pages. I tried PrintVisual(), but it only prints the visible part. Then I tried a FlowDocument and wrote AddNodes(), but it does not show the same result as the TreeView does. Please help me with the code.

        public void AddNodes(int uid, ListItem tSubNode)
        {
            string query = "select fullname, id from members where refCode=" + uid + ";";
            String memValue;
            MySqlCommand cmd = new MySqlCommand(query, db.conn);
            MySqlDataAdapter _DA = new MySqlDataAdapter(cmd);
            DataTable _DT = new DataTable();
            _DA.Fill(_DT);
            ListOffset += 20;
            foreach (DataRow _dr in _DT.Rows)
            {
                ListItem tNode = new ListItem();
                tNode.Margin = new Thickness(ListOffset, 0, 0, 0);
                memValue = _dr["fullname"].ToString() + " (" + _dr["id"].ToString() + ")";
                tNode.Blocks.Add(new Paragraph(new Run(hyp + memValue)));
                myList.ListItems.Add(tNode);
                flowDoc.Blocks.Add(myList);
                _fdrMembers.Document = flowDoc;
                if (db.HasMembers(Convert.ToInt32(_dr["id"].ToString())))
                {
                    AddNodes(Convert.ToInt32(_dr["id"]), tNode);
                }
            }
            ListOffset = 20;
        }

        private void button_Click(object sender, RoutedEventArgs e)
        {
            ListOffset = 0;
            myList.ListItems.Clear();
            tSuper.Blocks.Clear();
            if (db.GetNameByUID(100001) != null)
            {
                tSuper.Blocks.Add(new Paragraph(new Run(db.GetNameByUID(100001))));
                myList.ListItems.Add(tSuper);
                AddNodes(100001, tSuper);
            }
            MessageBox.Show("Member by ID - " + does.ToString() + ", " + dosnt.ToString(),
                "Error!", MessageBoxButton.OK, MessageBoxImage.Information);
        }

    Read the article

  • Is there any way to optimize this LINQ where clause that searches for multiple keywords on multiple columns?

    - by Daniel T.
    I have a LINQ query that searches for multiple keywords on multiple columns. The intention is that the user can search for multiple keywords and it will search for the keywords on every property in my Media entity. Here is a simplified example:

        var result = repository.GetAll<Media>().Where(x =>
            x.Title.Contains("Apples") ||
            x.Description.Contains("Apples") ||
            x.Tags.Contains("Apples") ||
            x.Title.Contains("Oranges") ||
            x.Description.Contains("Oranges") ||
            x.Tags.Contains("Oranges") ||
            x.Title.Contains("Pears") ||
            x.Description.Contains("Pears") ||
            x.Tags.Contains("Pears")
        );

    In other words, I want to search for the keywords Apples, Oranges, and Pears on the columns Title, Description, and Tags. The outputted SQL looks like this:

        SELECT * FROM Media this_
        WHERE ((((((((
            this_.Title like '%Apples%'
            or this_.Description like '%Apples%')
            or this_.Tags like '%Apples%')
            or this_.Title like '%Oranges%')
            or this_.Description like '%Oranges%')
            or this_.Tags like '%Oranges%')
            or this_.Title like '%Pears%')
            or this_.Description like '%Pears%')
            or this_.Tags like '%Pears%')

    Is this the most optimal SQL in this case? If not, how do I rewrite the LINQ query to create the most optimal SQL statement? I'm using SQLite for testing and SQL Server for actual deployment.
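
    A hedged sketch of one way to build the same filter without spelling out every combination. Note the deliberate semantic change: one Where per keyword ANDs the keywords together, which is often what a multi-keyword search means; if you really need the original OR across all keywords, an expression-combining helper such as LINQKit's PredicateBuilder is the usual tool. Variable names are illustrative, and this assumes GetAll<Media>() returns an IQueryable<Media>:

        var keywords = new[] { "Apples", "Oranges", "Pears" };
        var query = repository.GetAll<Media>();

        foreach (var word in keywords)
        {
            var w = word; // copy so the lambda does not capture the loop variable
            query = query.Where(x => x.Title.Contains(w)
                                  || x.Description.Contains(w)
                                  || x.Tags.Contains(w));
        }

        var result = query.ToList();

    Either way, the generated SQL still uses LIKE '%term%', which cannot use an ordinary index; if that becomes the bottleneck, full-text search is a more promising direction than rearranging the LINQ.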

    Read the article

  • How to invalidate the OutputCache in a webfarm?

    - by Pure.Krome
    Hi folks, I've got a website that uses the OutputCache attribute to cache pages. Works great. Now I'm in the middle of R&D'ing scaling this site up to a web farm. Along with the usual suspects for web-farm pain, I've noticed (pretty quickly/obviously) that the OutputCache on Server_A doesn't invalidate the OutputCache on Server_B when I invalidate a single server's OutputCache. This makes total sense: how can S_A 'tell' S_B to invalidate when they are physically two separate machines, etc.?

    So, what are our options? Velocity? I understand this would move the caching to a different layer, which means the final (output) result will always need to be rendered, as opposed to the OutputCache, which remembers the final output content (yes, VaryBy gives different versions, etc., which is totally fine). So even though the POCOs or business objects are all synced, there's still that last rendering effort required (even if it's tiny compared to the effort of generating/syncing the business objects).

    So yeah, I'm not sure of the options here — what do other people do?

    Read the article

  • Download file in coldfusion and read its content

    - by Deepak
    I'm using cfhttp with a GET to download files. Does anyone have an example of cfhttp working? Are there special settings that need to be set up on the server side to get this tag to work? When I try the following code:

        <CFHTTP METHOD="get"
            URL="http://data.bls.gov/PDQ/servlet/SurveyOutputServlet?series_id=LNU04032231&years_option=specific_years&to_year=2010&from_year=2009&delimiter=comma&output_view&output_format=excelTable"
            path="/Users/Deepak"
            file="testfile.xls">

    nothing comes back to my computer. How do you get it to pop up the "where do you want to save the file" dialogue box?

    I am submitting a form in ColdFusion by hitting this link: http://data.bls.gov/PDQ/servlet/SurveyOutputServlet?series_id=LNU04032231&years_option=specific_years&to_year=2010&from_year=2009&delimiter=comma&output_view&output_format=excelTable and I get an Excel file as a result. How can I save this file on my local box? Or is it possible to read the content of the file directly, without saving it to my local box, through ColdFusion using cfftp or cfhttp? cfhttp.mimeType is application/vnd.ms-excel in this case. Thanks!!
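
    A hedged sketch of the distinction at play: with the path/file attributes, cfhttp saves the download on the server running the template, so no browser dialog ever appears. To raise the save dialog, fetch the response into memory and stream it back to the browser — the tags below are standard cfhttp/cfheader/cfcontent usage, but treat the exact combination as something to verify against your ColdFusion version:

        <!--- fetch the spreadsheet into cfhttp.fileContent on the server --->
        <cfhttp method="get" getasbinary="yes"
                url="http://data.bls.gov/PDQ/servlet/SurveyOutputServlet?series_id=LNU04032231&years_option=specific_years&to_year=2010&from_year=2009&delimiter=comma&output_view&output_format=excelTable">

        <!--- push it to the browser; this is what triggers the save-as dialog --->
        <cfheader name="Content-Disposition" value="attachment; filename=testfile.xls">
        <cfcontent type="application/vnd.ms-excel" variable="#cfhttp.fileContent#">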

    Read the article

  • What is happening in this T-SQL code?

    - by Ben McCormack
    I'm just starting to learn T-SQL and could use some help understanding what's going on in a particular block of code. I modified some code from an answer I received to a previous question; here is the code in question:

        DECLARE @column_list AS varchar(max)

        SELECT @column_list = COALESCE(@column_list, ',') +
            'SUM(Case When Sku2=' + CONVERT(varchar, Sku2) +
            ' Then Quantity Else 0 End) As [' + CONVERT(varchar, Sku2) +
            ' - ' + Convert(varchar, Description) + '],'
        FROM OrderDetailDeliveryReview
        Inner Join InvMast on SKU2 = SKU and LocationTypeID = 4
        GROUP BY Sku2, Description
        ORDER BY Sku2

        Set @column_list = Left(@column_list, Len(@column_list) - 1)

        Select @column_list

        ----------------------------------------
        1 row is returned:
        ,SUM(Case When Sku2=157 Then Quantity Else 0 End) As [157 -..., SUM(Case ...

    The T-SQL code does exactly what I want, which is to build a single result from the results of a query, which will then be used in another query. However, I can't figure out how the SELECT @column_list = ... statement puts multiple values into a single string by being inside a SELECT statement. Without the assignment to @column_list, the SELECT statement would simply return multiple rows. How is it that, by having the variable within the SELECT statement, the results get "flattened" down into one value? How should I read this T-SQL to properly understand what's going on?
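
    A hedged minimal example of the same idiom with made-up data (the VALUES row constructor is SQL Server 2008+ syntax and is only there to supply demo rows): when a SELECT assigns to a variable, SQL Server runs the assignment once per row of the result set, so each row appends to whatever the variable already held. COALESCE just supplies the seed on the first row, where the variable is still NULL.

        DECLARE @list varchar(max);

        -- per-row assignment: NULL -> 'red' -> 'red, green' -> 'red, green, blue'
        SELECT @list = COALESCE(@list + ', ', '') + name
        FROM (VALUES ('red'), ('green'), ('blue')) AS colours(name);

        SELECT @list;   -- 'red, green, blue'

    The original query is the same trick, except its seed is ',' rather than '', which is why the result starts with a comma, and the trailing comma is trimmed afterwards with LEFT/LEN.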

    Read the article

  • Confused over behavior of List.mapi in F#

    - by James Black
    I am building some equations in F#, and while working on my polynomial class I found some odd behavior using List.mapi.

    Basically, each polynomial has an array, so 3*x^2 + 5*x + 6 would be [|6, 5, 3|] in the array. When adding polynomials, if one array is longer than the other, I just need to append the extra elements to the result, and that is where I ran into a problem. (Later I want to generalize it to not always use a float, but that will come after I get more working.)

    So, the problem is that I expected List.mapi to return a list, not individual elements, but in order to put the lists together I had to put [] around my use of mapi, and I am curious why that is the case. This is more complicated than I expected; I thought I should be able to just tell it to make a new list starting at a certain index, but I can't find any function for that.

        type Polynomial() =
            let mutable coefficients: float [] = Array.empty

            member self.Coefficients with get() = coefficients

            static member (+) (v1: Polynomial, v2: Polynomial) =
                let ret =
                    List.map2 (fun c p -> c + p)
                        (List.ofArray v1.Coefficients)
                        (List.ofArray v2.Coefficients)
                let a = List.mapi (fun i x -> x)
                match v1.Coefficients.Length - v2.Coefficients.Length with
                | x when x < 0 -> ret :: [((List.ofArray v1.Coefficients) |> a)]
                | x when x > 0 -> ret :: [((List.ofArray v2.Coefficients) |> a)]
                | _ -> [ret]

    Read the article

  • mocking collection behavior with Moq

    - by Stephen Patten
    Hello, I've read through some of the discussions on the Moq user group but have failed to find an example matching the scenario I have. Here are my question and code:

        // 6 periods
        var schedule = new List<PaymentPlanPeriod>()
        {
            new PaymentPlanPeriod(1000m, args.MinDate.ToString()),
            new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(1).ToString()),
            new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(2).ToString()),
            new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(3).ToString()),
            new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(4).ToString()),
            new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(5).ToString())
        };

        // Now the proxy is correct with the schedule
        helper.Setup(h => h.GetPlanPeriods(It.IsAny<String>(), schedule));

    Then in my tests I use Periods, but the mocked _PaymentPlanHelper never populates the collection. See below for the usage:

        public IEnumerable<PaymentPlanPeriod> Periods
        {
            get
            {
                if (CanCalculateExpression())
                    _PaymentPlanHelper.GetPlanPeriods(this.ToString(), _PaymentSchedule);
                return _PaymentSchedule;
            }
        }

    Now, if I change the mocked object to use another overload of GetPlanPeriods that returns a List, like so:

        var schedule = new List<PaymentPlanPeriod>()
        {
            // ... the same six periods as above ...
        };

        helper.Setup(h => h.GetPlanPeriods(It.IsAny<String>()))
              .Returns(new List<PaymentPlanPeriod>(schedule));

        List<PaymentPlanPeriod> result = _PaymentPlanHelper.GetPlanPeriods(this.ToString());

    it works as expected. Any pointers would be awesome, as long as you don't bash my architecture... :)

    Thank you, Stephen
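
    A hedged sketch of what may be missing: a Moq setup doesn't run any real code, so if the production GetPlanPeriods(expression, schedule) is expected to fill the list that is passed in, the setup has to do that filling itself via Callback. Also note that passing the schedule instance as the second argument of Setup makes Moq match on that exact reference; matching with It.IsAny is usually what you want:

        helper.Setup(h => h.GetPlanPeriods(It.IsAny<string>(), It.IsAny<List<PaymentPlanPeriod>>()))
              .Callback<string, List<PaymentPlanPeriod>>((expression, periods) =>
              {
                  // emulate the real side effect: populate the caller's list
                  periods.Clear();
                  periods.AddRange(schedule);
              });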

    Read the article

  • Loading user controls programmatically into a placeholder (asp.net(vb))

    - by Phil
    In my .aspx page I have:

        <%@ Page Language="VB" AutoEventWireup="false" CodeFile="Default.aspx.vb"
            Inherits="_Default" AspCompat="True" %>
        <%@ Register src="Modules/Content.ascx" tagname="Content" tagprefix="uc1" %>

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <asp:PlaceHolder ID="Modulecontainer" runat="server"></asp:PlaceHolder>
            </div>
            </form>
        </body>
        </html>

    In my .aspx.vb I have:

        Try
            Dim loadmodule As Control
            loadmodule = Me.LoadControl("~/modules/content.ascx")
            Modulecontainer.Controls.Add(loadmodule)
        Catch ex As Exception
            Response.Write(ex.ToString & "<br />")
        End Try

    The result is an empty placeholder and no errors. Thanks a lot for any assistance.

    Read the article

  • TSQL - How to join 1..* from multiple tables in one resultset?

    - by ElHaix
    A location table record has two address id's - mailing and business addressID that refer to an address table. Thus, the address table will contain up to two records for a given addressID. Given a location ID, I need an sproc to return all tbl_Location fields, and all tbl_Address fields in one resultset: LocationID INT, ClientID INT, LocationName NVARCHAR(50), LocationDescription NVARCHAR(50), MailingAddressID INT, BillingAddressID INT, MAddress1 NVARCHAR(255), MAddress2 NVARCHAR(255), MCity NVARCHAR(50), MState NVARCHAR(50), MZip NVARCHAR(10), MCountry CHAR(3), BAddress1 NVARCHAR(255), BAddress2 NVARCHAR(255), BCity NVARCHAR(50), BState NVARCHAR(50), BZip NVARCHAR(10), BCountry CHAR(3) I've started by creating a temp table with the required fields, but am a bit stuck on how to accomplish this. I could do sub-selects for each of the required address fields, but seems a bit messy. I've already got a table-valued-function that accepts an address ID, and returns all fields for that ID, but not sure how to integrate it into my required result. Off hand, it looks like 3 selects to create this table - 1: Location, 2: Mailing address, 3: Billing address. What I'd like to do is just create a view and use that. Any assistance would be helpful. Thanks.

    Read the article
