Search Results

Search found 21115 results on 845 pages for 'byte order'.


  • Alternatives to C++ Reference/Pointer Syntax

    - by Jon Purdy
    What languages other than C and C++ have explicit reference and pointer type qualifiers? People seem to be easily confused by the right-to-left reading order of types, where char*& is "a reference to a pointer to a character", or a "character-pointer reference"; do any languages with explicit references make use of a left-to-right reading order, such as &*char/ref ptr char? I'm working on a little language project, and legibility is one of my key concerns. It seems to me that this is one of those questions to which it's easy for a person but hard for a search engine to provide an answer. Thanks in advance!

    Read the article

  • Delete an object from a tree

    - by mqpasta
    I have a Find function in order to find an element in a BST: private Node Find(ref Node n, int e) { if (n == null) return null; if (n.Element == e) return n; if (e > n.Element) return Find(ref n.Right, e); else return Find(ref n.Left, e); } and I use the following code in order to get a node and then set this node to null: Node x = bsTree.Find(1); x = null; bsTree.Print(); Supposedly, this node should be deleted from the tree once it is set to null, but it still exists in the tree. I have done this before, but this time I am missing something and have no idea what.
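    Setting x = null only clears the local variable x; the parent node in the tree still holds its own reference to that child, so removal has to rewrite the parent's Left/Right link (or the root reference). A minimal sketch of a recursive delete along those lines, reusing the question's Node/Element/Left/Right members (the Remove method itself is hypothetical and assumes Element is writable):

        private Node Remove(Node n, int e)
        {
            if (n == null) return null;

            if (e < n.Element) n.Left = Remove(n.Left, e);
            else if (e > n.Element) n.Right = Remove(n.Right, e);
            else
            {
                // Found the node to delete.
                if (n.Left == null) return n.Right;   // zero or one child: splice it out
                if (n.Right == null) return n.Left;

                // Two children: copy the in-order successor's value, then delete that node.
                Node successor = n.Right;
                while (successor.Left != null) successor = successor.Left;
                n.Element = successor.Element;
                n.Right = Remove(n.Right, successor.Element);
            }
            return n;
        }

        // Usage: the root itself may change, so reassign it.
        // root = Remove(root, 1);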

    Read the article

  • Neat way of getting the position of my object in LINQ collections

    - by Steve
    I currently have an object called Week. A week is part of a Season object; the season can contain many weeks. What I want to do is find the position of my week (is it the first week in the season (so #1) or the second (so #2)?). int i = 0; foreach ( var w in Season.Weeks.OrderBy(w => w.WeekStarts)){ if(w.Id == Id){ return i; } i+=1; } At the moment this is what I have. I order the weeks by their start date to make sure they are in the correct order, and I cycle through them until I find the week that matches the one I am currently looking at, then return the int that I have been counting up. I feel there should be an easier LINQ way to do this, as it feels pretty messy!
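    For an in-memory (LINQ to Objects) collection, the indexed overload of Select can replace the manual counter; a minimal sketch, assuming the same Season.Weeks, WeekStarts and Id members as above:

        // using System.Linq;
        int position = Season.Weeks
            .OrderBy(w => w.WeekStarts)
            .Select((w, index) => new { w.Id, Index = index })
            .First(x => x.Id == Id)    // throws if no week matches the current Id
            .Index;                    // zero-based, matching the counter i in the loop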

    Read the article

  • Building a linked list with LINQ

    - by FreshCode
    What is the fastest way to order an unordered list of elements by predecessor (or parent) element index using LINQ? Each element has a unique ID and the ID of that element's predecessor (or parent) element, from which a linked list can be built to represent an ordered state. Example:

        ID    | Predecessor's ID
        ------|------------------
        20    | 81
        81    | NULL
        65    | 12
        12    | 20
        120   | 65

    The sorted order is {81, 20, 12, 65, 120}. An (ordered) linked list can easily be assembled iteratively from these elements, but can it be done in fewer LINQ statements? Edit: I should have specified that IDs are not necessarily sequential. I chose 1 to 5 for simplicity. See updated element indices which are random.
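    One hedged sketch of doing this without repeated scans: index the elements by predecessor ID in a dictionary, then walk the chain from the root (the element whose predecessor is NULL). It is still a loop at its core, but each step is an O(1) lookup. The Element type and its Id / PredecessorId (nullable int) members are assumptions here, not taken from the question:

        // using System.Linq; using System.Collections.Generic;
        var byPredecessor = elements
            .Where(e => e.PredecessorId != null)
            .ToDictionary(e => e.PredecessorId.Value, e => e);

        var ordered = new List<Element>();
        Element current = elements.Single(e => e.PredecessorId == null);   // the root row (predecessor NULL)
        ordered.Add(current);

        Element next;
        while (byPredecessor.TryGetValue(current.Id, out next))
        {
            ordered.Add(next);
            current = next;
        }
        // ordered now holds {81, 20, 12, 65, 120} for the example above.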

    Read the article

  • Non US characters in section headers for a UITableView

    - by epatel
    I have added a section list for a simple Core Data iPhone app. I followed this SO question to create it - How to use the first character as a section name - but my list also contains items starting with characters outside A-Z, especially Å, Ä and Ö used here in Sweden. The problem now is that when the table view shows the section list, the last three characters are drawn wrong (screenshot not reproduced here). It seems like my best option right now is to let those items be sorted under 'Z': if ([letter isEqual:@"Å"] || [letter isEqual:@"Ä"] || [letter isEqual:@"Ö"]) letter = @"Z"; Has someone figured this one out? And while I'm at it... 'Å', 'Ä' and 'Ö' should be sorted in that order but are sorted as 'Ä', 'Å' and 'Ö' by the Core Data NSSortDescriptor. I have tried to set the selector to localizedCaseInsensitiveCompare: but that gives an 'out of order section name Ä. Objects must be sorted by section name' error. Seen that too?

    Read the article

  • Which relational databases exist with a public API for a high level language?

    - by Jens Schauder
    We typically interface with an RDBMS through SQL; i.e. we create an SQL string and send it to the server through JDBC or ODBC or something similar. Are there any RDBMS that allow direct interfacing with the database engine through some API in Java, C#, C or similar? I would expect an API that allows constructs like this (in some arbitrary pseudo code): Iterator iter = engine.getIndex("myIndex").getReferencesForValue("23"); for (Reference ref: iter){ Row row = engine.getTable("mytable").getRow(ref); } I guess something like this is hidden somewhere in (and available from) open source databases, but I am looking for something that is officially supported as a public API, so that one finds at least a note in the release notes when it changes. In order to make this a question that actually has a 'best' answer: I prefer languages in the order given above, and I will prefer mature APIs over prototypes and research work, although these are welcome as well.

    Read the article

  • Exploring TCP throughput with DTrace (2)

    - by user12820842
    Last time, I described how we can use the overlap in distributions of unacknowledged byte counts and send window to determine whether the peer's receive window may be too small, limiting throughput. Let's combine that comparison with a comparison of congestion window and slow start threshold, all on a per-port/per-client basis. This will help us:

    - Identify whether the congestion window or the receive window are limiting factors on throughput, by comparing the distributions of congestion window and send window values to the distribution of outstanding (unacked) bytes. This will allow us to get a visual sense for how often we are thwarted in our attempts to fill the pipe due to congestion control versus the peer not being able to receive any more data.

    - Identify whether slow start or congestion avoidance predominate, by comparing the overlap in the congestion window and slow start distributions. If the slow start threshold distribution overlaps with the congestion window, we know that we have switched between slow start and congestion avoidance, possibly multiple times.

    - Identify whether the peer's receive window is too small, by comparing the distribution of outstanding unacked bytes with the send window distribution (i.e. the peer's receive window). I discussed this here.

        # dtrace -s tcp_window.d
        dtrace: script 'tcp_window.d' matched 10 probes
        ^C

        cwnd 80 10.175.96.92
        value ------------- Distribution ------------- count
        1024 | 0
        2048 | 4
        4096 | 6
        8192 | 18
        16384 | 36
        32768 |@ 79
        65536 |@ 155
        131072 |@ 199
        262144 |@@@ 400
        524288 |@@@@@@ 798
        1048576 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 3848
        2097152 | 0

        ssthresh 80 10.175.96.92
        value ------------- Distribution ------------- count
        268435456 | 0
        536870912 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5543
        1073741824 | 0

        unacked 80 10.175.96.92
        value ------------- Distribution ------------- count
        -1 | 0
        0 | 1
        1 | 0
        2 | 0
        4 | 0
        8 | 0
        16 | 0
        32 | 0
        64 | 0
        128 | 0
        256 | 3
        512 | 0
        1024 | 0
        2048 | 4
        4096 | 9
        8192 | 21
        16384 | 36
        32768 |@ 78
        65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5391
        131072 | 0

        swnd 80 10.175.96.92
        value ------------- Distribution ------------- count
        32768 | 0
        65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5543
        131072 | 0

    Here we are observing a large file transfer via http on the webserver. Comparing these distributions, we can observe:

    - That slow start congestion control is in operation. The distribution of congestion window values lies below the range of slow start threshold values (which are in the 536870912+ range), so the connection is in slow start mode.

    - Both the unacked byte count and the send window values peak in the 65536-131071 range, but the send window value distribution is narrower. This tells us that the peer TCP's receive window is not closing.

    - The congestion window distribution peaks in the 1048576-2097152 range while the receive window distribution is confined to the 65536-131071 range. Since the cwnd distribution ranges as low as 2048-4095, we can see that for some of the time we have been observing the connection, congestion control has been a limiting factor on transfer, but for the majority of the time the receive window of the peer would more likely have been the limiting factor. However, we know the window has never closed, as the distribution of swnd values stays within the 65536-131071 range.

    So all in all we have a connection that has been mildly constrained by congestion control, but for the bulk of the time we have been observing it, neither congestion nor the peer receive window has limited throughput.

    Here's the script:

        #!/usr/sbin/dtrace -s

        tcp:::send
        / (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
        {
                @cwnd["cwnd", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize(args[3]->tcps_cwnd);
                @ssthresh["ssthresh", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize(args[3]->tcps_cwnd_ssthresh);
                @unacked["unacked", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize(args[3]->tcps_snxt - args[3]->tcps_suna);
                @swnd["swnd", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize((args[4]->tcp_window) * (1 << args[3]->tcps_snd_ws));
        }

    One surprise here is that slow start is still in operation - one would assume that for a large file transfer, acknowledgements would push the congestion window up past the slow start threshold over time. The slow start threshold is in fact still close to its initial (very high) value, so that would suggest we have not experienced any congestion (the slow start threshold is adjusted when congestion occurs). Also, the above measurements were taken early in the connection lifetime, so the congestion window did not get a chance to get bumped up to the level of the slow start threshold. A good strategy when examining these sorts of measurements for a given service (such as a webserver) would be to start by examining the distributions above aggregated by port number only, to get an overall feel for service performance, i.e. is congestion control or peer receive window size an issue, or are we unconstrained to fill the pipe? From there, the overlap of distributions will tell us whether to drill down into specific clients. For example, if the send window distribution has multiple peaks, we may want to examine if particular clients show issues with their receive window.

    Read the article

  • Looping over some selected values in a stored procedure

    - by macca1
    I'm trying to modify a stored procedure hooked into an ORM tool. I want to add a few more rows based on a loop of some distinct values in a column. Here's the current SP: SELECT GRP = STAT_CD, CODE = REASN_CD FROM dbo.STATUS_TABLE WITH (NOLOCK) Order by STAT_CD, SRT_ORDR For each distinct STAT_CD, I'd also like to insert a REASN_CD of "--" here in the SP. However I'd like to do it before the order by so I can give them negative sort orders so they come in at the top of the list. I'm getting tripped up on how to implement this. Does anyone know how to do this for each unique STAT_CD?

    Read the article

  • How can I improve the below query?

    - by Newbie
    I have the following input.

    INPUT: TableA

        ID   Sentences
        ---  ----------
        1    I am a student
        2    Have a nice time guys!

    What I need to do is to extract the words from the sentence(s) and insert each individual word in another table.

    OUTPUT:

        SentenceID   WordOccurance   Word
        ----------   -------------   -----
        1            1               I
        1            2               am
        1            3               a
        1            4               student
        2            1               Have
        2            2               a
        2            3               nice
        2            4               time
        2            5               guys!

    I was able to get the answer by using the below query:

        ;With numCTE As
        ( Select rn = 1
          Union all
          Select rn+1 from numCTE where rn<1000)
        select SentenceID=id,
               WordOccurance=row_number() over(partition by TableA.ID order by rn),
               Word = substring(' '+sentences+' ', rn+1, charindex(' ',' '+sentences+' ', rn+1)-rn-1)
        from TableA
        join numCTE on rn <= len(' '+sentences+' ')
        where substring(' '+sentences+' ', rn,1) = ' '
        order by id, rn

    How can I improve this query of mine? Basically I am looking for a better solution than the one presented. Thanks

    Read the article

  • I need to simplify a MySQL sub query for performance - please help

    - by Richard
    I have the following query which is taking 3 seconds on a table of 1500 rows, does someone know how to simplify it? SELECT dealers.name, dealers.companyName, dealer_accounts.balance FROM dealers INNER JOIN dealer_accounts ON dealers.id = dealer_accounts.dealer_id WHERE dealer_accounts.id = ( SELECT id FROM dealer_accounts WHERE dealer_accounts.dealer_id = dealers.id AND dealer_accounts.date < '2010-03-30' ORDER BY dealer_accounts.date DESC, dealer_accounts.id DESC LIMIT 1 ) ORDER BY dealers.name I need the latest dealer_accounts record for each dealer by a certain date with the join on the dealer_id field on the dealer_accounts table. This really should be simple, I don't know why I am struggling to find something.

    Read the article

  • In CQRS (event-sourced), do you need a global sequence counter in the event store?

    - by Jon M
    In trying to get my head around CQRS (and DDD in general) I have come across situations when two events occur on different aggregates but the order of them has domain meaning. If so then they could happen so close together that a timestamp (as used by the sample implementations I have seen) cannot differentiate them, meaning the event store doesn't contain a 'complete' representation of the domain as there is ambiguity over the order in which events occurred. As an example, the domain could fire a CustomerCreatedEvent which applies to the Customer aggregate, and then a CustomerAssignedToAgent event on the Agent aggregate. The CustomerAssignedToAgent event doesn't make sense if it occurs before the CustomerCreatedEvent, but typically both of these might be fired as a result of one operation which makes it likely that the timestamps would effectively be the same. So am I just modelling things badly? Should there ever be a situation where the sequence of events across different aggregates is important? Or should you keep a global sequence number on your event store, so that you can identify the exact sequence in which events occurred?
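    For reference, the "global sequence number" option from the last sentence can be sketched as an append operation that stamps every event with a store-wide monotonic counter in addition to the per-aggregate version. All type and member names below are hypothetical, not from any particular framework:

        // using System; using System.Collections.Generic;
        public class StoredEvent
        {
            public long GlobalSequence;     // total order across all aggregates
            public Guid AggregateId;
            public int AggregateVersion;    // per-aggregate order (optimistic concurrency)
            public object Payload;
        }

        public class InMemoryEventStore
        {
            private readonly List<StoredEvent> _events = new List<StoredEvent>();
            private readonly object _gate = new object();
            private long _globalSequence;

            public void Append(Guid aggregateId, int aggregateVersion, object payload)
            {
                lock (_gate)   // the counter must be issued atomically with the append
                {
                    _events.Add(new StoredEvent
                    {
                        GlobalSequence = ++_globalSequence,
                        AggregateId = aggregateId,
                        AggregateVersion = aggregateVersion,
                        Payload = payload
                    });
                }
            }
        }

    The counter does become a single point of contention across all aggregates, which is the usual argument against it; whether that trade-off is acceptable depends on how much cross-aggregate ordering the domain really needs.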

    Read the article

  • Size of an image imported with FreeImage

    - by KaiserJohaan
    I'm having a bit of a brain fart and I can't quite grasp what I'm doing wrong. It's quite simple, I am importing an image with FreeImage (http://freeimage.sourceforge.net/) which has a method FreeImage_GetBits that returns a pointer to the first byte of the image data. I then try to load all the data into memory using (bitsPerPixel / 8) * pixelWidth * pixelHeight, like this: uint32_t bitsPerPixel = FreeImage_GetBPP(bitmap); // resolves to 24 uint32_t widthInPixels = FreeImage_GetWidth(bitmap); // resolves to 1024 uint32_t heightInPixels = FreeImage_GetHeight(bitmap); // resolves to 1024 // container is a std::vector<uint8_t> pkgMaterial.mTextureData.insert(pkgMaterial.mTextureData.begin(), FreeImage_GetBits(bitmap), FreeImage_GetBits(bitmap) + ((bitsPerPixel/8) * widthInPixels * heightInPixels)); I have a jpg which is 31 kilobytes in size on disc. Yet when I load it using the above formula, I see the vector is then filled with 3145728 bytes, which is approx 3145 kilobytes. What am I doing wrong?
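    For reference, the numbers are consistent with an uncompressed copy of the image rather than a bug in the copy itself: 1024 pixels × 1024 pixels × (24 bits / 8) = 3,145,728 bytes, which is exactly what ends up in the vector. The 31 KB figure is the JPEG-compressed size on disk; FreeImage_GetBits exposes the decoded bitmap, so the in-memory size is expected to be width × height × bytes-per-pixel, not the file size.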

    Read the article

  • What are good Opensource Billing Solutions with a Customer Portal?

    - by krunaloverflow
    I'm looking for an Online Billing Solution along with a Customer Portal in the open source space, so that I can customize it as required in the future. My requirements are:

    - Flat & Subscription (periodical) based Billing (orders/invoices)
    - Able to set products/services and their prices
    - Add customers, or customers can register themselves from the client end (a web page) and manage their profile themselves from the customer portal, pay online, order products, check invoices, cancel orders/subscriptions, etc.
    - An interface (web service or API) that would provide a mechanism to check when a customer receives service and his current status, and then allow them access to the services (similar to a subscription membership website)

    Please provide me with the options I should consider.

    Read the article

  • Why can't I bind a linux service to the loopback only?

    - by Jon Trauntvein
    I am writing a server application that will provide a service on an ephemeral port that I only want accessible on the loopback interface. In order to do this, I am writing code like the following: struct sockaddr_in bind_addr; memset(&bind_addr,0,sizeof(bind_addr)); bind_addr.sin_family = AF_INET; bind_addr.sin_port = 0; bind_addr.sin_addr.s_addr = htonl(inet_addr("127.0.0.1")); rcd = ::bind( socket_handle, reinterpret_cast<struct sockaddr *>(&bind_addr), sizeof(bind_addr)); The return value for this call to bind() is -1 and the value of errno is 99 (Cannot assign requested address). Is this failing because inet_addr() already returns its result in network order or is there some other reason?

    Read the article

  • Cakephp beforeFind() - How do I add a JOIN condition AFTER the belongsTo association is added?

    - by michael
    I'm in Model->beforeFind($queryData), trying to add a JOIN condition to the queryData on a model which has belongsTo associations. Unfortunately, the new JOIN references a table in the belongsTo association, so it must appear AFTER the belongsTo in the query. Here is my Tagged->belongsTo association:

        app\plugins\tags\models\tagged.php (line 192)
        Array ( [Tag] => Array ( [className] => Tag [foreignKey] => tag_id [conditions] => [fields] => [order] => [counterCache] => ) [Group] => Array ( [className] => Group [foreignKey] => foreign_key [conditions] => Array ( [Tagged.model] => Group ) [fields] => [order] => [counterCache] => ) )

    Here is the JOIN added in Tagged->beforeFind(); notice that the belongsTo joins have not yet been added:

        app\plugins\tags\models\tagged.php (line 194)
        Array ( [conditions] => Array ( [Tag.keyname] => europe ) [fields] => Array ( [0] => DISTINCT Group.* [1] => GroupPermission.* ) [joins] => Array ( [0] => Array ( [table] => permissions [alias] => GroupPermission [foreignKey] => [type] => INNER [conditions] => Array ( [GroupPermission.model] => Group [0] => GroupPermission.foreignId = Group.id [or] => Array ( ... ) ) ) ) [limit] => [offset] => [order] => Array ( [0] => ) [page] => 1 [group] => [callbacks] => 1 [by] => europe [model] => Group )

    When I check the output, it fails with "1054: Unknown column 'Group.id' in 'on clause'" because the Permissions join appears BEFORE the Groups join:

        SELECT DISTINCT `Group`.*, `GroupPermission`.*
        FROM `tagged` AS `Tagged`
        INNER JOIN permissions AS `GroupPermission` ON (`GroupPermission`.`model` = 'Group' AND `GroupPermission`.`foreignId` = `Group`.`id` AND (...))
        LEFT JOIN `tags` AS `Tag` ON (`Tagged`.`tag_id` = `Tag`.`id`)
        LEFT JOIN `groups` AS `Group` ON (`Tagged`.`foreign_key` = `Group`.`id` AND `Tagged`.`model` = 'Group')
        WHERE `Tag`.`keyname` = 'europe'

    But this SQL (with the Permissions join moved to the end) works fine:

        SELECT DISTINCT `Group`.*, `GroupPermission`.*
        FROM `tagged` AS `Tagged`
        LEFT JOIN `tags` AS `Tag` ON (`Tagged`.`tag_id` = `Tag`.`id`)
        LEFT JOIN `groups` AS `Group` ON (`Tagged`.`foreign_key` = `Group`.`id` AND `Tagged`.`model` = 'Group')
        INNER JOIN permissions AS `GroupPermission` ON (`GroupPermission`.`model` = 'Group' AND `GroupPermission`.`foreignId` = `Group`.`id` AND (...))
        WHERE `Tag`.`keyname` = 'europe'

    How do I add my join in beforeFind() after the belongsTo join?

    Read the article

  • Sort collection within collection using Linq

    - by user327066
    Hi, I have a one-to-many Linq query and I would like to sort on a property within the "many" collection. For example in the pseudo-code below, I am returned a List from the Linq query but I would like to sort / order the Products property based on the SequenceNumber property of the Product class. How can I do this? Any information is appreciated. Thanks. class Order { int OrderId; List<Product> Products; } class Product { string name; int SequenceNumber; }
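    A hedged sketch of one way to do this once the query has materialised (LINQ to Objects), assuming the result is held in a List<Order> variable named orders: sort each nested list in place using the SequenceNumber property from the Product class above.

        // using System.Linq;
        foreach (var order in orders)
        {
            order.Products = order.Products
                .OrderBy(p => p.SequenceNumber)
                .ToList();
        }

    If the originals should not be mutated, the same OrderBy can instead be applied inside a Select projection that builds a new shape with the ordered Products.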

    Read the article

  • node.js storing gamestate, how?

    - by expressnoob
    I'm writing a game in JavaScript, and to prevent cheating, I'm having the game be played on the server (it's a board game like a more complicated checkers). Since the game is fairly complex, I need to store the gamestate in order to validate client actions. Is it possible to store the gamestate in memory? Is that smart? Should I do that? If so, how? I don't know how that would work. I can also store in Redis, and that sort of thing is pretty familiar to me and requires no explanation. But if I do store in Redis, the problem is that on every single move, the game would need to get the data from Redis and interpret and parse that data in order to recreate the gamestate from scratch. But since moves happen very frequently this seems very stupid to me. What should I do?

    Read the article

  • How to check if string contains a string in string array

    - by Abu Hamzah
    Edit: the order might change, as you can see in the example below; both string arrays have the same names but in a different order. How would you go about checking whether the two string arrays match? The code below returns true, but in reality it should return false since I have an extra string in _check. What I am trying to achieve is to check whether both string arrays have the same number of strings. string _exists = "Adults,Men,Women,Boys"; string _check = "Men,Women,Boys,Adults,fail"; if (_exists.All(s => _check.Contains(s))) //tried Equal { return true; } else { return false; }
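    Note that the posted check enumerates _exists character by character (each s is a char), so it only asks whether every character of _exists appears somewhere in _check, which is why it reports true. A minimal sketch that compares the comma-separated items instead, ignoring order but not extras; the variable names reuse the ones above:

        // using System.Linq;
        string[] existingItems = _exists.Split(',');
        string[] checkItems = _check.Split(',');

        bool sameItems = existingItems.OrderBy(s => s)
                                      .SequenceEqual(checkItems.OrderBy(s => s));

        return sameItems;   // false here: _check carries the extra item "fail"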

    Read the article

  • Update int array based on parent

    - by Pickels
    I have the following int[] in my database: '{0}' '{0,0}' '{0,0,0}' '{0,0,0,0}' This column is used to sort my tree data. Now when a parent updates its order, the children should also update. For example, if the second record updates its order to 1, it should result in the following: '{0}' '{0,1}' '{0,1,0}' '{0,1,0,0}' So I was wondering what the query would be to update records 3 and 4. In case it's not clear what I am asking, leave a comment and I can add additional information.

    Read the article

  • How to synchronize two (or n) replication processes for MS SQL databases?

    - by Yauheni Sivukha
    There are two master databases and two read-only copies updated by standard transactional replication. I need to map some entity from both read-only databases; let's say that database A contains orders and database B contains lines. The problem is that replication to one database can lag behind replication to the second database, so at the moment of mapping the read-only databases will have inconsistent data. For example: we stored 2 orders with lines at 19:00 and 19:03. The mapping process started at 19:05, but by the moment of mapping, replication to database A had processed all changes up to 19:03 while replication to database B had processed only changes up to 19:00. After mapping we will have an order entity with the order as of 19:03 and lines as of 19:00. The troubles are guaranteed :) In my particular case both databases have a temporal model, so it is possible to fetch data for every time slice, but the problem is to identify the time of the latest replication. Question: how do I synchronize the replication processes for several databases to avoid the situation described above?

    Read the article

  • Macro - To create one [.csv] file from/using multiple workbooks, kept in a folder, containing multiple worksheets

    - by AJ
    Hello, I have more than one Excel workbook, each containing multiple worksheets. I would like to have a macro which helps me to create (combine the information from) all the worksheets into one pipe [|] delimited [.csv] file. These sheets should be combined/appended into the [.csv] file in the same order these workbooks appear in a folder and in the order the sheets appear in these workbooks. The macro should ask for a delimiter/separator specific to me, and the input and output path based on my selection. It would be great if the output [.csv] file is named as "foldername" + "Output.csv" Thank you, Best Regards - AJ

    Read the article

  • Using TDD: "top down" vs. "bottom up"

    - by Christian Mustica
    Since I'm a TDD newbie, I'm currently developing a tiny C# console application in order to practice (because practice makes perfect, right?). I started by making a simple sketch of how the application could be organized (class-wise) and started developing all domain classes that I could identify, one by one (test first, of course). In the end, the classes have to be integrated together in order to make the application runnable, i.e. placing the necessary code in the Main method which calls the necessary logic. However, I don't see how I can do this last integration step in a "test first" manner. I suppose I wouldn't be having these issues had I used a "top down" approach. The question is: how would I do that? Should I have started by testing the Main() method? If anyone could give me some pointers, it will be much appreciated.
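    One common way to keep that last integration step test-first is to make Main a one-line shell over a class that tests can drive directly; the wiring and any console I/O go behind small interfaces. A minimal sketch, with every name below (IOutput, App, and so on) being hypothetical rather than from the question:

        // using System;
        public interface IOutput { void WriteLine(string text); }

        public class ConsoleOutput : IOutput
        {
            public void WriteLine(string text) { Console.WriteLine(text); }
        }

        public class App
        {
            private readonly IOutput _output;
            public App(IOutput output) { _output = output; }

            public int Run(string[] args)
            {
                // ...call into the domain classes that were developed test-first...
                _output.WriteLine("done");
                return 0;
            }
        }

        public static class Program
        {
            public static int Main(string[] args)
            {
                return new App(new ConsoleOutput()).Run(args);
            }
        }

    A test can then construct App with a fake IOutput, call Run, and assert on the observable behaviour, leaving only the one-line Main outside the tests.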

    Read the article
