Search Results

Search found 21115 results on 845 pages for 'byte order'.

  • Back Orders for ERP: data model references ?

    - by Patrick Honorez
    I have built an ERP using SQL Server as a back-end. These are the different types of client documents (there are also supplier docs):

        Order          -- impact: BO
        Delivery Note  -- impact: BO, stock (also used for returns, with negative quantity)
        Invoice        -- impact: accounting only
        Credit Note    -- impact: accounting, BO

    I use a complex system of self joins (at detail level) to find out the quantities in each OrderDetail that still have a backorder (BO). I'd like to simplify this using a [group] field that could be shared by all detail lines related to an original order. There are many difficult things to trace: a return of a product may be due to a defect and thus increase the BO, or it can be just a return, joined with a Credit Note, and then it has no impact on BO. My question is: do you know of any really good reference (book, web) on this subject?
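    For illustration, a hedged sketch of the kind of simplification the [group] field points at (all table and column names below are hypothetical, not the actual schema): every detail line, whatever document it belongs to, carries the id of the originating order line, so the remaining back order becomes a grouped sum instead of a chain of self joins.

    ```sql
    -- Hypothetical simplified detail table shared by all document types.
    CREATE TABLE DocumentDetail (
        DetailID      int IDENTITY PRIMARY KEY,
        DocumentID    int NOT NULL,            -- order, delivery note, invoice, credit note
        OrderDetailID int NULL,                -- originating order line: the "group" field
        Quantity      decimal(18, 3) NOT NULL, -- negative for returns
        AffectsBO     bit NOT NULL             -- defect return = 1, plain return with credit note = 0
    );

    -- Remaining back order per original order line.
    SELECT od.DetailID AS OrderDetailID,
           od.Quantity
           - COALESCE(SUM(CASE WHEN dd.AffectsBO = 1 THEN dd.Quantity END), 0) AS RemainingBO
    FROM DocumentDetail od
    LEFT JOIN DocumentDetail dd ON dd.OrderDetailID = od.DetailID
    WHERE od.OrderDetailID IS NULL   -- original order lines only
    GROUP BY od.DetailID, od.Quantity;
    ```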

  • filter to reverse lines of a text file

    - by Greg Hewgill
    I'm writing a small shell script that needs to reverse the lines of a text file. Is there a standard filter command to do this sort of thing? My specific application is that I'm getting a list of Git commit identifiers, and I want to process them in reverse order:

        git log --pretty=oneline work...master | grep -v DEBUG: | cut -d' ' -f1 | reverse

    The best I've come up with is to implement reverse like this:

        ... | cat -b | sort -rn | cut -f2-

    This uses cat to number every line, then sort to sort them in descending numeric order (which ends up reversing the whole file), then cut to remove the unneeded line number. The above works for my application, but may fail in the general case because cat -b only numbers nonblank lines. Is there a better, more general way to do this?
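    A few more general candidates for the reverse step, as a sketch (which one is available depends on the platform: tac ships with GNU coreutils, tail -r with BSD/macOS, and the awk version is a portable fallback that also handles blank lines, unlike cat -b):

    ```sh
    # GNU coreutils:
    ... | tac

    # BSD / macOS:
    ... | tail -r

    # Portable awk fallback: buffer all lines, print them in reverse.
    ... | awk '{ lines[NR] = $0 } END { for (i = NR; i >= 1; i--) print lines[i] }'
    ```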

  • Can someone provide a short code example of compiler bootstrapping?

    - by Jatin
    This Turing Award lecture by Ken Thompson, "Reflections on Trusting Trust", gives good insight into how the C compiler was written in C itself. Though I understand the crux, it still hasn't sunk in. So ultimately, once the compiler is written to do lexical analysis, parse trees, syntax analysis, byte-code generation, etc., does a separate machine-code program have to be written all over again to do all of that for the compiler itself? Can anyone please explain the procedure with a small example? Bootstrapping on wiki gives good insights, but only a rough view of it. PS: I am aware of the duplicates on the site, but found them to be overviews of things I am already aware of.
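    For a concrete picture, here is a minimal sketch of the classic three-stage bootstrap, assuming a hypothetical compiler source file cc.c and some pre-existing compiler oldcc (for the very first compiler in history, "oldcc" was an assembler, or a hand-written compiler for a small subset of the language):

    ```sh
    # Stage 1: the pre-existing compiler builds the new compiler.
    oldcc -o cc1 cc.c

    # Stage 2: the new compiler, built by the old one, compiles its own source.
    ./cc1 -o cc2 cc.c

    # Stage 3: repeat with the stage-2 binary; if cc2 and cc3 are byte-identical,
    # the compiler reproduces itself and is now self-hosting -- no separate
    # machine-code version has to be maintained any more.
    ./cc2 -o cc3 cc.c
    cmp cc2 cc3
    ```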

  • How can I load a file into a DataBag from within a Yahoo PigLatin UDF?

    - by Cervo
    I have a Pig program where I am trying to compute the minimum center between two bags. In order for it to work, I found I need to COGROUP the bags into a single dataset. The entire operation takes a long time. I want to either open one of the bags from disk within the UDF, or be able to pass another relation into the UDF without needing to COGROUP. Code:

        -- **** Load files for iteration ****
        register myudfs.jar;
        wordcounts = LOAD 'input/wordcounts.txt' USING PigStorage('\t') AS (PatentNumber:chararray, word:chararray, frequency:double);
        centerassignments = LOAD 'input/centerassignments/part-*' USING PigStorage('\t') AS (PatentNumber:chararray, oldCenter:chararray, newCenter:chararray);
        kcenters = LOAD 'input/kcenters/part-*' USING PigStorage('\t') AS (CenterID:chararray, word:chararray, frequency:double);
        kcentersa1 = CROSS centerassignments, kcenters;
        kcentersa = FOREACH kcentersa1 GENERATE centerassignments::PatentNumber as PatentNumber, kcenters::CenterID as CenterID, kcenters::word as word, kcenters::frequency as frequency;

        -- **** Assign to nearest k-mean ****
        assignpre1 = COGROUP wordcounts by PatentNumber, kcentersa by PatentNumber;
        assignwork2 = FOREACH assignpre1 GENERATE group as PatentNumber, myudfs.kmeans(wordcounts, kcentersa) as CenterID;

    Basically my issue is that for each patent I need to pass the sub-relations (wordcounts, kcenters). In order to do this, I do a CROSS and then a COGROUP by PatentNumber, to get the set PatentNumber, {wordcounts}, {kcenters}. If I could figure out a way to pass a relation, or to open up the centers from within the UDF, then I could just GROUP wordcounts by PatentNumber and run myudfs.kmeans(wordcount), which is hopefully much faster without the CROSS/COGROUP. This is an expensive operation: currently it takes about 20 minutes and appears to tax the CPU/RAM, and I was thinking it might be more efficient without the CROSS. I'm not sure it will be faster, so I'd like to experiment. Anyway, it looks like calling the loading functions from within Pig needs a PigContext object, which I don't get from an EvalFunc. And to use the Hadoop file system I need some initial objects as well, which I don't see how to get. So my question is: how can I open a file from the Hadoop file system from within a Pig UDF? I also run the UDF via main for debugging, so I need to load from the normal filesystem when in debug mode. Another, better idea would be a way to pass a relation into a UDF without needing to CROSS/COGROUP. This would be ideal, particularly if the relation resides in memory, i.e. being able to do myudfs.kmeans(wordcounts, kcenters) without the CROSS/COGROUP with kcenters. But the basic idea is to trade IO for RAM/CPU cycles. Anyway, any help will be much appreciated; the Pig UDFs aren't super well documented beyond the most simple ones, even in the UDF manual.
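    One direction to experiment with, as a hedged sketch rather than the official Pig recipe: inside an EvalFunc you can usually fall back on the plain Hadoop FileSystem API and load the side file yourself, caching it between calls. The class name and path below are hypothetical:

    ```java
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.Tuple;

    // Hypothetical UDF that loads the k-centers file once and reuses it.
    public class KMeans extends EvalFunc<String> {
        private List<String> centers;  // cached between exec() calls

        private void loadCenters() throws IOException {
            // On the cluster this resolves to HDFS (when core-site.xml is on the
            // classpath); run from main() for debugging, it falls back to the
            // local filesystem.
            FileSystem fs = FileSystem.get(new Configuration());
            centers = new ArrayList<String>();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(fs.open(new Path("input/kcenters/part-00000"))));
            String line;
            while ((line = in.readLine()) != null) {
                centers.add(line);
            }
            in.close();
        }

        @Override
        public String exec(Tuple input) throws IOException {
            if (centers == null) {
                loadCenters();
            }
            // ... compute the nearest center for the wordcounts bag in `input` ...
            return null;
        }
    }
    ```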

  • Is it a good practice to implement aggregate roots in Entity Framework 4?

    - by Kohan
    Having just started working on a new project using Entity Framework 4, I spoke to some of the other team, who use NHibernate, for advice. They implement aggregate roots on their entities, so instead of adding an order through the Orders entity set, they add it through the customer (customer.Order), by having an AddOrder method on Customer. This is the approach I have taken, but I am, alas, running into problems. These are issues that I hope to work out, but it got me thinking... is this a good way to work, or am I fighting an uphill battle unnecessarily?

  • changing the htaccess file in php

    - by tibin mathew
    Hi, I want to change the maximum file upload size on my website. To do this I'm going to add a few lines to my .htaccess file. I have searched Google and found the lines of code to add to the .htaccess file, but I don't know exactly where to add them. Below are the lines currently in my .htaccess file:

        -FrontPage-
        IndexIgnore .htaccess /.?? *~ *# /HEADER */README* */_vti*
        order deny,allow
        deny from all
        allow from all
        order deny,allow
        deny from all
        AuthName www.mysite.com
        AuthUserFile /www/htdocs/domains/s11/01712/www.mysite.com/webdocs/_vti_pvt/service.pwd
        AuthGroupFile /www/htdocs/domains/s11/01712/www.mysite.com/webdocs/_vti_pvt/service.grp
        AddHandler php5-script .php

    Here is the code I want to add:

        php_value post_max_size 50M
        php_value upload_max_filesize 50M

    How and where do I add these lines to the .htaccess file? Please help me. Thanks
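    A hedged sketch of what the end of the file could look like. It assumes PHP is served by mod_php here (the existing AddHandler php5-script line suggests it is), so php_value directives are honoured in .htaccess; the exact position does not matter much, and appending after the existing directives is fine:

    ```apache
    # ... existing FrontPage / auth directives stay as they are ...
    AddHandler php5-script .php

    # Raise the upload limits. post_max_size must be at least as large as
    # upload_max_filesize, since the whole POST body includes the file.
    php_value post_max_size 50M
    php_value upload_max_filesize 50M
    ```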

  • IQueryable and lazy loading

    - by Nelson
    I'm having a hard time determining the best way to handle this... With Entity Framework (and L2S), LINQ queries return IQueryable. I have read various opinions on whether the DAL/BLL should return IQueryable, IEnumerable or IList. Assuming we go with IList, then the query is run immediately and that control is not passed on to the next layer. This makes it easier to unit test, etc. You lose the ability to refine the query at higher levels, but you could simply create another method that allows you to refine the query and still return IList. And there are many more pros/cons. So far so good. Now comes Entity Framework and lazy loading. I am using POCO objects with proxies in .NET 4/VS 2010. In the presentation layer I do:

        foreach (Order order in bll.GetOrders())
        {
            foreach (OrderLine orderLine in order.OrderLines)
            {
                // Do something
            }
        }

    In this case, GetOrders() returns IList so it executes immediately before returning to the PL. But in the next foreach, you have lazy loading, which executes multiple SQL queries as it gets all the OrderLines. So basically, the PL is running SQL queries "on demand" in the wrong layer. Is there any sensible way to avoid this? I could turn lazy loading off, but then what's the point of having this "feature" that everyone was complaining EF1 didn't have? And I'll admit it is very useful in many scenarios. So I see several options:

    1. Somehow remove all associations in the entities and add methods to return them. This goes against the default EF behavior/code generation and makes it harder to do some composite (multiple entity) LINQ queries. It seems like a step backwards. I vote no.
    2. If we have lazy loading anyway, which makes it hard to unit test, then go all the way and return IQueryable. You'll have more control farther up the layers. I still don't think this is a good option because IQueryable ties you to L2S, L2E, or your own full implementation of IQueryable. Lazy loading may run queries "on demand", but doesn't tie you to any specific interface. I vote no.
    3. Turn off lazy loading. You'll have to handle your associations manually. This could be with eager loading's .Include(). I vote yes in some specific cases.
    4. Keep IList and lazy loading. I vote yes in many cases, only due to the troubles with the others.

    Any other options or suggestions? I haven't found an option that really convinces me.
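    For option 3, a minimal sketch of what a repository method could look like with eager loading (the Orders/OrderLines names are taken from the snippet above, the context class name is hypothetical; Include with a string path is the EF4 API, and the result is still materialized into a list before leaving the layer):

    ```csharp
    public IList<Order> GetOrdersWithLines()
    {
        using (var context = new MyEntities())   // hypothetical ObjectContext
        {
            // One query loads the orders together with their order lines,
            // instead of one extra query per order at display time.
            return context.Orders
                          .Include("OrderLines")
                          .ToList();
        }
    }
    ```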

  • Interesting interview question. .Net

    - by rahul
    Coding Problem: NumTrans

    There is an integer K. You are allowed to add to K any of its divisors not equal to 1 and K. The same operation can be applied to the resulting number, and so on. Notice that starting from the number 4, we can reach any composite number by applying several such operations. For example, the number 24 can be reached starting from 4 using 5 operations: 4 -> 6 -> 8 -> 12 -> 18 -> 24. You will solve a more general problem: given integers n and m, return the minimal number of the described operations necessary to transform n into m. Return -1 if m can't be obtained from n.

    Definition
    Method signature: int GetLeastCount(int n, int m)

    Constraints
    N will be between 4 and 100000, inclusive.
    M will be between N and 100000, inclusive.

    Examples
    1) n = 4, m = 576. Returns: 14
       The shortest sequence of operations is: 4 -> 6 -> 8 -> 12 -> 18 -> 27 -> 36 -> 54 -> 81 -> 108 -> 162 -> 243 -> 324 -> 432 -> 576
    2) n = 8748, m = 83462. Returns: 10
       The shortest sequence of operations is: 8748 -> 13122 -> 19683 -> 26244 -> 39366 -> 59049 -> 78732 -> 83106 -> 83448 -> 83460 -> 83462
    3) n = 4, m = 99991. Returns: -1
       The number 99991 can't be obtained because it's prime!
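    A hedged sketch of one standard way to attack this in .NET (not necessarily the intended reference solution): treat each number as a node, each "add a proper divisor" move as an edge to a larger number, and run a breadth-first search from n; the first time m is reached, the depth is the minimal operation count.

    ```csharp
    using System;
    using System.Collections.Generic;

    static class NumTrans
    {
        // Minimal number of operations to turn n into m, or -1 if impossible.
        public static int GetLeastCount(int n, int m)
        {
            var dist = new int[m + 1];
            for (int i = 0; i <= m; i++) dist[i] = -1;

            var queue = new Queue<int>();
            dist[n] = 0;
            queue.Enqueue(n);

            while (queue.Count > 0)
            {
                int k = queue.Dequeue();
                if (k == m) return dist[k];

                // Try every divisor d of k with 1 < d < k.
                for (int d = 2; d * d <= k; d++)
                {
                    if (k % d != 0) continue;
                    foreach (int step in new[] { d, k / d })
                    {
                        int next = k + step;
                        if (next <= m && dist[next] == -1)
                        {
                            dist[next] = dist[k] + 1;
                            queue.Enqueue(next);
                        }
                    }
                }
            }
            return -1;
        }

        static void Main()
        {
            Console.WriteLine(GetLeastCount(4, 576));      // expected 14
            Console.WriteLine(GetLeastCount(8748, 83462)); // expected 10
            Console.WriteLine(GetLeastCount(4, 99991));    // expected -1
        }
    }
    ```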

  • How can I await the first completed async task of a list in .Net?

    - by Eyal
    My input is a long list of files located on an Amazon S3 server. I'd like to download the metadata of the files, compute the hashes of the local files, and compare the metadata hash with the local files' hash. Currently, I use a loop to start all the metadata downloads asynchronously, then as each completes, compute MD5 on the local file if needed and compare. Here's the code (just the relevant lines):

        Dim s3client As New AmazonS3Client(KeyId.Text, keySecret.Text)
        Dim responseTasks As New List(Of System.Tuple(Of ListViewItem, Task(Of GetObjectMetadataResponse)))

        For Each lvi As ListViewItem In lvStatus.Items
            Dim gomr As New Amazon.S3.Model.GetObjectMetadataRequest
            gomr.BucketName = S3FileDialog.GetBucketName(lvi.SubItems(2).Text)
            gomr.Key = S3FileDialog.GetPrefix(lvi.SubItems(2).Text)
            responseTasks.Add(New System.Tuple(Of ListViewItem, Task(Of GetObjectMetadataResponse))(lvi, s3client.GetObjectMetadataAsync(gomr)))
        Next

        For Each t As System.Tuple(Of ListViewItem, Task(Of GetObjectMetadataResponse)) In responseTasks
            Dim response As GetObjectMetadataResponse = Await t.Item2
            If response.ETag.Trim(""""c) = MD5CalcFile(lvi.SubItems(1).Text) Then
                lvi.SubItems(3).Text = "Match"
                UpdateLvi(lvi)
            End If
        Next

    I've got two problems:

    1. I'm awaiting the responses in the order that I made them. I'd rather process them in the order that they complete, so that I get them faster.
    2. The MD5 calculation is long and synchronous. I tried making it async but the process locked up. I think that the MD5 task was added to the end of .NET's task list and it didn't get to run until all the downloads completed.

    Ideally, I process the responses as they arrive, not in order, and the MD5 is asynchronous but gets a chance to run.

    Edit: Incorporating WhenAny, it looks like this now:

        Dim s3client As New Amazon.S3.AmazonS3Client(KeyId.Text, keySecret.Text)
        Dim responseTasks As New Dictionary(Of Task(Of GetObjectMetadataResponse), ListViewItem)

        For Each lvi As ListViewItem In lvStatus.Items
            Dim gomr As New Amazon.S3.Model.GetObjectMetadataRequest
            gomr.BucketName = S3FileDialog.GetBucketName(lvi.SubItems(2).Text)
            gomr.Key = S3FileDialog.GetPrefix(lvi.SubItems(2).Text)
            responseTasks.Add(s3client.GetObjectMetadataAsync(gomr), lvi)
        Next

        Dim startTime As DateTimeOffset = DateTimeOffset.Now
        Do While responseTasks.Count > 0
            Dim currentTask As Task(Of GetObjectMetadataResponse) = Await Task.WhenAny(responseTasks.Keys)
            Dim response As GetObjectMetadataResponse = Await currentTask
            If response.ETag.Trim(""""c) = MD5CalcFile(lvi.SubItems(1).Text) Then
                lvi.SubItems(3).Text = "Match"
                UpdateLvi(lvi)
            End If
        Loop
        MsgBox((DateTimeOffset.Now - startTime).ToString)

    The UI locks up momentarily whenever MD5CalcFile is done. The whole loop takes about 45s, and the first file's MD5 result happens within 1s of starting. If I change the line to:

        If response.ETag.Trim(""""c) = Await Task.Run(Function() MD5CalcFile(lvi.SubItems(1).Text)) Then

    then the UI doesn't lock up when MD5CalcFile is done. The whole loop takes about 75s, up from 45s, and the first file's MD5 result happens after 40s of waiting.
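    A hedged sketch of how the loop could drain the tasks as they finish; the pieces missing from the edit above are removing the completed task from the dictionary, looking the matching ListViewItem back up from it, and pushing the MD5 work onto the thread pool:

    ```vbnet
    Do While responseTasks.Count > 0
        ' Wait for whichever metadata request finishes first.
        Dim finished As Task(Of GetObjectMetadataResponse) = Await Task.WhenAny(responseTasks.Keys)
        Dim item As ListViewItem = responseTasks(finished)   ' recover the matching row
        responseTasks.Remove(finished)                       ' so it is not awaited again
        Dim response As GetObjectMetadataResponse = Await finished

        ' Hash on a thread-pool thread so the UI thread stays free.
        Dim localHash As String = Await Task.Run(Function() MD5CalcFile(item.SubItems(1).Text))
        If response.ETag.Trim(""""c) = localHash Then
            item.SubItems(3).Text = "Match"
            UpdateLvi(item)
        End If
    Loop
    ```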

  • NHibernate Pitfalls: Lazy Scalar Properties Must Be Auto

    - by Ricardo Peres
    This is part of a series of posts about NHibernate Pitfalls. See the entire collection here. NHibernate supports lazy properties not just for associations (many to one, one to one, one to many, many to many) but also for scalar properties. This allows, for example, only loading a potentially large BLOB or CLOB from the database if and when it is necessary, that is, when the property is actually accessed. In order for this to work, other than having to be declared virtual, the property can't have an explicitly declared backing field; it must be an auto property:

        public virtual String MyLongTextProperty
        {
            get;
            set;
        }

        public virtual Byte[] MyLongPictureProperty
        {
            get;
            set;
        }

    All lazy scalar properties are retrieved at the same time, when one of them is accessed.

  • Reverse rendering of Urdu fonts

    - by Syed Muhammad Umair
    I am working on a project based on the Urdu language on the Ubuntu platform. I'm using the Python language and have almost achieved my task. The problem is that the Urdu text is rendered in reverse order. For example, consider the word ??? (which means work), consisting of the three letters ?, ? and ?. The output is rendered in reverse order as ???, consisting of the three letters ?, ? and ?. When copying this text to Open Office, or opening the generated XML file in Firefox, the generated result is exactly as desired. I am using Python 2.6 with IDLE; it works perfectly on the Windows platform, which clearly shows it's not a problem with IDLE. I am working with the Tkinter GUI library. How can this problem be solved?
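    The usual workaround for widgets with no built-in bidi/shaping support is to shape and reorder the string before handing it to Tkinter; a hedged sketch, assuming the third-party arabic_reshaper and python-bidi packages (which cover Arabic-script languages such as Urdu):

    ```python
    # pip install arabic-reshaper python-bidi
    import arabic_reshaper
    from bidi.algorithm import get_display

    def prepare_rtl(text):
        """Shape the letters into their joined forms, then reorder them
        visually so a left-to-right widget (e.g. a Tkinter Label) shows
        the word correctly."""
        reshaped = arabic_reshaper.reshape(text)
        return get_display(reshaped)

    # label = Tkinter.Label(root, text=prepare_rtl(urdu_word))
    ```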

  • error trying to display semi transparent rectangle

    - by scott lafoy
    I am trying to draw a semi-transparent rectangle, and I keep getting an error when setting the texture's data ("The size of the data passed in is too large or too small for this resource."):

        dummyRectangle = new Rectangle(0, 0, 8, 8);
        Byte transparency_amount = 100; // 0 transparent; 255 opaque
        dummyTexture = new Texture2D(ScreenManager.GraphicsDevice, 8, 8);
        Color[] c = new Color[1];
        c[0] = Color.FromNonPremultiplied(255, 255, 255, transparency_amount);
        dummyTexture.SetData<Color>(0, dummyRectangle, c, 0, 1);

    The error is on the SetData line. Any help would be appreciated. Thank you.
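    A hedged sketch of the likely fix: the rectangle covers 8 x 8 = 64 texels, so SetData needs 64 Color elements rather than 1 (alternatively, create a 1 x 1 texture and stretch it when drawing):

    ```csharp
    // Fill the whole 8x8 rectangle with the same semi-transparent colour.
    Color[] c = new Color[8 * 8];
    Color semiTransparent = Color.FromNonPremultiplied(255, 255, 255, transparency_amount);
    for (int i = 0; i < c.Length; i++)
    {
        c[i] = semiTransparent;
    }
    dummyTexture.SetData<Color>(0, dummyRectangle, c, 0, c.Length);
    ```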

  • In Haskell, how can you sort a list of infinite lists of strings?

    - by HaskellNoob
    So basically, if I have a (finite or infinite) list of (finite or infinite) lists of strings, is it possible to sort the list by length first and then by lexicographic order, excluding duplicates? A sample input/output would be:

        Input:  [["a", "b", ...], ["a", "aa", "aaa"], ["b", "bb", "bbb", ...], ...]
        Output: ["a", "b", "aa", "bb", "aaa", "bbb", ...]

    I know that the input list is not a valid Haskell expression, but suppose that there is an input like that. I tried using a merge algorithm, but it tends to hang on the inputs that I give it. Can somebody explain and show a decent sorting function that can do this? If there isn't any function like that, can you explain why? In case somebody didn't understand what I meant by the sorting order: I meant that the shortest strings are sorted first, AND if two or more strings are of the same length then they are sorted using the < operator. Thanks!
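    A hedged sketch of the usual building block in Haskell: a lazy merge of two individually sorted lists. It only ever inspects the heads, so it works on infinite inputs; merging an infinite list of such lists additionally requires that the heads of successive inner lists are non-decreasing, otherwise no total sort can exist, because you could never be sure you had seen the smallest remaining element.

    ```haskell
    import Data.Ord (comparing)

    -- Length-then-lexicographic comparison.
    cmp :: String -> String -> Ordering
    cmp a b = comparing length a b `mappend` compare a b

    -- Merge two individually sorted (possibly infinite) lists, dropping duplicates.
    merge :: [String] -> [String] -> [String]
    merge [] ys = ys
    merge xs [] = xs
    merge (x:xs) (y:ys) =
      case cmp x y of
        LT -> x : merge xs (y:ys)
        GT -> y : merge (x:xs) ys
        EQ -> x : merge xs ys      -- equal elements: keep one copy
    ```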

  • Cannot mount Android phone and sync with Banshee

    - by Brett Alton
    I can't get my LG Optimus One to sync with Banshee. I read somewhere that the root needs to have an empty file called '.is_audio_player'. I did that and it still doesn't mount. I ran dmesg, however, and it appears that the card is unmounting before I even have a chance to run Banshee.

        [ 7250.321359] usb 1-1.4: new high speed USB device using ehci_hcd and address 10
        [ 7250.444795] scsi12 : usb-storage 1-1.4:1.0
        [ 7251.567946] scsi 12:0:0:0: Direct-Access Multiple Card Reader 1.00 PQ: 0 ANSI: 0
        [ 7251.568839] sd 12:0:0:0: Attached scsi generic sg3 type 0
        [ 7252.232433] sd 12:0:0:0: [sdc] 15564800 512-byte logical blocks: (7.96 GB/7.42 GiB)
        [ 7252.233299] sd 12:0:0:0: [sdc] Write Protect is off
        [ 7252.233306] sd 12:0:0:0: [sdc] Mode Sense: 03 00 00 00
        [ 7252.233309] sd 12:0:0:0: [sdc] Assuming drive cache: write through
        [ 7252.235658] sd 12:0:0:0: [sdc] Assuming drive cache: write through
        [ 7252.235666] sdc: sdc1
        [ 7252.239132] sd 12:0:0:0: [sdc] Assuming drive cache: write through
        [ 7252.239140] sd 12:0:0:0: [sdc] Attached SCSI removable disk
        [ 7272.573437] usb 1-1.4: USB disconnect, address 10

    Suggestions?

  • Disable MOD_PHP in vhosts and activate suphp

    - by mezgani
    I need to deactivate mod_php on one vhost, while leaving it working for the other vhosts; I need to disable it in order to activate suphp. Here is the vhost config:

        Options +Indexes
        ServerName www.native.org
        ServerAlias native.org
        DocumentRoot /home/user/www/native/current
        ServerAdmin [email protected]
        UseCanonicalName Off
        CustomLog /var/log/apache2/native_access.log combined
        ErrorLog /var/log/apache2/native_error.log

        <Directory /home/user/www/native/current>
            RemoveHandler .php
            AllowOverride All
            Options FollowSymLinks
            Order allow,deny
            allow from all
        </Directory>

        suPHP_Engine on
        SuexecUserGroup user native

        <IfModule mod_suphp.c>
            suPHP_UserGroup user native
            AddHandler x-httpd-php .php .php3 .php4 .php5
            suPHP_AddHandler x-httpd-php
        </IfModule>

    NB: mod_php is activated by default for all vhosts.
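    A hedged sketch of the per-vhost switch that is usually needed in this situation: assuming PHP is normally served by mod_php5, the directive below turns the mod_php engine off inside this vhost only, so the suphp handler takes over while the other vhosts keep using mod_php:

    ```apache
    <IfModule mod_php5.c>
        php_admin_flag engine off
    </IfModule>
    ```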

  • How can I structure and recode messy categorical data in R?

    - by briandk
    I'm struggling with how best to structure categorical data that's messy and comes from a dataset I'll need to clean.

    The coding scheme: I'm analyzing data from a university science course exam. We're looking at patterns in student responses, and we developed a coding scheme to represent the kinds of things students are doing in their answers. A subset of the coding scheme is shown below. Note that within each major code (1, 2, 3) are nested non-unique sub-codes (a, b, ...).

    What the raw data looks like: I've created an anonymized, raw subset of my actual data which you can view here. Part of my problem is that those who coded the data noticed that some students displayed multiple patterns. The coders' solution was to create enough columns (reason1, reason2, ...) to hold students with multiple patterns. That becomes important because the order (reason1, reason2) is arbitrary: two students (like student 41 and student 42 in my dataset) who correctly applied "dependency" should both register in an analysis, regardless of whether 3a appears in the reason column or the reason2 column.

    How can I best structure the student data? Part of my problem is that in the raw data, not all students display the same patterns, or the same number of them, in the same order. Some students may do just one thing, others may do several. So, an abstracted representation of example students might look like this. Note in the example above that student002 and student003 are both coded as "1b", although I've deliberately shown the order as different to reflect the reality of my data.

    My (practical) questions:
    1. Should I concatenate reason1, reason2, ... into one column?
    2. How can I (re)code the reasons in R to reflect the multiplicity for some students?

    Thanks. I realize this question is as much about good data conceptualization as it is about specific features of R, but I thought it would be appropriate to ask it here. If you feel it's inappropriate for me to ask the question, please let me know in the comments, and stackoverflow will automatically flood my inbox with sadface emoticons. If I haven't been specific enough, please let me know and I'll do my best to be clearer.
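    One common way to sidestep the arbitrary reason1/reason2 ordering, as a hedged sketch in base R (df, the student column and the reason columns are hypothetical names standing in for the real dataset):

    ```r
    # Reshape the wide reason1, reason2, ... columns into one long
    # student/reason table, so "3a in reason1" and "3a in reason2" become identical.
    long <- reshape(df,
                    direction = "long",
                    varying   = c("reason1", "reason2"),
                    v.names   = "reason",
                    idvar     = "student",
                    timevar   = "slot")

    long <- long[!is.na(long$reason) & long$reason != "", ]   # drop empty slots

    # Now "did this student ever apply 3a?" is a simple cross-tabulation:
    table(long$student, long$reason)
    ```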

  • Sub-total and total columns

    - by Tass-man
    In Visual FoxPro 9 I am trying to write an SQL statement with a product "subtotal" column and a report "total" column. The SQL code that works is shown below, but when I insert the commented-out CASE code I get errors that seem to increase as I correct the preceding error. Can anyone tell me where I should insert the CASE expressions and what is wrong with the code?

        SELECT qItemSaleLines.ItemID, ;
            qItems.ItemID, ;
            qItemSaleLines.SaleID, ;
            qSales.SaleID, ;
            qSales.CardRecordID, ;
            qCustomers.CardRecordID, ;
            qItems.ItemNumber AS ProdCODE, ;
            qItems.ItemName AS StkNAME, ;
            qCustomers.LastName AS CUSTOMER, ;
            qSales.InvoiceNumber AS SaleINVNo, ;
            qSales.InvoiceDate AS SaleDATE, ;
            qItemSaleLines.Quantity AS SaleQTY, ;
            qItemSaleLines.TaxExclusiveTotal AS SALE, ;
            qItemSaleLines.CostOfGoodsSoldAmount AS COGS, ;
            qItemSaleLines.TaxExclusiveTotal - qItemSaleLines.CostOfGoodsSoldAmount AS MARGIN, ;
            (qItemSaleLines.TaxExclusiveTotal - qItemSaleLines.CostOfGoodsSoldAmount) * / qItemSaleLines.TaxExclusiveTotal AS MPERCENT ;
            FROM qItemSaleLines, qItems, qSales, qCustomers ;
            WHERE qSales.CardRecordID = qCustomers.CardRecordID AND qItemSaleLines.SaleID = qSales.SaleID AND ;
            qItemSaleLines.ItemID = qItems.ItemID AND qSales.InvoiceDate {^2009-06-30} ;
            ORDER BY qItems.ItemNumber, qSales.InvoiceDate ;

        *!* (SELECT qItems.ItemID, qItemSaleLines.ItemID, qItemSaleLines.TaxExclusiveTotal, ;
        *!* CASE WHEN qItems.ItemID = (SELECT TOP 1 qItems.ItemID FROM qItems.ItemID, ;
        *!* WHERE qItems.ItemID = qItemSaleLines.ItemID, ;
        *!* ORDER BY qItems.ItemID desc), ;
        *!* THEN (SELECT SUM(qItemSaleLines.TaxExclusiveTotal) FROM qItemSaleLines.TaxExclusiveTotal, ;
        *!* WHERE qItems.ItemID <= qItemSaleLines.ItemID AND qItems.ItemID = qItemSaleLines.ItemID, ;
        *!* ELSE ' ' END AS 'PROD-SALE'), ;
        *!* CASE WHEN qItems.ItemID = (SELECT TOP 1 qItems.ItemID FROM qItems.ItemID, ;
        *!* ORDER BY qItems.ItemID desc), ;
        *!* THEN (SELECT SUM(qItemSaleLines.TaxExclusiveTotal) FROM qItemSaleLines.TaxExclusiveTotal, ;
        *!* ELSE ' ' END AS 'Grand Total') ;

  • Is there any way to create a dynamic list of strings (based on language) in XAML?

    - by Xin
    Just wondering if it is possible to dynamically create a list of strings in XAML based on language/culture? Say, if a user logs in as an English user it shows Client Name, Order Number, ... and if a user logs in as a Polish user it shows Nazwa klienta, Numer zamówienia instead? I only know the hardcoded approach, like below:

        <System_Collections_Generic:List`1 x:Key="columnNameList">
            <System:String>Client Name</System:String>
            <System:String>Order Number</System:String>
            <System:String>Date</System:String>
        </System_Collections_Generic:List`1>

  • C# Thread Queue Synchronize

    - by ikurtz
    Greetings, I am trying to play some audio files without holding up the GUI. Below is a sample of the code:

        if (audio)
        {
            if (ThreadPool.QueueUserWorkItem(new WaitCallback(CoordinateProc), fireResult))
            {
            }
            else
            {
                MessageBox.Show("false");
            }
        }

        if (audio)
        {
            if (ThreadPool.QueueUserWorkItem(new WaitCallback(FireProc), fireResult))
            {
            }
            else
            {
                MessageBox.Show("false");
            }
        }

        if (audio)
        {
            if (ThreadPool.QueueUserWorkItem(new WaitCallback(HitProc), fireResult))
            {
            }
            else
            {
                MessageBox.Show("false");
            }
        }

    The problem is that the samples are not being played in order; some play before the others, and I need to fix this so the samples are played one after another, in order. How do I implement this, please? Thank you.
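    A hedged sketch of one simple fix: queue a single work item and call the three routines from it, so the GUI thread stays free but the samples still start strictly one after another (this assumes each Proc blocks until its sample has finished playing):

    ```csharp
    if (audio)
    {
        ThreadPool.QueueUserWorkItem(state =>
        {
            // Running all three on one background thread guarantees the order.
            CoordinateProc(state);
            FireProc(state);
            HitProc(state);
        }, fireResult);
    }
    ```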

  • New to programming

    - by Shaun
    I have a form (Quote) with an auto-number ID. On the form at the moment are two subforms that show different items (sub 1 shows partition modules, sub 2 shows partition abutments); both subforms use the same parts tables to build them, and both are linked to the Quote form using the ID. All works well until the form is refreshed or re-loaded: subform 1 shows the module names and quantities, plus blank spaces where the abutment names should be but with the abutment quantities shown; the reverse of this is shown in the abutments subform 2. When the lists for the various types and the detailed parts lists are printed they are correct, so this seems to be only a visual problem. All based on Access 2003.

    Subform 1:

        SELECT Quote_Modules.ModuleID, Quote_Modules.QuoteID, Quote_Modules.ModuleDescription,
               Quote_Modules.ModuleQty, Quote.Style, Quote.Trim
        FROM Quote INNER JOIN Quote_Modules ON Quote.QuoteID = Quote_Modules.QuoteID
        ORDER BY Quote_Modules.ModuleID;

    Subform 2:

        SELECT Quote_Modules.ModuleID, Quote_Modules.QuoteID, Quote_Modules.ModuleDescription,
               Quote_Modules.ModuleQty, Quote.Style, Quote.Trim
        FROM Quote INNER JOIN Quote_Modules ON Quote.QuoteID = Quote_Modules.QuoteID
        ORDER BY Quote_Modules.ModuleID;

  • How do I get a linq to sql group by query into the asp.net mvc view?

    - by Brad Wetli
    Sorry for the newbie question, but I have the following query that groups parking spaces by their garage, and I can't figure out how to iterate the data in the view. I guess I should strongly type the view, but I am a newbie and having lots of problems figuring this out. Any help would be appreciated.

        Public Function FindAllSpaces() Implements ISpaceRepository.FindAllSpaces
            Dim query = _
                From s In db.spaces _
                Order By s.name Ascending _
                Group By s.garageid Into spaces = Group _
                Order By garageid Ascending
            Return query
        End Function

    The controller is taking the query object as-is and putting it into ViewData.Model, and as stated the view is not currently strongly typed, as I haven't been able to figure out how to do this. I have run the query successfully in LINQPad.

  • adding DATE_SUB to query to return range of values in mysql

    - by ian
    Here is my original query:

        $query = mysql_query("SELECT s.*, UNIX_TIMESTAMP(`date`) AS `date`, f.userid as favoritehash
                              FROM songs s
                              LEFT JOIN favorites f ON f.favorite = s.id AND f.userid = '$userhash'
                              ORDER BY s.date DESC");

    This returns all the songs in my DB and then joins data from my favorites table, so I can display which items a returning visitor has clicked as favorites or not. Visitors are recognized by a unique hash stored in a cookie and in the favorites table. I need to alter this query so that I get just the last month's worth of songs. Below is my attempt at adding DATE_SUB to my query:

        $query = mysql_query("SELECT s.*, UNIX_TIMESTAMP(`date`) AS `date`, f.userid as favoritehash
                              FROM songs s
                              WHERE `date` >= DATE_SUB(NOW(), INTERVAL 1 MONTH)
                              LEFT JOIN favorites f ON f.favorite = s.id AND f.userid = '$userhash'
                              ORDER BY s.date DESC");

    Suggestions?
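    In MySQL the WHERE clause has to come after all the JOIN clauses, so a hedged sketch of the corrected query (same tables and columns as above) would be:

    ```sql
    SELECT s.*, UNIX_TIMESTAMP(`date`) AS `date`, f.userid AS favoritehash
    FROM songs s
    LEFT JOIN favorites f
           ON f.favorite = s.id AND f.userid = '$userhash'
    WHERE s.date >= DATE_SUB(NOW(), INTERVAL 1 MONTH)
    ORDER BY s.date DESC;
    ```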

  • How to use constraint programming for optimizing shopping baskets?

    - by tangens
    I have a list of items I want to buy. The items are offered by different shops at different prices, and the shops have individual delivery costs. I'm looking for an optimal shopping strategy (and a Java library supporting it) to purchase all of the items with a minimal total price.

    Example:
    Item1 is offered at Shop1 for $100, at Shop2 for $111.
    Item2 is offered at Shop1 for $90, at Shop2 for $85.
    Delivery cost of Shop1: $10 if total order < $150; $0 otherwise.
    Delivery cost of Shop2: $5 if total order < $50; $0 otherwise.

    If I buy Item1 and Item2 at Shop1 the total cost is $100 + $90 + $0 = $190.
    If I buy Item1 and Item2 at Shop2 the total cost is $111 + $85 + $0 = $196.
    If I buy Item1 at Shop1 and Item2 at Shop2 the total cost is $100 + $10 + $85 + $0 = $195.
    I get the minimal price if I order Item1 and Item2 at Shop1: $190.

    What I tried so far: I asked another question before that led me to the field of constraint programming. I had a look at cream and choco, but I did not figure out how to create a model to solve my problem.

                 | shop1 | shop2 | shop3 | ...
        -----------------------------------------
        item1    |  p11  |  p12  |  p13  |
        item2    |  p21  |  p22  |  p23  |
        .        |       |       |       |
        .        |       |       |       |
        -----------------------------------------
        shipping |  s1   |  s2   |  s3   |
        limit    |  l1   |  l2   |  l3   |
        -----------------------------------------
        total    |  t1   |  t2   |  t3   |
        -----------------------------------------

    My idea was to define these constraints:
    - each price p_xy is defined in the domain (0, c), where c is the price of the item in this shop
    - only one price in a line should be non-zero
    - if one or more items are bought from one shop and the sum of the prices is lower than the limit, then add the shipping cost to the total cost
    - shop total cost is the sum of the prices of all items in a shop
    - total cost is the sum of all shop totals

    The objective is "total cost", and I want to minimize this. In cream I wasn't able to express the "if then" constraint for conditional shipping costs. In choco these constraints exist, but even for 5 items and 10 shops the program was running for 10 minutes without finding a solution.

    Question: How should I express my constraints to make this problem solvable for a constraint programming solver?

  • ordering an acts_as_tree relationship

    - by timpone
    I have a Category class that is defined like this:

        class Category < ActiveRecord::Base
          acts_as_tree :parent_id

    I'd like the ordering to be by the position value, which is a float, such that:

        category-1
        category-2, parent_id=1, position=0.5
        category-3, parent_id=2
        category-4, parent_id=1, position=1

    How would I specify this? I tried

        acts_as_tree :parent_id :order => :position
        acts_as_tree :parent_id, :order => :position

    but these are not working. Any ideas how to specify this relationship? Or am I missing something else? Thanks in advance.
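    For comparison, a hedged sketch of the usual acts_as_tree call: the plugin takes an options hash, so the foreign key and the ordering are both passed as named options rather than a bare :parent_id argument (column names taken from the question):

    ```ruby
    class Category < ActiveRecord::Base
      # Children of each node come back ordered by the float position column.
      acts_as_tree :foreign_key => "parent_id", :order => "position"
    end
    ```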

  • Cucumber testing with rails on mongoid-gridfs

    - by Deepak Lamichhane
    I am getting this weird error while running a Cucumber test:

        ERROR Mongo::OperationFailure: Database command 'filemd5' failed:
        {"errmsg"=>"exception: best guess plan requested, but scan and order required:
         query: { files_id: ObjectId('4d1abab3a15c84139c00006e') }
         order: { files_id: 1, n: 1 } choices: { $natural: 1 }", "code"=>13284, "ok"=>0.0}

    I have a list of similar scenarios; the first scenario passes, but all the following scenarios fail. I searched for it and found that there is a problem with indexing, but I am not sure what query to write. Furthermore, I can add the index on the Mongo instance used for development, but I want to make sure that the indexing is done in test too. If anyone has any idea on this, feel free.
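    The filemd5 command walks the GridFS chunks in (files_id, n) order, so it needs the standard GridFS index on the chunks collection; a hedged sketch of creating it by hand for the test database (fs.chunks is the default GridFS collection name):

    ```javascript
    // In the mongo shell, against the test database:
    db.fs.chunks.ensureIndex({ files_id: 1, n: 1 }, { unique: true });
    ```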
