Search Results

Search found 17924 results on 717 pages for 'order by'.


  • Objective-C Getter Memory Management

    - by Marian André
    I'm fairly new to Objective-C and am not sure how to correctly deal with memory management in the following scenario: I have a Core Data Entity with a to-many relationship for the key "children". In order to access the children as an array, sorted by the column "position", I wrote the model class this way: @interface AbstractItem : NSManagedObject { NSArray * arrangedChildren; } @property (nonatomic, retain) NSSet * children; @property (nonatomic, retain) NSNumber * position; @property (nonatomic, retain) NSArray * arrangedChildren; @end @implementation AbstractItem @dynamic children; @dynamic position; @synthesize arrangedChildren; - (NSArray*)arrangedChildren { NSArray* unarrangedChildren = [[self.children allObjects] retain]; NSSortDescriptor* sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"position" ascending:YES]; [arrangedChildren release]; arrangedChildren = [unarrangedChildren sortedArrayUsingDescriptors:[NSArray arrayWithObject:sortDescriptor]]; [sortDescriptor release]; [unarrangedChildren release]; return [arrangedChildren retain]; } @end I'm not sure whether or not to retain unarrangedChildren and the returned arrangedChildren (first and last line of the arrangedChildren getter). Does the NSSet allObjects method already return a retained array? It's probably too late and I have a coffee overdose. I'd be really thankful if someone could point me in the right direction. I guess I'm missing vital parts of memory management knowledge and I will definitely look into it thoroughly.

    Read the article

  • Best practices for JQuery namespaces + general purpose utility functions

    - by Armchair Bronco
    What are some current "rules of thumb" for implementing JQuery namespaces to host general purpose utility functions? I have a number of JavaScript utility methods scattered in various files that I'd like to consolidate into one (or more) namespaces. What's the best way to do this? I'm currently looking at two different syntaxes, listed in order of preference: //****************************** // JQuery Namespace syntax #1 //****************************** if (typeof(MyNamespace) === "undefined") { MyNamespace = {}; } MyNamespace.SayHello = function () { alert("Hello from MyNamespace!"); } MyNamespace.AddEmUp = function (a, b) { return a + b; } //****************************** // JQuery Namespace syntax #2 //****************************** if (typeof (MyNamespace2) === "undefined") { MyNamespace2 = { SayHello: function () { alert("Hello from MyNamespace2!"); }, AddEmUp: function (a, b) { return a + b; } }; } Syntax #1 is more verbose but it seems like it would be easier to maintain down the road. I don't need to add commas between methods, and I can left align all my functions. Are there other, better ways to do this?

    Read the article

  • Aspect-Oriented Programming in the OOP world - breaking the rules?

    - by Maksim Kondratyuk
    Hi all! When I worked on an ASP.NET MVC web site project, I investigated different approaches to validation, among them DataAnnotations and the Validation Application Block. They use attributes to set up validation rules, like this: [Required] public string Name {get;set;} I was confused about how this approach squares with the SRP (single responsibility principle) from the OOP world. I also don't like any business logic in business objects; I prefer a "poor business objects" model, but when I decorate my business objects with validation attributes for real requirements, they become ugly (lots of attributes, localization logic and so on). The idea with attributes is really simple, but in my opinion the validation decoration should be separated from the object. I'm not sure whether separating the validation rules into XML files or into other objects is the solution. Another bad side of AOP is the problem of unit testing such code: when I decorated some controller actions with custom attributes, for example to import/export TempData between actions or to initialize some required services, I couldn't write proper unit tests for those actions. Do you think that attributes don't break SRP, or do you just disregard this and think it's the simplest, if not the best, way? P.S. I read some similar articles and discussions and I just want to put things in proper order. P.P.S. Sorry for my "fluent" English. :)
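
    One way to keep the entity itself free of validation concerns is sketched below in C#: the rules live in a separate validator object that can be unit tested on its own. Validator<T>, Require and CustomerRules are illustrative names invented for this sketch; they are not part of DataAnnotations, the Validation Application Block, or any other library mentioned above.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// The business object stays plain: no validation attributes, no localization logic.
public class Customer
{
    public string Name { get; set; }
    public string Email { get; set; }
}

// A tiny validator type; the rules live outside the entity and can be tested alone.
public class Validator<T>
{
    private readonly List<KeyValuePair<Func<T, bool>, string>> _rules =
        new List<KeyValuePair<Func<T, bool>, string>>();

    public Validator<T> Require(Func<T, bool> rule, string message)
    {
        _rules.Add(new KeyValuePair<Func<T, bool>, string>(rule, message));
        return this;
    }

    // Returns the messages for every rule the instance fails.
    public IList<string> Validate(T instance)
    {
        return _rules.Where(r => !r.Key(instance)).Select(r => r.Value).ToList();
    }
}

public static class CustomerRules
{
    // All Customer rules in one place.
    public static readonly Validator<Customer> Default = new Validator<Customer>()
        .Require(c => !string.IsNullOrEmpty(c.Name), "Name is required.")
        .Require(c => c.Email != null && c.Email.Contains("@"), "Email looks invalid.");
}

public static class Program
{
    public static void Main()
    {
        foreach (var error in CustomerRules.Default.Validate(new Customer { Name = "" }))
            Console.WriteLine(error);
    }
}
```

    A controller could then call CustomerRules.Default.Validate(model) and copy any messages into ModelState, leaving the model class attribute-free.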

    Read the article

  • Using Partitions for a large MySQL table

    - by user293594
    An update on my attempts to implement a 505,000,000-row table on MySQL on my MacBook Pro: Following the advice given, I have partitioned my table, tr: i UNSIGNED INT NOT NULL, j UNSIGNED INT NOT NULL, A FLOAT(12,8) NOT NULL, nu BIGINT NOT NULL, KEY (nu), key (A) with a range on nu. nu ought to be a real number, but because I only have 6-d.p. accuracy and the maximum value of nu is 30000, I multiplied it by 10^8 and made it a BIGINT - I gather one can't use FLOAT or DOUBLE values to PARTITION a MySQL table. Anyway, I have 15 partitions (p0: nu<25,000,000,000, p1: nu<50,000,000,000, etc.). I was thinking that this should speed up a typical SELECT: SELECT * FROM tr WHERE nu>95000000000 AND nu<100000000000 AND A>1 to something of the order of the same query on a table consisting of only the data in the relevant partition (<30 secs). But it's taking 30mins+ to return rows for queries within a partition and double that if the query is for rows spanning two (contiguous) partitions. I realise I could just have 15 different tables, and query them separately, but is there a way to do this 'automatically' with partitions? Has anyone got any suggestions?

    Read the article

  • Licensing iPhone apps per user in existing system

    - by Alxandr
    I've been asked by my job to write an iPhone app for an existing system for managing work tasks. This system is proprietary and costs money, so in order to log in you need to be a customer. Now, I've got two questions about the legality of licensing iPhone apps with this system: My company would like to be able to sell the app for profit, not as a one-time payment but as an added subscription fee on top of the already existing one. Is it legal for us (according to the terms of distributing an iPhone app on the Apple App Store) to do this? That way we'd just add another field to the users database saying whether or not iPhone access is enabled for them, and distribute the app as a free app on the App Store. If the previous option is not legal, we'd like to just create a free app and distribute it as part of the existing system. In other words, no extra fee for the users for using the iPhone app, but still free distribution through the App Store. Since our company is not American and has no office in the U.S. at all, an enterprise account is not an option. Please let me know if there is anything wrong with either of the above approaches.

    Read the article

  • Object Oriented Programming in AS3

    - by Jordan
    I'm building a game in AS3 that has balls moving and bouncing off the walls. When the user clicks, an explosion appears and any ball that hits that explosion explodes too. Any ball that then hits that explosion explodes and so on. My question is what the best class structure for the balls would be. I have a level system to control levels and such, and I've already come up with working ways to code the balls. Here's what I've done. My first attempt was to create a class for Movement, Bounce, Explosion and finally Orb. These all extended each other in the order I just named them. I got it working, but with Bounce extending Movement and Explosion extending Bounce, it just doesn't seem very object oriented, because what if I wanted to add a box class that didn't move, but did explode? I would need a separate class for that explosion. My second attempt was to create Movement, Bounce and Explosion without extending anything. Instead I passed a reference to the Orb class to each. Each class stores that reference and does what it needs to do based on events dispatched by the Orb, such as update, which is broadcast from Orb every enter frame. This would drive the movement and bounce and also the explosion when the time came. This attempt worked as well, but it just doesn't seem right. I've also thought about using interfaces, but because they are more of an outline for classes, I feel like code reuse goes out the window, as each class would need its own code for a specific task even if that task is exactly the same. I feel as if I'm searching for some form of multiple inheritance for classes that AS3 does not support. Can someone explain a better way of doing what I'm attempting to do? Am I being too "Object Oriented" by having classes for Movement, Bounce, Explosion and Orb? Are interfaces the way to go? Any feedback is appreciated!
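
    The usual alternative to the inheritance chain described above is composition: keep Movement, Bounce and Explosion as independent behaviours and let each game object own whichever ones it needs. The question is about AS3, but the idea is language-agnostic; here is a small illustrative sketch in C#, with all class and member names (and the stage bounds) invented for the example.

```csharp
using System;
using System.Collections.Generic;

// Each behaviour is its own component rather than a link in an inheritance chain.
public interface IBehavior
{
    void Update(GameObject owner, double dt);
}

public class Movement : IBehavior
{
    public double VX, VY;
    public void Update(GameObject owner, double dt) { owner.X += VX * dt; owner.Y += VY * dt; }
}

public class Bounce : IBehavior
{
    public void Update(GameObject owner, double dt)
    {
        var move = owner.Get<Movement>();
        if (move == null) return;
        // Reverse velocity at the stage edges (the 800x600 bounds are illustrative).
        if (owner.X < 0 || owner.X > 800) move.VX = -move.VX;
        if (owner.Y < 0 || owner.Y > 600) move.VY = -move.VY;
    }
}

// An Orb gets Movement + Bounce (+ Explosion); a static exploding Box would get only Explosion.
public class GameObject
{
    public double X, Y;
    private readonly List<IBehavior> _behaviors = new List<IBehavior>();

    public GameObject Add(IBehavior b) { _behaviors.Add(b); return this; }

    public T Get<T>() where T : class, IBehavior
    {
        foreach (var b in _behaviors) { var t = b as T; if (t != null) return t; }
        return null;
    }

    public void Update(double dt) { foreach (var b in _behaviors) b.Update(this, dt); }
}

public static class Demo
{
    public static void Main()
    {
        var orb = new GameObject().Add(new Movement { VX = 50, VY = 30 }).Add(new Bounce());
        orb.Update(0.016); // one frame at ~60 fps
        Console.WriteLine(orb.X + ", " + orb.Y);
    }
}
```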

    Read the article

  • When should I be cautious using data binding in .NET?

    - by Ben McCormack
    I just started working on a small team of .NET programmers about a month ago and recently got in a discussion with our team lead regarding why we don't use databinding at all in our code. Every time we work with a data grid, we iterate through a data table and populate the grid row by row; the code usually looks something like this: Dim dt as DataTable = FuncLib.GetData("spGetTheData ...") Dim i As Integer For i = 0 To dt.Rows.Length - 1 '(not sure why we do not use a for each here)' gridRow = grid.Rows.Add() gridRow(constantProductID).Value = dt("ProductID").Value gridRow(constantProductDesc).Value = dt("ProductDescription").Value Next '(I am probably missing something in the code, but that is basically it)' Our team lead was saying that he got burned using data binding when working with Sheridan Grid controls, VB6, and ADO recordsets back in the nineties. He's not sure what the exact problem was, but he remembers that binding didn't work as expected and caused him some major problems. Since then, they haven't trusted data binding and load the data for all their controls by hand. The reason the conversation even came up was because I found data binding to be very simple and really liked separating the data presentation (in this case, the data grid) from the in-memory data source (in this case, the data table). "Loading" the data row by row into the grid seemed to break this distinction. I also observed that with the advent of XAML in WPF and Silverlight, data-binding seems like a must-have in order to be able to cleanly wire up a designer's XAML code with your data. When should I be cautious of using data-binding in .NET?
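
    For comparison with the row-by-row loop above, here is a minimal WinForms data-binding sketch in C#. The table column names are taken from the snippet; the grid column names are placeholders, and this says nothing about how the third-party grid control the team lead remembers behaved.

```csharp
using System.Data;
using System.Windows.Forms;

public static class GridBinding
{
    // Simplest case: let the grid generate its columns from the DataTable.
    public static void Bind(DataGridView grid, DataTable dt)
    {
        grid.AutoGenerateColumns = true;
        grid.DataSource = dt;
    }

    // With predefined columns, map each grid column to a table column by name.
    // "colProductID" / "colProductDesc" are placeholder grid column names.
    public static void BindMapped(DataGridView grid, DataTable dt)
    {
        grid.AutoGenerateColumns = false;
        grid.Columns["colProductID"].DataPropertyName = "ProductID";
        grid.Columns["colProductDesc"].DataPropertyName = "ProductDescription";
        grid.DataSource = dt;
    }
}
```

    The separation the question describes falls out naturally: the DataTable remains the in-memory source and the grid is only a view of it.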

    Read the article

  • ActiveRecord exceptions not rescued

    - by zoopzoop
    I have the following code block: unless User.exist?(...) begin user = User.new(...) # Set more attributes of user user.save! rescue ActiveRecord::RecordInvalid, ActiveRecord::RecordNotUnique => e # Check if that user was created in the meantime user = User.exists?(...) raise e if user.nil? end end The reason is, as you can probably guess, that multiple processes might call this method at the same time to create the user (if it doesn't already exist), so while the first one enters the block and starts initializing a new user, setting the attributes and finally calling save!, the user might already be created. In that case I want to check again if the user exists and only raise the exception if it still doesn't (= if no other process has created it in the meantime). The problem is that ActiveRecord::RecordInvalid exceptions are regularly raised from the save! and not rescued by the rescue block. Any ideas? EDIT: Alright, this is weird. I must be missing something. I refactored the code according to Simone's tip to look like this: unless User.find_by_email(...).present? # Here we know the user does not exist yet user = User.new(...) # Set more attributes of user unless user.save # User could not be saved for some reason, maybe created by another request? raise StandardError, "Could not create user for order #{self.id}." unless User.exists?(:email => ...) end end Now I got the following exception: ActiveRecord::RecordNotUnique: Mysql::DupEntry: Duplicate entry '[email protected]' for key 'index_users_on_email': INSERT INTO `users` ... thrown on the line that says 'unless user.save'. How can that be? Rails thinks the user can be created because the email is unique, but then the MySQL unique index prevents the insert? How likely is that? And how can it be avoided?

    Read the article

  • How to name multiple versioned ServiceContracts in the same WCF service?

    - by Tor Hovland
    When you have to introduce a breaking change in a ServiceContract, a best practice is to keep the old one and create a new one, and use some version identifier in the namespace. If I understand this correctly, I should be able to do the following: [ServiceContract(Namespace = "http://foo.com/2010/01/14")] public interface IVersionedService { [OperationContract] string WriteGreeting(Person person); } [ServiceContract(Name = "IVersionedService", Namespace = "http://foo.com/2010/02/21")] public interface IVersionedService2 { [OperationContract(Name = "WriteGreeting")] Greeting WriteGreeting2(Person2 person); } With this I can create a service that supports both versions. This actually works, and it looks fine when testing from soapUI. However, when I create a client in Visual Studio using "Add Service Reference", VS disregards the namespaces and simply sees two interfaces with the same name. In order to differentiate them, VS adds "1" to the name of one of them. I end up with proxies called ServiceReference.VersionedServiceClient and ServiceReference.VersionedService1Client Now it's not easy for anybody to see which is the newer version. Should I give the interfaces different names? E.g IVersionedService1 IVersionedService2 or IVersionedService/2010/01/14 IVersionedService/2010/02/21 Doesn't this defeat the purpose of the namespace? Should I put them in different service classes and get a unique URL for each version?
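
    One arrangement that keeps the versions distinguishable on the client is to implement both contracts on a single service class and expose each on its own versioned endpoint address, so the generated proxies differ by address even if their names collide. A sketch, reusing the contract declarations from the question and inventing minimal Person/Person2/Greeting data contracts and a host address:

```csharp
using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract] public class Person   { [DataMember] public string Name { get; set; } }
[DataContract] public class Person2  { [DataMember] public string Name { get; set; } }
[DataContract] public class Greeting { [DataMember] public string Text { get; set; } }

[ServiceContract(Namespace = "http://foo.com/2010/01/14")]
public interface IVersionedService
{
    [OperationContract] string WriteGreeting(Person person);
}

[ServiceContract(Name = "IVersionedService", Namespace = "http://foo.com/2010/02/21")]
public interface IVersionedService2
{
    [OperationContract(Name = "WriteGreeting")] Greeting WriteGreeting2(Person2 person);
}

// One service class implements both versions of the contract.
public class VersionedService : IVersionedService, IVersionedService2
{
    public string WriteGreeting(Person person)
    {
        return "Hello " + person.Name;
    }

    public Greeting WriteGreeting2(Person2 person)
    {
        return new Greeting { Text = "Hello " + person.Name };
    }
}

public static class HostDemo
{
    public static void Main()
    {
        ServiceHost host = new ServiceHost(typeof(VersionedService),
                                           new Uri("http://localhost:8080/greeting"));

        // The version lives in the endpoint address, so the generated proxies can be
        // told apart even though the contract Name is the same in both namespaces.
        host.AddServiceEndpoint(typeof(IVersionedService),  new BasicHttpBinding(), "v2010-01-14");
        host.AddServiceEndpoint(typeof(IVersionedService2), new BasicHttpBinding(), "v2010-02-21");

        host.Open();
        Console.WriteLine("Listening; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```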

    Read the article

  • Counting computers for each lab

    - by Irvin
    Alright, I have a problem where I have to count PCs and Macs from different labs. For each lab I need to display how many PCs and Macs are available. The data comes from a SQL Server database; right now I am trying subqueries and a UNION, and the query below is the closest I can get to what I need. It shows the number of PCs and Macs in two different columns, but of course the PCs end up in one row and the Macs in another right below it, so the lab comes up twice. EX: LabName -- PC / MAC Lab1 -- 5 / 0 Lab1 -- 0 / 2 Query SELECT Labs.LabName, COUNT(*),0 AS Mac FROM HardWare INNER JOIN Labs ON HardWare.LabID = Labs.LabID WHERE ComputerStatus = 'AVAILABLE' GROUP BY Labs.LabName UNION SELECT Labs.LabName, COUNT(*), (SELECT COUNT(Manufacturer)) AS Mac FROM HardWare INNER JOIN Labs ON HardWare.LabID = Labs.LabID WHERE ComputerStatus = 'AVAILABLE' AND Manufacturer = 'Apple' GROUP BY Labs.LabName ORDER BY Labs.LabName So is there any way to get them together in one row, as in Lab1 -- 5 / 2, or is there a different way to write the query? Anything would be a big help; I'm pretty much stuck here. Cheers

    Read the article

  • RSA_sign and RSACryptoProvider.VerifySignature

    - by Miky D
    I'm trying to get up to speed on how to get some code that uses OpenSSL for cryptography to play nice with another program that I'm writing in C#, using the Microsoft cryptography providers available in .NET. More to the point, I'm trying to have the C# program verify an RSA message signature generated by the OpenSSL code. The code that generates the signature looks something like this: // Code in C, using the OpenSSL RSA implementation char msgToSign[] = "Hello World"; // the message to be signed char signature[RSA_size(rsa)]; // buffer that will hold signature int slen = 0; // will contain signature size // rsa is an OpenSSL RSA context, that's loaded with the public/private key pair memset(signature, 0, sizeof(signature)); RSA_sign(NID_sha1 , (unsigned char*)msgToSign , strlen(msgToSign) , signature , &slen , rsa); // now signature contains the message signature // and can be verified using the RSA_verify counterpart // .. I would like to verify the signature in C# In C#, I would do the following: import the other side's public key into an RSACryptoServiceProvider object, receive the message and its signature, and try to verify the signature. I've got the first two parts working (I've verified that the public key is loading properly because I managed to send RSA-encrypted text from the C# code to the OpenSSL code in C and successfully have it decrypted). In order to verify the signature in C#, I've tried using the VerifySignature method of the RSACryptoServiceProvider, but that didn't work. And digging around the internet I was only able to find some vague information pointing out that .NET uses a different method for generating the signature than OpenSSL does. So, does anybody know how to accomplish this?
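
    For reference: RSA_sign with NID_sha1 produces a PKCS#1 v1.5 signature over the SHA-1 digest of the message, which on the .NET side corresponds to VerifyData with SHA-1 (or VerifyHash on a precomputed digest). A sketch, assuming the public key has already been loaded into the provider:

```csharp
using System.Security.Cryptography;
using System.Text;

public static class OpenSslSignatureCheck
{
    // rsa must already hold the signer's public key (imported from the OpenSSL key pair).
    public static bool Verify(RSACryptoServiceProvider rsa, string message, byte[] signature)
    {
        byte[] data = Encoding.ASCII.GetBytes(message); // RSA_sign signed the raw ASCII bytes

        // Option 1: let .NET hash the message itself and verify the PKCS#1 v1.5 signature.
        bool viaData = rsa.VerifyData(data, new SHA1CryptoServiceProvider(), signature);

        // Option 2: hash first, then verify the digest; same result as option 1.
        byte[] digest;
        using (SHA1 sha1 = SHA1.Create())
        {
            digest = sha1.ComputeHash(data);
        }
        bool viaHash = rsa.VerifyHash(digest, CryptoConfig.MapNameToOID("SHA1"), signature);

        return viaData && viaHash;
    }
}
```

    A common pitfall is handing the raw message bytes to an API that expects the SHA-1 digest; the two calls above are equivalent ways to avoid that.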

    Read the article

  • Algorithm for scoring user activity

    - by ManBugra
    I have an application where users can: Write reviews about products Add comments to products Up / Down vote reviews Up / Down vote comments Every Up/Down vote is recorded in a db table. What I want to do now is to create a ranking of the most active users in the last 4 weeks. Of course good reviews should be weighted more than good comments. But also, e.g., 10 good comments should be weighted more than just one good review. Example: // reviews created in recent 4 weeks //format: [ upVoteCount, downVoteCount ] var reviews = [ [120,23], [32,12], [12,0], [23,45] ]; // comments created in recent 4 weeks // format: [ upVoteCount, downVoteCount ] var comments = [ [1,2], [322,1], [0,0], [0,45] ]; // create weight vector // format: [ reviewWeight, commentsWeight ] var weight = [0.60, 0.40]; // signature: activities..., activityWeight var userActivityScore = score(reviews, comments, weight); ... update user table ... List<Users> users = "from users u order by u.userActivityScore desc"; What would a fair scoring function look like? What could an implementation of the score() function look like? How can a weight g be added to the function so that reviews are weighted more heavily? What would such a function look like if, for example, votes for pictures were added?
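
    One possible shape for score(), written here in C#; the net-vote damping and the log scale are assumptions made for this sketch, not an established formula.

```csharp
using System;

public static class ActivityScore
{
    // Each item is { upVotes, downVotes } for one review or comment,
    // mirroring the arrays in the question.
    public static double Score(int[][] reviews, int[][] comments,
                               double reviewWeight, double commentWeight)
    {
        return reviewWeight * Sum(reviews) + commentWeight * Sum(comments);
    }

    private static double Sum(int[][] items)
    {
        double total = 0;
        foreach (int[] votes in items)
        {
            int net = votes[0] - votes[1];
            // Damp each item so one hugely popular review cannot outweigh
            // a steady stream of decent contributions.
            total += Math.Sign(net) * Math.Log(1 + Math.Abs(net));
        }
        return total;
    }
}

// Example with the question's data:
//   int[][] reviews  = { new[] {120,23}, new[] {32,12}, new[] {12,0}, new[] {23,45} };
//   int[][] comments = { new[] {1,2}, new[] {322,1}, new[] {0,0}, new[] {0,45} };
//   double score = ActivityScore.Score(reviews, comments, 0.60, 0.40);
```

    Weighting reviews more heavily is then just a matter of the reviewWeight/commentWeight arguments (0.60/0.40 in the question), and pictures would become a third array with its own weight.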

    Read the article

  • asp.net mvc: What is the correct way to return html from controller to refresh select list?

    - by Mark Redman
    Hi, I am new to ASP.NET MVC, particularly ajax operations. I have a form with a jQuery dialog for adding items to a drop-down list. This posts to the controller action. If nothing is returned from the controller action (i.e. a void method), the page returns having updated the database, but obviously there is no change to the form. What would be the best practice for updating the drop-down list with the added id/value and selecting the item? I think my options are: 1) Construct and return the html manually that makes up the new <select> tag [this would be easy enough and work, but it seems like I am missing something] 2) Use some kind of "helper" to construct the new html [This seems to make sense] 3) Only return the id/value and add this to the list and select the item [This seems like overkill considering the item needs to be placed in the correct order etc.] 4) Use some kind of Partial View [Does this mean creating additional forms within ascx controls? Not sure how this would affect submitting the main form it's on. Also, unless this is reusable by passing in parameters (not sure how that's done), maybe 2 is the option?] UPDATE: Having looked around a bit, it seems that generating html within the controller is not a good idea. I have seen other posts that render partial views to strings, which I guess is what I need and separates concerns (since the html bits are in the ascx). Any comments on whether that is good practice?
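
    A sketch of option 3 in C# (the controller, action and repository names are illustrative): the action persists the item and returns only the new id/text as JSON, and the jQuery success callback can then insert an <option> at the right position and mark it selected.

```csharp
using System.Web.Mvc;

public interface IItemRepository
{
    int Add(string name); // returns the new item's id
}

public class ItemsController : Controller
{
    private readonly IItemRepository _repository;

    public ItemsController(IItemRepository repository)
    {
        _repository = repository;
    }

    [HttpPost]
    public ActionResult Add(string name)
    {
        int newId = _repository.Add(name);

        // Return just what the client needs to update the <select>.
        return Json(new { id = newId, text = name });
    }
}
```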

    Read the article

  • Strange XCode debugger behavior with UITableView datasource

    - by Tarfa
    Hey guys. I've got a perplexing issue. In my subclassed UITableViewController my datasource methods lose their tableview reference depending on lines of code I put inside the method. For example, in this code block: - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { // Return the number of sections. return 3; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { // Return the number of rows in the section. return 5; } // Customize the appearance of table view cells. - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { id i = tableView; static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; } // Configure the cell... return cell; } the "id i = tableView;" causes the tableview to become nil (0x0) -- and it causes it to be nil before I ever start stepping into the method. If I insert an assignment statement above the "id i = tableview;" statement: CGFloat x = 5.0; id i = tableView; then tableview retains its pointer (i.e. is not nil) if I place the breakpoint after the "id i = tableView;" line. In other words, the breakpoint must be set after the "id i = tableView"; assignment in order for tableView to retain its pointer. If the breakpoint is set before the assignment is made and I just hang at that breakpoint for a bit then after a couple of seconds the console logs this error message: Assertion failed: (cls), function getName, file /SourceCache/objc4_Sim/objc4-427.5/runtime/objc-runtime-new.mm, line 3990. Although the code works when I don't step through the method, I need my debugger to work! It makes programming kind of challenging when your debugging tools become your enemy. Anyone know what the cause and solution are? Thanks.

    Read the article

  • SQL to CodeIgniter Array Missing Data Issue

    - by SamD
    $query = $this->db->query("SELECT t1.numberofbets, t1.profit, t2.seven_profit, t3.28profit, user.user_id, username, password, email, balance, user.date_added, activation_code, activated FROM user LEFT JOIN (SELECT user_id, SUM(amount_won) AS profit, count(tip_id) AS numberofbets FROM tip GROUP BY user_id) as t1 ON user.user_id = t1.user_id LEFT JOIN (SELECT user_id, SUM(amount_won) AS seven_profit FROM tip WHERE date_settled > '$seven_daystime' GROUP BY user_id) as t2 ON user.user_id = t2.user_id LEFT JOIN (SELECT user_id, SUM(amount_won) AS 28profit FROM tip WHERE date_settled > '$twoeight_daystime' GROUP BY user_id) as t3 ON user.user_id = t3.user_id where activated = 1 GROUP BY user.user_id ORDER BY user.date_added DESC"); return $query->result_array(); The query works fine when run in phpMyAdmin and returns complete results (see attached image). However, when I print the array in CodeIgniter, one field, seven_profit, has no value, even though it is present when the SQL query is run in phpMyAdmin; the discrepancy is only in this one field between the SQL result and the PHP array. I just can't see why that one field, which should have a value of 26, contains nothing when the array is printed. Any ideas? I changed the field name so it no longer starts with a number in an attempt to fix it, but it made no difference. I know this is complex and looks horrible; any help, or hearing from people who have come across something similar, would be great. Thanks, Sam

    Read the article

  • Qt MOC Filename Collisions using multiple .pri files

    - by Skinniest Man
    In order to keep my Qt project somewhat organized (using Qt Creator), I've got one .pro file and multiple .pri files. Just recently I added a class to one of my .pri files that has the same filename as a class that already existed in a separate .pri file. The file structure and makefiles generated by qmake appear to be oblivious to the filename collision that ensues. The generated moc_* files all get thrown into the same subdirectory (either release or debug, depending) and one ends up overwriting the other. When I try to make the project, I get several warnings that look like this: Makefile.Release:318: warning: overriding commands for target `release/moc_file.cpp` And the project fails to link. Here is a simple example of what I'm talking about. Directory structure: + project_dir | + subdir1 | | - file.h | | - file.cpp | + subdir2 | | - file.h | | - file.cpp | - main.cpp | - project.pro | - subdir1.pri | - subdir2.pri Contents of project.pro: TARGET = project TEMPLATE = app include(subdir1.pri) include(subdir2.pri) SOURCES += main.cpp Contents of subdir1.pri: HEADERS += subdir1/file.h SOURCES += subdir1/file.cpp Contents of subdir2.pri: HEADERS += subdir2/file.h SOURCES += subdir2/file.cpp Is there a way to tell qmake to generate a system that puts the moc_* files from separate .pri files into separate subdirectories?

    Read the article

  • How to update application files using patching?

    - by Marek
    I am not interested in any auto update solution, such as ClickOnce or the MS Updater Block. For anyone feeling the urge to ask why not: I am already using these and there is nothing wrong with them, I would just like to learn about any efficient alternatives. I would like to publish patches = small differences that will modify existing files of the deployment with the smallest possible delta. Not only code needs to be patched, but also resource files. Patching the running code can be accomplished by maintaining two separate synchronized copies of the deployment (no on the fly changes to the running executable are required). The application itself can be xcopy deployed (to avoid MSI auto-correcting the modified files or breaking ClickOnce signatures). I would like to learn how to handle different versions of patches (e.g. there is a patch issued that fixes one error and later another patch that fixes another error (in the same file) - users may have any combination of these and there comes a third patch - in text files, this may be easy to implement, but how about executable files? (native Win32 code vs. .NET, any difference?) If the first problem is too hard to solve or unsolvable for executables, I would like to at least learn if there is a solution that implements simple patching with serial revisions - in order to install revision 5, user must have all previous revisions installed to ensure validity of the deployment. Are there any existing solutions to accomplish this? NOTE: There are a few questions on SO that may seem like duplicates, but none with a good answer. This question is about the Windows platform, preferably .NET.
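
    This does not answer the binary-delta part, but here is a minimal C# sketch of the "serial revisions" fallback; Patch, Patcher and the revision.txt marker file are names invented for the sketch, not an existing tool. Each patch carries its revision number plus the replacement files, and revision N is only applied on top of revision N-1.

```csharp
using System;
using System.Collections.Generic;
using System.IO;

public class Patch
{
    public int Revision;                        // patches must be applied in order
    public Dictionary<string, byte[]> Files;    // relative path -> new file contents
}

public static class Patcher
{
    public static void Apply(string deployDir, Patch patch)
    {
        string revFile = Path.Combine(deployDir, "revision.txt");
        int current = File.Exists(revFile) ? int.Parse(File.ReadAllText(revFile)) : 0;

        // Enforce the "serial revisions" rule: revision N requires revision N-1.
        if (patch.Revision != current + 1)
        {
            throw new InvalidOperationException(
                "Deployment is at revision " + current +
                "; cannot apply revision " + patch.Revision + ".");
        }

        foreach (KeyValuePair<string, byte[]> entry in patch.Files)
        {
            string target = Path.Combine(deployDir, entry.Key);
            Directory.CreateDirectory(Path.GetDirectoryName(target));
            File.WriteAllBytes(target, entry.Value);    // overwrite the stale copy
        }

        File.WriteAllText(revFile, patch.Revision.ToString());
    }
}
```

    Because whole files are replaced, it makes no difference whether they contain native Win32 code or .NET assemblies; that distinction only starts to matter once real binary diffs are introduced.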

    Read the article

  • Bizarre problem with WPF XAML file.

    - by paxdiablo
    I've just started a very simple WPF application which consists of a main large image and four smaller images. In order to assist with the layout, I created some JPEGs in MsPaint containing the images -2, -1, 0, +1 and +2 and just copied them into the top level of the project directory. The XAML segment contains, for the five images: <Image Grid.Column="1" Grid.Row="2" Grid.ColumnSpan="4" Grid.RowSpan="1" Margin="0,0,0,0" Name="imgPicture" Stretch="Fill" VerticalAlignment="Top" Source="file:///C:/DAndS/Pax/MyDocs/VS2008/Projects/MyProj/zero.jpg" /> <Image Grid.Column="1" Grid.Row="4" Grid.ColumnSpan="1" Grid.RowSpan="1" Margin="0,0,0,0" Name="imgPicMinus2" Stretch="Fill" VerticalAlignment="Top" Source="file:///C:/DAndS/Pax/MyDocs/VS2008/Projects/MyProj/minus2.jpg" /> <Image Grid.Column="2" Grid.Row="4" Grid.ColumnSpan="1" Grid.RowSpan="1" Margin="0,0,0,0" Name="imgPicMinus1" Stretch="Fill" VerticalAlignment="Top" Source="file:///C:/DAndS/Pax/MyDocs/VS2008/Projects/MyProj/minus1.jpg" /> <Image Grid.Column="3" Grid.Row="4" Grid.ColumnSpan="1" Grid.RowSpan="1" Margin="0,0,0,0" Name="imgPicPlus1" Stretch="Fill" VerticalAlignment="Top" Source="file:///C:/DAndS/Pax/MyDocs/VS2008/Projects/MyProj/plus1.jpg" /> <Image Grid.Column="4" Grid.Row="4" Grid.ColumnSpan="1" Grid.RowSpan="1" Margin="0,0,0,0" Name="imgPicPlus2" Stretch="Fill" VerticalAlignment="Top" Source="file:///C:/DAndS/Pax/MyDocs/VS2008/Projects/MyProj/plus2.jpg" /> When I try to set the source property for the plus2 image, it complains with a dialog box stating: "Property value is not valid. The file plus2.jpg is not part of the project or its 'Build Action' property is not set to 'Resource'." Yet if I rename the file to plus3.jpg or plus2x.jpg, I don't have that problem. Why is it complaining about plus2.jpg specifically?
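
    For what it's worth, once an image is included in the project with Build Action = Resource, it can also be assigned from code-behind with a pack URI instead of an absolute file path; a small sketch (the window class name is illustrative, the element name matches the XAML above):

```csharp
using System;
using System.Windows;
using System.Windows.Media.Imaging;

public partial class Window1 : Window   // code-behind for the XAML above (class name illustrative)
{
    public Window1()
    {
        InitializeComponent();

        // Requires plus2.jpg to be part of the project with Build Action = Resource;
        // the pack URI then works regardless of where the application is installed.
        imgPicPlus2.Source = new BitmapImage(
            new Uri("pack://application:,,,/plus2.jpg", UriKind.Absolute));
    }
}
```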

    Read the article

  • EXPORT AS INSERT STATEMENTS: But in SQL Plus the line exceeds 2500 characters!

    - by The chicken in the kitchen
    Hello, I have to export an Oracle table as INSERT statements, but the generated INSERT statements exceed 2500 characters. Since I am obliged to execute them in SQL*Plus, I receive an error message. This is my Oracle table: CREATE TABLE SAMPLE_TABLE ( C01 VARCHAR2 (5 BYTE) NOT NULL, C02 NUMBER (10) NOT NULL, C03 NUMBER (5) NOT NULL, C04 NUMBER (5) NOT NULL, C05 VARCHAR2 (20 BYTE) NOT NULL, c06 VARCHAR2 (200 BYTE) NOT NULL, c07 VARCHAR2 (200 BYTE) NOT NULL, c08 NUMBER (5) NOT NULL, c09 NUMBER (10) NOT NULL, c10 VARCHAR2 (80 BYTE), c11 VARCHAR2 (200 BYTE), c12 VARCHAR2 (200 BYTE), c13 VARCHAR2 (4000 BYTE), c14 VARCHAR2 (1 BYTE) DEFAULT 'N' NOT NULL, c15 CHAR (1 BYTE), c16 CHAR (1 BYTE) ); ASSUMPTIONS: a) I am OBLIGED to export the table data as INSERT statements; I am allowed to use UPDATE statements in order to avoid the SQL*Plus error "sp2-0027 input is too long (2499 characters)"; b) I am OBLIGED to use SQL*Plus to execute the script so generated. c) Please assume that every record can contain special characters: CHR(10), CHR(13), and so on; d) I CAN'T use SQL*Loader; e) I CAN'T export and then import the table: I can only add the "delta" using INSERT / UPDATE statements through SQL*Plus.

    Read the article

  • Displaying tree path of record in SQL Server 2005

    - by jskiles1
    An example of my tree table is: ([id] is an identity) [id], [parent_id], [path] 1, NULL, 1 2, 1, 1-2 3, 1, 1-3 4, 3, 1-3-4 My goal is to query quickly for multiple rows of this table and view the full path of the node from its root, through its superiors, down to itself. The ultimate question is, should I generate this path on inserts and maintain it in its own column, or generate this path on query to save disk space? I guess it depends on whether this table is write heavy or read heavy. I've been contemplating several approaches to using the "path" characteristic of this parent/child relationship and I just can't seem to settle on one. This "path" is simply for display and serves no purpose other than that. Here is what I have done to implement this "path": AFTER INSERT TRIGGER - requires passing a NULL path to the insert and updating the path for the record at the inserted row's identity INSTEAD OF INSERT TRIGGER - does not require the insert to have a NULL path passed, but does require the trigger to insert with a NULL path and update the path for the record at SCOPE_IDENTITY() STORED PROCEDURE - requires all inserts into this table to be done through the stored procedure implementing the trigger logic VIEW - requires building the path in the view 1 and 2 seem annoying if massive amounts of data are entered at once. 3 seems annoying because all inserts must go through the procedure in order to have a valid path populated. 1, 2, and 3 require maintaining a path column on the table. 4 removes all the limitations of the above but requires the view to perform the path logic and requires use of the view if a path is to be displayed. I have successfully implemented all of the above approaches and I'm mainly looking for some advice. Am I way off the mark here, or are any of the above acceptable? Each has its advantages and disadvantages.

    Read the article

  • Do I really need bindParam?

    - by sandelius
    Hi there! I'm trying to write a little PDO CRUD layer to learn some PDO. I have a question about bindParam. Here's my update method right now: public static function update($conditions = array(), $data = array(), $table = '') { self::instance(); // Late static bindings (PHP 5.3) $table = ($table === '') ? self::table() : $table; // Check which data array we want to use $values = (empty($data)) ? self::$_fields : $data; $sql = "UPDATE $table SET "; foreach ($values as $f => $v) { $sql .= "$f = ?, "; } // let's build the conditions self::build_conditions($conditions); // fix our WHERE, AND, OR, LIKE conditions $extra = self::$condition_string; // querystring $sql = rtrim($sql, ', ') . $extra; // let's merge the arrays into one $v_val = array_values($values); $c_val = array_values($conditions); $array = array_merge($v_val, self::$condition_array); $stmt = self::$db->prepare($sql); return $stmt->execute($array); } In my self::$condition_array I get all the right values for the ? placeholders, so the query looks like this: UPDATE table SET this = ?, another = ? WHERE title = ? AND time = ? As you can see, I don't use bindParam; instead I pass the values in the right order ($array) directly to the execute($array) method. This works like a charm, BUT is it safe not to use bindParam here? If not, how can I do it? Thanks from Sweden, Tobias

    Read the article

  • Culture Sensitive GetHashCode

    - by user114928
    Hi, I'm writing a C# application that will process some text and provide basic query functions. In order to ensure the best possible support for other languages, I am allowing the users of the application to specify the System.Globalization.CultureInfo (via the "en-GB" style code) and also the full range of collation options using the System.Globalization.CompareOptions flags enum. For regular string comparison I'm then using a combination of: a) the String.Compare overload that accepts the culture and options, and b) for some bulk processes, caching the byte data (KeyData) from CompareInfo.GetSortKey (the overload that accepts the options) and using a byte-by-byte comparison of the KeyData. This seemed fine (although please comment if you think these two methods shouldn't be mixed), but then I had reason to use the HashSet<string> class, which only has a constructor overload for IEqualityComparer<string>. MS documentation seems to suggest that I should use StringComparer (which implements both IEqualityComparer<string> and IComparer<string>), but this only seems to support the "IgnoreCase" option from CompareOptions and not "IgnoreKanaType", "IgnoreSymbols", "IgnoreWidth" etc. I'm assuming that a StringComparer that ignores these other options could produce different hashcodes for two strings that might be considered the same under my other comparison options, and I'd therefore get incorrect results from my application. My only thought at the moment is to create my own IEqualityComparer<string> that generates a hashcode from the SortKey.KeyData and compares equality using the String.Compare overload. Any suggestions?
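
    A sketch of that last idea: a culture- and CompareOptions-aware IEqualityComparer<string> whose hash comes from the sort key bytes, so that GetHashCode stays consistent with Equals under the chosen options.

```csharp
using System.Collections.Generic;
using System.Globalization;

public sealed class CultureOptionsComparer : IEqualityComparer<string>
{
    private readonly CompareInfo _compareInfo;
    private readonly CompareOptions _options;

    public CultureOptionsComparer(CultureInfo culture, CompareOptions options)
    {
        _compareInfo = culture.CompareInfo;
        _options = options;
    }

    public bool Equals(string x, string y)
    {
        if (object.ReferenceEquals(x, y)) return true;
        if (x == null || y == null) return false;
        return _compareInfo.Compare(x, y, _options) == 0;
    }

    public int GetHashCode(string s)
    {
        if (s == null) return 0;
        // Strings that compare equal under these options produce identical sort keys,
        // so hashing the key bytes keeps GetHashCode consistent with Equals.
        byte[] key = _compareInfo.GetSortKey(s, _options).KeyData;
        unchecked
        {
            int hash = 17;
            foreach (byte b in key) hash = hash * 31 + b;
            return hash;
        }
    }
}

// Usage:
//   var comparer = new CultureOptionsComparer(new CultureInfo("en-GB"),
//       CompareOptions.IgnoreCase | CompareOptions.IgnoreWidth | CompareOptions.IgnoreKanaType);
//   var set = new HashSet<string>(comparer);
```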

    Read the article

  • Why does stored procedure invalidate SQL Cache Dependency?

    - by Fabio Milheiro
    After many hours, I have finally realized that I am working correctly with the Cache object in my ASP.NET application, but my stored procedure stops it from working correctly. This stored procedure works correctly: CREATE PROCEDURE [dbo].[ListLanguages] @Page INT = 1, @ItemsPerPage INT = 10, @OrderBy NVARCHAR (100) = 'ID', @OrderDirection NVARCHAR(4) = 'DESC' AS BEGIN SELECT ID, [Name], Flag, IsDefault FROM dbo.Languages END But this (the one I wanted) doesn't: CREATE PROCEDURE [dbo].[ListLanguages] @Page INT = 1, @ItemsPerPage INT = 10, @OrderBy NVARCHAR (100) = 'ID', @OrderDirection NVARCHAR(4) = 'DESC', @TotalRecords INT OUTPUT AS BEGIN SET @TotalRecords = 10 EXEC('SELECT ID, Name, Flag, IsDefault FROM ( SELECT ROW_NUMBER() OVER (ORDER BY ' + @OrderBy + ' ' + @OrderDirection + ') as Row, ID, Name, Flag, IsDefault FROM dbo.Languages) results WHERE Row BETWEEN ((' + @Page + '-1)*' + @ItemsPerPage + '+1) AND (' + @Page + '*' + @ItemsPerPage + ')') END I gave the @TotalRecords parameter the value 10 so you can be sure that the problem does not come from the COUNT(*) function, which I know is not supported well. Also, when I run it from SQL Server Management Studio, it does exactly what it should do. In the ASP.NET application the results are retrieved correctly; only the cache is somehow unable to work! Can you please help? Maybe a hint: I believe that the behaviour of the dependency's HasChanged property is related to the fact that the Row column generated by ROW_NUMBER is only temporary and, therefore, SQL Server is not able to say whether the results have changed or not. That's why HasChanged is always set to true. Does anyone know how to paginate results from SQL Server without using the COUNT or ROW_NUMBER functions?

    Read the article

  • Stored procedure using a cursor in MySQL

    - by RAVI
    I wrote a stored procedure using a cursor in MySQL, but the procedure takes 10 seconds to fetch the result even though the result set has only 450 records, so I want to know why the procedure takes that much time to fetch the records. The procedure is below: DELIMITER // DROP PROCEDURE IF EXISTS curdemo123// CREATE PROCEDURE curdemo123(IN Branchcode int,IN vYear int,IN vMonth int) BEGIN DECLARE EndOfData,tempamount INT DEFAULT 0; DECLARE tempagent_code,tempplantype,tempsaledate CHAR(12); DECLARE tempspot_rate DOUBLE; DECLARE var1,totalrow INT DEFAULT 1; DECLARE cur1 CURSOR FOR select SQL_CALC_FOUND_ROWS ad.agentCode,ad.planType,ad.amount,ad.date from adplan_detailstbl ad where ad.branchCode=Branchcode and (ad.date between '2009-12-1' and '2009-12-31')order by ad.NUM_ID asc; DECLARE CONTINUE HANDLER FOR SQLSTATE '02000' SET EndOfData = 1; DROP TEMPORARY TABLE IF EXISTS temptable; CREATE TEMPORARY TABLE temptable (agent_code varchar(15), plan_type char(12),sale double,spot_rate double default '0.0', dATE DATE); OPEN cur1; SET totalrow=FOUND_ROWS(); while var1 <= totalrow DO fetch cur1 into tempagent_code,tempplantype,tempamount,tempsaledate; IF((tempplantype='Unit Plan' OR tempplantype='MIP') OR tempplantype='STUP') then select spotRate into tempspot_rate from spot_amount where ((monthCode=vMonth and year=vYear) and ((agentCode=tempagent_code and branchCode=Branchcode) and (planType=tempplantype))); INSERT INTO temptable VALUES(tempagent_code,tempplantype,tempamount,tempspot_rate,tempsaledate); else INSERT INTO temptable(agent_code,plan_type,sale,dATE) VALUES(tempagent_code,tempplantype,tempamount,tempsaledate); END IF; SET var1=var1+1; END WHILE; CLOSE cur1; select * from temptable; DROP TABLE temptable; END // DELIMITER ;

    Read the article

  • C# Regex - Match and replace, Auto Increment

    - by Marc Still
    I have been toiling with a problem and any help would be appreciated. Problem: I have a paragraph and I want to replace a variable which appears several times (Variable = @Variable). This is the easy part; the part I am having difficulty with is replacing the variable with different values. I need each occurrence to have a different value. For instance, I have a function that does a calculation for each variable. What I have thus far is below: private string SetVariables(string input, string pattern){ Regex rx = new Regex(pattern); MatchCollection matches = rx.Matches(input); int i = 1; if(matches.Count > 0) { foreach(Match match in matches) { rx.Replace(match.ToString(), getReplacementNumber(i)); i++; } } } I am able to replace each variable that I need to with the number returned from the getReplacementNumber(i) function, but how do I put it back into my original input with the replaced values, in the same order found in the match collection? Thanks in advance! Marcus
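
    A sketch of one way to do this with a MatchEvaluator, so the replacement happens in a single pass over the input and the counter advances once per occurrence; getReplacementNumber is the question's own helper, stubbed out here.

```csharp
using System.Text.RegularExpressions;

public static class VariableReplacer
{
    public static string SetVariables(string input, string pattern)
    {
        int i = 1;
        // The evaluator runs once per match, in document order,
        // so each occurrence receives the next replacement value.
        return Regex.Replace(input, pattern, delegate(Match match)
        {
            return GetReplacementNumber(i++);
        });
    }

    // Stand-in for the question's getReplacementNumber(i).
    private static string GetReplacementNumber(int i)
    {
        return "value" + i;
    }
}

// Example:
//   VariableReplacer.SetVariables("x = @Variable; y = @Variable;", "@Variable")
//   returns "x = value1; y = value2;".
```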

    Read the article
