Search Results

Search found 18028 results on 722 pages for 'atomic values'.

Page 263/722 | < Previous Page | 259 260 261 262 263 264 265 266 267 268 269 270  | Next Page >

  • How can I pass a const array or a variable array to a function in C?

    - by CSharperWithJava
    I have a simple function Bar that uses a set of values from a data set that is passed in in the form of an array of data structures. The data can come from two sources: a constant initialized array of default values, or a dynamically updated cache. The calling function determines which data is used and should be passed to Bar. Bar doesn't need to edit any of the data and in fact should never do so. How should I declare Bar's data parameter so that I can provide data from either set?

        union Foo {
            long _long;
            int _int;
        };

        static const Foo DEFAULTS[8] = {1, 10, 100, 1000, 10000, 100000, 1000000, 10000000};
        static Foo Cache[8] = {0};

        void Bar(Foo* dataSet, int len); // example function prototype

    Note, this is C, NOT C++, if that makes a difference. Edit: Oh, one more thing. When I use the example prototype I get a type qualifier mismatch warning (because I'm passing a mutable reference to a const array?). What do I have to change for that?
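
    A minimal sketch of the usual answer, using only the declarations from the question: make the parameter a pointer to const. Both the const defaults and the mutable cache convert implicitly to const Foo *, Bar is prevented from writing through it, and the qualifier-mismatch warning goes away.

        #include <stddef.h>

        typedef union Foo { long _long; int _int; } Foo;

        static const Foo DEFAULTS[8] = {{1}, {10}, {100}, {1000}, {10000}, {100000}, {1000000}, {10000000}};
        static Foo Cache[8];

        /* Pointer-to-const parameter: Bar promises not to modify the data,
           so both the const defaults and the mutable cache can be passed. */
        void Bar(const Foo *dataSet, size_t len);

        void Caller(int useDefaults)
        {
            if (useDefaults)
                Bar(DEFAULTS, 8);   /* const array: no qualifier warning */
            else
                Bar(Cache, 8);      /* non-const array converts implicitly to const Foo* */
        }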

    Read the article

  • Why is Core Data not persisting these changes to disk?

    - by scott
    I added a new entity to my model and it loads fine, but no changes made in memory get persisted to disk. My values set on the car object work fine in memory but aren't getting persisted to disk on hitting the home button (in the simulator). I am using almost exactly the same code on another entity in my application and its values persist to disk fine (Core Data - sqlite3). Does anyone have a clue what I'm overlooking here? car is the managed object, cars is an NSMutableArray of car objects, Car is the entity, and Visible is the attribute on the entity which I am trying to set. Thanks for your assistance. Scott

        - (void)viewDidLoad {
            myAppDelegate* appDelegate = (myAppDelegate*)[[UIApplication sharedApplication] delegate];
            NSManagedObjectContext* managedObjectContex = appDelegate.managedObjectContext;

            NSFetchRequest* request = [[NSFetchRequest alloc] init];
            NSEntityDescription* entity = [NSEntityDescription entityForName:@"Car" inManagedObjectContext:managedObjectContex];
            [request setEntity:entity];

            NSSortDescriptor* sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"Name" ascending:YES];
            NSArray* sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
            [request setSortDescriptors:sortDescriptors];
            [sortDescriptors release];
            [sortDescriptor release];

            NSError* error = nil;
            cars = [[managedObjectContex executeFetchRequest:request error:&error] mutableCopy];
            if (cars == nil) {
                NSLog(@"Can't load the Cars data! Error: %@, %@", error, [error userInfo]);
            }
            [request release];
        }

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath*)indexPath {
            Car* car = [cars objectAtIndex:indexPath.row];
            if (car.Visible == [NSNumber numberWithBool:YES]) {
                car.Visible = [NSNumber numberWithBool:NO];
                [tableView cellForRowAtIndexPath:indexPath].accessoryType = UITableViewCellAccessoryNone;
            } else {
                car.Visible = [NSNumber numberWithBool:YES];
                [tableView cellForRowAtIndexPath:indexPath].accessoryType = UITableViewCellAccessoryCheckmark;
            }
        }
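
    A sketch of the step that is usually missing in this situation: Core Data keeps changes in the managed object context until the context is explicitly saved, so the edits live in memory but never reach the SQLite store. Saving after the change (or in the app delegate when the app is backgrounded or terminated) is the usual fix; the delegate and context names below simply mirror the question.

        - (void)saveContext {
            myAppDelegate* appDelegate = (myAppDelegate*)[[UIApplication sharedApplication] delegate];
            NSManagedObjectContext* context = appDelegate.managedObjectContext;

            NSError* error = nil;
            if ([context hasChanges] && ![context save:&error]) {
                NSLog(@"Failed to save context: %@, %@", error, [error userInfo]);
            }
        }

    Calling this at the end of tableView:didSelectRowAtIndexPath: (or from the app delegate when the app goes to the background) should make the Visible changes stick.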

    Read the article

  • [Delphi] How would you refactor this code?

    - by Al C
    This hypothetical example illustrates several problems I can't seem to get past, even though I keep trying!! ... Suppose the original code is a long event handler, coded in the UI, triggered when a user clicks a cell in a grid. Expressed as pseudocode it's:

        if Condition1=true then
        begin
          //loop through every cell in row,
          //if aCell/headerCellValue>1 then
          //color aCell red
        end
        else if Condition2=true then
        begin
          //do some other calculation adding cell and headerCell values, and
          //if some other product>2 then
          //color the whole row green
        end
        else
          show an error message

    I look at this and say "Ah, refactor to the strategy pattern! The code will be easier to understand, easier to debug, and easier to later extend!" I get that. And I can easily break the code into multiple procedures. The problem is ultimately scope related. Assume the pseudocode makes extensive use of grid properties, values displayed in cells, maybe even built-in grid methods. How do you move all that to another unit, without referencing the grid component in the UI--which would break all the "rules" about loose coupling that make OOP valuable? ... I'm really looking forward to responses. Thanks, as always -- Al C.

    Read the article

  • Oracle ODBC x64 - getting 0 when selecting a number(9) column

    - by MatsL
    I'm having a really weird problem with a third party web service that uses an ODBC connection to Oracle 10.2.0.3.0. I've written a .NET client that generates the same SQL as the web service so I can find out what's going on. The web service is hosted by IIS 6 that's in x64 mode, so we use the Oracle x64 client. The Oracle client version is 10.2.0.1.0. I have a table that looks like this (I've removed some columns and names):

        SQL> describe tablename;
         Name                          Null?    Type
         ----------------------------- -------- ----------------------------
         KOD                                    VARCHAR2(30)
         ORDNING                                NUMBER(5)
         AVGIFT                                 NUMBER(9)

    I then issue the following statement in SQL*Plus:

        SELECT KOD as kod, AVGIFT as riskPoang
        FROM tablename
        Where upper(KODTYP) = 'OBJLIVSV_RISKVERKSAMTYP'
        ORDER BY ORDNING

    And I get the following result:

        KOD                            RISKPOANG
        ------------------------------ ----------
        Hög risk                               55
        Mellan risk                            35
        Låg risk                               15
        Mycket låg risk                         5

    But when I execute the exact same SQL using the same DSN on the same machine I get this:

        Values
        Kod: Hög risk          RiskPoäng: 0
        Kod: Mellan risk       RiskPoäng: 0
        Kod: Låg risk          RiskPoäng: 0
        Kod: Mycket låg risk   RiskPoäng: 0

    If I first cast the number to varchar and then back again to number, like this:

        SELECT KOD as kod, to_number(to_char(AVGIFT, '99'), '9999999999') as riskPoang
        FROM tablename
        Where upper(KODTYP) = 'OBJLIVSV_RISKVERKSAMTYP'
        ORDER BY ORDNING

    I get the correct result:

        Values
        Kod: Hög risk          RiskPoäng: 55
        Kod: Mellan risk       RiskPoäng: 35
        Kod: Låg risk          RiskPoäng: 15
        Kod: Mycket låg risk   RiskPoäng: 5

    Has anyone else experienced this? It's incredibly annoying and I'm completely stuck and not sure what to do next. We have a third party web service that uses these tables, so I must get the original SQL statement to work since I can't modify its code. Any pointers are greatly appreciated! :-) Best regards, Mats

    Read the article

  • How to get pixel information inside a fragment shader?

    - by user697111
    In my fragment shader I can load a texture, then do this:

        uniform sampler2D tex;

        void main(void) {
            vec4 color = texture2D(tex, gl_TexCoord[0].st);
            gl_FragColor = color;
        }

    That sets the current pixel to the color value of the texture. I can modify these, etc. and it works well. But a few questions. How do I tell "which" pixel I am? For example, say I want to set pixel 100,100 (x,y) to red and everything else to black. How do I do a: "if currentSelf.Position() == (100,100); then color=red; else color=black?"? I know how to set colors, but how do I get "my" location? Secondly, how do I get values from a neighbor pixel? I tried this:

        vec4 nextColor = texture2D(tex, gl_TexCoord[1].st);

    But it's not clear what that returns. If I'm pixel 100,100, how do I get the values from 101,100 or 100,101?
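
    A sketch of the two usual answers, assuming an old-style (pre-GLSL 1.30) shader like the one above and a texture whose pixel size is passed in by the application as a uniform (texSize is an assumed name): gl_FragCoord gives the fragment's own window position in pixels (which matches a texture pixel only when drawing a 1:1 full-screen quad), and a neighbouring texel is reached by offsetting the texture coordinate by one texel width.

        uniform sampler2D tex;
        uniform vec2 texSize;   // texture dimensions in pixels, set by the application (assumed)

        void main(void)
        {
            // gl_FragCoord.xy is this fragment's window-space position in pixels
            // (pixel centers sit at .5, so floor() gives integer coordinates).
            vec2 pixel = floor(gl_FragCoord.xy);

            // One texel to the right of the current texture coordinate.
            vec2 onePixel = 1.0 / texSize;
            vec4 rightNeighbour = texture2D(tex, gl_TexCoord[0].st + vec2(onePixel.x, 0.0));
            // (rightNeighbour is only sampled here to show the offset; use it as needed)

            if (pixel == vec2(100.0, 100.0))
                gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);   // red at pixel (100,100)
            else
                gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);   // black everywhere else
        }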

    Read the article

  • Loading an XML configuration file BEFORE the flex application loads

    - by Shahar
    Hi, We are using an XML file as an external configuration file for several parameters in our application (including default values for UI components and properties values of some service layer objects). The idea is to be able to load the XML configuration file before the flex application initializes any of its components. This is crucial because XML loading is processed a-synchronously in flex, which can potentially cause race-conditions in the application. For example: the configuration file holds the endpoint URL of a web service used to obtain data from the server. The URL resides in the XML because we want to allow our users to alter the endpoint URL according to their environment. Now because the endpoint URL is retrieved only after the XML has been completely loaded, some of the application's components might be invoking operations on this web service before it is initialized with the correct endpoint. The trivial solution would have been to suspend the initialization of the application until the complete event is dispatched by the loader. But it appears that this solution is far from being trivial. I haven't found a single solution that allows me to load the XML before any other object in the application. Can anyone advice or comment on this matter? Regards, Shahar

    Read the article

  • A graph-based tuple merge?

    - by user1644030
    I have paired values in tuples that are related matches (and technically still in CSV files). Neither of the paired values are necessarily unique. tupleAB = (A####, B###), (A###, B###), (A###, B###)... tupleBC = (B####, C###), (B###, C###), (B###, C###)... tupleAC = (A####, C###), (A###, C###), (A###, C###)... My ideal output would be a dictionary with a unique ID and a list of "reinforced" matches. The way I try to think about it is in a graph-based context. For example, if: tupleAB[x] = (A0001, B0012) tupleBC[y] = (B0012, C0230) tupleAC[z] = (A0001, C0230) This would produce: output = {uniquekey0001, [A0001, B0012, C0230]} Ideally, this would also be able to scale up to more than three tuples (for example, adding a "D" match that would result in an additional three tuples - AD, BD, and CD - and lists of four items long; and so forth). In regards to scaling up to more tuples, I am open to having "graphs" that aren't necessarily fully connected, i.e., every node connected to every other node. My hunch is that I could easily filter based on the list lengths. I am open to any suggestions. I think, with a few cups of coffee, I could work out a brute force solution, but I thought I'd ask the community if anyone was aware of a more elegant solution. Thanks for any feedback.
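
    One way to read this that avoids brute force, sketched below in Python under the assumption that the pairs are already parsed out of the CSVs: treat each ID as a graph node and each pair as an undirected edge, then every connected component becomes one merged group (the A-B, B-C, A-C triangle collapses into a single component; groups that must be fully connected can be filtered afterwards by length or edge count, as mentioned in the question).

        from collections import defaultdict

        def merge_pairs(*pair_lists):
            # Build an undirected adjacency map from all the tuple lists.
            graph = defaultdict(set)
            for pairs in pair_lists:
                for a, b in pairs:
                    graph[a].add(b)
                    graph[b].add(a)

            # Depth-first search for connected components.
            seen, groups = set(), {}
            for start in graph:
                if start in seen:
                    continue
                stack, component = [start], []
                while stack:
                    node = stack.pop()
                    if node in seen:
                        continue
                    seen.add(node)
                    component.append(node)
                    stack.extend(graph[node] - seen)
                groups["uniquekey%04d" % (len(groups) + 1)] = sorted(component)
            return groups

        tupleAB = [("A0001", "B0012")]
        tupleBC = [("B0012", "C0230")]
        tupleAC = [("A0001", "C0230")]
        print(merge_pairs(tupleAB, tupleBC, tupleAC))
        # {'uniquekey0001': ['A0001', 'B0012', 'C0230']}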

    Read the article

  • Programming style question on how to code functions

    - by shawnjan
    Hey all! So, I was just coding a bit today, and I realized that I don't have much consistency when it comes to a coding style when programming functions. One of my main concerns is whether or not its proper to code it so that you check that the input of the user is valid OUTSIDE of the function, or just throw the values passed by the user into the function and check if the values are valid in there. Let me sketch an example: I have a function that lists hosts based on an environment, and I want to be able to split the environment into chunks of hosts. So an example of the usage is this: listhosts -e testenv -s 2 1 This will get all the hosts from the "testenv", split it up into two parts, and it is displaying part one. In my code, I have a function that you pass it in a list, and it returns a list of lists based on you parameters for splitting. BUT, before I pass it a list, I first verify the parameters in my MAIN during the getops process, so in the main I check to make sure there are no negatives passed by the user, I make sure the user didnt request to split into say, 4 parts, but asking to display part 5 (which would not be valid), etc. tl;dr: Would you check the validity of a users input the flow of you're MAIN class, or would you do a check in your function itself, and either return a valid response in the case of valid input, or return NULL in the case of invalid input? Obviously both methods work, I'm just interested to hear from experts as to which approach is better :) Thanks for any comments and suggestions you guys have!

    Read the article

  • Pointers to structs

    - by Bobby
    I have the following struct:

        struct Datastore_T {
            Partition_Datastores_T cmtDatastores;   // bytes 0 to 499
            Partition_Datastores_T cdhDatastores;   // bytes 500 to 999
            Partition_Datastores_T gncDatastores;   // bytes 1000 to 1499
            Partition_Datastores_T inpDatastores;   // bytes 1500 to 1999
            Partition_Datastores_T outDatastores;   // bytes 2000 to 2499
            Partition_Datastores_T tmlDatastores;   // bytes 2500 to 2999
            Partition_Datastores_T sm_Datastores;   // bytes 3000 to 3499
        };

    I want to set a char* to a struct of this type like so:

        struct Datastore_T datastores;
        // Elided: datastores is initialized with data here
        char* DatastoreStartAddr = (char*)&datastores;
        memset(DatastoreStartAddr, 0, sizeof(Datastore_T));

    The problem I have is that the value that DatastoreStartAddr points to always has a value of zero when it should point to the struct that has been initialized with data. Meaning if I change the values in the struct using the struct directly, the values pointed to by DatastoreStartAddr should also change b/c they are pointing to the same address. But this is not happening. What am I doing wrong?
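
    A small sketch of what the snippet above actually does, with Datastore_T stubbed down to a few ints (so this is illustrative, not the real layout): the char pointer and the struct are the same bytes, so the memset wipes the struct's contents, and any read through DatastoreStartAddr afterwards correctly reports zero. Dropping the memset, or doing it before the struct is initialized, makes the two views agree again.

        #include <stdio.h>
        #include <string.h>

        struct Datastore_T { int cmt[4]; };        /* stand-in for the real layout */

        int main(void)
        {
            struct Datastore_T datastores = { {1, 2, 3, 4} };
            char *start = (char *)&datastores;

            /* Both views show the same storage: */
            printf("first int via struct: %d\n", datastores.cmt[0]);   /* 1 */
            printf("first byte via ptr:   %d\n", start[0]);            /* first byte of that int (1 on little-endian) */

            /* memset through the pointer zeroes the struct itself... */
            memset(start, 0, sizeof datastores);
            printf("after memset: %d\n", datastores.cmt[0]);           /* 0 */
            return 0;
        }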

    Read the article

  • sum not working properly abap

    - by Eva Dias
    I'm trying to sum up some values but it keeps giving me weird values. I'm posting the code to help, and an image too of what is happening.

        at end of kunnr.
          soma-waers = <fs_main>-waers.
          soma-wrbtr = <fs_main>-wrbtr.
          soma-fwste = <fs_main>-fwste.
          soma-hwaer = <fs_main>-hwaer.
          soma-dmbtr = <fs_main>-dmbtr.
          soma-hwste = <fs_main>-hwste.
          APPEND soma TO it_soma.

          LOOP AT it_soma INTO soma.
            IF sy-tabix = 1.
              FORMAT COLOR COL_TOTAL INTENSIFIED OFF.
              SUM.
              WRITE: "/ sy-uline(137),
                     / sy-vline NO-GAP, 'Subtotal' NO-GAP, '-' NO-GAP, soma-waers,
                     63 sy-vline NO-GAP, 64 soma-wrbtr NO-GAP, sy-vline NO-GAP,
                     soma-fwste NO-GAP, sy-vline NO-GAP, soma-hwaer NO-GAP, sy-vline NO-GAP,
                     soma-dmbtr NO-GAP, sy-vline NO-GAP, soma-hwste NO-GAP, sy-vline NO-GAP,
                     / sy-uline(137).
            ELSE.
            ENDIF.
          ENDLOOP.
        ENDAT.

    Read the article

  • Simple encryption - Sum of Hashes in C

    - by Dogbert
    I am attempting to demonstrate a simple proof of concept with respect to a vulnerability in a piece of code in a game written in C. Let's say that we want to validate a character login. The login is handled by the user choosing n items (let's just assume n=5 for now) from a graphical menu. The items are all medieval themed, eg:

        _______________________________
        |           |           |       |
        |    Bow    |   Sword   | Staff |
        |-----------|-----------|-------|
        |  Shield   |  Potion   | Gold  |
        |___________|___________|_______|

    The user must click on each item, then choose a number for each item. The validation algorithm then does the following:

        1. Determines which items were selected
        2. Drops each string to lowercase (ie: Bow becomes bow, etc)
        3. Calculates a simple string hash for each string (ie: bow: b=2, o=15, w=23, sum = 2+15+23 = 40)
        4. Multiplies the hash by the value the user selected for the corresponding item; this new value is called the key
        5. Sums together the keys for each of the selected items; this is the final validation hash

    IMPORTANT: The validator will accept this hash, along with non-zero multiples of it (ie: if the final hash equals 1111, then 2222, 3333, 8888, etc are also valid). So, for example, let's say I select: Bow (1), Sword (2), Staff (10), Shield (1), Potion (6). The algorithm drops each of these strings to lowercase, calculates their string hashes, multiplies each hash by the number selected for that string, then sums these keys together. eg:

        Final_Validation_Hash = 1*HASH(Bow) + 2*HASH(Sword) + 10*HASH(Staff) + 1*HASH(Shield) + 6*HASH(Potion)

    By application of Euler's Method, I plan to demonstrate that these hashes are not unique, and want to devise a simple application to prove it. In my case, for 5 items, I would essentially be trying to calculate:

        (B)(y) = (A_1)(x_1) + (A_2)(x_2) + (A_3)(x_3) + (A_4)(x_4) + (A_5)(x_5)

    Where: B is arbitrary; A_j are the selected coefficients/values for each string/category; x_j are the hash values for each string/category; y is the final validation hash (eg: 1111 above); B, y, A_j, x_j are all discrete-valued, positive, and non-zero (ie: natural numbers). Can someone either assist me in solving this problem or point me to a similar example (ie: code, worked out equations, etc)? I just need to solve the final step (ie: (B)(Y) = ...). Thank you all in advance.
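
    A sketch of the building blocks in C, assuming the letter values a=1..z=26 implied by the bow example: the string hash, the validation hash for the sample selection, and a brute-force search for a different coefficient vector that lands on a multiple of the same hash, which is all the proof of concept needs. The search bounds are arbitrary.

        #include <ctype.h>
        #include <stdio.h>

        /* a=1, b=2, ... z=26, summed over the lowercased string */
        static int string_hash(const char *s)
        {
            int sum = 0;
            for (; *s; ++s)
                if (isalpha((unsigned char)*s))
                    sum += tolower((unsigned char)*s) - 'a' + 1;
            return sum;
        }

        int main(void)
        {
            const char *items[5] = { "Bow", "Sword", "Staff", "Shield", "Potion" };
            int h[5], chosen[5] = { 1, 2, 10, 1, 6 }, target = 0, i, a, b, c, d, e;

            for (i = 0; i < 5; ++i) {
                h[i] = string_hash(items[i]);
                target += chosen[i] * h[i];
            }
            printf("hash(bow) = %d, validation hash = %d\n", h[0], target);

            /* Brute-force another coefficient vector that hits a multiple of the target. */
            for (a = 1; a <= 20; ++a) for (b = 1; b <= 20; ++b) for (c = 1; c <= 20; ++c)
            for (d = 1; d <= 20; ++d) for (e = 1; e <= 20; ++e) {
                int sum = a*h[0] + b*h[1] + c*h[2] + d*h[3] + e*h[4];
                if (sum % target == 0 && (a != chosen[0] || b != chosen[1] ||
                    c != chosen[2] || d != chosen[3] || e != chosen[4])) {
                    printf("collision: %d %d %d %d %d -> %d\n", a, b, c, d, e, sum);
                    return 0;
                }
            }
            return 0;
        }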

    Read the article

  • retrieve value from hashtable with clone of key; C#

    - by Johnny
    I would like to know if there is any possible way to retrieve an item from a hashtable using a key that is identical to the actual key, but a different object. I understand why it is probably not possible, but I would like to see if there is any tricky way to do it. My problem arises from the fact that, being as stupid as I am, I created hashtables with int[] as the keys, with the integer arrays containing indices representing spatial position. I somehow knew that I needed to create a new int[] every time I wanted to add a new entry, but neglected to think that when I generated spatial coordinate arrays later they would be worthless in retrieving the values from my hashtables. Now I am trying to decide whether to rearrange things so that I can store my values in ArrayLists, or whether to search through the list of keys in the Hashtable for the one I need every time I want to get a value, neither of the options being very cool. Unless of course there is a way to get //1 to work like //2! Thanks in advance.

        static void Main(string[] args)
        {
            Hashtable dog = new Hashtable();

            // 1
            int[] man = new int[] { 5 };
            dog.Add(man, "hello");
            int[] cat = new int[] { 5 };
            Console.WriteLine(dog.ContainsKey(cat)); // false

            // 2
            int boy = 5;
            dog.Add(boy, "wtf");
            int kitten = 5;
            Console.WriteLine(dog.ContainsKey(kitten)); // true
        }
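
    One way to keep the arrays as keys, sketched with the generic Dictionary (the non-generic Hashtable also has a constructor that accepts an IEqualityComparer): supply a comparer that hashes and compares the array contents instead of the object references, so //1 behaves like //2.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class IntArrayComparer : IEqualityComparer<int[]>
        {
            public bool Equals(int[] x, int[] y)
            {
                return x != null && y != null && x.SequenceEqual(y);
            }

            public int GetHashCode(int[] obj)
            {
                unchecked
                {
                    int hash = 17;
                    foreach (int v in obj) hash = hash * 31 + v;
                    return hash;
                }
            }
        }

        class Program
        {
            static void Main()
            {
                var dog = new Dictionary<int[], string>(new IntArrayComparer());
                dog.Add(new int[] { 5 }, "hello");

                int[] cat = new int[] { 5 };              // different object, same contents
                Console.WriteLine(dog.ContainsKey(cat));  // True
            }
        }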

    Read the article

  • Fixing parent controller's elements after screen orientation

    - by Jonas Anderson
    I have a tab bar application with mixed orientation support for only some views. One of the child view controllers shown from one of the tab's navigation controllers is displayed only in Landscape mode. In order to accomplish this, I've done the view transformation for the child view as suggested here: Is there a documented way to set the iPhone orientation? The only problem I'm seeing is that after I've performed the orientation adjustment for the child controller and then readjusted the orientation back to normal on its dismissal, the contents of the (parent) navigation controller are still shown with Landscape mode dimensions despite the navigation controller reporting the correct value for interfaceOrientation. How do I ensure that the view's size is reset to match the orientation without hardcoding screen dimensions? I have the following in the root navigation controller's viewWillAppear (invoked after the child controller is dismissed):

        - (void)viewWillAppear:(BOOL)animated {
            NSLog(@"viewFrame: (%2f, %2f), width: %2f, height: %2f\n",
                  self.view.frame.origin.x, self.view.frame.origin.y,
                  self.view.frame.size.width, self.view.frame.size.height);
            // Frame values are (0, 0) for (x,y) width: 320, height: 367 before I
            // displayed child controller.
            // Frame values are (0,0) width: 480, height: 219 after returning from child
            // controller -- still has the landscape dimensions

            NSLog(@"orientation: %d", self.interfaceOrientation);
            // reports portrait as expected
        }

    I've tried to invoke 'layoutIfNeeded' as well as 'setNeedsDisplay' on the view but neither of them brings the view contents into the correct display. Any suggestions would be greatly appreciated.

    Read the article

  • mysqldb interfaceError

    - by Johanna
    I have a very weird problem with mysqldb (the MySQL module for Python). I have a file with queries for inserting records in tables. If I call the functions from that file, it works just fine. But when I try to call one of the functions from another file it throws me a _mysql_exceptions.InterfaceError: (0, ''). I really don't get what I'm doing wrong here. I call the function from buildDB.py:

        import create
        create.newFormat("HD", 0,0,0)

    The function newFormat(..) is in create.py (imported):

        from Database import Database

        db = Database()

        def newFormat(name, width=0, height=0, fps=0):
            format_query = "INSERT INTO Format (form_name, form_width, form_height, form_fps) VALUES ('"+name+"',"+str(width)+","+str(height)+","+str(fps)+");"
            db.execute(format_query)

    And the class Database is the following:

        import MySQLdb
        from MySQLdb.constants import FIELD_TYPE

        class Database():
            def __init__(self):
                server = "localhost"
                login = "seq"
                password = "seqmanager"
                database = "Sequence"
                my_conv = { FIELD_TYPE.LONG: int }
                self.conn = MySQLdb.connection(host=server, user=login, passwd=password, db=database, conv=my_conv)
                # self.cursor = self.conn.cursor()

            def close(self):
                self.conn.close()

            def execute(self, query):
                self.conn.query(query)

    (I put only relevant code.) Traceback:

        Z:\sequenceManager\mysql>python buildDB.py
        D:\ProgramFiles\Python26\lib\site-packages\MySQLdb\__init__.py:34: DeprecationWarning: the sets module is deprecated
          from sets import ImmutableSet
        INSERT INTO Format (form_name, form_width, form_height, form_fps) VALUES ('HD',0,0,0);
        Traceback (most recent call last):
          File "buildDB.py", line 182, in <module>
            create.newFormat("HD")
          File "Z:\sequenceManager\mysql\create.py", line 52, in newFormat
            db.execute(format_query)
          File "Z:\sequenceManager\mysql\Database.py", line 19, in execute
            self.conn.query(query)
        _mysql_exceptions.InterfaceError: (0, '')

    The warning has never been a problem before so I don't think it's related.
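
    For comparison, a sketch of the more common MySQLdb pattern: MySQLdb.connection is the low-level _mysql wrapper, while MySQLdb.connect returns the higher-level connection whose cursors handle parameter escaping. Whether that difference is the actual cause of the InterfaceError here is an assumption, but the rewrite below sidesteps both the low-level API and the hand-built SQL string.

        import MySQLdb

        class Database(object):
            def __init__(self):
                self.conn = MySQLdb.connect(host="localhost", user="seq",
                                            passwd="seqmanager", db="Sequence")

            def execute(self, query, params=None):
                cursor = self.conn.cursor()
                cursor.execute(query, params)     # let the driver escape the values
                self.conn.commit()
                cursor.close()

            def close(self):
                self.conn.close()

        def new_format(db, name, width=0, height=0, fps=0):
            db.execute(
                "INSERT INTO Format (form_name, form_width, form_height, form_fps) "
                "VALUES (%s, %s, %s, %s)",
                (name, width, height, fps))

        if __name__ == "__main__":
            db = Database()
            new_format(db, "HD")
            db.close()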

    Read the article

  • MySQL default value based on view

    - by Jake
    Basically I have a bunch of views based on a simple discriminator column, eg.

        CREATE VIEW tablename AS SELECT * FROM tablename WHERE discrcolumn = "discriminator value"

    Upon inserting a new row into this view, it should insert "discriminator value" into discrcolumn. I tried this, but apparently MySQL doesn't figure this out itself, as it throws an error "Field of view viewname underlying table does not have a default value". The discriminator column is set to NOT NULL of course. How do I mend this? Perhaps a pre-insert trigger?

    UPDATE: Triggers won't work on views, see below comment. Would it work to create a trigger on the table which uses a variable, and set that variable when establishing the connection? For each connection the value of that variable would be the same, but it could differ from other connections.

    EDIT: This appears to work... Setup:

        CREATE TRIGGER insert_[tablename] BEFORE INSERT ON [tablename]
        FOR EACH ROW SET NEW.[discrcolumn] = @variable

    Runtime:

        SET @variable = [descrvalue];
        INSERT INTO [viewname] ([columnlist]) VALUES ([values]);

    Read the article

  • How do I query SQLite Database in Android?

    - by kunjaan
    I successfully created the database and inserted a row, however I cannot query it for some reason. My Droid crashes every time.

        String DATABASE_NAME = "myDatabase.db";
        String DATABASE_TABLE = "mainTable";
        String DATABASE_CREATE = "create table if not exists " + DATABASE_TABLE
                + " ( value VARCHAR not null);";

        SQLiteDatabase myDatabase = null;
        myDatabase = openOrCreateDatabase(DATABASE_NAME, Context.MODE_PRIVATE, null);
        myDatabase.execSQL(DATABASE_CREATE);

        // Create a new row of values to insert.
        ContentValues newValues = new ContentValues();
        // Assign values for each row.
        newValues.put("value", "kunjan");
        // Insert the row into your table
        myDatabase.insert(DATABASE_TABLE, null, newValues);

        String[] result_columns = new String[] { "value" };
        Cursor allRows = myDatabase.query(true, DATABASE_TABLE, result_columns,
                null, null, null, null, null, null);
        if (allRows.moveToFirst()) {
            String value = allRows.getString(1);
            TextView foo = (TextView) findViewById(R.id.TextView01);
            foo.setText(value);
        }
        allRows.close();
        myDatabase.close();
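
    A sketch of the read-back with the most likely culprit fixed: Cursor columns are 0-based, so a single-column query only has column 0 and getString(1) throws an out-of-range exception, which would crash exactly as described. Looking the index up by name avoids the off-by-one (this continues the snippet above).

        Cursor allRows = myDatabase.query(true, DATABASE_TABLE, result_columns,
                null, null, null, null, null, null);
        if (allRows.moveToFirst()) {
            // Column indexes start at 0; resolve it by name to be safe.
            String value = allRows.getString(allRows.getColumnIndex("value"));
            TextView foo = (TextView) findViewById(R.id.TextView01);
            foo.setText(value);
        }
        allRows.close();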

    Read the article

  • Unique prime factors using HashSet

    - by theGreenCabbage
    I wrote a method that recursively finds prime factors. Originally, the method simply printed values. I am currently trying to add them to a HashSet to find the unique prime factors. In each of my original print statements, I added a primes.add() in order to add that particular integer into my set. My printed output remains the same, for example, if I put in the integer 24, I get 2*2*2*3. However, as soon as I print the HashSet, the output is simply [2].

        public static Set<Integer> primeFactors(int n) {
            Set<Integer> primes = new HashSet<Integer>();
            if (n <= 1) {
                System.out.print(n);
                primes.add(n);
            } else {
                for (int factor = 2; factor <= n; factor++) {
                    if (n % factor == 0) {
                        System.out.print(factor);
                        primes.add(factor);
                        if (factor < n) {
                            System.out.print('*');
                            primeFactors(n/factor);
                        }
                        return primes;
                    }
                }
            }
            return primes;
        }

    I have tried debugging via putting print statements around every line, but was unable to figure out why my .add() was not adding some values into my HashSet.
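
    A sketch of the likely fix: the recursive call builds and returns its own HashSet, but that returned set is discarded, so only the first factor ever reaches the outer set while the print statements still show every factor. Merging the recursive result in with addAll is enough; the handling of 1 below is also simplified.

        import java.util.HashSet;
        import java.util.Set;

        public class PrimeFactors {
            public static Set<Integer> primeFactors(int n) {
                Set<Integer> primes = new HashSet<Integer>();
                if (n <= 1) {
                    return primes;                           // 1 has no prime factors
                }
                for (int factor = 2; factor <= n; factor++) {
                    if (n % factor == 0) {
                        primes.add(factor);
                        primes.addAll(primeFactors(n / factor));  // keep the recursive factors
                        return primes;
                    }
                }
                return primes;
            }

            public static void main(String[] args) {
                System.out.println(primeFactors(24));        // [2, 3] (order not guaranteed)
            }
        }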

    Read the article

  • copy rows before updating them to preserve archive in Postgres

    - by punkish
    I am experimenting with creating a table that keeps a version of every row. The idea is to be able to query for how the rows were at any point in time, even if the query has JOINs. Consider a system where the primary resource is books, that is, books are queried for, and author info comes along for the ride.

        CREATE TABLE authors (
            author_id INTEGER NOT NULL,
            version INTEGER NOT NULL CHECK (version > 0),
            author_name TEXT,
            is_active BOOLEAN DEFAULT '1',
            modified_on TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            PRIMARY KEY (author_id, version)
        );

        INSERT INTO authors (author_id, version, author_name)
        VALUES (1, 1, 'John'), (2, 1, 'Jack'), (3, 1, 'Ernest');

    I would like to be able to update the above like so

        UPDATE authors SET author_name = 'Jack K' WHERE author_id = 1;

    and end up with

        2, 1, Jack,   t, 2012-03-29 21:35:00
        2, 2, Jack K, t, 2012-03-29 21:37:40

    which I can then query with

        SELECT author_name, modified_on
        FROM authors
        WHERE author_id = 2 AND modified_on < '2012-03-29 21:37:00'
        ORDER BY version DESC
        LIMIT 1;

    to get

        2, 1, Jack, t, 2012-03-29 21:35:00

    Something like the following doesn't really work

        CREATE OR REPLACE FUNCTION archive_authors() RETURNS TRIGGER AS $archive_author$
        BEGIN
            IF (TG_OP = 'UPDATE') THEN
                -- The following fails because author_id,version PK already exists
                INSERT INTO authors (author_id, version, author_name)
                VALUES (OLD.author_id, OLD.version, OLD.author_name);
                UPDATE authors SET version = OLD.version + 1
                WHERE author_id = OLD.author_id AND version = OLD.version;
                RETURN NEW;
            END IF;
            RETURN NULL; -- result is ignored since this is an AFTER trigger
        END;
        $archive_author$ LANGUAGE plpgsql;

        CREATE TRIGGER archive_author
        AFTER UPDATE OR DELETE ON authors
        FOR EACH ROW EXECUTE PROCEDURE archive_authors();

    How can I achieve the above? Or, is there a better way to accomplish this? Ideally, I would prefer to not create a shadow table to store the archived rows.
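
    One way to get that behaviour while keeping everything in one table, sketched below as a variant of the trigger above: make it a BEFORE UPDATE trigger that inserts the incoming data as a fresh version and then returns NULL, which cancels the original UPDATE and leaves the old row untouched. The INSERT does not re-fire this trigger because it only fires on UPDATE. One caveat: the UPDATE statement should target the latest version (e.g. add AND version = (SELECT max(version) FROM authors a WHERE a.author_id = authors.author_id)) so only one row is versioned per statement.

        CREATE OR REPLACE FUNCTION version_authors() RETURNS TRIGGER AS $version_authors$
        BEGIN
            -- Write the incoming data as a brand-new version.
            INSERT INTO authors (author_id, version, author_name, is_active)
            VALUES (OLD.author_id, OLD.version + 1, NEW.author_name, NEW.is_active);

            -- Returning NULL from a BEFORE ROW trigger skips the original UPDATE,
            -- so the old version stays exactly as it was (modified_on included).
            RETURN NULL;
        END;
        $version_authors$ LANGUAGE plpgsql;

        CREATE TRIGGER version_author
        BEFORE UPDATE ON authors
        FOR EACH ROW EXECUTE PROCEDURE version_authors();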

    Read the article

  • Problems with ObjectDataSource when giving it attributes like delete, insert and update

    - by kamal
    After going through the process of adding the various attributes like insert, delete and update, when I run it through the browser, editing works but updating and deleting don't (it shows the following for the update, and the same thing for delete). My friends think I need to write code to repair the problem. Can you help me please? It shows this:

        Server Error in '/WebSite3' Application.

        ObjectDataSource 'ObjectDataSource1' could not find a non-generic method 'Update' that has
        parameters: First_name, Surname, Original_author_id, First name, original_author id.

        Description: An unhandled exception occurred during the execution of the current web request.
        Please review the stack trace for more information about the error and where it originated in the code.

        Exception Details: System.InvalidOperationException: ObjectDataSource 'ObjectDataSource1' could not
        find a non-generic method 'Update' that has parameters: First_name, Surname, Original_author_id,
        First name, original_author id.

        Source Error: An unhandled exception was generated during the execution of the current web request.
        Information regarding the origin and location of the exception can be identified using the exception
        stack trace below.

        Stack Trace:

        [InvalidOperationException: ObjectDataSource 'ObjectDataSource1' could not find a non-generic method 'Update' that has parameters: First_name, Surname, Original_author_id, First name, original_author id.]
           System.Web.UI.WebControls.ObjectDataSourceView.GetResolvedMethodData(Type type, String methodName, IDictionary allParameters, DataSourceOperation operation) +1119426
           System.Web.UI.WebControls.ObjectDataSourceView.ExecuteUpdate(IDictionary keys, IDictionary values, IDictionary oldValues) +1008
           System.Web.UI.DataSourceView.Update(IDictionary keys, IDictionary values, IDictionary oldValues, DataSourceViewOperationCallback callback) +92
           System.Web.UI.WebControls.GridView.HandleUpdate(GridViewRow row, Int32 rowIndex, Boolean causesValidation) +907
           System.Web.UI.WebControls.GridView.HandleEvent(EventArgs e, Boolean causesValidation, String validationGroup) +704
           System.Web.UI.WebControls.GridView.OnBubbleEvent(Object source, EventArgs e) +95
           System.Web.UI.Control.RaiseBubbleEvent(Object source, EventArgs args) +37
           System.Web.UI.WebControls.GridViewRow.OnBubbleEvent(Object source, EventArgs e) +123
           System.Web.UI.Control.RaiseBubbleEvent(Object source, EventArgs args) +37
           System.Web.UI.WebControls.LinkButton.OnCommand(CommandEventArgs e) +118

    Read the article

  • How can you transform a set of numbers into mostly whole ones?

    - by Alice
    Small amount of background: I am working on a converter that bridges between a map maker (Tiled) that outputs in XML, and an engine (Angel2D) that inputs lua tables. Most of this is straight forward However, Tiled outputs in pixel offsets (integers of absolute values), while Angel2D inputs OpenGL units (floats of relative values); a conversion factor between these two is needed (for example, 32px = 1gu). Since OpenGL units are abstract, and the camera can zoom in or out if the objects are too small or big, the actual conversion factor isn't important; I could use a random number, and the user would merely have to zoom in or out. But it would be best if the conversion factor was selected such that most numbers outputted were small and whole (or fractions of small whole numbers), because that makes it easier to work with (and the whole point of the OpenGL units is that they are easy to work with). How would I find such a conversion factor reliably? My first attempt was to use the smallest number given; this resulted in no fractions below 1, but often lead to lots of decimal places where the factors didn't line up. Then I tried the mode of the sequence, which lead to the largest number of 1's possible, but often lead to very long floats for background images. My current approach gets the GCD of the whole sequence, which, when it works, works great, but can easily be thrown off course by a single bad apple. Note that while I could easily just pass the numbers I am given along, or pick some fixed factor, or use one of the conversions I specified above, I am looking for a method to reliably scale this list of integers to small, whole numbers or simple fractions, because this would most likely be unsurprising to the end user; this is not a one off conversion. The end users tend to use 1.0 as their "base" for manipulations (because it's simple and obvious), so it would make more sense for the sizes of entities to cluster around this.
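
    Since the GCD approach only fails because of the occasional outlier, one sketch of a more forgiving variant (Python here, purely illustrative): take the largest candidate divisor that still divides most of the values exactly, so a single bad apple can no longer drag the factor down to 1. The 75% coverage threshold is an arbitrary knob.

        from math import gcd

        def conversion_factor(values, coverage=0.75):
            """A 'robust GCD': the largest factor that exactly divides at least
            `coverage` of the values, ignoring the rest as outliers."""
            values = [abs(v) for v in values if v]
            candidates = sorted({gcd(a, b) for a in values for b in values}, reverse=True)
            for factor in candidates:
                hits = sum(1 for v in values if v % factor == 0)
                if factor > 1 and hits >= coverage * len(values):
                    return factor
            return 1

        print(conversion_factor([32, 64, 96, 100]))   # 32, despite the 100 outlier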

    Read the article

  • Are programming languages and methods inefficient? (assembler and C knowledge needed)

    - by b-gen-jack-o-neill
    Hi, for a long time I have been thinking about and studying the output of the C compiler in assembler form, as well as CPU architecture. I know this may be silly to you, but it seems to me that something is very ineffective. Please don't be angry if I am wrong and there is some reason I do not see for all these principles. I will be very glad if you tell me why it is designed this way. I actually truly believe I am wrong; I know the genius minds of the people who put PCs together knew a reason to do so. What exactly, do you ask? I'll tell you right away, using C as an example.

    1: Stack local scope memory allocation: So, typical local memory allocation uses the stack. Just copy esp to ebp and then allocate all the memory via ebp. OK, I would understand this if you explicitly needed to allocate RAM by default stack values, but if I understand it correctly, modern OSes use paging as a translation layer between the application and physical RAM, where the address you desire is further translated before reaching the actual RAM byte. So why not just say 0x00000000 is int a, 0x00000004 is int b and so on, and access them just by mov 0x00000000,#10? Because you won't actually access memory blocks 0x00000000 and 0x00000004 but those your OS set the paging tables to. Actually, since memory allocation via ebp and esp uses indirect addressing, "my" way would be even faster.

    2: Variable allocation duplicity: When you run an application, the loader loads its code into RAM. When you create a variable, or a string, the compiler generates code that pushes these values onto the top of the stack when created in main. So there is an actual instruction for doing so, and the actual number in memory. So there are 2 entries of the same value in RAM: one in the form of an instruction, the second in the form of the actual bytes in RAM. But why? Why not, when declaring the variable, just work out which memory block it will live at, and then, when it is used, just insert that memory location?

    Read the article

  • SQL Server INSERT, Scope_Identity() and physical writing to disc

    - by TheBlueSky
    Hello everyone, I have a stored procedure that does, among other stuff, some inserts into different tables inside a loop. See the example below for a clearer understanding:

        INSERT INTO T1 VALUES ('something')
        SET @MyID = Scope_Identity()

        ... some stuff go here

        INSERT INTO T2 VALUES (@MyID, 'something else')

        ... The rest of the procedure

    These two tables (T1 and T2) each have an IDENTITY(1, 1) column; let's call them ID1 and ID2. However, after running the procedure in our production database (a very busy database) and having more than 6250 records in each table, I have noticed one incident where ID1 does not match ID2, although normally for each record inserted in T1 there is a record inserted in T2 and the identity column in both is incremented consistently. The "wrong" records were something like this:

        ID1   Col1
        ----  ---------
        4709  data-4709
        4710  data-4710

        ID2   ID1   Col1
        ----  ----  ---------
        4709  4710  data-4709
        4710  4709  data-4710

    Note the "inverted" ID1 in the second table. Not knowing that much about SQL Server's underlying operations, I have put together the following "theory"; maybe someone can correct me on this. What I think is that because the loop is faster than physically writing to the table, and/or maybe some other thing delayed the writing process, the records were buffered. When the time came to write them, they were written in no particular order. Is that even possible? If not, how can the scenario above be explained? If yes, then I have another question to raise: what if the first insert (from the code above) got delayed? Doesn't that mean I won't get the correct IDENTITY to insert into the second table? If the answer to this is also yes, what can I do to ensure the insertions into the two tables happen in sequence with the correct IDENTITY? I appreciate any comment and information that helps me understand this. Thanks in advance.
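
    A sketch of the safer shape, with hypothetical column names: capture SCOPE_IDENTITY() immediately (as the procedure already does) and treat that captured value, not any assumed lock-step between the two tables' identity counters, as the contract. Wrapping the pair in a transaction makes them commit or fail together, though it does not by itself stop two concurrent executions from interleaving their identity values.

        BEGIN TRANSACTION;

        DECLARE @MyID int;

        INSERT INTO T1 (Col1) VALUES ('something');
        SET @MyID = SCOPE_IDENTITY();      -- identity generated in this scope only;
                                           -- unaffected by other sessions or triggers

        INSERT INTO T2 (T1_ID, Col1) VALUES (@MyID, 'something else');

        COMMIT TRANSACTION;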

    Read the article

  • INSERT stored procedure does not work?

    - by vikitor
    Hello, I'm trying to make an insertion from one database called Suspension into the table called Notification in the Animals database. My stored procedure is this:

        ALTER PROCEDURE [dbo].[spCreateNotification]
            -- Add the parameters for the stored procedure here
            @notRecID int,
            @notName nvarchar(50),
            @notRecStatus nvarchar(1),
            @notAdded smalldatetime,
            @notByWho int
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            -- Insert statements for procedure here
            INSERT INTO Animals.dbo.Notification
            VALUES (@notRecID, @notName, @notRecStatus, null, @notAdded, @notByWho);
        END

    The null is inserted to fill one column that otherwise would not be filled; I've tried different ways, like also naming the columns after the table name and then listing only the fields I've got in VALUES. I know it is not a problem with the stored procedure, because I executed it from SQL Server Management Studio and it works when I enter the parameters. So I guess the problem must be in the repository where I call the stored procedure:

        public void createNotification(Notification not)
        {
            try
            {
                DB.spCreateNotification(not.NotRecID, not.NotName, not.NotRecStatus,
                                        (DateTime)not.NotAdded, (int)not.NotByWho);
            }
            catch
            {
                return;
            }
        }

    It does not record the value in the database. I've been debugging and getting mad about this, because it works when I execute it manually, but not when I automate the process in my application. Does anyone see anything wrong with my code? Thank you
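
    One thing worth ruling out before blaming the database layer, sketched below: the empty catch block swallows every exception, so a failed cast (for example a null NotAdded or NotByWho) or a connection problem looks exactly like "nothing was recorded". This variant keeps the same call but logs and rethrows; the Trace call is just one possible way to surface it.

        public void createNotification(Notification not)
        {
            try
            {
                DB.spCreateNotification(not.NotRecID, not.NotName, not.NotRecStatus,
                                        (DateTime)not.NotAdded, (int)not.NotByWho);
            }
            catch (Exception ex)
            {
                // Surface the real error instead of swallowing it; a null NotAdded or
                // NotByWho throws InvalidCastException here and would otherwise vanish.
                System.Diagnostics.Trace.TraceError("createNotification failed: {0}", ex);
                throw;
            }
        }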

    Read the article

  • How to generate a number in arbitrary range using random()={0..1} preserving uniformness and density?

    - by psihodelia
    Generate a random number in range [x..y] where x and y are any arbitrary floating point numbers. Use function random(), which returns a random floating point number in range [0..1] from P uniformly distributed numbers (call it "density"). Uniform distribution must be preserved and P must be scaled as well. I think, there is no easy solution for such problem. To simplify it a bit, I ask you how to generate a number in interval [-0.5 .. 0.5], then in [0 .. 2], then in [-2 .. 0], preserving uniformness and density? Thus, for [0 .. 2] it must generate a random number from P*2 uniformly distributed numbers. The obvious simple solution random() * (x - y) + y will generate not all possible numbers because of the lower density for all abs(x-y)>1.0 cases. Many possible values will be missed. Remember, that random() returns only a number from P possible numbers. Then, if you multiply such number by Q, it will give you only one of P possible values, scaled by Q, but you have to scale density P by Q as well.
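
    One sketch of a way to keep the density up when |x - y| > 1, in Python with the standard random() standing in for the P-valued generator: split [x, y] into ceil(|x - y|) sub-intervals of length at most 1, pick one uniformly, and spend random()'s full resolution inside it. The result is still uniform over [x, y], and each unit of length gets at least P distinct values.

        import math
        import random

        def uniform_dense(x, y):
            """Uniform in [x, y] without losing density when the range is wider than 1."""
            if x > y:
                x, y = y, x
            m = max(1, int(math.ceil(y - x)))        # number of sub-intervals, each <= 1 long
            piece = random.randrange(m)              # which sub-interval to land in
            return x + (piece + random.random()) * (y - x) / m

        print(uniform_dense(-0.5, 0.5))
        print(uniform_dense(0, 2))
        print(uniform_dense(-2, 0))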

    Read the article

  • How can I make nested string splits?

    - by Statement
    I have what seemed at first to be a trivial problem but turned out to become something I can't figure out how to easily solve. I need to be able to store lists of items in a string. Then those items in turn can be a list, or some other value that may contain my separator character. I have two different methods that unpack the two different cases, but I realized I need to encode the contained value against any separator characters used with string.Split. To illustrate the problem:

        string[] nested = { "mary;john;carl", "dog;cat;fish", "plainValue" };
        string list = string.Join(";", nested);
        string[] unnested = list.Split(';'); // EEK! returns 7 items, expected 3!

    This would produce a list "mary;john;carl;dog;cat;fish;plainValue", a value I can't split to get the three original nested strings from. Indeed, instead of the three original strings, I'd get 7 strings on split and this approach thus doesn't work at all. What I want is to allow the values in my string to be encoded so I can unpack/split the contents just the way they were before I packed/joined them. I assume I might need to move away from string.Split and string.Join and that is perfectly fine. I might just have overlooked some useful class or method. How can I allow any string values to be packed / unpacked into lists? I prefer neat, simple solutions over bulky ones if possible. For the curious mind, I am making extensions for PlayerPrefs in Unity3D, and I can only work with ints, floats and strings. Thus I chose strings to be my data carrier. This is why I am making this nested list of strings.
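
    One simple way that stays with string.Join/string.Split, sketched below: escape the separator inside each item before joining, and undo the escaping after splitting. The "\s"/"\b" escape tokens are arbitrary choices for this sketch; any scheme works as long as escaped items can no longer contain the raw separator.

        using System;
        using System.Linq;

        static class PackedList
        {
            // Escape backslashes first, then semicolons, so no raw ';' remains in any item.
            public static string Pack(string[] items)
            {
                return string.Join(";", items.Select(s =>
                    s.Replace("\\", "\\b").Replace(";", "\\s")).ToArray());
            }

            // Split on the separator, then undo the escaping in the opposite order.
            public static string[] Unpack(string packed)
            {
                return packed.Split(';')
                             .Select(s => s.Replace("\\s", ";").Replace("\\b", "\\"))
                             .ToArray();
            }
        }

    Usage: Pack(new[] { "mary;john;carl", "dog;cat;fish", "plainValue" }) produces a single string that Unpack splits back into exactly three items, and the scheme nests safely because a packed string is itself just a string value.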

    Read the article
