Search Results

Search found 269 results on 11 pages for 'bigint'.


  • C++ BigInt multiplication conceptual problem

    - by Kapo
    I'm building a small BigInt library in C++ for use in my programming language. The structure is like the following:

        short digits[ 1000 ];
        int len;

    I have a function that converts a string into a bigint by splitting it up into single chars and putting them into digits. The digits are stored in reverse, so the number 123 looks like this:

        digits[0] = 3
        digits[1] = 2
        digits[2] = 1

    I have already managed to code the adding function, which works perfectly. It works roughly like this:

        overflow = 0
        for i++ until the length of both numbers is exceeded:
            add numberA[i] to numberB[i]
            add overflow to the result
            set overflow to 0
            if the result is bigger than 10:
                subtract 10 from the result
                overflow = 1
            put the result into numberReturn[i]

    (Overflow is in this case what happens when I add 1 to 9: subtract 10 from 10, add 1 to overflow, and the overflow gets added to the next digit.)

    So think of how two numbers are stored, like these:

            0 | 1 | 2
            ---------
        A   2   -   -
        B   0   0   1

    The above represents the digits of the bigints 2 (A) and 100 (B). "-" means uninitialized digits; they aren't accessed. So adding the above numbers works fine: start at 0, add 2 + 0, go to 1, add 0, go to 2, add 1. But when I want to do multiplication with the above structure, my program ends up doing the following: start at 0, multiply 2 with 0 (eek), go to 1, ... So it is obvious that, for multiplication, I have to get an order like this:

            0 | 1 | 2
            ---------
        A   -   -   2
        B   0   0   1

    Then everything would be clear: start at 0, multiply 0 with 0, go to 1, multiply 0 with 0, go to 2, multiply 1 with 2. How can I get the digits into the correct form for multiplication? I don't want to do any array moving/flipping - I need performance!
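
    A note on the approach: with this little-endian layout, schoolbook multiplication needs no reordering at all, because digit i of one factor times digit j of the other simply accumulates into position i + j of the product. A minimal sketch (written in Java rather than the asker's C++; names are illustrative, one decimal digit per array element):

        class MulSketch {
            // Schoolbook multiplication over little-endian base-10 digit arrays:
            // both operands are indexed from position 0 upward, exactly as stored.
            static int[] multiply(int[] a, int lenA, int[] b, int lenB) {
                int[] result = new int[lenA + lenB];   // product needs at most lenA+lenB digits
                for (int i = 0; i < lenA; i++) {
                    int carry = 0;
                    for (int j = 0; j < lenB; j++) {
                        int t = result[i + j] + a[i] * b[j] + carry;
                        result[i + j] = t % 10;        // keep one decimal digit
                        carry = t / 10;                // push the rest up one position
                    }
                    result[i + lenB] += carry;         // carry left over from this row
                }
                return result;                         // high digits may be zero; trim as needed
            }
        }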


  • Approach for altering Primary Key from GUID to BigInt in SQL Server related tables

    - by Tom
    I have two tables with 10-20 million rows that have GUID primary keys, and at least 12 tables related via foreign key. The base tables have 10-20 indexes each. We are moving from GUID to BigInt primary keys. I'm wondering if anyone has any suggestions on an approach. Right now this is the approach I'm pondering:

    1. Drop all indexes and fkeys on all the tables involved.
    2. Add a 'NewPrimaryKey' column to each table.
    3. Make the new key an identity on the two base tables.
    4. Script the data change: "update table x set NewPrimaryKey = y where OldPrimaryKey = z".
    5. Rename the original primary key to 'OldPrimaryKey'.
    6. Rename the 'NewPrimaryKey' column to 'PrimaryKey'.
    7. Script back all the indexes and fkeys.

    Does this seem like a good approach? Does anyone know of a tool or script that would help with this? TD: Edited per additional information. See this blog post that addresses an approach when the GUID is the primary key: http://www.sqlmag.com/blogs/sql-server-questions-answered/sql-server-questions-answered/tabid/1977/entryid/12749/Default.aspx


  • Adding XOR function to bigint library

    - by Jason Gooner
    Hi, I'm using this Big Integer library for JavaScript: http://www.leemon.com/crypto/BigInt.js and I need to be able to XOR two bigInts together; sadly the library doesn't include such a function. The library is relatively simple, so I don't think it's a huge task, just confusing. I've been trying to hack one together without much luck, and would be very grateful if someone could lend me a hand. This is what I've attempted (might be wrong), but I'm guessing the structure is going to be quite similar to some of the other functions in there:

        function xor(x, y) {
            var c, k, i;
            var result = new Array(0);                      // big int for result
            k = x.length > y.length ? x.length : y.length;  // array length of the larger num
            // Make sure result is the correct array size? maybe:
            result = expand(result, k); // ?
            for (c = 0, i = 0; i < k; i++) {
                // Do some xor here
            }
            // return the bigint xor result
            return result;
        }

    What confuses me is that I don't really understand how it stores numbers in the array blocks for the bigInt. I don't think it's simply a case of bigintC[i] = bigintA[i] ^ bigintB[i], since most other functions have some masking operation at the end that I don't understand. I would really appreciate any help getting this working. Thanks
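
    For context: the library appears to store a number as a little-endian array of limbs, each holding bpe bits, and masks with (1 << bpe) - 1 after arithmetic so a limb never exceeds bpe bits. Assuming that layout, XOR really is element-by-element. A hedged sketch, written in Java for clarity (for XOR the mask changes nothing, since XOR cannot overflow a limb, but it mirrors the style of the library's other functions):

        class XorSketch {
            // XOR of two little-endian limb arrays; the shorter operand is
            // treated as zero-padded. bpe = bits per element, as in BigInt.js.
            static int[] xor(int[] x, int[] y, int bpe) {
                int mask = (1 << bpe) - 1;
                int k = Math.max(x.length, y.length);
                int[] result = new int[k];
                for (int i = 0; i < k; i++) {
                    int a = i < x.length ? x[i] : 0;
                    int b = i < y.length ? y[i] : 0;
                    result[i] = (a ^ b) & mask;  // mask is redundant for XOR, kept for style
                }
                return result;
            }
        }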


  • How to retrieve a bigint from a database and place it into an Int64 in SSIS

    - by b0fh
    I ran into this problem a couple of years back and am hoping there has been a fix I just don't know about. I am using an 'Execute SQL Task' in the Control Flow of an SSIS package to retrieve a 'bigint' ID value. The task is supposed to place this in an Int64 SSIS variable, but I am getting the error:

    "The type of the value being assigned to variable "User::AuditID" differs from the current variable type. Variables may not change type during execution. Variable types are strict, except for variables of type Object."

    When I brought this to Microsoft's attention a couple of years back, they stated that I had to work around it by placing the bigint into an SSIS Object variable and then converting the value to Int64 as needed. Does anyone know if this has been fixed, or do I still have to work around this mess?

    Edit, server stats:

        Product: Microsoft SQL Server Enterprise Edition
        Operating System: Microsoft Windows NT 5.2 (3790)
        Platform: NT INTEL X86
        Version: 9.00.1399.06


  • What data structure should I use for a BigInt class

    - by user1086004
    I would like to implement a BigInt class which will be able to handle really big numbers. I only want to add and multiply numbers; however, the class should also handle negative numbers. I wanted to represent the number as a string, but there is a big overhead in converting between string and int for adding. I want to implement addition the way it's taught in high school: add the corresponding digits, and if the result is bigger than 10, carry into the next position. Then I thought it would be better to handle it as an array of unsigned long long int and keep the sign separate in a bool. With this I'm afraid of the size of the int, as the C++ standard, as far as I know, guarantees only that int < float < double. Correct me if I'm wrong. So when I reach some limit I should move forward in the array and start adding into the next array position. Is there any data structure that is more appropriate or better for this? Thanks in advance.
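
    One common answer, sketched below in Java under stated assumptions: keep the sign in its own field and store the magnitude as little-endian limbs in a fixed base. Base 10^9 is a popular compromise, because the sum of two limbs plus a carry always fits a 64-bit accumulator and printing in decimal stays trivial (base 2^32 is faster but harder to print). The class and method names are illustrative:

        class BigNumSketch {
            static final int BASE = 1_000_000_000;      // 10^9 per limb
            // The sign would live in a separate field; addition here is magnitude-only.
            static int[] add(int[] a, int[] b) {        // little-endian limbs
                int n = Math.max(a.length, b.length);
                int[] sum = new int[n + 1];             // room for a final carry
                long carry = 0;
                for (int i = 0; i < n; i++) {
                    long t = carry
                           + (i < a.length ? a[i] : 0)
                           + (i < b.length ? b[i] : 0);
                    sum[i] = (int) (t % BASE);
                    carry = t / BASE;                   // 0 or 1 here
                }
                sum[n] = (int) carry;                   // trim the leading zero if unused
                return sum;
            }
        }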


  • C#: A random BigInt generator

    - by Tony
    Hi, I'm about to implement the DSA algorithm, but there is a problem: I have to choose "p", a prime number with L bits, where 512 <= L <= 1024 and L is a multiple of 64. How do I implement a random generator for a number like that? Int64 offers "only" 63 bits for the magnitude.
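
    The question is about C#, but the shape of the answer is the same in any bignum library: either draw L random bits and test for primality, or ask the library for an L-bit probable prime directly. A sketch with Java's BigInteger (C#'s System.Numerics.BigInteger would play the analogous role; the method name randomDsaPrime is made up for the example):

        import java.math.BigInteger;
        import java.security.SecureRandom;

        class DsaPrimeSketch {
            // p: a probable prime of exactly L bits, 512 <= L <= 1024, L % 64 == 0
            static BigInteger randomDsaPrime(int L) {
                if (L < 512 || L > 1024 || L % 64 != 0)
                    throw new IllegalArgumentException("L must be a multiple of 64 in [512, 1024]");
                SecureRandom rnd = new SecureRandom();
                // new BigInteger(L, rnd) would give a uniform random number below 2^L;
                // probablePrime goes one step further and returns an L-bit prime
                // (composite with probability below 2^-100).
                return BigInteger.probablePrime(L, rnd);
            }
        }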


  • Can T-SQL store ulongs?

    - by Onion-Knight
    Title pretty much says it. I want to store a C#.NET ulong into a T-SQL database. I don't see any provisions for doing this, as the SQL bigint has the same Min/Max values as a normal long. Is there any way I can do this? Or is catching an OverflowException my only hope?
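
    Two workarounds are commonly used here (assumptions on my part, not something this thread confirms): declare the column decimal(20,0), which covers the whole 0..2^64-1 range at some storage cost, or keep bigint and store the ulong's raw bit pattern, reinterpreting on read (in C#, unchecked((long)value)). The bit-pattern round trip sketched in Java, which also lacks an unsigned 64-bit type:

        class ULongSketch {
            // The same 64 bits round-trip through a signed long unchanged;
            // only the interpretation of the top bit differs.
            static long toStorable(String ulongDecimal) {
                return Long.parseUnsignedLong(ulongDecimal);   // "18446744073709551615" -> -1L
            }
            static String fromStorable(long stored) {
                return Long.toUnsignedString(stored);          // -1L -> "18446744073709551615"
            }
            // Caveat: values >= 2^63 are stored as negatives, so ORDER BY and
            // range predicates on the bigint column no longer follow unsigned order.
        }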


  • Programming language for fast calculations with big integers

    - by sub
    I'm doing Project Euler problems at the moment, and I can solve most of them using my own programming language, which uses native C++ integers (so they are bound to 2^32 on my machine). However, at times there are problems which require me to work with very large numbers, and I can't do that with native integers. So I implemented a BigInt library in my language, which unfortunately gets extremely slow at times. Is there a programming language suitable for very efficient handling of big numbers? I mean that I want to do the things I can do in other programming languages (variables, loops, etc.), but faster. If you have tips for working around the 2^32 limit in my language/C++/other languages, please tell me too!
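
    For scale: languages with word-based bignum types (Python's int, Java's BigInteger, C or C++ with GMP) work on 32- or 64-bit limbs rather than decimal digits, and that alone usually turns "extremely slow" into instant at Project Euler sizes. A small Java illustration:

        import java.math.BigInteger;

        class EulerSketch {
            public static void main(String[] args) {
                // Digit sum of 2^1000 (the classic "power digit sum" problem):
                BigInteger n = BigInteger.valueOf(2).pow(1000);
                int digitSum = n.toString().chars().map(c -> c - '0').sum();
                System.out.println(digitSum);   // 1366
            }
        }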


  • Problem creating a database with PHP PDO

    - by Leandro Alonso
    Hello guys, I'm having a problem with an SQL query in my PHP application. When the user accesses it for the first time, the app executes this query to create the whole database:

        CREATE TABLE `databases` (
            `id` bigint(20) NOT NULL auto_increment,
            `driver` varchar(45) NOT NULL,
            `server` text NOT NULL,
            `user` text NOT NULL,
            `password` text NOT NULL,
            `database` varchar(200) NOT NULL,
            PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2;

        -- Table structure for table `modules`
        CREATE TABLE `modules` (
            `id` bigint(20) unsigned NOT NULL auto_increment,
            `title` varchar(100) NOT NULL,
            `type` varchar(150) NOT NULL,
            PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=29;

        -- Table structure for table `modules_data`
        CREATE TABLE `modules_data` (
            `id` bigint(20) NOT NULL auto_increment,
            `module_id` bigint(20) unsigned NOT NULL,
            `key` varchar(150) NOT NULL,
            `value` tinytext,
            PRIMARY KEY (`id`),
            KEY `fk_modules_data_modules` (`module_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=184;

        -- Table structure for table `modules_position`
        CREATE TABLE `modules_position` (
            `user_id` bigint(20) unsigned NOT NULL,
            `tab_id` bigint(20) unsigned NOT NULL,
            `module_id` bigint(20) unsigned NOT NULL,
            `column` smallint(1) default NULL,
            `line` smallint(1) default NULL,
            PRIMARY KEY (`user_id`,`tab_id`,`module_id`),
            KEY `fk_modules_order_users` (`user_id`),
            KEY `fk_modules_order_tabs` (`tab_id`),
            KEY `fk_modules_order_modules` (`module_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        -- Table structure for table `tabs`
        CREATE TABLE `tabs` (
            `id` bigint(20) unsigned NOT NULL auto_increment,
            `title` varchar(60) NOT NULL,
            `columns` smallint(1) NOT NULL,
            PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=12;

        -- Table structure for table `tabs_has_modules`
        CREATE TABLE `tabs_has_modules` (
            `tab_id` bigint(20) unsigned NOT NULL,
            `module_id` bigint(20) unsigned NOT NULL,
            PRIMARY KEY (`tab_id`,`module_id`),
            KEY `fk_tabs_has_modules_tabs` (`tab_id`),
            KEY `fk_tabs_has_modules_modules` (`module_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        -- Table structure for table `users`
        CREATE TABLE `users` (
            `id` bigint(20) unsigned NOT NULL auto_increment,
            `login` varchar(60) NOT NULL,
            `password` varchar(64) NOT NULL,
            `email` varchar(100) NOT NULL,
            `name` varchar(250) default NULL,
            `user_level` bigint(20) unsigned NOT NULL,
            PRIMARY KEY (`id`),
            KEY `fk_users_user_levels` (`user_level`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4;

        -- Table structure for table `users_has_tabs`
        CREATE TABLE `users_has_tabs` (
            `user_id` bigint(20) unsigned NOT NULL,
            `tab_id` bigint(20) unsigned NOT NULL,
            `order` smallint(2) NOT NULL,
            `columns_width` varchar(255) default NULL,
            PRIMARY KEY (`user_id`,`tab_id`),
            KEY `fk_users_has_tabs_users` (`user_id`),
            KEY `fk_users_has_tabs_tabs` (`tab_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        -- Table structure for table `user_levels`
        CREATE TABLE `user_levels` (
            `id` bigint(20) unsigned NOT NULL auto_increment,
            `level` smallint(2) NOT NULL,
            PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3;

        -- Table structure for table `user_meta`
        CREATE TABLE `user_meta` (
            `id` bigint(20) unsigned NOT NULL auto_increment,
            `user_id` bigint(20) unsigned default NULL,
            `key` varchar(150) NOT NULL,
            `value` longtext NOT NULL,
            PRIMARY KEY (`id`),
            KEY `fk_user_meta_users` (`user_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4;

        -- Constraints for dumped tables

        ALTER TABLE `modules_data`
            ADD CONSTRAINT `fk_modules_data_modules` FOREIGN KEY (`module_id`) REFERENCES `modules` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION;

        ALTER TABLE `modules_position`
            ADD CONSTRAINT `fk_modules_order_modules` FOREIGN KEY (`module_id`) REFERENCES `modules` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION,
            ADD CONSTRAINT `fk_modules_order_tabs` FOREIGN KEY (`tab_id`) REFERENCES `tabs` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION,
            ADD CONSTRAINT `fk_modules_order_users` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION;

        ALTER TABLE `users`
            ADD CONSTRAINT `fk_users_user_levels` FOREIGN KEY (`user_level`) REFERENCES `user_levels` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION;

        ALTER TABLE `user_meta`
            ADD CONSTRAINT `fk_user_meta_users` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION;

        INSERT INTO `user_levels` VALUES(1, 10);
        INSERT INTO `user_levels` VALUES(2, 1);
        INSERT INTO `users` VALUES(1, 'admin', 'password', '[email protected]', NULL, 1);
        INSERT INTO `user_meta` VALUES (NULL, 1, 'last_tab', 1);

    In some environments I get this error:

        SQLSTATE[HY000]: General error: 1005 Can't create table 'dms.databases' (errno: 150)

    I tried everything I could find on Google, but nothing works. The strange part is that if I run this query in phpMyAdmin, it creates my database without any error.


  • MySql query optimization help

    - by rohitgu
    I have a few queries and am not able to figure out how to optimize them.

    QUERY 1:

        select * from t_twitter_tracking where classified is null and tweetType='ENGLISH' order by id limit 500;

    QUERY 2:

        Select count(*) as cnt,
               DATE_FORMAT(CONVERT_TZ(wrdTrk.createdOnGMTDate,'+00:00','+05:30'),'%Y-%m-%d') as dat
        from t_twitter_tracking wrdTrk
        where wrdTrk.word like ('dell')
          and CONVERT_TZ(wrdTrk.createdOnGMTDate,'+00:00','+05:30')
              between '2010-12-12 00:00:00' and '2010-12-26 00:00:00'
        group by dat;

    Both these queries run on the same table:

        CREATE TABLE `t_twitter_tracking` (
            `id` BIGINT(20) NOT NULL AUTO_INCREMENT,
            `word` VARCHAR(200) NOT NULL,
            `tweetId` BIGINT(100) NOT NULL,
            `twtText` VARCHAR(800) NULL DEFAULT NULL,
            `language` TEXT NULL,
            `links` TEXT NULL,
            `tweetType` VARCHAR(20) NULL DEFAULT NULL,
            `source` TEXT NULL,
            `sourceStripped` TEXT NULL,
            `isTruncated` VARCHAR(40) NULL DEFAULT NULL,
            `inReplyToStatusId` BIGINT(30) NULL DEFAULT NULL,
            `inReplyToUserId` INT(11) NULL DEFAULT NULL,
            `rtUsrProfilePicUrl` TEXT NULL,
            `isFavorited` VARCHAR(40) NULL DEFAULT NULL,
            `inReplyToScreenName` VARCHAR(40) NULL DEFAULT NULL,
            `latitude` BIGINT(100) NOT NULL,
            `longitude` BIGINT(100) NOT NULL,
            `retweetedStatus` VARCHAR(40) NULL DEFAULT NULL,
            `statusInReplyToStatusId` BIGINT(100) NOT NULL,
            `statusInReplyToUserId` BIGINT(100) NOT NULL,
            `statusFavorited` VARCHAR(40) NULL DEFAULT NULL,
            `statusInReplyToScreenName` TEXT NULL,
            `screenName` TEXT NULL,
            `profilePicUrl` TEXT NULL,
            `twitterId` BIGINT(100) NOT NULL,
            `name` TEXT NULL,
            `location` VARCHAR(100) NULL DEFAULT NULL,
            `bio` TEXT NULL,
            `url` TEXT NULL COLLATE 'latin1_swedish_ci',
            `utcOffset` INT(11) NULL DEFAULT NULL,
            `timeZone` VARCHAR(100) NULL DEFAULT NULL,
            `frenCnt` BIGINT(20) NULL DEFAULT '0',
            `createdAt` DATETIME NULL DEFAULT NULL,
            `createdOnGMT` VARCHAR(40) NULL DEFAULT NULL,
            `createdOnServerTime` DATETIME NULL DEFAULT NULL,
            `follCnt` BIGINT(20) NULL DEFAULT '0',
            `favCnt` BIGINT(20) NULL DEFAULT '0',
            `totStatusCnt` BIGINT(20) NULL DEFAULT NULL,
            `usrCrtDate` VARCHAR(200) NULL DEFAULT NULL,
            `humanSentiment` VARCHAR(30) NULL DEFAULT NULL,
            `replied` BIT(1) NULL DEFAULT NULL,
            `replyMsg` TEXT NULL,
            `classified` INT(32) NULL DEFAULT NULL,
            `createdOnGMTDate` DATETIME NULL DEFAULT NULL,
            `locationDetail` TEXT NULL,
            `geonameid` INT(11) NULL DEFAULT NULL,
            `country` VARCHAR(255) NULL DEFAULT NULL,
            `continent` CHAR(2) NULL DEFAULT NULL,
            `placeLongitude` FLOAT NULL DEFAULT NULL,
            `placeLatitude` FLOAT NULL DEFAULT NULL,
            PRIMARY KEY (`id`),
            INDEX `id` (`id`, `word`),
            INDEX `createdOnGMT_index` (`createdOnGMT`) USING BTREE,
            INDEX `word_index` (`word`) USING BTREE,
            INDEX `location_index` (`location`) USING BTREE,
            INDEX `classified_index` (`classified`) USING BTREE,
            INDEX `tweetType_index` (`tweetType`) USING BTREE,
            INDEX `getunclassified_index` (`classified`, `tweetType`) USING BTREE,
            INDEX `timeline_index` (`word`, `createdOnGMTDate`, `classified`) USING BTREE,
            INDEX `createdOnGMTDate_index` (`createdOnGMTDate`) USING BTREE,
            INDEX `locdetail_index` (`country`, `id`) USING BTREE,
            FULLTEXT INDEX `twtText_index` (`twtText`)
        ) COLLATE='utf8_general_ci' ENGINE=MyISAM ROW_FORMAT=DEFAULT AUTO_INCREMENT=12608048;

    The table has more than 10 million records. How can I optimize it?


  • How to specify a BIGINT in a Rails scaffold?

    - by webdestroya
    I am trying to create a model in Ruby that uses a BIGINT datatype (as opposed to the INT produced by :integer). I have searched all over Google, but all I seem to find is "run an SQL statement to alter the table to a BIGINT". This seems a bit hack-ish to me, so I wanted to know if there is a way to specify a bigint in the Ruby system, like :big_int or something. Any ideas?



  • Why doesn't my processor have built-in BigInt support?

    - by ol
    As far as I understood it, BigInts are usually implemented in most programming languages as strings containing digits, where, when adding two of them, each digit is added one after another like we know it from school, e.g.:

         246
         816
        *  *
        ----
        1062

    Here * marks that there was an overflow (a carry). I learned it this way at school, and all the BigInt adding functions I've implemented work similarly to the example above. So we all know that our processors can only natively manage ints from 0 to 2^32 / 2^64. That means that most scripting languages, in order to be high-level and offer arithmetic with big integers, have to implement/use BigInt libraries that work with integers as strings like the above. But of course this means that they'll be far slower than the processor. So what I've asked myself is: why doesn't my processor have a built-in BigInt function? It would work like any other BigInt library, only (a lot) faster and at a lower level: the processor fetches one digit from the cache/RAM, adds it, and writes the result back again. Seems like a fine idea to me, so why isn't there something like that?
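
    Two details worth separating out here: real bignum libraries (GMP, Java's BigInteger) do not work on decimal strings, they use arrays of full machine words; and the processor does help, via its add-with-carry instruction (ADC on x86), which is exactly the carry chain below done one word per instruction. What hardware cannot easily provide is arbitrary-length operands, since those need loops and memory management. A word-at-a-time sketch in Java (names illustrative):

        class LimbAddSketch {
            // Add two magnitudes stored as little-endian 32-bit limbs, doing a
            // full word per step instead of one decimal digit.
            static int[] add(int[] a, int[] b) {
                int n = Math.max(a.length, b.length);
                int[] sum = new int[n + 1];
                long carry = 0;
                for (int i = 0; i < n; i++) {
                    long t = carry
                           + (i < a.length ? a[i] & 0xFFFFFFFFL : 0)  // limb as unsigned
                           + (i < b.length ? b[i] & 0xFFFFFFFFL : 0);
                    sum[i] = (int) t;        // low 32 bits
                    carry = t >>> 32;        // the hardware carry bit, made explicit
                }
                sum[n] = (int) carry;
                return sum;
            }
        }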


  • Why is it giving me an SQL syntax error?

    - by Tibo
    Do you have any idea why I get this?

        You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '``, `title` varchar(255) collate latin1_general_ci NOT NULL default ``,' at line 3

    The code is like this (the part I'm having a problem with):

        $sql = 'CREATE TABLE `forum` (
            `postid` bigint(20) NOT NULL auto_increment,
            `author` varchar(255) collate latin1_general_ci NOT NULL default ``,
            `title` varchar(255) collate latin1_general_ci NOT NULL default ``,
            `post` mediumtext collate latin1_general_ci NOT NULL,
            `showtime` varchar(255) collate latin1_general_ci NOT NULL default ``,
            `realtime` bigint(20) NOT NULL default `0`,
            `lastposter` varchar(255) collate latin1_general_ci NOT NULL default ``,
            `numreplies` bigint(20) NOT NULL default `0`,
            `parentid` bigint(20) NOT NULL default `0`,
            `lastrepliedto` bigint(20) NOT NULL default `0`,
            `author_avatar` varchar(30) collate latin1_general_ci NOT NULL default `default`,
            `type` varchar(2) collate latin1_general_ci NOT NULL default `1`,
            `stick` varchar(6) collate latin1_general_ci NOT NULL default `0`,
            `numtopics` bigint(20) NOT NULL default `0`,
            `cat` bigint(20) NOT NULL,
            PRIMARY KEY (`postid`)
        );';
        mysql_query($sql, $con) or die(mysql_error());

    Help would be greatly appreciated!


  • What is the benefit of using int instead of bigint in this case?

    - by Yeti
    (MySQL n00b) I have 3 tables; id = int(10), photo_id = bigint(20), and PHOTO records are limited to 3 million.

    PHOTO:

        +-------+---------------+
        | id    | photo_num     |
        +-------+---------------+
        | 1     | 123456789123  |
        | 2     | 987654321987  |
        | 3     | 5432167894321 |
        +-------+---------------+

    COLOR:

        +-------+---------------+-------+
        | id    | photo_num     | color |
        +-------+---------------+-------+
        | 1     | 123456789123  | red   |
        | 2     | 987654321987  | blue  |
        | 3     | 5432167894321 | green |
        +-------+---------------+-------+

    SIZE:

        +-------+---------------+--------+
        | id    | photo_num     | size   |
        +-------+---------------+--------+
        | 1     | 123456789123  | large  |
        | 2     | 987654321987  | small  |
        | 3     | 5432167894321 | medium |
        +-------+---------------+--------+

    Both COLOR and SIZE tables will have several million records. Q1: Is it better to change photo_num in COLOR and SIZE to int(10) and point it to PHOTO's id? Right now I use these (PHOTO is nowhere in the picture):

        SELECT * from COLOR WHERE photo_num='xxx';
        SELECT * from SIZE WHERE photo_num='xxx';

    Q2: How will the SELECT query look if PHOTO's id were used in COLOR and SIZE?
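
    For Q2: once COLOR and SIZE carry PHOTO's surrogate key, lookups by photo_num go through a join. A hedged Java/JDBC sketch (assuming the column becomes photo_id, an int foreign key to PHOTO.id, which is the change Q1 proposes; method name illustrative):

        import java.sql.*;

        class JoinSketch {
            static String colorOf(Connection conn, long photoNum) throws SQLException {
                String sql = "SELECT c.color FROM PHOTO p " +
                             "JOIN COLOR c ON c.photo_id = p.id " +  // int join key
                             "WHERE p.photo_num = ?";                // bigint kept once, in PHOTO
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setLong(1, photoNum);
                    try (ResultSet rs = ps.executeQuery()) {
                        return rs.next() ? rs.getString("color") : null;
                    }
                }
            }
        }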


  • MSSQL: Efficiently dropping a group of rows from a table with millions and millions of rows

    - by Net Citizen
    I recently asked this question: http://stackoverflow.com/questions/2519183/ms-sql-share-identity-seed-amongst-tables (many people wondered why). I have the following layout of a table:

        Table: Stars
            starId bigint
            categoryId bigint
            starname varchar(200)

    But my problem is that I have millions and millions of rows, so when I want to delete stars from the table Stars it is too intense for MS SQL. I cannot use the built-in partitioning for 2005+ because I do not have an Enterprise license. When I do delete, though, I always delete a whole categoryId at a time. I thought of doing a design like this:

        Table: Star_1
            starId bigint
            categoryId bigint constraint rock=1
            starname varchar(200)

        Table: Star_2
            starId bigint
            categoryId bigint constraint rock=2
            starname varchar(200)

    This way I can delete a whole category, and hence millions of rows, in O(1) by doing a simple drop table. My question is: is it a problem to have thousands of tables in your MS SQL? The drop in O(1) is extremely desirable to me. Maybe there's a completely different solution I'm not thinking of?


  • SQL Server: Efficiently dropping a group of rows from a table with millions and millions of rows

    - by Net Citizen
    I recently asked this question: http://stackoverflow.com/questions/2519183/ms-sql-share-identity-seed-amongst-tables (many people wondered why). I have the following layout of a table:

        Table: Stars
            starId bigint
            categoryId bigint
            starname varchar(200)

    But my problem is that I have millions and millions of rows, so when I want to delete stars from the table Stars it is too intense for SQL Server. I cannot use the built-in partitioning for 2005+ because I do not have an Enterprise license. When I do delete, though, I always delete a whole categoryId at a time. I thought of doing a design like this:

        Table: Star_1
            starId bigint
            categoryId bigint constraint rock=1
            starname varchar(200)

        Table: Star_2
            starId bigint
            categoryId bigint constraint rock=2
            starname varchar(200)

    This way I can delete a whole category, and hence millions of rows, in O(1) by doing a simple drop table. My question is: is it a problem to have hundreds of thousands of tables in your SQL Server? The drop in O(1) is extremely desirable to me. Maybe there's a completely different solution I'm not thinking of?


  • Adding negative and positive numbers in Java without BigInt

    - by liamg
    Hi, I'm trying to write a small Java class. I have an object called BigNumber. I wrote a method that adds two positive numbers and another method that subtracts two positive numbers. Now I want them to handle negative numbers, so I wrote a couple of 'if' statements, e.g.:

        if (this.sign == 1 /* means '+' */) {
            if (sn1.sign == 1) {
                if (this.compare(sn1) == -1 /* means this < sn1 */)
                    return sn1.add(this);
                else
                    return this.add(sn1);
            }
        }
        // etc.

    Unfortunately the code looks just ugly, like a bush of ifs and elses. Is there a better way to write that kind of code?

    Edit: I can't just do this.add(sn1) because sometimes I want to add a positive number to a negative one, or negative to negative, but add can handle only positive numbers. So I have to use basic math; for example, instead of adding a negative number to a negative number, I add this.abs() (the absolute value) to sn1.abs() and return the result with the opposite sign.

    Drew: these lines are from the method _add. I use this method to decide what to do with the numbers it receives. Send them to the add method? Or send them to the subtract method but in a different order (sn1.subtract(this))? And so on:

        if (this.sign == 1) {
            if (sn1.sign == 1) {
                if (this.compare(sn1) == -1)
                    return sn1.add(this);
                else
                    return this.add(sn1);
            } else if (wl1.sign == 0)
                return this;
            else {
                if (this.compare(sn1.abs()) == 1)
                    return this.subtract(sn1.abs());
                else if (this.compare(sn1.abs()) == 0)
                    return new BigNumber(0);
                else
                    return sn1.abs().subtract(this).negate(); // return the number with the opposite sign
            }
        } else if (this.sign == 0)
            return sn1;
        else {
            if (wl1.sign == 1) {
                if (this.abs().compare(sn1) == -1)
                    return sn1.subtract(this.abs());
                else if (this.abs().compare(sn1) == 0)
                    return new BigNumber(0);
                else
                    return this.abs().subtract(sn1).negate();
            } else if (sn1.sign == 0)
                return this;
            else
                return (this.abs().add(wl1.abs())).negate();
        }

    As you can see, this code looks horrible..
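
    A common cleanup, shown as a sketch that reuses the asker's own positive-only operations (add, subtract, compare, abs, negate; BigNumber internals assumed, sign taken as 1, 0, or -1): dispatch on the two signs exactly once and let magnitudes do the rest.

        // Sketch: _add rewritten as a single dispatch.
        BigNumber signedAdd(BigNumber a, BigNumber b) {
            if (a.sign == 0) return b;
            if (b.sign == 0) return a;
            if (a.sign == b.sign) {                    // same sign: add magnitudes,
                BigNumber sum = a.abs().add(b.abs());  // keep the shared sign
                return a.sign > 0 ? sum : sum.negate();
            }
            int cmp = a.abs().compare(b.abs());        // signs differ: subtract the
            if (cmp == 0) return new BigNumber(0);     // smaller magnitude from the larger
            BigNumber diff = cmp > 0 ? a.abs().subtract(b.abs())
                                     : b.abs().subtract(a.abs());
            boolean positive = (cmp > 0 ? a.sign : b.sign) > 0;
            return positive ? diff : diff.negate();    // result takes the larger side's sign
        }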


  • Can I use anything other than BIGINT as Primary Key data type in SQLite?

    - by weenet
    I was psyched about the possibility of using SQLite as a database solution during development, so that I could focus on writing the code first and dynamically generating the db at runtime using NHibernate's SchemaExport functionality. However, I'm running into a few issues, not the least of which is that SQLite seems to require me to use Int64 for my primary keys (vs., say, Int32 or Guid). Is there any way around this?


  • Scala compiler says unreachable code, why?

    - by taotree
    I'm new to Scala... Here's the code:

        def ack2(m: BigInt, n: BigInt): BigInt = {
          val z = BigInt(0)
          (m, n) match {
            case (z, _) => n + 1
            case (_, z) => ack2(m - 1, 1)              // Compiler says unreachable code on the paren of ack2(
            case _      => ack2(m - 1, ack2(m, n - 1)) // Compiler says unreachable code on the paren of ack2(
          }
        }

    I'm trying to understand that... why is it giving that error?


  • Recursive COUNT Query (SQL Server)

    - by Cosmo
    Hello guys! I have two MS SQL tables: Category and Question. Each Question is assigned to exactly one Category, and one Category may have many subcategories.

        Category
            Id : bigint (PK)
            Name : nvarchar(255)
            AcceptQuestions : bit
            IdParent : bigint (FK)

        Question
            Id : bigint (PK)
            Title : nvarchar(255)
            ...
            IdCategory : bigint (FK)

    How do I recursively count all Questions for a given Category (including questions in subcategories)? I've already tried it based on several tutorials but still can't figure it out :(
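
    The shape of the computation, sketched in Java over a hypothetical in-memory mirror of the two tables (in T-SQL the same walk is typically expressed as a recursive CTE over Category.IdParent):

        import java.util.*;

        class CountSketch {
            // childrenByParent: Category.Id values grouped by Category.IdParent
            // questionCounts:   COUNT(*) of Question rows per Question.IdCategory
            static long countQuestions(long categoryId,
                                       Map<Long, List<Long>> childrenByParent,
                                       Map<Long, Long> questionCounts) {
                long total = questionCounts.getOrDefault(categoryId, 0L);
                for (long child : childrenByParent.getOrDefault(categoryId, List.of()))
                    total += countQuestions(child, childrenByParent, questionCounts);
                return total;
            }
        }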



  • DMV {dm_os_ring_buffers} - Queries to help pinpoint current Issues / usual usage patterns

    - by NeilHambly
    I've been running some queries (below) to help me identify when I have had time-sensitive performance issues around memory/CPU. I didn't want to load additional overhead onto the system (unless absolutely necessary) using traces or Profiler; naturally we have various methods for this: Perfmon counters, DBCC, DMVs, etc. One quick way I like is to run a few DMV queries (normally back in seconds) to help me find those RECENT specific time periods when the system has been substantially changed in some way, using the DMV dm_os_ring_buffers.

    This one helps me identify when I'm experiencing timeout errors (1222); modify the code to look for other errors, as highlighted below:

        DECLARE @ts_now BIGINT, @dt_max BIGINT, @dt_min BIGINT

        SELECT @ts_now = cpu_ticks / CONVERT(FLOAT, cpu_ticks_in_ms)
        FROM sys.dm_os_sys_info

        SELECT @dt_max = MAX(timestamp), @dt_min = MIN(timestamp)
        FROM sys.dm_os_ring_buffers
        WHERE ring_buffer_type = N'RING_BUFFER_EXCEPTION'

        SELECT
            record_id,
            DATEADD(ms, -1 * (@ts_now - [timestamp]), GETDATE()) AS EventTime,
            y.Error,
            UserDefined,
            b.description AS NormalizedText
        FROM
            (
            SELECT
                record.value('(./Record/@id)[1]', 'int')                    AS record_id,
                record.value('(./Record/Exception/Error)[1]', 'int')        AS Error,
                record.value('(./Record/Exception/UserDefined)[1]', 'int')  AS UserDefined,
                TIMESTAMP
            FROM
                (
                SELECT TIMESTAMP, CONVERT(XML, record) AS record
                FROM sys.dm_os_ring_buffers
                WHERE ring_buffer_type = N'RING_BUFFER_EXCEPTION'
                AND record LIKE '% %'
                ) AS x
            ) AS y
        INNER JOIN sys.sysmessages b ON y.Error = b.error
        WHERE b.msglangid = 1033 AND y.Error = 1222
        ORDER BY record_id DESC

    Sample output:

        record_id  EventTime         Error  UserDefined  NormalizedText
        15199195   18/03/2010 14:00  1222   0            Lock request time out period exceeded.
        15199194   18/03/2010 14:00  1222   0            Lock request time out period exceeded.
        15199193   18/03/2010 14:00  1222   0            Lock request time out period exceeded.
        15199192   18/03/2010 14:00  1222   0            Lock request time out period exceeded.
        15199191   18/03/2010 14:00  1222   0            Lock request time out period exceeded.

    This one helps me identify when I have unusually high processing (> 50%) or numbers of page faults:

        SELECT
            record.value('(./Record/@id)[1]', 'int')                                                    AS record_id,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')          AS SystemIdle,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int')  AS SQLProcessUtilization,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/UserModeTime)[1]', 'bigint')     AS UserModeTime,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/KernelModeTime)[1]', 'bigint')   AS KernelModeTime,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/PageFaults)[1]', 'bigint')       AS PageFaults,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/WorkingSetDelta)[1]', 'bigint')  AS WorkingSetDelta,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/MemoryUtilization)[1]', 'int')   AS MemoryUtilization,
            TIMESTAMP
        FROM
            (
            SELECT TIMESTAMP, CONVERT(XML, record) AS record
            FROM sys.dm_os_ring_buffers
            WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
            AND record LIKE '% %'
            ) AS x

    Example, showing entries with > 50% SQL CPU:

        record_id  SystemIdle  SQLProcessUtilization  UserModeTime  KernelModeTime  PageFaults  WorkingSetDelta  MemoryUtilization  TIMESTAMP
        111916     66          29                     36718750      1374843750      21333       -40960           100                7991061289
        111917     54          41                     50156250      1954062500      26914       -28672           100                7991121290
        111918     57          39                     42968750      1838437500      30096       20480            100                7991181290
        111919     41          53                     43906250      2530156250      22088       -4096            100                7991241307
        111920     48          45                     40937500      2124062500      26395       8192             100                7991301310
        111921     52          43                     35625000      2052812500      21996       155648           100                7991361311
        111922     40          55                     36875000      2637343750      33355       -262144          100                7991421311
        111923     36          58                     44843750      2786562500      47019       28672            100                7991481311
        111924     31          64                     53437500      3046562500      31027       61440            100                7991541314
        111925     36          57                     43906250      2711250000      37074       -8192            100                7991601317
        111926     52          43                     43437500      2060156250      29176       20480            100                7991661318
        111927     71          24                     33750000      1141250000      14478       16384            100                7991721320
        111928     71          23                     34531250      1116250000      12711       -20480           100                7991781320
        111929     53          36                     46562500      1714062500      26684       200704           100                7991841323

    Finally, one to provide some understanding of the level of memory state changes that are occurring:

        SELECT
            record.value('(./Record/@id)[1]', 'int') AS 'record_id',
            record.value('(./Record/ResourceMonitor/Notification)[1]', 'VARCHAR(100)') AS 'ReservedMemory',
            record.value('(./Record/ResourceMonitor/Indicators)[1]', 'int') AS 'Indicators',
            record.value('(./Record/ResourceMonitor/Effect/@state)[1]', 'VARCHAR(100)') + ' - '
                + record.value('(./Record/ResourceMonitor/Effect/@reversed)[1]', 'VARCHAR(100)') + ' - '
                + record.value('(./Record/ResourceMonitor/Effect)[1]', 'VARCHAR(100)') AS 'APPLY-HIGHPM',
            record.value('(./Record/ResourceMonitor/Effect/@state)[2]', 'VARCHAR(100)') + ' - '
                + record.value('(./Record/ResourceMonitor/Effect/@reversed)[2]', 'VARCHAR(100)') + ' - '
                + record.value('(./Record/ResourceMonitor/Effect)[2]', 'VARCHAR(100)') AS 'APPLY-HIGHPM',
            record.value('(./Record/ResourceMonitor/Effect/@state)[3]', 'VARCHAR(100)') + ' - '
                + record.value('(./Record/ResourceMonitor/Effect/@reversed)[3]', 'VARCHAR(100)') + ' - '
                + record.value('(./Record/ResourceMonitor/Effect)[3]', 'VARCHAR(100)') AS 'REVERT_HIGHPM',
            record.value('(./Record/MemoryNode/ReservedMemory)[1]', 'int') AS 'ReservedMemory',
            record.value('(./Record/MemoryNode/CommittedMemory)[1]', 'int') AS 'CommittedMemory',
            record.value('(./Record/MemoryNode/SharedMemory)[1]', 'int') AS 'SharedMemory',
            record.value('(./Record/MemoryNode/AWEMemory)[1]', 'int') AS 'AWEMemory',
            record.value('(./Record/MemoryNode/SinglePagesMemory)[1]', 'int') AS 'SinglePagesMemory',
            record.value('(./Record/MemoryNode/CachedMemory)[1]', 'int') AS 'CachedMemory',
            record.value('(./Record/MemoryRecord/MemoryUtilization)[1]', 'int') AS 'MemoryUtilization',
            record.value('(./Record/MemoryRecord/TotalPhysicalMemory)[1]', 'int') AS 'TotalPhysicalMemory',
            record.value('(./Record/MemoryRecord/AvailablePhysicalMemory)[1]', 'int') AS 'AvailablePhysicalMemory',
            record.value('(./Record/MemoryRecord/TotalPageFile)[1]', 'int') AS 'TotalPageFile',
            record.value('(./Record/MemoryRecord/AvailablePageFile)[1]', 'int') AS 'AvailablePageFile',
            record.value('(./Record/MemoryRecord/TotalVirtualAddressSpace)[1]', 'bigint') AS 'TotalVirtualAddressSpace',
            record.value('(./Record/MemoryRecord/AvailableVirtualAddressSpace)[1]', 'bigint') AS 'AvailableVirtualAddressSpace',
            record.value('(./Record/MemoryRecord/AvailableExtendedVirtualAddressSpace)[1]', 'bigint') AS 'AvailableExtendedVirtualAddressSpace',
            TIMESTAMP
        FROM
            (
            SELECT TIMESTAMP, CONVERT(XML, record) AS record
            FROM sys.dm_os_ring_buffers
            WHERE ring_buffer_type = N'RING_BUFFER_RESOURCE_MONITOR'
            AND record LIKE '% %'
            ) AS x

