Search Results

Search found 4919 results on 197 pages for 'integer'.


  • Drawbacks of using an integer as a bitfield?

    - by Mark
    I have a bunch of boolean options for things like "accepted payment types", which can include things like cash, credit card, cheque, PayPal, etc. Rather than having a half dozen booleans in my DB, I can just use an integer and assign each payment method a power-of-two value, like so: PAYMENT_METHODS = ( (1<<0, 'Cash'), (1<<1, 'Credit Card'), (1<<2, 'Cheque'), (1<<3, 'Other'), ) and then query the specific bit in Python to retrieve the flag. I know this means the database can't index by specific flags, but are there any other drawbacks?
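
    A minimal sketch of the same flag scheme, written in Java purely for illustration (the constant names mirror the poster's Python tuple and are otherwise hypothetical):

        // Each payment method owns one bit of a single integer column.
        static final int CASH        = 1 << 0;
        static final int CREDIT_CARD = 1 << 1;
        static final int CHEQUE      = 1 << 2;
        static final int OTHER       = 1 << 3;

        // Test a single flag inside the packed value.
        static boolean accepts(int acceptedFlags, int method) {
            return (acceptedFlags & method) != 0;
        }

        // A row storing CASH | CHEQUE accepts cash but not credit cards:
        // accepts(CASH | CHEQUE, CASH)        -> true
        // accepts(CASH | CHEQUE, CREDIT_CARD) -> false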

    Read the article

  • Updating an integer defined column in a MySQL DB with PHP

    - by Zloy Smiertniy
    I have a table defined like this: create table users ( id int(10), age int(3), name varchar (50) ) I want to use a query to update just age, an integer that comes from an HTML form. When it arrives at the method that does the update, it comes as a string, so I convert it to an integer with PHP and try to update the table, but it doesn't work: $age = intval($age); $q2 = "UPDATE users SET age='$age' where username like '$username';"; mysql_query($q2,$con);
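
    For comparison, the same update written with a parameterized statement, where the driver binds the integer for you; this is a Java/JDBC sketch rather than the poster's PHP (PDO/mysqli offer the same idea), and it uses the name column from the table definition above:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        static void updateAge(Connection con, String username, int age) throws SQLException {
            // The driver sends age as an integer; no manual quoting or casting needed.
            try (PreparedStatement ps =
                     con.prepareStatement("UPDATE users SET age = ? WHERE name = ?")) {
                ps.setInt(1, age);
                ps.setString(2, username);
                ps.executeUpdate();
            }
        }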

    Read the article

  • [C++] Converting Integer to a Byte Representation

    - by bobber205
    How can I convert an integer to its byte representation? I want to take an integer and return a vector containing the 1s and 0s of the integer's byte representation. I'm having a heck of a time trying to do this myself, so I thought I would ask whether there is a built-in library function that could help. Thanks!
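
    In C++ the closest built-in is std::bitset<32>(value); the manual loop is short either way. A sketch of that loop in Java (the shifting logic is identical in C++):

        import java.util.ArrayList;
        import java.util.List;

        // Collect the bits of value into a list of 1s and 0s, most significant bit first.
        static List<Integer> toBits(int value) {
            List<Integer> bits = new ArrayList<>();
            for (int i = Integer.SIZE - 1; i >= 0; i--) {
                bits.add((value >>> i) & 1);   // unsigned shift so the sign bit is treated like any other bit
            }
            return bits;
        }
        // toBits(5) ends in ..., 0, 1, 0, 1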

    Read the article

  • new Integer vs valueOf

    - by LB
    Hi, I was using Sonar to make my code cleaner, and it pointed out that I'm using new Integer(1) instead of Integer.valueOf(1), because it seems that valueOf does not instantiate a new object and so is more memory-friendly. How can valueOf not instantiate a new object? How does it work? Is this true for all integers? Thanks.
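
    valueOf can skip allocation because small values come from a preallocated cache; the Javadoc guarantees at least -128 to 127, and values outside that range are still allocated fresh. A small sketch of the observable difference:

        Integer a = Integer.valueOf(1);
        Integer b = Integer.valueOf(1);
        System.out.println(a == b);      // true: both references point at the cached instance

        Integer c = new Integer(1);      // always a fresh object (and deprecated in modern Java)
        Integer d = new Integer(1);
        System.out.println(c == d);      // false

        System.out.println(Integer.valueOf(1000) == Integer.valueOf(1000)); // usually false: outside the cache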

    Read the article

  • ON DELETE RESTRICT Causing Error 150

    - by Levi Hackwith
    CREATE TABLE project ( id INTEGER NOT NULL AUTO_INCREMENT, created_at DATETIME NOT NULL, name VARCHAR(75) NOT NULL, description LONGTEXT NOT NULL, is_active TINYINT NOT NULL DEFAULT '1', PRIMARY KEY (id), INDEX(name, created_at) ) ENGINE = INNODB;
    CREATE TABLE role ( id INTEGER NOT NULL, name VARCHAR(50) NOT NULL, description LONGTEXT NOT NULL, PRIMARY KEY (id) ) ENGINE = INNODB;
    CREATE TABLE organization ( id INTEGER NOT NULL AUTO_INCREMENT, created_at DATETIME NOT NULL, name VARCHAR(100) NOT NULL, is_active TINYINT NOT NULL DEFAULT '1', PRIMARY KEY (id) ) ENGINE = INNODB;
    CREATE TABLE user ( id INTEGER NOT NULL AUTO_INCREMENT, created_at DATETIME NOT NULL, role_id INTEGER NOT NULL, organization_id INTEGER NOT NULL, last_login_at DATETIME NOT NULL, last_ip_address VARCHAR(25) NOT NULL, username VARCHAR(45) NOT NULL, password CHAR(32) NOT NULL, email_address VARCHAR(255) NOT NULL, first_name VARCHAR(45) NOT NULL, last_name VARCHAR(45) NOT NULL, address_1 VARCHAR(100) NOT NULL, address_2 VARCHAR(25) NULL, city VARCHAR(25) NOT NULL, state CHAR(2) NOT NULL, zip_code VARCHAR(10) NOT NULL, primary_phone_number VARCHAR(10) NOT NULL, secondary_phone_number VARCHAR(10) NOT NULL, is_primary_organization_contact TINYINT NOT NULL DEFAULT '0', is_active TINYINT NOT NULL DEFAULT '1', PRIMARY KEY (id),
        CONSTRAINT fk_user_role_id FOREIGN KEY (role_id) REFERENCES role (id) ON UPDATE RESTRICT ON DELETE RESTRICT,
        CONSTRAINT fk_user_organization_id FOREIGN KEY (organization_id) REFERENCES organization (id) ON UPDATE RESTRICT ON DELETE RESTRICT ) ENGINE = INNODB;
    CREATE TABLE project_user ( user_id INTEGER NOT NULL, project_id INTEGER NOT NULL, PRIMARY KEY (user_id, project_id),
        CONSTRAINT fk_project_user_user_id FOREIGN KEY (user_id) REFERENCES user (id) ON UPDATE RESTRICT ON DELETE CASCADE,
        CONSTRAINT fk_project_user_project_id FOREIGN KEY (project_id) REFERENCES project (id) ON UPDATE RESTRICT ON DELETE RESTRICT ) ENGINE = INNODB;
    CREATE TABLE ticket_category ( id INTEGER NOT NULL AUTO_INCREMENT, name VARCHAR(20) NOT NULL, description LONGTEXT NOT NULL, PRIMARY KEY (id) ) ENGINE = INNODB;
    CREATE TABLE ticket_type ( id INTEGER NOT NULL AUTO_INCREMENT, name VARCHAR(20) NOT NULL, description LONGTEXT NOT NULL, PRIMARY KEY (id) ) ENGINE = INNODB;
    CREATE TABLE ticket_status ( id INTEGER NOT NULL AUTO_INCREMENT, name VARCHAR(20) NOT NULL, description LONGTEXT NOT NULL, PRIMARY KEY (id) ) ENGINE = INNODB;
    CREATE TABLE ticket ( id INTEGER NOT NULL AUTO_INCREMENT, created_at DATETIME NOT NULL, project_id INTEGER NOT NULL, created_by INTEGER NOT NULL, submitted_by INTEGER NOT NULL, assigned_to INTEGER NULL, category_id INTEGER NOT NULL, type_id INTEGER NOT NULL, title VARCHAR(75) NOT NULL, description LONGTEXT NOT NULL, contact_type_id TINYINT NOT NULL, affects_all_clients TINYINT NOT NULL DEFAULT '0', is_billable TINYINT NOT NULL DEFAULT '1', esimated_hours DECIMAL(4, 1) NOT NULL DEFAULT '0', hours_worked DECIMAL (4, 1) NOT NULL DEFAULT '0', status_id TINYINT NOT NULL, PRIMARY KEY (id),
        CONSTRAINT fk_ticket_project_id FOREIGN KEY (project_id) REFERENCES project (id) ON UPDATE RESTRICT ON DELETE RESTRICT,
        CONSTRAINT fk_ticket_created_by FOREIGN KEY (created_by) REFERENCES user (id) ON UPDATE RESTRICT ON DELETE RESTRICT,
        CONSTRAINT fk_ticket_submitted_by FOREIGN KEY (submitted_by) REFERENCES user (id) ON UPDATE RESTRICT ON DELETE RESTRICT,
        CONSTRAINT fk_ticket_assigned_to FOREIGN KEY (assigned_to) REFERENCES user (id) ON UPDATE RESTRICT ON DELETE RESTRICT,
        CONSTRAINT fk_ticket_category_id FOREIGN KEY (category_id) REFERENCES ticket_category (id) ON UPDATE RESTRICT ON DELETE RESTRICT,
        CONSTRAINT fk_ticket_type_id FOREIGN KEY (type_id) REFERENCES ticket_type (id) ON UPDATE RESTRICT ON DELETE RESTRICT,
        CONSTRAINT fk_ticket_status_id FOREIGN KEY (status_id) REFERENCES ticket_status (id) ON UPDATE RESTRICT ON DELETE RESTRICT ) ENGINE = INNODB;
    CREATE TABLE ticket_time_entry ( id INTEGER NOT NULL AUTO_INCREMENT, user_id INTEGER NOT NULL, ticket_id INTEGER NOT NULL, started_at DATETIME NOT NULL, ended_at DATETIME NOT NULL, PRIMARY KEY (id),
        CONSTRAINT fk_ticket_time_entry_user_id FOREIGN KEY (user_id) REFERENCES user (id) ON UPDATE RESTRICT ON DELETE RESTRICT,
        CONSTRAINT fk_ticket_time_entry_ticket_id FOREIGN KEY (ticket_id) REFERENCES ticket (id) ON UPDATE RESTRICT ON DELETE RESTRICT ) ENGINE = INNODB;
    The ticket table's create statement causes an error 150. I have no clue why. When I remove the ON DELETE RESTRICT statements from the table declaration, it works. Why is that?

    Read the article

  • How to create single integer index value based on two integers where first is unlimited?

    - by Jan Doggen
    I have table data containing an integer value X ranging from 1 ... unknown, and an integer value Y ranging from 1..9. The data need to be presented in the order 'X then Y'. For one visual component I can set multiple index names: X;Y. But for another component I need a one-dimensional integer value as index (sort order). If X were limited to an upper bound of say 100, the one-dimensional value could simply be X*100 + Y. If the one-dimensional value could have been a real, it could be X + Y/10. But if I want to keep X unlimited, is there a way to calculate a single integer 'indexing' value from X and Y? [Added] Background information: I have a Gantt/TreeList component where the tasks are ordered on a TaskIndex integer. This does not need to be a real database field; I can make it a calculated field in the underlying client dataset. My table data is e.g. as follows:
    ID  Baseline  ParentID
     1     0         0      (task)
     5     2         1      (baseline)
     8     1         1      (baseline)
     9     0         0      (task)
    12     0         0      (task)
    16     1        12      (baseline)
    Task 1 has two baselines numbered 1 and 2 (IDs 8 and 5). Task 9 has no baselines. Task 12 has one baseline numbered 1 (ID 16). Baselines are numbered 1-9 (the Y variable from my question); 0 or null identifies a task. IDs are unlimited (the X variable). The user plays with the visibility of baselines, e.g. he wants to see all tasks with all baselines labeled 1. This is done by updating a filter on the table. Right now I constantly have to recalculate TaskIndex after changing the filter (looping through records). It would be nice if TaskIndex could be calculated on the fly for each record, knowing only the data in the current record (I work in Delphi, where a client dataset has an OnCalcFields event handler that is triggered for each record when necessary). I have no control over the inner workings of the visual component.
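
    Since the baseline number here is a single digit (1-9, with 0 for a task row), the X*100 + Y idea from the question collapses to one multiplication if the index is allowed to be 64-bit. A sketch of that calculated-field logic, with illustrative names and the caveat that it still caps the ID at roughly 9*10^17 rather than making X truly unlimited:

        // baseline is 0 for a task row and 1..9 for a baseline row, so one decimal digit suffices.
        static long taskIndex(long id, int baseline) {
            return id * 10L + baseline;   // sorts by id first, then by baseline within each id
        }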

    Read the article

  • Find max integer size that a floating point type can handle without loss of precision

    - by Checkers
    Double has a larger range than a 64-bit integer, but its precision is less due to its representation (since a double is 64 bits as well, it can't fit more actual values). So, when representing larger integers, you start to lose precision in the integer part. #include <iostream> #include <boost/cstdint.hpp> #include <limits> template<typename T, typename TFloat> void maxint_to_double() { T i = std::numeric_limits<T>::max(); TFloat d = i; std::cout << std::fixed << i << std::endl << d << std::endl; } int main() { maxint_to_double<int, double>(); maxint_to_double<boost::intmax_t, double>(); maxint_to_double<int, float>(); return 0; } This prints: 2147483647 2147483647.000000 9223372036854775807 9223372036854775800.000000 2147483647 2147483648.000000 Note how max int can fit into a double without loss of precision and boost::intmax_t (64-bit in this case) cannot. float can't even hold an int. Now, the question: is there a way in C++ to check if the entire range of a given integer type can fit into a floating point type without loss of precision? Preferably, it would be a compile-time check that can be used in a static assertion, and would not involve enumerating the constants the compiler should know or can compute.
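
    The underlying limit is the significand width: an IEEE 754 double carries 53 significand bits (std::numeric_limits<double>::digits), so every integer up to 2^53 is exact and precision is lost beyond that. A quick illustration, sketched in Java:

        long exact = 1L << 53;   // 9007199254740992; every integer up to here is exactly representable as a double
        System.out.println((double) exact == (double) (exact + 1)); // true: 2^53 + 1 collapses onto 2^53
        System.out.println((long) (double) exact == exact);         // true: 2^53 itself round-trips cleanly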

    Read the article

  • Get the signed/unsigned variant of an integer template parameter without explicit traits

    - by Blair Holloway
    I am looking to define a template class whose template parameter will always be an integer type. The class will contain two members, one of type T, and the other as the unsigned variant of type T -- i.e. if T == int, then T_Unsigned == unsigned int. My first instinct was to do this: template <typename T> class Range { typedef unsigned T T_Unsigned; // does not compile public: Range(T min, T_Unsigned range); private: T m_min; T_Unsigned m_range; }; But it doesn't work. I then thought about using template specialization, like so: template <typename T> struct UnsignedType {}; // deliberately empty template <> struct UnsignedType<int> { typedef unsigned int Type; }; template <typename T> class Range { typedef typename UnsignedType<T>::Type T_Unsigned; /* ... */ }; This works, so long as you specialize UnsignedType for every integer type. It's a little bit of additional copy-paste work (slash judicious use of macros), but serviceable. However, I'm now curious - is there another way of determining the signedness of an integer type, and/or using the unsigned variant of a type, without having to manually define a Traits class per type? Or is this the only way to do it?

    Read the article

  • atol(), atof(), atoi() function behaviour: is there a stable way to convert between strings and integers?

    - by Berkay
    These days I'm playing with the C functions atol(), atof() and atoi(); from a blog post I found a tutorial and applied it. Here are my results: void main() { char a[10],b[10]; puts("Enter the value of a"); gets(a); puts("Enter the value of b"); gets(b); printf("%s+%s=%ld and %s-%s=%ld",a,b,(atol(a)+atol(b)),a,b,(atol(a)-atol(b))); getch(); } There is atof(), which returns the float value of the string, and atoi(), which returns the integer value. Now, to see the difference between the three, I checked this code: main() { char a[]={"2545.965"}; printf("atol=%ld\t atof=%f\t atoi=%d\t\n",atol(a),atof(a),atoi(a)); } The output will be: atol=2545 atof=2545.965000 atoi=2545 Now take char a[]={"heyyou"}; when you run the program the following will be the output (why? is there any solution to convert pure strings to an integer?): atol=0 atof=0 atoi=0 The string should contain a numeric value. Now modify this program as char a[]={"007hey"}; the output in this case (tested on Red Hat) will be: atol=7 atof=7.000000 atoi=7 so the functions have taken only the 007, not the remaining part (why?). Now consider char a[]={"hey007"}; the output of the program will be: atol=0 atof=0.000000 atoi=0 So I just want to convert my strings to numbers and then back to the same text. I played with these functions and, as you see, I'm getting really interesting results. Why is that? Are there any other functions to convert from string to integer and vice versa?

    Read the article

  • WPF: Binding an integer to a TextBlock with TemplateBinding

    - by haagel
    I have a custom control in WPF. In this I have a DependencyProperty of type integer. In the template for the custom control I have a TextBlock, and I would like to show the value of the integer in the TextBlock, but I can't get it to work. I'm using TemplateBinding. If I use the same code but change the type of the DependencyProperty to string, it works fine. But I really want it to be an integer for the rest of my application to work. How can I do this? I've written simplified code that shows the problem. First the custom control: public class MyCustomControl : Control { static MyCustomControl() { DefaultStyleKeyProperty.OverrideMetadata(typeof(MyCustomControl), new FrameworkPropertyMetadata(typeof(MyCustomControl))); MyIntegerProperty = DependencyProperty.Register("MyInteger", typeof(int), typeof(MyCustomControl), new FrameworkPropertyMetadata(0)); } public int MyInteger { get { return (int)GetValue(MyCustomControl.MyIntegerProperty); } set { SetValue(MyCustomControl.MyIntegerProperty, value); } } public static readonly DependencyProperty MyIntegerProperty; } And this is my default template: <Style TargetType="{x:Type local:MyCustomControl}"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type local:MyCustomControl}"> <Border BorderThickness="1" CornerRadius="4" BorderBrush="Black" Background="Azure"> <StackPanel Orientation="Vertical"> <TextBlock Text="{TemplateBinding MyInteger}" HorizontalAlignment="Center" /> </StackPanel> </Border> </ControlTemplate> </Setter.Value> </Setter> </Style> What am I doing wrong? Thanks // David

    Read the article

  • How to handle array element between int and Integer

    - by masato-san
    First, it is a long post, so if you need clarification please let me know. I'm new to Java and having difficulty deciding whether I should use int[] or Integer[]. I wrote a function that finds the odd numbers in an int[] array. public int[] find_odd(int[] arr) { int[] result = new int[arr.length]; for(int i=0; i<arr.length; i++) { if(arr[i] % 2 != 0) { //System.out.println(arr[i]); result[i] = arr[i]; } } return result; } Then, when I pass an int[] array consisting of some integers like below: int[] myArray = {-1, 0, 1, 2, 3}; int[] result = find_odd(myArray); the array "result" contains: 0, -1, 0, 1, 0, 3 Because in Java you have to define the size of the array first, and an empty int[] element is 0, not null. So when I want to test the find_odd() function and expect the array to contain only odd numbers (which it does), it throws an error because the array also includes 0s representing "empty cells" as shown above. My test code: public void testFindOddPassValidIntArray() { int[] arr = {-2, -1, 0, 1, 3}; int[] result = findOddObj.find_odd(arr); //check if return array only contains odd numbers for(int i=0; i<result.length; i++) { if(result[i] != null) { assert is_odd(result[i]) : result[i]; } } } So, my question is: should I use an int[] array and add a check for 0 elements in my test, like: if(result[i] != 0) { assert is_odd(result[i]) : result[i]; } But in this case, if the find_odd() function is broken and returns 0 by miscalculation, I can't catch it, because my test code would just assume that 0 is an empty cell. OR should I use an Integer[] array, whose default is null? How do people deal with this kind of situation?
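
    If the empty cells need to be distinguishable from a genuine 0, a boxed array (or a List<Integer>) keeps null as the "no value" marker; a minimal sketch of the Integer[] variant the question is weighing:

        static Integer[] findOdd(int[] arr) {
            Integer[] result = new Integer[arr.length];   // elements default to null, not 0
            for (int i = 0; i < arr.length; i++) {
                if (arr[i] % 2 != 0) {
                    result[i] = arr[i];                   // boxed only where an odd value really exists
                }
            }
            return result;
        }
        // findOdd(new int[]{-1, 0, 1, 2, 3}) -> [-1, null, 1, null, 3]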

    Read the article

  • Java convert time format to integer or long

    - by behrk2
    Hello, I'm wondering what the best method is to convert a time string in the format 00:00:00 to an integer or a long. My ultimate goal is to be able to convert a bunch of string times to integers/longs, add them to an array, and find the most recent time in the array... I'd appreciate any help, thanks! OK, based on the answers, I have decided to go ahead and compare the strings directly. However, I am having some trouble. It is possible to have more than one "most recent" time, that is, if two times are equal. If that is the case, I want to add the index of both of those times to an ArrayList. Here is my current code: days[0] = "15:00:00"; days[1] = "17:00:00"; days[2] = "18:00:00"; days[3] = "19:00:00"; days[4] = "19:00:00"; days[5] = "15:00:00"; days[6] = "13:00:00"; ArrayList<Integer> indexes = new ArrayList<Integer>(); String curMax = days[0]; for (int x = 1; x < days.length - 1; x++) { if (days[x].compareTo(curMax) > 0) { curMax = days[x]; indexes.add(x); System.out.println("INDEX OF THE LARGEST VALUE: " + x); } } However, this is adding indexes 1, 2, and 3 to the ArrayList... Can anyone help me?
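
    One way to keep every index of the most recent time is to clear the collected indexes whenever a strictly later time appears and append on ties; a sketch under the same data (string comparison works here because the times are fixed-width and zero-padded):

        import java.util.ArrayList;
        import java.util.List;

        // days[] as initialized in the question above
        List<Integer> indexes = new ArrayList<Integer>();
        String curMax = days[0];
        indexes.add(0);
        for (int x = 1; x < days.length; x++) {
            int cmp = days[x].compareTo(curMax);
            if (cmp > 0) {           // strictly later time: previous winners no longer count
                curMax = days[x];
                indexes.clear();
                indexes.add(x);
            } else if (cmp == 0) {   // tie with the current maximum: remember this index too
                indexes.add(x);
            }
        }
        // With the data above, indexes ends up as [3, 4] ("19:00:00" appears twice).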

    Read the article

  • finding N contiguous zero bits in an integer to the left of the MSB from another

    - by James Morris
    First we find the MSB of the first integer, and then try to find a region of N contiguous zero bits within the second number which is to the left of the MSB of the first integer. Here is the C code for my solution:
        #include <limits.h>
        #include <stdbool.h>

        typedef unsigned int t;
        unsigned const t_bits = sizeof(t) * CHAR_BIT;

        _Bool test_fit_within_left_of_msb(unsigned width,
                                          t val1,
                                          t val2,
                                          unsigned* offset_result)
        {
            unsigned offbit = 0;
            unsigned msb = 0;
            t mask;
            t b;

            while (val1 >>= 1)
                ++msb;

            while (offbit + width < t_bits - msb) {
                mask = (((t)1 << width) - 1) << (t_bits - width - offbit);
                b = val2 & mask;

                if (!b) {
                    *offset_result = offbit;
                    return true;
                }

                if (offbit++) /* this conditional bothers me! */
                    b <<= offbit - 1;

                while (b <<= 1)
                    offbit++;
            }
            return false;
        }
    Aside from faster ways of finding the MSB of the first integer, the commented test for a zero offbit seems a bit extraneous, but necessary to skip the highest bit of type t if it is set. I have also implemented similar algorithms working to the right of the MSB of the first number, so they don't require this seemingly extra condition. How can I get rid of this extra condition, or are there far more optimal solutions?

    Read the article

  • XSLT: replace an integer by a string

    - by binogure
    I have a little problem. A node in my XML may contain an integer, and I have to replace this integer with a string. Each number maps to a string. For example I have:
    Integer - String
    1 - TODO
    2 - IN PROGRESS
    3 - DONE
    4 - ERROR
    5 - ABORTED
    Original XML:
        <root>
          <status>1</status>
        </root>
    Converted XML:
        <root>
          <status>TODO</status>
        </root>
    So I want to replace 1 with "TODO", 2 with "IN PROGRESS", and so on.
        <?xml version="1.0" encoding="ISO-8859-1"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/root/status">
            <root>
              <status>
                <xsl:variable name="text" select="." />
                <xsl:choose>
                  <xsl:when test="contains($text, '1')">
                    <xsl:value-of select="'TODO'"/>
                  </xsl:when>
                  <xsl:otherwise>
                    <xsl:value-of select="$text"/>
                  </xsl:otherwise>
                </xsl:choose>
              </status>
            </root>
          </xsl:template>
        </xsl:stylesheet>
    I am asking if there is another way to do that.

    Read the article

  • get next available integer using LINQ

    - by Daniel Williams
    Say I have a list of integers: List<int> myInts = new List<int>() {1,2,3,5,8,13,21}; I would like to get the next available integer, ordered by increasing integer. Not the last or highest one, but in this case the next integer that is not in this list. In this case the number is 4. Is there a LINQ statement that would give me this? As in: var nextAvailable = myInts.SomeCoolLinqMethod(); Edit: Crap. I said the answer should be 2 but I meant 4. I apologize for that! For example: Imagine that you are responsible for handing out process IDs. You want to get the list of current process IDs, and issue a next one, but the next one should not just be the highest value plus one. Rather, it should be the next one available from an ordered list of process IDs. You could get the next available starting with the highest, it does not really matter.
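
    One usual LINQ shape for this is Enumerable.Range(1, n).Except(myInts).First(); the same idea sketched in Java streams (a deliberate language swap, purely to illustrate the logic):

        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;
        import java.util.stream.IntStream;

        List<Integer> myInts = List.of(1, 2, 3, 5, 8, 13, 21);
        Set<Integer> taken = new HashSet<>(myInts);

        // Walk upward from 1 and stop at the first value that is not already in the list.
        int nextAvailable = IntStream.iterate(1, i -> i + 1)
                                     .filter(i -> !taken.contains(i))
                                     .findFirst()
                                     .getAsInt();
        // nextAvailable == 4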

    Read the article

  • need help with parseInt

    - by cmona
    Hello, how can I get, for example, the integer codeInt=082 from the String code='A082'? I have tried this: int codeInt = Integer.parseInt(code.substring(1,4)); and I get codeInt=82; it drops the leading 0, but I want the full code '082'. I thought of parseInt(String s, int radix), but I don't know how to use it here. Any help will be appreciated. Thanks.
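
    The leading zero is a property of the string, not of the number: 082 and 82 are the same int, and parseInt(String, radix) only changes the base being parsed, so it will not preserve padding either. The usual approach is to parse for arithmetic and re-pad for display:

        String code = "A082";
        int codeInt = Integer.parseInt(code.substring(1));   // 82; an int cannot carry a leading zero
        String padded = String.format("%03d", codeInt);      // "082" again, zero-padded to three digits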

    Read the article

  • Integer encoding and decoding problem

    - by aASDASD
    I have a long list of integers, and I need to reduce it down to a single integer. The integer list can be anywhere from 0 to roughly 300 ints long. I need to be able to encode and decode. Is there a better option than a lookup table?

    Read the article

  • Crack Protected Excel Sheet

    - by Gino Abraham
    The following snippet will allow you to unprotect an Excel sheet that is protected with a password. Press Alt + F11 while the workbook is open; this opens the VBA macro editor. Insert a module and copy in the following code. Running it will clear the password and you will be free to edit the file. If you want to unprotect the workbook rather than a worksheet, change ActiveSheet to ThisWorkbook in the code. Happy cracking!
        Sub CrackPassword()
            Dim v1 As Integer, u1 As Integer, w1 As Integer
            Dim v2 As Integer, u2 As Integer, w2 As Integer
            Dim v3 As Integer, u3 As Integer, w3 As Integer
            Dim v4 As Integer, u4 As Integer, w4 As Integer
            On Error Resume Next
            For v1 = 65 To 66: For u1 = 65 To 66: For w1 = 65 To 66
            For v2 = 65 To 66: For u2 = 65 To 66: For w2 = 65 To 66
            For v3 = 65 To 66: For u3 = 65 To 66: For w3 = 65 To 66
            For v4 = 65 To 66: For u4 = 65 To 66: For w4 = 32 To 126
                ActiveSheet.Unprotect Chr(v1) & Chr(u1) & Chr(w1) & _
                    Chr(v2) & Chr(u2) & Chr(v3) & Chr(u3) & Chr(w3) & _
                    Chr(v4) & Chr(u4) & Chr(w4) & Chr(w2)
            Next: Next: Next: Next: Next: Next
            Next: Next: Next: Next: Next: Next
        End Sub

    Read the article

  • Data model for unlockable card collection

    - by Karlos Zafra
    My game has a collection of cards (about 64) that are unlocked one by one, provided that 4 items are collected/gained in the game. Right now the deck of cards is modeled with a plist file, like this:
        <plist>
        <dict>
            <key>card1</key>
            <dict>
                <key>available</key>
                <true/>
                <key>series</key>
                <integer>1</integer>
                <key>model</key>
                <integer>1</integer>
                <key>color</key>
                <integer>1</integer>
            </dict>
            <key>card2</key>
            <dict>
                <key>available</key>
                <false/>
                <key>series</key>
                <integer>1</integer>
                <key>model</key>
                <integer>1</integer>
                <key>color</key>
                <integer>2</integer>
            </dict>
    The value of the key named "available" marks whether the item is locked or not. Right now I have to include the data for the items collected by the user, but putting that inside the plist looks like too much hassle. What kind of structure would be more suitable for this? How do you manage a collection of unlockable items like this?

    Read the article

  • python sqlite3 won't execute a join, but sqlite3 alone will

    - by Francis Davey
    Using the sqlite3 standard library in python 2.6.4, the following query works fine on the sqlite3 command line:
        select segmentid, node_t, start, number,title from ((segments inner join position using (segmentid)) left outer join titles using (legid, segmentid)) left outer join numbers using (start, legid, version);
    But if I execute it via the sqlite3 library in python I get an error:
        >>> conn=sqlite3.connect('data/test.db')
        >>> conn.execute('''select segmentid, node_t, start, number,title from ((segments inner join position using (segmentid)) left outer join titles using (legid, segmentid)) left outer join numbers using (start, legid, version)''')
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        sqlite3.OperationalError: cannot join using column start - column not present in both tables
    The (computed) table on the left hand side of the join appears to have the relevant column, because if I check it by itself I get:
        >>> conn.execute('''select * from ((segments inner join position using (segmentid)) left outer join titles using (legid, segmentid)) limit 20''').description
        (('segmentid', None, None, None, None, None, None), ('html', None, None, None, None, None, None), ('node_t', None, None, None, None, None, None), ('legid', None, None, None, None, None, None), ('version', None, None, None, None, None, None), ('start', None, None, None, None, None, None), ('title', None, None, None, None, None, None))
    My schema is:
        CREATE TABLE leg (legid integer primary key, t char(16), year char(16), no char(16));
        CREATE TABLE numbers ( number char(16), legid integer, version integer, start integer, end integer, prev integer, prev_number char(16), next integer, next_number char(16), primary key (number, legid, version));
        CREATE TABLE position ( segmentid integer, legid integer, version integer, start integer, primary key (segmentid, legid, version));
        CREATE TABLE 'segments' (segmentid integer primary key, html text, node_t integer);
        CREATE TABLE titles (legid integer, segmentid integer, title text, primary key (legid, segmentid));
        CREATE TABLE versions (legid integer, version integer, primary key (legid, version));
        CREATE INDEX idx_numbers_start on numbers (legid, version, start);
    I am baffled as to what I am doing wrong. I have tried quitting/restarting both the python and sqlite command lines and can't see what I'm doing wrong. It may be completely obvious.

    Read the article

  • Why can't the compiler/JVM just make autoboxing "just work"?

    - by Pyrolistical
    Autoboxing is rather scary. While I fully understand the difference between == and .equals, I can't help but have the following bug the hell out of me: final List<Integer> foo = Arrays.asList(1, 1000); final List<Integer> bar = Arrays.asList(1, 1000); System.out.println(foo.get(0) == bar.get(0)); System.out.println(foo.get(1) == bar.get(1)); That prints: true false Why did they do it this way? It has something to do with cached Integers, but if that is the case, why don't they just cache all Integers used by the program? Or why doesn't the JVM always auto-unbox to a primitive? Printing false false or true true would have been way better. EDIT: I disagree about breakage of old code. By having foo.get(0) == bar.get(0) return true you already broke the code. Can't this be solved at the compiler level by replacing Integer with int in the byte code (as long as it is never assigned null)?
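
    What the snippet is running into is the boxing cache: the language guarantees that values from -128 to 127 box to shared Integer instances, while larger values may be freshly allocated, so == compares references that only happen to match for small numbers. Comparing the values themselves is consistent:

        import java.util.Arrays;
        import java.util.List;

        final List<Integer> foo = Arrays.asList(1, 1000);
        final List<Integer> bar = Arrays.asList(1, 1000);

        System.out.println(foo.get(0).equals(bar.get(0)));                   // true
        System.out.println(foo.get(1).equals(bar.get(1)));                   // true
        System.out.println(foo.get(1).intValue() == bar.get(1).intValue());  // true: unboxed comparison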

    Read the article

  • Is there really such a thing as a char or short in modern programming?

    - by Dean P
    Howdy all, I've been learning to program for a Mac over the past few months (I have experience in other languages). Obviously that has meant learning the Objective-C language and thus the plainer C it is predicated on. So I have stumbled on this quote, which refers to the C/C++ language in general, not just the Mac platform: "With C and C++ prefer use of int over char and short. The main reason behind this is that C and C++ perform arithmetic operations and parameter passing at integer level. If you have an integer value that can fit in a byte, you should still consider using an int to hold the number. If you use a char, the compiler will first convert the values into integer, perform the operations and then convert the result back to char." So my question: is this the case in the Mac desktop and iPhone OS environments? I understand that when talking about these environments we're actually talking about 3-4 different architectures (PPC, i386, ARM and the A4 ARM variant), so there may not be a single answer. Nevertheless, does the general principle hold that in modern 32-bit / 64-bit systems, using 1-2 byte variables that don't align with the machine's natural 4-byte words doesn't provide much of the efficiency we may expect? For instance, a plain old C array of 100,000 chars is smaller than the same 100,000 ints by a factor of four, but if, during an enumeration, reading out each index involves a cast/boxing/unboxing of sorts, will we see overall lower 'performance' despite the saved memory overhead?

    Read the article
