Search Results

Search found 680 results on 28 pages for 'precision'.

Page 18 of 28

  • Can you get access to the NumberFormatter used by ICU MessageFormat

    - by Travis
    This may be a niche question, but I'm working with ICU to format currency strings and I've bumped into a situation that I don't quite understand. When using the MessageFormat class, is it possible to get access to the NumberFormat object it uses to format currency strings? When you create a NumberFormat instance yourself, you can specify attributes like the precision and rounding used when creating currency strings. I have an issue where, for the South Korean locale ("ko_KR"), the MessageFormat class seems to create currency strings with rounding (100.50 becomes ₩100). In the areas where I use NumberFormat directly, I set setMaximumFractionDigits and setMinimumFractionDigits to 2, but I can't seem to set this on the MessageFormat. Any ideas?
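
    For illustration, here is a minimal sketch of one way to take control of the number format used for a currency argument, shown with the JDK's java.text classes; if ICU4J's MessageFormat mirrors this java.text API, the same approach may apply. The locale, message pattern, and argument index below are assumptions made for the example, not taken from the question.

        import java.text.MessageFormat;
        import java.text.NumberFormat;
        import java.util.Locale;

        public class CurrencyFormatSketch {
            public static void main(String[] args) {
                Locale korea = new Locale("ko", "KR");
                // Hypothetical message with a single currency argument at index 0.
                MessageFormat mf = new MessageFormat("Total: {0,number,currency}", korea);

                // Build a currency NumberFormat that keeps two fraction digits
                // instead of letting the locale default round them away.
                NumberFormat won = NumberFormat.getCurrencyInstance(korea);
                won.setMinimumFractionDigits(2);
                won.setMaximumFractionDigits(2);

                // Install it for argument 0, replacing the format the
                // MessageFormat created internally for that argument.
                mf.setFormatByArgumentIndex(0, won);

                System.out.println(mf.format(new Object[]{100.50}));
            }
        }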

    Read the article

  • Save matrix of double values in OpenCV

    - by Christian
    I have an OpenCV matrix of double (CV_32F) values. I'd like to save it to disk. I know I could convert it to a 1-channel 8-bit IplImage and save that, but that way I lose precision. Is there a way to save it directly in the 32-bit format, without having to convert it first? It would also be nice if the resulting file had an image format, so I can view the result as an image.

    Read the article

  • GWT's JSONParser producing incorrect values for numbers.

    - by WesleyJohnson
    I'm working with GWT and parsing a JSON result from an ASP.NET web service method that returns a DataTable. I can parse the result into a JSONValue/JSONObject just fine. The issue I'm having is that one of my columns is a DECIMAL(20, 0), and the values that are getting parsed into JSON aren't exact. To demonstrate without the need for a WS call, in GWT I threw this together: String jsonString = "{value:4768428229311981600}"; JSONObject jsonObject = JSONParser.parse( jsonString ).isObject(); Window.alert( jsonObject.toString() ); This in turn alerts: {"value":4768428229311982000} I'm under the understanding that GWT's JSONParser is just using eval() to do the parsing, so is this some sort of number/precision issue with JavaScript that I've never been aware of? I'll admit I don't work with numbers that much in JavaScript, and I might be able to work around this by changing the .NET web service to return this column as a string, but I'd really rather not do that. Thanks for any help.
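
    For what it's worth, this looks like a double-precision issue rather than a parser bug: eval() turns every JSON number into an IEEE-754 double, and integers above 2^53 cannot be represented exactly. A small sketch of the same effect in plain Java, using the value from the question:

        public class DoublePrecisionSketch {
            public static void main(String[] args) {
                long original = 4768428229311981600L;

                // A JSON number parsed via eval() becomes an IEEE-754 double,
                // which has only a 53-bit mantissa.
                double asDouble = (double) original;

                // The round trip is lossy: in this magnitude range a double can
                // only represent multiples of 1024, so the value comes back changed.
                System.out.println("original:   " + original);
                System.out.println("round trip: " + (long) asDouble);
            }
        }

    Returning the column as a string (as the question already considers) is the usual way around this, since a JSON number in this setting simply cannot carry a full 64-bit integer.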

    Read the article

  • In OpenGL vertex shader, gl_Position doesn't get homogenized

    - by KJ
    Hi everyone, I was expecting gl_Position to automatically get homogenized (divided by w), but it seems it isn't. Why do the following produce different results? 1) void main() { vec4 p; ... omitted ... gl_Position = projectionMatrix * p; } 2) ... same as above ... p = projectionMatrix * p; gl_Position = p / p.w; I think the two are supposed to generate the same results, but it seems that's not the case: (1) doesn't work while (2) works as expected. Could it possibly be a precision problem? Am I missing something? This is driving me crazy; any help is appreciated. Many thanks in advance!

    Read the article

  • C++ long long manipulation

    - by Krakkos
    Given two 32-bit ints iMSB and iLSB: int iMSB = 12345678; // Most Significant Bits of file size in Bytes int iLSB = 87654321; // Least Significant Bits of file size in Bytes the long long form would be... // Always positive so use 31 bits long long full_size = ((long long)iMSB << 31); full_size += (long long)(iLSB); Now, I don't need that much precision (the exact number of bytes), so how can I convert the file size to MiB with 3 decimal places and convert it to a string? I tried this... long double file_size_megs = file_size_bytes / (1024 * 1024); char strNumber[20]; sprintf(strNumber, "%ld", file_size_megs); ... but it doesn't seem to work. E.g. 1234567899878 bytes = 1177375.698 MiB??
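
    As an aside, the arithmetic is easy to sanity-check in any language with 64-bit integers; a small sketch follows, written in Java purely for illustration, with made-up input values. Two details tend to matter: if each half really holds 32 bits, the shift should be a full 32 with the low half masked so it stays unsigned, and the MiB value needs floating-point division plus a floating-point format specifier.

        public class FileSizeSketch {
            public static void main(String[] args) {
                int iMSB = 287;          // hypothetical most-significant 32 bits
                int iLSB = 0x4A3B2C1D;   // hypothetical least-significant 32 bits

                // Shift the high half by 32 bits and mask the low half so it is
                // treated as unsigned when widened to 64 bits.
                long fullSize = ((long) iMSB << 32) | (iLSB & 0xFFFFFFFFL);

                // Floating-point division, then a floating-point format specifier.
                double mib = fullSize / (1024.0 * 1024.0);
                String text = String.format("%.3f MiB", mib);

                System.out.println(fullSize + " bytes = " + text);
            }
        }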

    Read the article

  • NamedParameterJdbcTemplate jconnect decimal issue

    - by user1052849
    I am using NamedParameterJdbcTemplate to insert data into a table (Spring 2.5.3 / Java 1.6), and the jConnect driver to connect to Sybase (jdbc:sybase:Tds:<Server>:<Port>). For some reason the decimal part of decimal values is truncated. With the same code, if I use the jTDS driver (jdbc:jtds:sybase://<Servername>:<Port>), it works fine. I cannot switch to jTDS because jConnect is being used by other code. In the Java objects the field is defined as double; in the database the field is defined as float (numeric with precision does not work). Any help is appreciated.

    Read the article

  • Java Future and infinite computation

    - by Chris
    I'm trying to optimize an (infinite) computation algorithm. I have an infinite sum to calculate (a sum over n up to infinity of ...). My idea was to create several threads using the Future<...> construct, then combine the intermediate results together. My problem however is that I need a certain precision, so I need to constantly evaluate the current result while the other threads keep calculating. My question is: is there some sort of result queue where each finished thread can put its result, while a main thread receives those results and then either lets the computation continue or terminates the whole ExecutorService? Any help would really be appreciated! Thanks!
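
    The result queue being described sounds a lot like java.util.concurrent's CompletionService, which hands back finished Future results in completion order. Below is a minimal sketch assuming each task computes one partial sum of a hypothetical series; the series, block size, and thread count are placeholders, not taken from the question.

        import java.util.concurrent.Callable;
        import java.util.concurrent.CompletionService;
        import java.util.concurrent.ExecutorCompletionService;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class PartialSumSketch {
            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(4);
                CompletionService<Double> completed = new ExecutorCompletionService<Double>(pool);

                int tasks = 8;
                for (int block = 0; block < tasks; block++) {
                    final int start = block * 1000000;
                    completed.submit(new Callable<Double>() {
                        public Double call() {
                            double partial = 0.0;
                            for (int n = start + 1; n <= start + 1000000; n++) {
                                partial += 1.0 / ((double) n * n);   // placeholder series term
                            }
                            return partial;
                        }
                    });
                }

                double total = 0.0;
                for (int i = 0; i < tasks; i++) {
                    // take() blocks until the next finished task's result is ready, so the
                    // running total can be checked against the target precision here and
                    // the pool shut down early once that precision is reached.
                    total += completed.take().get();
                }
                pool.shutdownNow();
                System.out.println("sum so far: " + total);
            }
        }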

    Read the article

  • strtod() and sprintf() inconsistency under GCC and MSVC

    - by Dmitry Sapelnikov
    I'm working on a cross-platform app for Windows and Mac OS X, and I have a problem with two standard C library functions: strtod() (string-to-double conversion) and sprintf() (when used for outputting double-precision floating point numbers). Their GCC and MSVC versions return different results. I'm looking for a well-tested cross-platform open-source implementation of those functions, or just for a pair of functions that would correctly and consistently convert double to string and back. I've already tried the GCC C library implementation, but the code is too long and too dependent on other source files, so I expect the adaptation to be difficult. What implementations of string-to-double and double-to-string functions would you recommend?

    Read the article

  • SQL Server: get default value of a column

    - by luke
    Hello, I execute a select to get the structure of a table. I want to get info about the columns, like the name, whether it is nullable, or whether it is part of the primary key. I do something like this: ....sys.columns c... c.precision, c.scale, c.is_nullable as isnullable, c.default_object_id as columndefault, c.is_computed as iscomputed, but for the default value I get an id, something like 454545454, and I want to get the actual value "xxxx". Which table do I search, or which function converts that id to the value? Thanks

    Read the article

  • Problem with truncation of floating point values in DBSlayer.

    - by chrisdew
    When I run a query through DBSlayer http://code.nytimes.com/projects/dbslayer the floating point results are truncated to a total of six digits (plus decimal point and negative sign when needed). { ... "lat":52.2228,"lng":-2.19906, ... } When I run the same query in MySQL, the results are as expected. | 52.22280884 | -2.19906425 | Firstly, am I correct in identifying DBSlayer as the cause of this effect? (Or the JSON library it uses, etc.) Secondly, is this floating point precision configurable within DBSlayer? Thanks, Chris. P.S. Ubuntu 9.10, x86_64; svn info for the DBSlayer checkout: Path: . URL: http://dbslayer.googlecode.com/svn/trunk Repository Root: http://dbslayer.googlecode.com/svn Repository UUID: 5df2be84-4748-0410-afd4-f777a056bd0c Revision: 65 Node Kind: directory Schedule: normal Last Changed Author: dgottfrid Last Changed Rev: 65 Last Changed Date: 2008-03-28 22:52:46 +0000 (Fri, 28 Mar 2008)

    Read the article

  • Hibernate sequence should only generate when ID is <=0

    - by Tim Leys
    Hi all, I'm using the following sequence mapping in my code (I got the sequence and …): @Id @SequenceGenerator(name = "S912_PRO_SEQ", sequenceName = "S912_PRO_SEQ", allocationSize = 1) @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "S912_PRO_SEQ") @Column(name = "PRO_ID", unique = true, nullable = false, precision = 9, scale = 0) public int getId() { return this.id; } And I'm using the following sequence and trigger in my DB: CREATE SEQUENCE S912_PRO_SEQ nomaxvalue minvalue 20; CREATE OR REPLACE TRIGGER S912_PRO_B_I_TRG BEFORE INSERT ON S912_project REFERENCING OLD AS OLD NEW AS NEW FOR EACH ROW ENABLE begin IF :NEW.pro_ID IS NULL THEN select S912_PRO_SEQ.nextval into :new.pro_ID from dual; END IF; end; I was wondering if there is a way to let Hibernate generate a sequence value ONLY if the ID is <= 0 (not set) or if the ID already exists. I know that in most cases my trigger would fix the situation, but I do not want to rely completely on it. I hope someone can help me out :p

    Read the article

  • Problem measuring N times the execution time of a code block

    - by Nazgulled
    EDIT: I just found my problem after writing this long post explaining every little detail... If someone can give me a good answer on what I'm doing wrong and how I can get the execution time in seconds (using a float with 5 decimal places or so), I'll mark that as accepted. Hint: the problem was in how I interpreted the clock_gettime() man page. Hi, let's say I have a function named myOperation that I need to measure the execution time of. To measure it, I'm using clock_gettime() as was recommended here in one of the comments. My teacher recommends that we measure it N times so we can get an average, standard deviation and median for the final report. He also recommends that we execute myOperation M times instead of just once. If myOperation is a very fast operation, measuring it M times allows us to get a sense of the "real time" it takes, because the clock being used might not have the required precision to measure such an operation. So, executing myOperation only one time or M times really depends on whether the operation itself takes long enough for the clock precision we are using. I'm having trouble dealing with that M-times execution. Increasing M decreases (a lot) the final average value, which doesn't make sense to me. It's like this: on average you take 3 to 5 seconds to travel from point A to B. But then you go from A to B and back to A 5 times (which makes it 10 times, because A to B is the same as B to A) and you measure that. Then you divide by 10, and the average you get is supposed to be the same average you take traveling from point A to B, which is 3 to 5 seconds. This is what I want my code to do, but it's not working. If I keep increasing the number of times I go from A to B and back to A, the average will be lower and lower each time; it makes no sense to me. Enough theory, here's my code: #include <stdio.h> #include <time.h> #define MEASUREMENTS 1 #define OPERATIONS 1 typedef struct timespec TimeClock; TimeClock diffTimeClock(TimeClock start, TimeClock end) { TimeClock aux; if((end.tv_nsec - start.tv_nsec) < 0) { aux.tv_sec = end.tv_sec - start.tv_sec - 1; aux.tv_nsec = 1E9 + end.tv_nsec - start.tv_nsec; } else { aux.tv_sec = end.tv_sec - start.tv_sec; aux.tv_nsec = end.tv_nsec - start.tv_nsec; } return aux; } int main(void) { TimeClock sTime, eTime, dTime; int i, j; for(i = 0; i < MEASUREMENTS; i++) { printf(" » MEASURE %02d\n", i+1); clock_gettime(CLOCK_REALTIME, &sTime); for(j = 0; j < OPERATIONS; j++) { myOperation(); } clock_gettime(CLOCK_REALTIME, &eTime); dTime = diffTimeClock(sTime, eTime); printf(" - NSEC (TOTAL): %ld\n", dTime.tv_nsec); printf(" - NSEC (OP): %ld\n\n", dTime.tv_nsec / OPERATIONS); } return 0; } Notes: the diffTimeClock function above is from this blog post. I replaced my real operation with myOperation() because it doesn't make any sense to post my real functions, as I would have to post long blocks of code; you can easily code a myOperation() with whatever you like to compile the code if you wish. As you can see, OPERATIONS = 1 and the results are: » MEASURE 01 - NSEC (TOTAL): 27456580 - NSEC (OP): 27456580 For OPERATIONS = 100 the results are: » MEASURE 01 - NSEC (TOTAL): 218929736 - NSEC (OP): 2189297 For OPERATIONS = 1000 the results are: » MEASURE 01 - NSEC (TOTAL): 862834890 - NSEC (OP): 862834 For OPERATIONS = 10000 the results are: » MEASURE 01 - NSEC (TOTAL): 574133641 - NSEC (OP): 57413 Now, I'm not a math whiz, far from it actually, but this doesn't make any sense to me whatsoever.
    I've already talked about this with a friend who's on this project with me, and he also can't understand the differences. I don't understand why the value gets lower and lower when I increase OPERATIONS. The operation itself should take the same time (on average, of course, not the exact same time) no matter how many times I execute it. You could tell me that it actually depends on the operation itself, the data being read, and that some data could already be in the cache and so on, but I don't think that's the problem. In my case, myOperation is reading 5000 lines of text from a CSV file, separating the values by ; and inserting those values into a data structure. For each iteration, I'm destroying the data structure and initializing it again. Now that I think of it, I also think that there's a problem measuring time with clock_gettime(); maybe I'm not using it right. I mean, look at the last example, where OPERATIONS = 10000. The total time it took was 574133641 ns, which would be roughly 0.5 s; that's impossible, it took a couple of minutes as I couldn't stand looking at the screen waiting and went to eat something.

    Read the article

  • Python - Number of Significant Digits in results of division

    - by russ
    Newbie here. I have the following code: myADC = 128 maxVoltage = 5.0 maxADC = 255.0 VoltsPerADC = maxVoltage/maxADC myVolts = myADC * VoltsPerADC print "myADC = {0: >3}".format(myADC) print "VoltsPerADC = {0: >7}".format(VoltsPerADC) print VoltsPerADC print "myVolts = {0: >7}".format(myVolts) print myVolts This outputs the following: myADC = 128 VoltsPerADC = 0.0196078 0.0196078431373 myVolts = 2.5098 2.50980392157 I have been searching for an explanation of how the number of significant digits is determined by default, but have had trouble locating an explanation that makes sense to me. This link suggests that by default the "print" statement prints numbers to 10 significant figures, but that does not seem to be the case in my results. How is the number of significant digits/precision determined? Can someone shed some light on this for me? Thanks in advance for your time and patience.

    Read the article

  • Get highest frequency terms from Lucene index

    - by Julia
    Hello! I need to extract the terms with the highest frequencies from several Lucene indexes, to use them for some semantic analysis. So I want to get maybe the top 30 most frequently occurring terms (I still have not decided on a threshold; I will analyze the results) and their per-index counts. I am aware that I might lose some precision because of potentially dropped duplicates, but for now let's say I am OK with that. For the proposed solutions, speed is (needless to say, maybe) not important, since I would do static analysis. I would put the accent on simplicity of implementation, because I'm not so skilled with Lucene (and not a programming guru either :/ ) and can't wrap my mind around many of its concepts. I cannot find any code samples for something similar, so I will very much appreciate any concrete advice (code, pseudocode, links to code samples...)! Thank you!
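
    For what it's worth, here is a rough sketch of the brute-force approach, assuming a Lucene 3.x-style API (TermEnum was removed in later versions) and placeholder index path and field name. It walks every term, records its document frequency (the number of documents containing the term, not the total number of occurrences), and sorts at the end; simple rather than fast, which seems to match the stated priorities.

        import java.io.File;
        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.Comparator;
        import java.util.List;

        import org.apache.lucene.index.IndexReader;
        import org.apache.lucene.index.Term;
        import org.apache.lucene.index.TermEnum;
        import org.apache.lucene.store.FSDirectory;

        public class TopTermsSketch {
            public static void main(String[] args) throws Exception {
                IndexReader reader = IndexReader.open(FSDirectory.open(new File("/path/to/index")));

                List<Object[]> counts = new ArrayList<Object[]>();
                TermEnum terms = reader.terms();
                while (terms.next()) {
                    Term t = terms.term();
                    if ("content".equals(t.field())) {             // placeholder field name
                        counts.add(new Object[]{t.text(), terms.docFreq()});
                    }
                }
                terms.close();
                reader.close();

                // Sort by descending document frequency and print the top 30.
                Collections.sort(counts, new Comparator<Object[]>() {
                    public int compare(Object[] a, Object[] b) {
                        return ((Integer) b[1]).compareTo((Integer) a[1]);
                    }
                });
                for (int i = 0; i < Math.min(30, counts.size()); i++) {
                    System.out.println(counts.get(i)[0] + "\t" + counts.get(i)[1]);
                }
            }
        }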

    Read the article

  • Handling Datetime with decimal '2010-02-14 20:18:58.313000000'

    - by AaronLS
    In SQL Server I have some textual data in varchar fields that I am trying to convert to datetimes. The funny thing is that this data was at some point in a datetime field, was exported to a flat file, and now I am reimporting it. The problem is that it is in this format, 2010-02-14 20:18:58.313000000, and the conversion to datetime fails. I have no idea how it ended up like this when it was originally extracted from a datetime column. Basically, a table was exported to a flat file by someone else, the original table was lost, and I am reimporting from the flat file. I could just drop the decimal part, but that would be like throwing out some of the data; I'd like to maintain as much precision as possible. How can I import this data from the varchar column back into a datetime column and preserve as much accuracy as possible?

    Read the article

  • .NET Decimal = what in SQL?

    - by dmose
    What's the best data type in SQL to represent a .NET Decimal? We want to store decimal numbers with up to 9 decimal places of precision and want to avoid rounding errors etc. on the front end. Reading about data types, it appears that using Decimal in .NET is the best option because you will not get rounding errors, although it is a bit slower than a Double. We want to carry this through down to the DB and want minimal conversion issues when moving data through the layers. Any suggestions?

    Read the article

  • Manipulating and comparing floating points in Java

    - by Praneeth
    In Java, floating point arithmetic is not represented precisely. For example, the following snippet of code float a = 1.2; float b= 3.0; float c = a * b; if(c == 3.6){ System.out.println("c is 3.6"); } else { System.out.println("c is not 3.6"); } actually prints "c is not 3.6". I'm not interested in precision beyond 3 decimals (#.###). How can I deal with this problem so I can multiply floats and compare them reliably? Thanks much
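
    Since only three decimal places matter here, one common approach is to compare against a tolerance of half the last digit being kept; a BigDecimal round-trip is another option. A small sketch of the tolerance route, with 0.0005 chosen to match the #.### requirement:

        public class FloatCompareSketch {
            public static void main(String[] args) {
                float a = 1.2f;
                float b = 3.0f;
                float c = a * b;   // not exactly 3.6 in binary floating point

                // Treat the values as equal if they agree to three decimal places.
                float tolerance = 0.0005f;
                if (Math.abs(c - 3.6f) < tolerance) {
                    System.out.println("c is 3.6 (to 3 decimals)");
                } else {
                    System.out.println("c is not 3.6");
                }
            }
        }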

    Read the article

  • Why does DataInputStream not support integers?

    - by Jason
    I need to read in a list of numbers from a file, none of which is larger than 32767. Originally I was going to use the Scanner class to pull in the data, then I read about DataInputStream. This would work well for me, except that according to the API it supports all primitive variables EXCEPT ints! Listed are longs, shorts, bytes, chars, booleans, etc., but no ints. I have no need for double precision in the incoming data. Is this a deliberate or an unintentional oversight?
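
    For what it's worth, since every value fits in 16 bits (<= 32767), the shorts the question mentions are enough on their own. A minimal sketch, assuming the file contains binary big-endian values as written by DataOutputStream (the file name is a placeholder); note that DataInputStream reads binary data, so a plain text file of numbers would still call for Scanner or BufferedReader instead.

        import java.io.DataInputStream;
        import java.io.EOFException;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;

        public class ReadShortsSketch {
            public static void main(String[] args) throws IOException {
                List<Integer> numbers = new ArrayList<Integer>();
                DataInputStream in = new DataInputStream(new FileInputStream("numbers.bin"));
                try {
                    while (true) {
                        // readShort() consumes two big-endian bytes; values up to 32767 fit.
                        numbers.add((int) in.readShort());
                    }
                } catch (EOFException endOfFile) {
                    // expected: no more values left in the stream
                } finally {
                    in.close();
                }
                System.out.println("read " + numbers.size() + " values");
            }
        }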

    Read the article

  • Ruby on Rails - Currency: commas causing an issue

    - by easement
    Looking on SO, I see that the preferred way to handle currency in RoR is to use decimal(8,2) and to output values using number_to_currency(). I can get my numbers out of the DB, but I'm having issues getting them in. Inside my update action I have the following line: if @non_labor_expense.update_attributes(params[:non_labor_expense]) puts YAML::dump(params) The dump of params shows the correct value, xx,yyy.zz, but what gets stored in the DB is only xx.00. What do I need to do to take into account that there may be commas and that a user may not enter .zz (the cents)? Some regex for the commas? How would you handle the decimal if it were .2 versus .20? There has to be a built-in or at least a better way. My migration (I don't know if this helps): class ChangeExpenseToDec < ActiveRecord::Migration def self.up change_column :non_labor_expenses, :amount, :decimal, :precision => 8, :scale => 2 end def self.down change_column :non_labor_expenses, :amount, :integer end end

    Read the article

  • Hibernate: same generated value in two properties

    - by Markos Fragkakis
    Hi, I have an entity A with the fields aId (the system id) and bId. I want the first to be generated: @Id @Column(name = "PRODUCT_ID", unique = true, nullable = false, precision = 12, scale = 0) @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "PROD_GEN") @BusinessKey public Long getAId() { return this.aId; } I want bId to initially be exactly the same as aId. One approach is to insert the entity, then get the aId generated by the DB (2nd query) and then update the entity, setting bId to be equal to aId (3rd query). Is there a way for bId to get the same generated value as aId? Note that afterwards I want to be able to update bId from my GUI. If the solution is JPA, even better.

    Read the article

  • C++ fixed point library?

    - by uj2
    I am looking for a free C++ fixed point library (mainly for use with embedded devices, not for arbitrary-precision math). Basically, the requirements are: No unnecessary runtime overhead: whatever can be done at compile time should be done at compile time. Ability to transparently switch code between fixed and floating point, with no inherent overhead. Fixed point math functions. There's not much point using fixed point if you need to cast back and forth in order to take a square root. Small footprint. Any suggestions?
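
    As background, the core of such a library is just integer arithmetic with an implied binary point; the compile-time machinery, transparent fixed/float switching, and math functions asked for are what a library adds on top. A tiny Q16.16 illustration of the basic idea, written in Java purely for illustration (so it shows none of those C++ compile-time properties):

        public class Q16_16Sketch {
            static final int FRACTION_BITS = 16;
            static final int ONE = 1 << FRACTION_BITS;

            static int fromDouble(double d) { return (int) Math.round(d * ONE); }
            static double toDouble(int q)   { return q / (double) ONE; }

            // Multiplication needs a wider intermediate, then the binary point is shifted back.
            static int mul(int a, int b) { return (int) (((long) a * b) >> FRACTION_BITS); }
            static int div(int a, int b) { return (int) (((long) a << FRACTION_BITS) / b); }

            public static void main(String[] args) {
                int x = fromDouble(3.25);
                int y = fromDouble(1.5);
                System.out.println(toDouble(mul(x, y)));   // 4.875
                System.out.println(toDouble(div(x, y)));   // about 2.1666, in steps of 1/65536
            }
        }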

    Read the article

  • Looking for solution to persist row sequence in a database table that allows efficient reordering at

    - by Chau Chee Yang
    I have a database table. There is a field named "Sequence" that indicates the order in which the rows are presented to the user in a report or grid. When the rows are retrieved and presented in a grid, there are a few UI gadgets that allow the user to reorder the rows arbitrarily. The new sequence is persisted in the database table when the user commits the changes. Here are some UI operations: move to first; move to last; move up 1 row; move down 1 row; multi-select rows and move up or down; multi-select rows and drag to a new position. For operations like "Move to first" or "Move to last", many rows are usually involved and the sequence of those rows needs to be updated accordingly. This may not be efficient enough at runtime. It is common practice to use INTEGER as the sequence's data type. Another solution is to use DOUBLE or FLOAT, which could avoid the mass update of row sequences, but we will face problems if we reach the limit of precision.
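
    The precision limit mentioned for the DOUBLE/FLOAT approach is easy to demonstrate: inserting a row between two neighbours by giving it the midpoint of their sequence values only works a few dozen times between the same pair before the midpoint collides with an endpoint. A quick sketch of that exhaustion, in Java purely as an illustration of the numeric behaviour:

        public class MidpointExhaustionSketch {
            public static void main(String[] args) {
                // Simulate repeatedly inserting a new row between the same two neighbours,
                // each time giving it the midpoint of their double-typed sequence values.
                double low = 1.0;
                double high = 2.0;
                int insertions = 0;
                while (true) {
                    double mid = (low + high) / 2.0;
                    if (mid == low || mid == high) {
                        break;   // no distinct value left between the two neighbours
                    }
                    high = mid;  // the new row becomes the upper neighbour of the next insert
                    insertions++;
                }
                System.out.println("distinct midpoints before running out: " + insertions);
            }
        }

    With INTEGER sequences, a common compromise is to number rows with gaps (say, 1024 apart) and renumber a block only when a gap is exhausted, which keeps most reorderings down to touching a handful of rows.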

    Read the article

  • Floating point equality and tolerances

    - by doron
    Comparing two floating point numbers with something like a_float == b_float is looking for trouble, since a_float / 3.0 * 3.0 might not be equal to a_float due to round-off error. What one normally does is something like fabs(a_float - b_float) < tol. How does one calculate tol? Ideally the tolerance should be just larger than the value of the last one or two least significant figures. So if single precision floating point is used, tol = 10E-6 should be about right. However, this does not work well for the general case where a_float might be very small or might be very large. How does one calculate tol correctly for all general cases? I am interested in C or C++ cases specifically.
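
    A widely used answer is to scale the tolerance with the magnitudes involved: combine an absolute floor (for values near zero) with a relative term based on the machine epsilon. Sketched below in Java for brevity; the same arithmetic carries straight over to C or C++ with FLT_EPSILON or DBL_EPSILON, and the factor of 4 is an arbitrary safety margin, not a magic constant.

        public class NearlyEqualSketch {
            // Relative-plus-absolute comparison: absTol guards values near zero,
            // while the epsilon term scales with the larger magnitude.
            static boolean nearlyEqual(double a, double b, double absTol) {
                double diff = Math.abs(a - b);
                double scale = Math.max(Math.abs(a), Math.abs(b));
                return diff <= Math.max(absTol, 4 * Math.ulp(1.0) * scale);
            }

            public static void main(String[] args) {
                System.out.println(nearlyEqual(0.1 + 0.2, 0.3, 1e-12));   // true despite rounding error
                System.out.println(nearlyEqual(1e-20, 0.0, 1e-12));       // true: below the absolute floor
                System.out.println(nearlyEqual(1.0, 1.001, 1e-12));       // false
            }
        }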

    Read the article

  • strtod() and sprintf() inconsistency under GCC and MSVC

    - by Dmitry Sapelnikov
    I'm working on a cross-platform app for Windows and Mac OS X, and I have a problem with two standard C library functions: strtod() (string-to-double conversion) and sprintf() (when used for outputting double-precision floating point numbers). Their GCC and MSVC versions return different results. I'm looking for a well-tested cross-platform open-source implementation of those functions, or just for a pair of functions that would correctly and consistently convert double to string and back. I've already tried the GCC C library implementation, but the code is too long and too dependent on other source files, so I expect the adaptation to be difficult. What implementations of string-to-double and double-to-string functions would you recommend?

    Read the article

  • Rails - Format number as currency in the getter

    - by daemonsy
    I am making a simple retail commerce solution where there are prices in a few different models. These prices contribute to a total price; imagine paying $0.30 more for selecting a topping for your yogurt. When I set the price field to t.decimal :price, precision:8, scale:2, the database stores 6.50 as 6.5. I know that in the standard Rails way, you call number_to_currency(price) to get the formatted value in the views. I need to programmatically get the price field as a well-formatted string, i.e. $6.50, in a few places that are not directly part of the view. Also, my needs are simple (no currency conversion etc.), and I prefer to have the price formatted universally in the model without repeatedly calling number_to_currency in the views. Is there a good way I can modify my getter for price such that it always returns two decimal places with a dollar sign, i.e. $6.50, when it's called? Thanks in advance.

    Read the article
