Search Results

Search found 379 results on 16 pages for 'floats'.

Page 12 of 16

  • What's the recommended implementation for hashing OLE Variants?

    - by Barry Kelly
    OLE Variants, as used by older versions of Visual Basic and pervasively in COM Automation, can store lots of different types: basic types like integers and floats, more complicated types like strings and arrays, and all the way up to IDispatch implementations and pointers in the form of ByRef variants. Variants are also weakly typed: they convert the value to another type without warning, depending on which operator you apply and what the current types are of the values passed to the operator. For example, comparing two variants, one containing the integer 1 and another containing the string "1", for equality will return True. So assuming that I'm working with variants at the underlying data level (e.g. VARIANT in C++ or TVarData in Delphi - i.e. the big union of different possible values), how should I hash variants consistently so that they obey the right rules? Rules: (1) variants that hash unequally should compare as unequal, both in sorting and direct equality; (2) variants that compare as equal for both sorting and direct equality should hash as equal. It's OK if I have to use different sorting and direct comparison rules in order to make the hashing fit. The way I'm currently working is to normalize the variants to strings (if they fit) and treat them as strings; otherwise I work with the variant data as if it were an opaque blob, and hash and compare its raw bytes. That has some limitations, of course: numbers 1..10 sort as [1, 10, 2, ... 9] etc. This is mildly annoying, but it is consistent and it is very little work. However, I do wonder if there is an accepted practice for this problem.
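
    For what it's worth, a minimal sketch of the "normalize before hashing" idea in Python rather than at the VARIANT/TVarData level (the type buckets and the string-to-number branch below are illustrative assumptions, not the question's actual coercion rules): reduce every value to one canonical form, then define both equality and the hash on that form, so values that compare equal hash equal by construction.

        def canonical(v):
            # Reduce a variant-like value to a canonical form before hashing/comparing.
            if isinstance(v, bool):            # bool is an int subtype; handle it first
                return ('num', float(v))
            if isinstance(v, (int, float)):
                return ('num', float(v))       # 1 and 1.0 collapse to the same key
            if isinstance(v, str):
                try:
                    return ('num', float(v))   # only if equality really says 1 == "1"
                except ValueError:
                    return ('str', v)
            return ('blob', repr(v))           # opaque fallback, like the raw-bytes approach

        def variant_hash(v):
            return hash(canonical(v))

        def variant_equal(a, b):
            return canonical(a) == canonical(b)

        assert variant_equal(1, "1") and variant_hash(1) == variant_hash("1")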

    Read the article

  • How did the Lunar Lander example make the image backgrounds transparent?

    - by user279112
    Hello. I'm trying to make a GUI program with the Android SDK, using their Lunar Lander example as a significant self-teaching tool in the process. I've noticed their sprites' images' backgrounds, which were at least usually pure white, did not show up in their program. I want to ask how they did that, since their site doesn't explain simple things very well. I've managed to pull that off before on another GUI SDK, wherein all I had to do was to call a function and pass it a few floats to define a certain color, and until my code told it to do otherwise, that function would make sure that that particular color in my sprites' images was totally transparent. However I've wrestled with the Lunar Lander example and getting my own program to show some custom graphics for a week or two now, and I haven't noticed any such function call in the Lunar Lander example. I tried to look for it, but I did not find anything. I've tried to Google some tutorial or other reference material, but what I've found so far is just straying off into unrelated areas and totally dodging this EXTREMELY important lesson on the SDK's basics. Any ideas? Thanks!

    Read the article

  • Struggling with currency in Cocoa.

    - by Meltemi
    I'm trying to do something I'd think would be fairly simple: let a user input a dollar amount, store that amount in an NSNumber (NSDecimalNumber?), then display that amount formatted as currency again at some later time. My trouble is not so much with setNumberStyle:NSNumberFormatterCurrencyStyle and displaying floats as currency. The trouble is more with how said numberFormatter works with this UITextField. I can find few examples. This thread from November and this one give me some ideas but leave me with more questions. I am using the UIKeyboardTypeNumberPad keyboard and understand that I should probably show $0.00 (or whatever the local currency format is) in the field when it's displayed, then shift the decimal place along as the user enters numerals: begin by displaying $0.00; tap the 2 key: display $0.02; tap the 5 key: display $0.25; tap the 4 key: display $2.54; tap the 3 key: display $25.43. Then [numberFormatter numberFromString:textField.text] should give me a value I can store in my NSNumber variable. Sadly I'm still struggling: Is this really the best/easiest way? If so, then maybe someone can help me with the implementation? I feel UITextField may need a delegate responding to every keypress, but I'm not sure what, where and how to implement it. Any sample code? I'd greatly appreciate it! I've searched high and low... Edit1: So I'm looking into NSFormatter's stringForObjectValue: and the closest thing I can find to what benzado recommends: UITextViewTextDidChangeNotification. I'm having a really tough time finding sample code on either of them... so let me know if you know where to look?

    Read the article

  • Why are floating point values so prolific?

    - by Kibbee
    So, the title says it all. Why are floating point values so prolific in computer programming? Due to problems like rounding errors, and not being able to accurately represent even numbers such as 0.1, I really can't see how they got as far as they did. I understand that computation is faster with floating point numbers; however, I can think of only a few cases where they are actually the right data type to use. If you sit back and think about every time you used a floating point value, how many times did you say: well, some error would be OK, as long as the result was a few microseconds faster? It really makes me think, because Jeff was talking about NP completeness, and how heuristics give an answer that is kind of right. And, well, computers shouldn't do that. They should give you the answer that is correct. Yet we see floating point values used in many applications where they really aren't valid. What really bugs me isn't that floating point exists, but that in many languages there isn't even a viable non-floating-point, decimal alternative. A lot of programmers doing financial applications have to fall back to storing the number of cents in an integer field, which brings with it all kinds of other problems. Why do floats continue to be so prolific, even though they can't represent the real answer, and we expect computers to be accurate? [EDIT] Just to clarify: I was talking about base-2 floating point, not base-10 floating point. .NET offers the Decimal data type, which is a base-10 floating point value and offers a much better representation of the numbers we deal with on a daily basis in most computer programs. I find it hard to believe that even modern languages like Java don't support base-10 floating point values, unless you want to move into the realm of things like BigDecimal, which isn't really the right answer either in a lot of situations.
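
    A quick illustration of the base-2 versus base-10 distinction the question is drawing, sketched in Python with its decimal module standing in for .NET's Decimal:

        from decimal import Decimal

        # Base-2 floats cannot represent 0.1 exactly, so repeated addition drifts:
        total = 0.0
        for _ in range(10):
            total += 0.1
        print(total)             # 0.9999999999999999, not 1.0
        print(0.1 + 0.2 == 0.3)  # False

        # A base-10 type keeps the pencil-and-paper answer, e.g. for money:
        cents = Decimal('0.00')
        for _ in range(10):
            cents += Decimal('0.10')
        print(cents)                            # 1.00
        print(Decimal('0.1') + Decimal('0.2'))  # 0.3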

    Read the article

  • python win32com EXCEL data input error

    - by Rafal
    Welcome. I'm exporting the results of my script into an Excel spreadsheet. Everything works fine - I put big sets of data into the spreadsheet - but sometimes an error occurs:

        File "C:\Python26\lib\site-packages\win32com\client\dynamic.py", line 550, in __setattr__
            self._oleobj_.Invoke(entry.dispid, 0, invoke_type, 0, value)
        pywintypes.com_error: (-2147352567, 'Exception.', (0, None, None, None, 0, -2146777998), None)

    I don't think it's a problem with the input data format. I've put in several different types of data - strings, ints, floats, lists - and it works fine. When I run the script for the second time it works fine - no error. What's going on? PS. This is the code that generates the error; what's strange is that the error doesn't always occur. Say 30% of runs result in an error:

        import win32com.client

        def Generate_Excel_Report():
            Excel = win32com.client.Dispatch("Excel.Application")
            Excel.Workbooks.Add(1)
            Cells = Excel.ActiveWorkBook.ActiveSheet.Cells
            for i in range(100):
                Row = int(35 + i)
                for j in range(10):
                    Cells(int(Row), int(5 + j)).Value = "string"
            for i in range(100):
                Row = int(135 + i)
                for j in range(10):
                    Cells(int(Row), int(5 + j)).Value = 32.32  # float

        Generate_Excel_Report()

    The strangest thing for me is that when I run the script with the same code and the same input many times, sometimes an error occurs and sometimes not. Thanks in advance for any help.
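
    One commonly suggested mitigation (untested against this particular sheet, so treat it as a sketch): assign each rectangular block through a single Range(...).Value call instead of one COM Invoke per cell, which both speeds things up and reduces the number of calls that can hit the intermittent com_error.

        import win32com.client

        def Generate_Excel_Report_Batch():
            Excel = win32com.client.Dispatch("Excel.Application")
            Excel.Workbooks.Add(1)
            Sheet = Excel.ActiveWorkbook.ActiveSheet

            strings = [["string"] * 10 for i in range(100)]
            floats = [[32.32] * 10 for i in range(100)]

            # Cells(row, col) pairs give the corners of each block: rows 35-134 and
            # 135-234, columns 5-14, matching the loops in the original code.
            Sheet.Range(Sheet.Cells(35, 5), Sheet.Cells(134, 14)).Value = strings
            Sheet.Range(Sheet.Cells(135, 5), Sheet.Cells(234, 14)).Value = floats

        Generate_Excel_Report_Batch()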

    Read the article

  • Fluid CSS: float column with overflow

    - by Ates Goral
    I'm using a fluid layout in the new theme that I'm working on for my blog. I often blog about code and include <pre> blocks within the posts. The float: left column for the content area has a max-width so that the column stops at a certain maximum width and can also be shrunk: +----------+ +------+ | text | | text | | | | | | | | | | | | | | | | | | | | | +----------+ +------+ max shrunk What I want is for the <pre> elements to be wider than the text column so that I can fit 80-character-wrapped code without horizontal scroll bars. But I want the <pre> elements to overflow from the content area, without affecting its fluidity: +----------+ +------+ | text | | text | | | | | +----------+--+ +------+------+ | code | | code | +----------+--+ +------+------+ | | | | +----------+ +------+ max shrunk But, max-width stops being fluid once I insert the overhanging <pre> in there: the width of the column remains at the specified max-width even when I shrink the browser beyond that width. I've played around with a bare-minimum scenario to reproduce the problem and noticed that doing either of the following brings back the fluidity: Remove the <pre> (doh...) Remove the float: left The workaround I'm currently using is to insert the <pre> elements into "breaks" in the post column, so that the widths of the post segments and the <pre> segments are managed mutually exclusively: +----------+ +------+ | text | | text | +----------+ +------+ +-------------+ +-------------+ | code | | code | +-------------+ +-------------+ +----------+ +------+ +----------+ +------+ max shrunk But this forces me to insert additional closing and opening <div> elements into the post text which I'd rather keep semantically pristine. Admittedly, I don't have a full grasp of how the box model works with floats with overflowing content, so I don't understand why the combination of float: left on the container and the <pre> inside it cripple the max-width of the container. I'm observing the same problem on Firefox/Chrome/Safari/Opera. IE6 (the crazy one) seems happy all the time. This also doesn't seem dependent on quirks/standards mode.

    Read the article

  • Variable-width inline underline effects in CSS

    - by sidereal
    I need to simulate the look of a typical paper form in CSS. It consists of a two-column table of fields. Each field consists of a field name (of variable width) followed by an underline that continues to the end of the column. The field might be populated, in which case there is some text centered above the line, or it may be blank. If that isn't clear, here's a rough idea in manky ASCII art: Name: _______Foo_______ Age: _____17______ Location: __Melbourne__ Handedness: _Left_ (except that the underline would continue under any text) To implement the underline without text, I assume I should use a border-bottom rather than a text-decoration: underline. Additionally, I need the bordered element to take up the full available space. Both of those argue for a block-level element. However, I can't find any way to get the block-level element (either a div, an li, or a span set to display: block or inline-block) to remain on the same line as the label. As soon as I give it width: 100%, it newlines. I've tried various combinations of floats, and I'm not inclined to do anything ridiculous with absolute positioning. Any recommendations?

    Read the article

  • Will fixed-point arithmetic be worth my trouble?

    - by Thomas
    I'm working on a fluid dynamics Navier-Stokes solver that should run in real time. Hence, performance is important. Right now, I'm looking at a number of tight loops that each account for a significant fraction of the execution time: there is no single bottleneck. Most of these loops do some floating-point arithmetic, but there's a lot of branching in between. The floating-point operations are mostly limited to additions, subtractions, multiplications, divisions and comparisons. All this is done using 32-bit floats. My target platform is x86 with at least SSE1 instructions. (I've verified in the assembler output that the compiler indeed generates SSE instructions.) Most of the floating-point values that I'm working with have a reasonably small upper bound, and precision for near-zero values isn't very important. So the thought occurred to me: maybe switching to fixed-point arithmetic could speed things up? I know the only way to be really sure is to measure it, that might take days, so I'd like to know the odds of success beforehand. Fixed-point was all the rage back in the days of Doom, but I'm not sure where it stands anno 2010. Considering how much silicon is nowadays pumped into floating-point performance, is there a chance that fixed-point arithmetic will still give me a significant speed boost? Does anyone have any real-world experience that may apply to my situation?
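
    For reference, a minimal sketch of what Q16.16 fixed-point arithmetic looks like, written out in Python purely to show the mechanics; it says nothing about whether integer pipelines would actually beat SSE floats on a given x86 part - as the question says, only measurement can settle that.

        # Q16.16 fixed point: 16 integer bits, 16 fractional bits, stored in plain ints.
        FRAC_BITS = 16
        ONE = 1 << FRAC_BITS

        def to_fixed(x):
            return int(round(x * ONE))

        def to_float(f):
            return f / float(ONE)

        def fmul(a, b):
            # Needs a 64-bit intermediate in C; Python ints are arbitrary precision.
            return (a * b) >> FRAC_BITS

        def fdiv(a, b):
            # Python's // floors; C integer division truncates toward zero.
            return (a << FRAC_BITS) // b

        a, b = to_fixed(3.25), to_fixed(0.5)
        print(to_float(fmul(a, b)))  # 1.625
        print(to_float(fdiv(a, b)))  # 6.5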

    Read the article

  • IE8 developer tools missing some styles

    - by Craig Warren
    Hi, I'm having some problems with some CSS properties in IE8. I've tested my site in IE7, Chrome and Firefox and it works fine there, but IE8 has some layout issues. When I inspect the page with the IE8 developer tools, I've noticed that some of the properties I set in CSS are being ignored by IE8. For example:

        #header {
            position: relative;
            padding: 20px;
            height: 100px;
            background: url(header.png);
        }

    In this header IE8 ignored the height property: if I inspect the element in developer tools, that property is missing and it's crushed into another line: background:url;HEIGHT: 100PX. The same thing happens for floats too:

        #logon {
            float: left;
            text-align: right;
            width: 20%;
            height: 40px;
            padding-left: 0px;
            padding-right: 7px;
            border: 0;
            margin: 0;
            background: url(navgradient.gif);
        }

    This ignores the float value: background: url(navgradient.gif); FLOAT:left; What is happening here and how can I fix it?

    Read the article

  • AudioFileWriteBytes fails with error code -40

    - by alexbw
    I'm trying to write raw audio bytes to a file using AudioFileWriteBytes(). Here's what I'm doing: void writeSingleChannelRingBufferDataToFileAsSInt16(AudioFileID audioFileID, AudioConverterRef audioConverter, ringBuffer *rb, SInt16 *holdingBuffer) { // First, figure out which bits of audio we'll be // writing to file from the ring buffer UInt32 lastFreshSample = rb->lastWrittenIndex; OSStatus status; int numSamplesToWrite; UInt32 numBytesToWrite; if (lastFreshSample < rb->lastReadIndex) { numSamplesToWrite = kNumPointsInWave + lastFreshSample - rb->lastReadIndex - 1; } else { numSamplesToWrite = lastFreshSample - rb->lastReadIndex; } numBytesToWrite = numSamplesToWrite*sizeof(SInt16); Then we copy the audio data (stored as floats) to a holding buffer (SInt16) that will be written directly to the file. The copying looks funky because it's from a ring buffer. UInt32 buffLen = rb->sizeOfBuffer - 1; for (int i=0; i < numSamplesToWrite; ++i) { holdingBuffer[i] = rb->data[(i + rb->lastReadIndex) & buffLen]; } Okay, now we actually try to write the audio from the SInt16 buffer "holdingBuffer" to the audio file. The NSLog will spit out an error -40, but also claims that it's writing bytes. No data is written to file. status = AudioFileWriteBytes(audioFileID, NO, 0, &numBytesToWrite, &holdingBuffer); rb->lastReadIndex = lastFreshSample; NSLog(@"Error = %d, wrote %d bytes", status, numBytesToWrite); return;

    Read the article

  • How to automatically expand html select element in javascript

    - by xan
    I have a (hidden) html select object in my menu attached to a menu button link, so that clicking the link shows the list so you can pick from it. When you click the button, it calls some javascript to show the <select>. Clicking away from the <select> hides the list. What I really want is to make the <select> appear fully expanded, as if you had clicked on the "down" arrow, but I can't get this working. I've tried lots of different approaches, but can't make any headway. What I'm doing currently is this: <li> <a href="javascript:showlist();"><img src="/images/icons/add.png"/>Add favourite</a> <select id="list" style="display:none; onblur="javascript:cancellist()"> </select> </li> // in code function showlist() { //using prototype not jQuery $('list').show(); // shows the select list $('list').focus(); // sets focus so that when you click away it calles onblur() } I've tried calling $('list').click(). I've tried setting onfocus="this.click()" But in both cases I'm getting Uncaught TypeError: Object # has no method 'click' which is peculiar as link text says that it supports the standard functions. I've tried setting the .size = .length which works, but doesn't have the same appearance (as when you click to open the element, it floats over the rest of the page.) Does anyone have any suggestions?

    Read the article

  • PostgreSQL: Auto-partition a table

    - by Adam Matan
    Hi, I have a huge database which holds pairs of numbers (A,B), each ranging from 0 to 10,000 and stored as floats. e.g., (1, 9984.4), (2143.44, 124.243), (0.55, 0), ... Since the PostgreSQL table which stores these pairs grew quite large, I have decided to partition it into inheriting sub-tables. I intend to create 100 such tables, each storing a range of 1000x1000. The problem is that these numbers tend to come in large chunks of nearby numbers. It means that in the future, some tables will be nearly empty and some will hold a very large portion of the database. Unfortunately, the distribution of future pairs is yet unknown. I am looking for a way to automatically repartition my table. That means that if a certain subtable holds more than a specific number of pairs, it will be automatically partitioned into four sub-sub tables, and so on. My questions are: Is recursive partitioning and inheritance possible in PostgreSQL 8.3? Will indexes and query plans understand it? What's the best way to split a subtable once it grew too large? I should point out that this isn't a live database, so a downtime of few hours every week is totally acceptable. Thanks in advance, Adam

    Read the article

  • object_getInstanceVariable works for float, int, bool, but not for double?

    - by Russel West
    I've got object_getInstanceVariable to work as here, however it seems to only work for floats, bools and ints, not doubles. I do suspect I'm doing something wrong, but I've been going in circles with this. float myFloatValue; float someFloat = 2.123f; object_getInstanceVariable(self, "someFloat", (void*)&myFloatValue); works, and myFloatValue = 2.123, but when I try double myDoubleValue; double someDouble = 2.123f; object_getInstanceVariable(self, "someDouble", (void*)&myDoubleValue); I get myDoubleValue = 0. If I try to set myDoubleValue before the function, e.g. double myDoubleValue = 1.2f, the value is unchanged when I read it after the object_getInstanceVariable call. Setting myIntValue to some other value before the getinstancevar function above returns 2 as it should, i.e. it has been changed. Then I tried Ivar tmpIvar = object_getInstanceVariable(self, "someDouble", (void*)&myDoubleValue); if I do ivar_getName(tmpIvar) I get "someDouble", but myDoubleValue = 0 still! Then I try ivar_getTypeEncoding(tmpIvar) and I get "d" as it should be. So to summarize: if the type encoding is float, it works; if it is a double, the result is not set, but it correctly reads the variable and the return value (Ivar) is also correct. I must be doing something basic wrong that I can't see, so I'd appreciate it if someone could point it out.

    Read the article

  • What is the PIXELFORMATDESCRIPTOR parameter in SetPixelFormat() used for?

    - by Mads Elvheim
    Usually when setting up OpenGL contexts, I've simply filled out a PIXELFORMATDESCRIPTOR structure with the necessary information and called ChoosePixelFormat(), followed by a call to SetPixelFormat() with the returned matching pixelformat from ChoosePixelFormat(). Then I've simply passed the initial descriptor without giving much thought of why. But now I use wglChoosePixelFormatARB() instead if ChoosePixelFormat() because I need some extended traits like sRGB and multisampling. It takes an attribute list of integers, just like XLib/GLX on Linux, not a PIXELFORMATDESCRIPTOR structure. So, do I really have to fill in a descriptor for SetPixelFormat() to use? What does SetPixelFormat() use the descriptor for when it already has the pixelformat descriptor index? Why do I have to specify the same pixelformat attributes in two different places? And which one takes precedence; the attribute list to wglChoosePixelFormatARB(), or the PIXELFORMATDESCRIPTOR attributes passed to SetPixelFormat()? Here are the function prototypes, to make the question more clear: /* Finds a best match based on a PIXELFORMATDESCRIPTOR, and returns the pixelformat index */ int ChoosePixelFormat(HDC hdc, const PIXELFORMATDESCRIPTOR *ppfd); /* Finds a best match based on an attribute list of integers and floats, and returns a list of indices of matches, with the best matches at the head. Also supports extended pixelformat traits like sRGB color space, floating-point framebuffers and multisampling. */ BOOL wglChoosePixelFormatARB(HDC hdc, const int *piAttribIList, const FLOAT *pfAttribFList, UINT nMaxFormats, int *piFormats, UINT *nNumFormats ); /* Sets the pixelformat based on the pixelformat index */ BOOL SetPixelFormat(HDC hdc, int iPixelFormat, const PIXELFORMATDESCRIPTOR *ppfd);

    Read the article

  • CSS Tables & min-width container?

    - by neezer
    <div id="wrapper"> <div id="header">...</div> <div id="main"> <div id="content">...</div> <div id="sidebar">...</div> </div> </div> #wrapper { min-width: 900px; } #main { display: table-row; } #content { display: table-cell; } #sidebar { display: table-cell; width: 250px; } The problem is that the sidebar isn't always at the right-most part of the page (depending on the width of #content). As #content's width is variable (depending on the width of the window), how do I make it so that the sidebar is always at the right-most part of its parent? Ex. Here's what I have now: <--- variable window width ----> --------------------------------- | (header) | --------------------------------- [content] | [sidebar] | | | | | | | | | | | | | And here's what I want: <--- variable window width ----> --------------------------------- | (header) | --------------------------------- [content] | [sidebar] | | | | | | | | | | | | | Please let me know if you need any more information to help me with this issue. Thanks! PS - I know I can accomplish this easily with floats. I'm looking for a solution that uses CSS tables.

    Read the article

  • Fast JSON serialization (and comparison with Pickle) for cluster computing in Python?

    - by user248237
    I have a set of data points, each described by a dictionary. The processing of each data point is independent and I submit each one as a separate job to a cluster. Each data point has a unique name, and my cluster submission wrapper simply calls a script that takes a data point's name and a file describing all the data points. That script then accesses the data point from the file and performs the computation. Since each job has to load the set of all points only to retrieve the point to be run, I wanted to optimize this step by serializing the file describing the set of points into an easily retrievable format. I tried using JSONpickle, using the following method, to serialize a dictionary describing all the data points to file:

        def json_serialize(obj, filename, use_jsonpickle=True):
            f = open(filename, 'w')
            if use_jsonpickle:
                import jsonpickle
                json_obj = jsonpickle.encode(obj)
                f.write(json_obj)
            else:
                simplejson.dump(obj, f, indent=1)
            f.close()

    The dictionary contains very simple objects (lists, strings, floats, etc.) and has a total of 54,000 keys. The JSON file is ~20 megabytes in size. It takes ~20 seconds to load this file into memory, which seems very slow to me. I switched to using pickle with the same exact object, and found that it generates a file that's about 7.8 megabytes in size, and can be loaded in ~1-2 seconds. This is a significant improvement, but it still seems like loading of a small object (less than 100,000 entries) should be faster. Aside from that, pickle is not human readable, which was the big advantage of JSON for me. Is there a way to use JSON to get similar or better speed ups? If not, do you have other ideas on structuring this? (Is the right solution to simply "slice" the file describing each event into a separate file and pass that on to the script that runs a data point in a cluster job? It seems like that could lead to a proliferation of files.) Thanks.
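
    A rough way to frame the comparison before restructuring anything: jsonpickle layers on top of a JSON backend, so dropping down to the plain json/simplejson module (which has C speedups) and pinning pickle to protocol 2 are the usual first things to time. The dataset below is a made-up stand-in for the 54,000-key dictionary, and the timings are machine-dependent.

        import json
        import time
        try:
            import cPickle as pickle   # Python 2
        except ImportError:
            import pickle              # Python 3

        def timed(label, fn):
            t0 = time.time()
            result = fn()
            print("%-24s %.2f s" % (label, time.time() - t0))
            return result

        # Hypothetical stand-in for the real dictionary of simple values.
        data = dict(("point%05d" % i, {"x": i * 0.5, "name": "p%d" % i, "tags": ["a", "b"]})
                    for i in range(54000))

        js = timed("json dumps", lambda: json.dumps(data))
        pk = timed("pickle dumps (proto 2)", lambda: pickle.dumps(data, 2))
        timed("json loads", lambda: json.loads(js))
        timed("pickle loads", lambda: pickle.loads(pk))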

    Read the article

  • CSS layout - Aligning two divs side by side

    - by Ronnie
    Hello, I have a small problem. I am trying to align two divs side by side using CSS. However, I would like the center div to be positioned horizontally central in the page, which I achieved by using: #page-wrap { margin: 0 auto; } That worked fine. The second div I would like positioned to the left side of the central page wrap, but I can't manage to do this using floats, although I'm sure it is possible. Maybe it's best to show an example of what I am describing: I would like to push the red div up alongside the white div. Here is my current CSS concerning these two divs, sidebar being the red div and page-wrap being the white div:

        #sidebar {
            width: 200px;
            height: 400px;
            background: red;
            float: left;
        }

        #page-wrap {
            margin: 0 auto;
            width: 600px;
            background: #ffffff;
            height: 400px;
        }

    Any help would be appreciated.

    Read the article

  • Fastest way to generate delimited string from 1d numpy array

    - by Abiel
    I have a program which needs to turn many large one-dimensional numpy arrays of floats into delimited strings. I am finding this operation quite slow relative to the mathematical operations in my program and am wondering if there is a way to speed it up. For example, consider the following loop, which takes 100,000 random numbers in a numpy array and joins each array into a comma-delimited string.

        import numpy as np
        x = np.random.randn(100000)
        for i in range(100):
            ",".join(map(str, x))

    This loop takes about 20 seconds to complete (total, not per cycle). In contrast, consider that 100 cycles of something like elementwise multiplication (x*x) would take less than 1/10 of a second to complete. Clearly the string join operation creates a large performance bottleneck; in my actual application it will dominate total runtime. This makes me wonder, is there a faster way than ",".join(map(str, x))? Since map() is where almost all the processing time occurs, this comes down to the question of whether there is a faster way to convert a very large number of numbers to strings.
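
    A couple of variants that may be worth timing against the baseline; whether either wins depends on the numpy version, and both change the exact digits produced (astype(str) uses numpy's own formatting, and an explicit format like %.6g trades digits for speed, which is usually where the big win is):

        import numpy as np

        x = np.random.randn(100000)

        # Baseline from the question:
        s0 = ",".join(map(str, x))

        # Let numpy do the float-to-text conversion, then join the results:
        s1 = ",".join(x.astype(str))

        # Accept fewer significant digits in exchange for speed:
        s2 = ",".join("%.6g" % v for v in x)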

    Read the article

  • How large a role does subjectiveness play in programming?

    - by Bob
    I often read about the importance of readability and maintainability. Or, I read very strong opinions about which syntax features are bad or good. Or discussions about the values of certain paradigms, like OOP. Aside from that, this same question floats about in my mind whenever I read debates on SO or Meta about subjective questions. Or read questions about best practices and sometimes find myself or others disagreeing. What role does subjectiveness play within the programming realm? Sometimes I think it plays a large role. Software developers are engineers in a way, but also people. A large part of programming is dealing with code that's human readable. This is very different from Math or Physics or other disciplines with very exact and structured rules. Here the exact structure and rules are largely up in the air, changeable on a whim, and hence the amount of languages in existence. And one person may find one language very readable, and another person may find their own language the most comforting. The same with practices. One person may not like certain accepted practices. I myself find splitting classes into different files very unreadable, for instance. But, I can't say rules haven't helped in general. Certain practices have and do make life easier. And new languages have given rise to syntax and structure that make life easier. There's certainly been a progression towards code that is easier to read and maintain even given a largely diverse group of people. So maybe these things aren't as subjective as I thought. It reminds me, in a way, of UI design. Certainly it's subjective, but then there's an entire discipline involved in crafting good UI and it tends to work. Is there something non-subjective about the ideas behind maintainability, readability, and other best practices? Is there something tangible to grasp when one develops a new language or thinks of new practices?

    Read the article

  • Storing high precision latitude/longitude numbers in iOS Core Data

    - by Bryan
    I'm trying to store latitudes/longitudes in Core Data. These end up having anywhere from 6 to 20 digits of precision. For whatever reason I had them as floats in Core Data, and it's rounding them and not giving me the exact values back. I tried the "decimal" type, with no luck either. Are NSStrings my only other option? EDIT - NSManagedObject:

        @interface Event : NSManagedObject {
        }
        @property (nonatomic, retain) NSDecimalNumber * dec;
        @property (nonatomic, retain) NSDate * timeStamp;
        @property (nonatomic, retain) NSNumber * flo;
        @property (nonatomic, retain) NSNumber * doub;

    Here's the code for a sample number that I store into Core Data:

        NSNumber *n = [NSDecimalNumber decimalNumberWithString:@"-97.12345678901234567890123456789"];

    Code to access it again:

        NSNumber *n = [managedObject valueForKey:@"dec"];
        NSNumber *f = [managedObject valueForKey:@"flo"];
        NSNumber *d = [managedObject valueForKey:@"doub"];

    Printed values:

        Printing description of n: -97.1234567890124
        Printing description of f: <CFNumber 0x603f250 [0xfef3e0]>{value = -97.12345678901235146441, type = kCFNumberFloat64Type}
        Printing description of d: <CFNumber 0x6040310 [0xfef3e0]>{value = -97.12345678901235146441, type = kCFNumberFloat64Type}

    Read the article

  • How to contain the Deepwater Horizon oil spill? [closed]

    - by Yarin
    This is obviously not programming, but it's important and we're smart people, so let's give it a shot. (BP has actually begun soliciting suggestions for how to deal with the crisis http://www.deepwaterhorizonresponse.com/go/doc/2931/546759/, confirming that they don't have a clue) I'll start with my own proposal... Anchored Chute: A large-diameter, collapsible, flexible tube/hose with a wide mouth on one end is anchored over the leak. There's no need for a hermetic seal, the opening just needs to be big enough to form a canopy over the leak area. The rest of the tubing can just be dumped on the sea floor. Since oil is denser than water, the oily water that flows into the mouth eventually inflates the tube and raises the opposite end to the surface, where it can be collected (Like those inflatable dancing air socks at car dealerships). Further buoyancy could be added with floats attached to the tube at intervals. I think this method would not be as susceptible to the problems BP had with the containment dome, where a rigid, metal casing froze up with crystallized hydrates, as we would not be trying to contain the full pressure of the well, but would be using the natural buoyancy of the oil to channel its flow, and with a much larger opening.

    Read the article

  • Reading numpy arrays outside of Python

    - by Abiel
    In a recent question I asked about the fastest way to convert a large numpy array to a delimited string. My reason for asking was because I wanted to take that plain text string and transmit it (over HTTP, for instance) to clients written in other programming languages. A delimited string of numbers is obviously something that any client program can work with easily. However, it was suggested that because string conversion is slow, it would be faster on the Python side to do base64 encoding on the array and send it as binary. This is indeed faster. My question now is, (1) how can I make sure my encoded numpy array will travel well to clients on different operating systems and different hardware, and (2) how do I decode the binary data on the client side. For (1), my inclination is to do something like the following:

        import numpy as np
        import base64
        x = np.arange(100, dtype=np.float64)
        base64.b64encode(x.tostring())

    Is there anything else I need to do? For (2), I would be happy to have an example in any programming language, where the goal is to take the numpy array of floats and turn them into a similar native data structure. Assume we have already done base64 decoding and have a byte array, and that we also know the numpy dtype, dimensions, and any other metadata which will be needed. Thanks.
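
    For (1) and (2) together, a sketch of the round trip in Python - the same three facts (byte order, element type, shape) are what a client in any other language needs; the only assumption here is that the sender pins the byte order explicitly instead of shipping native-endian bytes:

        import base64
        import numpy as np

        # Sender: force little-endian 64-bit floats so the client never has to guess.
        x = np.arange(100, dtype=np.float64)
        payload = base64.b64encode(x.astype('<f8').tobytes())  # tobytes() is the newer tostring()
        header = {'dtype': '<f8', 'shape': x.shape}            # ship this metadata alongside

        # Receiver (shown in Python; any language does the same two steps:
        # base64-decode, then reinterpret the bytes as little-endian doubles).
        raw = base64.b64decode(payload)
        y = np.frombuffer(raw, dtype=header['dtype']).reshape(header['shape'])
        assert np.array_equal(x, y)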

    Read the article

  • How to read/write high-resolution (24-bit, 8 channel) .wav files in Java?

    - by dB'
    I'm trying to write a Java application that manipulates high resolution .wav files. I'm having trouble importing the audio data, i.e. converting the .wav file into an array of doubles. When I use a standard approach an exception is thrown. AudioFileFormat as = AudioSystem.getAudioFileFormat(new File("orig.wav")); --> javax.sound.sampled.UnsupportedAudioFileException: file is not a supported file type Here's the file format info according to soxi: dB$ soxi orig.wav soxi WARN wav: wave header missing FmtExt chunk Input File : 'orig.wav' Channels : 8 Sample Rate : 96000 Precision : 24-bit Duration : 00:00:03.16 = 303526 samples ~ 237.13 CDDA sectors File Size : 9.71M Bit Rate : 24.6M Sample Encoding: 32-bit Floating Point PCM Can anyone suggest the simplest method for getting this audio into Java? I've tried using a few techniques. As stated above, I've experimented with the Java AudioSystem (on both Mac and Windows). I've also tried using Andrew Greensted's WavFile class, but this also fails (WavFileException: Compression Code 3 not supported). One workaround is to convert the audio to 16 bits using sox (with the -b 16 flag), but this is suboptimal since it increases the noise floor. Incidentally, I've noticed that the file CAN be read by libsndfile. Is my best bet to write a jni wrapper around libsndfile, or can you suggest something quicker? Note that I don't need to play the audio, I just need to analyze it, manipulate it, and then write it out to a new .wav file. * UPDATE * I solved this problem by modifying Andrew Greensted's WavFile class. His original version only read files encoded as integer values ("format code 1"); my files were encoded as floats ("format code 3"), and that's what was causing the problem. I'll post the modified version of Greensted's code when I get a chance. In the meantime, if anyone wants it, send me a message.

    Read the article

  • heterogeneous comparisons in python3

    - by Matt Anderson
    I'm 99+% still using python 2.x, but I'm trying to think ahead to the day when I switch. So, I know that using comparison operators (less/greater than, or equal to) on heterogeneous types that don't have a natural ordering is no longer supported in python 3.x - instead of some consistent (but arbitrary) result, we raise TypeError instead. I see the logic in that, and even mostly think it's a good thing. Consistency and refusing to guess is a virtue. But what if you essentially want the python 2.x behavior? What's the best way to go about getting it? For fun (more or less) I was recently implementing a skip list, a data structure that keeps its elements sorted. I wanted to use heterogeneous types as keys in the data structure, and I've got to compare keys to one another as I walk the data structure. The python 2.x way of comparing makes this really convenient - you get an understandable ordering amongst elements that have a natural ordering, and some ordering amongst those that don't. Consistently using a sort/comparison key like (type(obj).__name__, obj) has the disadvantage of not interleaving the objects that do have a natural ordering; you get all your floats clustered together before your ints, and your str-derived class separates from your strs. I came up with the following:

        import operator

        def hetero_sort_key(obj):
            cls = type(obj)
            return (cls.__name__ + '_' + cls.__module__, obj)

        def make_hetero_comparitor(fn):
            def comparator(a, b):
                try:
                    return fn(a, b)
                except TypeError:
                    return fn(hetero_sort_key(a), hetero_sort_key(b))
            return comparator

        hetero_lt = make_hetero_comparitor(operator.lt)
        hetero_gt = make_hetero_comparitor(operator.gt)
        hetero_le = make_hetero_comparitor(operator.le)
        hetero_ge = make_hetero_comparitor(operator.ge)

    Is there a better way? I suspect one could construct a corner case that this would screw up - a situation where you can compare type A to B and type A to C, but where B and C raise TypeError when compared, and you can end up with something illogical like a > b, a < c, and yet b > c (because of how their class names sorted). I don't know how likely it is that you'd run into this in practice.
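
    For what it's worth, one way to plug the comparators above into Python 3's key-based sorting, with a made-up mixed list just to show the interleaving (ints and floats stay ordered by value rather than clustering by type, and the unorderable pairs fall back to the name-based key):

        import functools

        def hetero_cmp(a, b):
            # Wrap the boolean comparators (hetero_lt/hetero_gt from above) into a
            # single cmp-style function for functools.cmp_to_key.
            if hetero_lt(a, b):
                return -1
            if hetero_gt(a, b):
                return 1
            return 0

        mixed = [3.5, "10", 2, None, "apple", 0.25]
        print(sorted(mixed, key=functools.cmp_to_key(hetero_cmp)))
        # Should print: [None, 0.25, 2, 3.5, '10', 'apple']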

    Read the article

  • Compile time float packing/punning

    - by detly
    I'm writing C for the PIC32MX, compiled with Microchip's PIC32 C compiler (based on GCC 3.4). My problem is this: I have some reprogrammable numeric data that is stored either on EEPROM or in the program flash of the chip. This means that when I want to store a float, I have to do some type punning:

        typedef union {
            int intval;
            float floatval;
        } IntFloat;

        unsigned int float_as_int(float fval)
        {
            IntFloat intf;
            intf.floatval = fval;
            return intf.intval;
        }

        // Stores an int of data in whatever storage we're using
        void StoreInt(unsigned int data, unsigned int address);

        void StoreFPVal(float data, unsigned int address)
        {
            StoreInt(float_as_int(data), address);
        }

    I also include default values as an array of compile time constants. For (unsigned) integer values this is trivial: I just use the integer literal. For floats, though, I have to use this Python snippet to convert them to their word representation to include them in the array:

        import struct
        hex(struct.unpack("I", struct.pack("f", float_value))[0])

    ...and so my array of defaults has these indecipherable values like:

        const unsigned int DEFAULTS[] = {
            0x00000001, // Some default integer value, 1
            0x3C83126F, // Some default float value, 0.005
        }

    (These actually take the form of X macro constructs, but that doesn't make a difference here.) Commenting is nice, but is there a better way? It would be great to be able to do something like:

        const unsigned int DEFAULTS[] = {
            0x00000001,                  // Some default integer value, 1
            COMPILE_TIME_CONVERT(0.005), // Some default float value, 0.005
        }

    ...but I'm completely at a loss, and I don't even know if such a thing is possible. Notes: Obviously "no, it isn't possible" is an acceptable answer if true. I'm not overly concerned about portability, so implementation-defined behaviour is fine, undefined behaviour is not (I have the IDB appendix sitting in front of me). As far as I'm aware, this needs to be a compile time conversion, since DEFAULTS is in the global scope. Please correct me if I'm wrong about this.
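
    Failing a C-side answer, one small step up from hand-running that snippet is a Python helper that emits the already-commented table entries, so the hex words in the source never have to be read by eye (purely illustrative; the names are made up):

        import struct

        def float_entry(value):
            # Pack as a little-endian IEEE-754 single, the layout the PIC32 uses.
            word = struct.unpack("<I", struct.pack("<f", value))[0]
            return "    0x%08X, // float %r" % (word, value)

        def int_entry(value):
            return "    0x%08X, // int %r" % (value & 0xFFFFFFFF, value)

        print("const unsigned int DEFAULTS[] = {")
        print(int_entry(1))
        print(float_entry(0.005))
        print("};")
        # Note: 0.005 packs to 0x3BA3D70A here, while 0x3C83126F is what 0.016 packs to,
        # so the commented value in the question may be worth double-checking.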

    Read the article
