Search Results



  • What is it in the CSS/DOM that prevents an input box with display: block from expanding to the size of its container?

    - by Steven Xu
    Sample HTML/CSS (http://jsfiddle.net/bPEkb/3/):

        <div class="container">
          <input type="text" />
          <div class="filler"></div>
        </div>

        div.container { padding: 5px; border: 1px solid black; background-color: gray; }
        div.filler { background-color: red; height: 5px; }
        input { display: block; }

    Question: Why doesn't the input box expand to have the same outer width as, say, div.filler? That is to say, why doesn't the input box expand to fit its container like other block elements with width: auto do? I tried checking the "User Agent CSS" in Firebug to see if I could come up with something there. No luck: I couldn't find any specific CSS difference that I could link to the input box behaving differently from the regular div.filler. Besides curiosity, I'd like to get to the bottom of this so I can figure out a way to set the width once and forget it. My current practice of explicitly setting the width of both the input and its containing block element seems redundant and less than modular. While I'm familiar with the technique of wrapping the input element in a div and then assigning negative margins to the input element, that seems quite undesirable.
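    For what it's worth, the conventional workaround (my addition, not from this question): form controls get an intrinsic width from their size attribute, so width: auto never stretches them. An explicit 100% width combined with border-box sizing usually gives the "set once and forget it" behavior:

        input {
          display: block;
          width: 100%;                     /* fill the container's content box */
          -moz-box-sizing: border-box;     /* vendor prefixes for 2010-era browsers */
          -webkit-box-sizing: border-box;
          box-sizing: border-box;          /* count padding and border inside the 100% */
        }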


  • How can I intelligently group rows of integers for a faceted search?

    - by Alastair
    I'm not even quite sure what terms I should be using for what I want, so any advice on what I'm even asking for would be very welcome. Basically, my web site lists user-generated accommodations. Each has a rent price, which users will be able to query in our new faceted search box. Users search by city, and within each city I'd like to present a different rent grouping. That is to say that in City #1, if we have listings ranging from $200 - $1000, I'd like to present checkboxes for:

        less than $300
        $301 - $500
        $501 - $700
        more than $700

    However, if City #2 has values that range from $500 - $1500, I want the ranges above to change accordingly. So, if I say that I want 5 or 6 range options in each city, I think I have two options:

    1. Take the min and max values and just split the difference. I don't like this idea because one listing with a rent of $10,000 will throw the whole scale off.
    2. Intelligently calculate the ranges using means, medians, etc.

    Number 2 is what I need help with. I'm a web developer that gets logic, but was never strong on math and statistics at school. Can anyone point me towards a guide that'll help me figure this out?
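    A quantile-based sketch (my own illustration, not from this thread): cutting at percentiles instead of splitting min-max means a single $10,000 listing only stretches the open-ended top bucket, since each bucket holds roughly the same number of listings:

        # Hypothetical helper: boundaries at the k/n quantiles (nearest-rank method).
        def rent_buckets(rents, n_buckets=4):
            rents = sorted(rents)
            boundaries = []
            for k in range(1, n_buckets):
                idx = int(len(rents) * k / n_buckets)   # k-th quantile index
                boundaries.append(rents[min(idx, len(rents) - 1)])
            return boundaries   # round these up to "nice" values for display

        # e.g. [420, 650, 1000] -> "< $420", "$420-650", "$650-1000", "> $1000"
        print(rent_buckets([200, 250, 420, 480, 650, 900, 1000, 10000]))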


  • What is the fastest way to find duplicates in multiple BIG txt files?

    - by user2950750
    I am really in deep water here and I need a lifeline. I have 10 txt files. Each file has up to 100,000,000 lines of data. Each line is simply a number representing something else. Numbers go up to 9 digits. I need to (somehow) scan these 10 files and find the numbers that appear in all 10 files. And here comes the tricky part: I have to do it in less than 2 seconds. I am not a developer, so I need an explanation for dummies. I have done enough research to learn that hash tables and map-reduce might be something I can make use of. But can they really be used to make it this fast, or do I need more advanced solutions? I have also been thinking about cutting the files up into smaller files, so that 1 file with 100,000,000 lines is transformed into 100 files with 1,000,000 lines each. But I do not know what is best: 10 files with 100 million lines, or 1000 files with 1 million lines? When I try to open the 100-million-line file, it takes forever, so I think maybe it is just too big to be used; but I don't know if you can write code that will scan it without opening it. Speed is the most important factor here, and I need to know whether it can be done as fast as I need, or whether I have to store my data in another way, for example in a database like MySQL. Thank you in advance to anybody who can give some good feedback.
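    A back-of-the-envelope sketch of my own (not from the thread), assuming the numbers really are at most 9 digits: a 1-bit-per-number bitmap covers 0..10^9 in about 125 MB, so two bitmaps in RAM can intersect the files one at a time. Note that fscanf alone would blow a 2-second budget; a serious attempt would mmap the files and parse digits by hand, but the idea is the same:

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define LIMIT 1000000000UL            /* numbers have at most 9 digits */
        #define WORDS (LIMIT / 64 + 1)

        int main(int argc, char **argv) {
            uint64_t *all = calloc(WORDS, sizeof *all);  /* intersection so far */
            uint64_t *cur = calloc(WORDS, sizeof *cur);  /* numbers in this file */
            if (!all || !cur) return 1;

            for (int f = 1; f < argc; f++) {             /* one pass per file */
                FILE *fp = fopen(argv[f], "r");
                if (!fp) return 1;
                memset(cur, 0, WORDS * sizeof *cur);
                unsigned long n;
                while (fscanf(fp, "%lu", &n) == 1)
                    if (n < LIMIT)
                        cur[n >> 6] |= 1ULL << (n & 63);
                fclose(fp);
                if (f == 1)
                    memcpy(all, cur, WORDS * sizeof *all);
                else
                    for (size_t i = 0; i < WORDS; i++)
                        all[i] &= cur[i];                /* keep common numbers */
            }
            for (unsigned long n = 0; n < LIMIT; n++)    /* present in every file */
                if (all[n >> 6] & (1ULL << (n & 63)))
                    printf("%lu\n", n);
            free(all); free(cur);
            return 0;
        }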


  • Creating a C++ client app for an abstract Windows server - how to manage the speed of the TCP connection to the server?

    - by Kabumbus
    We have a server at a known IP address and port, and we are developing both the server and the client, so we can implement whatever is needed on either side. What are the standard/best practices for managing data transfer speed between a C++ Windows client app and a (C++) server? The main point is finding out how much data can be uploaded/downloaded from/to the client over his slow network to my comparatively fast server (I need it to set the bit rate of his live audio/video stream). To elaborate: we do not care how fast our server is; it is always faster than needed. We care about the client trying to stream his media out to our server. He streams encoded (via ffmpeg) live video data to our server, but he may have, say, ADSL with 500 kb/s of outgoing bandwidth, and since he also runs ICQ or whatever else, he actually has less than 500 kb/s available. And he wants to stream live video! So we need to set up our ffmpeg to encode video with a bit rate the user can actually sustain. We need a way of finding out how much the user can upload per second at any given moment (the value can change dynamically over time).
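    A minimal sketch of one common approach (my own illustration; the class and method names are hypothetical): the server counts how many bytes actually arrive per time window and reports a smoothed average back to the client, which feeds it to the encoder as the target bit rate:

        #include <chrono>
        #include <cstddef>
        #include <cstdint>
        #include <deque>
        #include <utility>

        // Call addBytes() as stream data arrives; read kbps() for a smoothed
        // estimate of what the client's uplink is actually sustaining.
        class ThroughputMeter {
            using Clock = std::chrono::steady_clock;
            std::deque<std::pair<Clock::time_point, std::size_t>> samples_;
            std::chrono::seconds window_{3};            // smoothing window
        public:
            void addBytes(std::size_t n) {
                auto now = Clock::now();
                samples_.emplace_back(now, n);
                while (!samples_.empty() && now - samples_.front().first > window_)
                    samples_.pop_front();               // drop stale samples
            }
            double kbps() const {
                std::uint64_t total = 0;
                for (const auto& s : samples_) total += s.second;
                return total * 8.0 / 1000.0 / window_.count();
            }
        };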


  • Java - Calling all methods of a class

    - by Thomas Eschemann
    I'm currently working on an application that has to render several Freemarker templates. So far I have a Generator class that handles the rendering. The class looks more or less like this:

        public class Generator {

            public static void generate(…) {
                renderTemplate1();
                renderTemplate2();
                renderTemplate3();
            }

            private static void render(…) {
                // renders the template
            }

            private static void renderTemplate1() {
                // creates a config object for the rendering
                // and calls render()
            }

            private static void renderTemplate2() {
                // creates a config object for the rendering
                // and calls render()
            }

            …
        }

    This works, but it doesn't really feel right. What I would like to do is create a class that holds all the renderTemplate…() methods and then call them dynamically from my Generator class. This would make it cleaner and easier to extend. I was thinking about using something like reflection, but that doesn't really feel like a good solution either. Any idea how to implement this properly?
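    Since the question raises it anyway, here is a reflection-based sketch (my own illustration; the Renderers holder class is hypothetical) that discovers and invokes every renderTemplate… method, so adding a template only means adding a method:

        import java.lang.reflect.Method;

        class Renderers {
            static void renderTemplate1() { /* build config, call render() */ }
            static void renderTemplate2() { /* build config, call render() */ }
        }

        public class Generator {
            public static void generate() throws Exception {
                // Note: getDeclaredMethods() returns methods in no fixed order.
                for (Method m : Renderers.class.getDeclaredMethods()) {
                    if (m.getName().startsWith("renderTemplate")
                            && m.getParameterTypes().length == 0) {
                        m.setAccessible(true);  // allow non-public renderers
                        m.invoke(null);         // static method: no receiver
                    }
                }
            }

            public static void main(String[] args) throws Exception {
                generate();
            }
        }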


  • How to control virtual memory management in linux?

    - by chmike
    I'm writing a program that uses an mmap'ed file to hold a huge buffer organized as an array of 64 MB blocks. The blocks are used to aggregate data received from different hosts over the network. As a consequence, the total data size written in each block is not known in advance; most of the time it is only 2 MB, but in some cases it can be 20 MB or more. The data doesn't stay long in the buffer: 90% is deleted after less than a second, and the rest is transmitted to another host. I would like to know if there is a way to tell the virtual memory manager that RAM pages are no longer dirty once the data is deleted. Should I mmap and munmap each block as it is used and released to control the virtual memory? What would be the overhead of doing that? Also, some colleagues have expressed concern about the performance impact of allocating such a big mmap space. I expect it to behave like a swap file, so that only dirty pages need to be considered.
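    On Linux the usual lever for this is madvise(2); here is a sketch under my own assumptions (the semantics differ between anonymous and file-backed mappings, so check the man page against your setup): MADV_DONTNEED tells the kernel it may discard the pages right away, so a released block stops being treated as dirty without the cost of a full munmap/mmap cycle:

        #include <stddef.h>
        #include <sys/mman.h>

        /* Called when a 64 MB block is released.  addr must be page-aligned,
         * which 64 MB block boundaries inside one big mapping always are. */
        static void release_block(void *addr, size_t len) {
            /* The kernel may free the pages immediately; the range reads back
             * as zeros (anonymous) or is re-read from the file on next touch. */
            madvise(addr, len, MADV_DONTNEED);
        }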


  • Element Content Versus Attribute for Simple XML Value

    - by MB
    I know the elements versus attributes debate has come up many times here and elsewhere (e.g. here, here, here, here, and here), but I haven't seen much discussion of elements versus attributes for simple property values. So which of the following approaches do you think is better for storing a simple value?

    A: Value in element content:

        <TotalCount>553</TotalCount>
        <CelsiusTemperature>23.5</CelsiusTemperature>
        <SingleDayPeriod>2010-05-29</SingleDayPeriod>
        <ZipCodeLocation>12203</ZipCodeLocation>

    or B: Value in attribute:

        <TotalCount value="553"/>
        <CelsiusTemperature value="23.5"/>
        <SingleDayPeriod day="2010-05-29"/>
        <ZipCodeLocation code="12203"/>

    I suspect that putting the value in the element content (A) might look a little more familiar to most folks (though I'm not sure about that). Putting the value in an attribute (B) might use fewer characters, but that depends on the length of the element and attribute names. Putting the value in an attribute (B) might be more extensible, because you could potentially include all sorts of extra information as nested elements; whereas by putting the value inside the element content (A), you're restricting extensibility to adding more attributes. But then extensibility often isn't a concern for really simple properties - sometimes you know that you'll never need to add additional data. The bottom line might be that it simply doesn't matter, but it would still be great to hear some thoughts and see some votes for the two options.


  • How to reliably identify users across Internet?

    - by amn
    I know this is a big one. In fact, it may belong in a community wiki. Anyway, I am running a website that DOES NOT use explicit authentication of users. It's public, as in open to everybody. However, due to the nature of the service, some users need to be locked out due to misbehavior. I am currently blocking IP addresses, but I am aware of the supposed fact that many people purposefully reset their DHCP client cache to have their ISP assign them a new address. Is that a fact? It certainly seems a lucrative possibility for someone who wants to circumvent being denied access. So IPs turn out to be a suboptimal way of dealing with this. But is there anything else? MAC addresses don't survive on the WAN (they change from hop to hop), and even if they did, they can also be spoofed, although I think less easily than renewing an IP. Cookies and even Flash cookies are out of the question, because there are tons of "tutorials" on how to wipe them, and those intent on wreaking havoc on the Internet are well aware of, and well equipped against, such rudimentary measures. Is there anything else to lean on? I was thinking of heuristic profiling - collecting available data from the client side and forming some key with it - but I have not gone as far as implementing it. Is it an option?


  • Question about the benefit of using an ORM

    - by johnny
    I want to use an ORM for learning purposes and am trying NHibernate. I am using the tutorial, and then I have a real project. I can go the "old way" or use an ORM, and I'm not sure I totally understand the benefit. On the one hand, I can create my abstractions in code such that I can change my databases and be database independent. On the other, it seems that if I actually change the database columns, I have to change all my code anyway. Why wouldn't I build my application without the ORM, then change my database and change my code, instead of changing my database, ORM, and code? Is it that the database structure doesn't change that much? I believe there are real benefits, because ORMs are used by so many; I'm just not sure I get it yet. Thank you. EDIT: In the tutorial (http://www.hibernate.org/362.html) they have many files that are used just to make the ORM work. In the event of an application change, it seems like a lot of extra work to be able to say that I have "proper" abstraction layers. Because I'm new at it, it doesn't look that easy to maintain, and again it seems like extra work, not less.


  • Entity Framework + MySQL - Why is the performance so terrible?

    - by Cyril Gupta
    When I decided to use an OR/M (Entity Framework for MySQL, this time) for my new project, I was hoping it would save me time, but I seem to have failed at that (for the second time now). Take this simple SQL query:

        SELECT * FROM POST ORDER BY addedOn DESC LIMIT 0, 50

    It executes and gives me results in less than a second, as it should (the table has about 60,000 rows). Here's the equivalent LINQ to Entities query that I wrote for it:

        var q = (from p in db.post
                 orderby p.addedOn descending
                 select p).Take(50);
        var q1 = q.ToList(); // this is where the query is fetched, and it times out

    But this query never even completes - it ALWAYS times out (without the orderby it takes 5 seconds to run)! My timeout is set to 12 seconds, so you can imagine it is taking much more than that. Why is this happening? Is there a way I can see the actual SQL query that Entity Framework is sending to the db? Should I give up on EF+MySQL and move back to plain SQL before I lose all eternity trying to make it work? I've recalibrated my indexes and tried eager loading (which actually makes it fail even without the orderby clause). Please help; I am about to give up on OR/M for MySQL as a lost cause.
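    On seeing the generated SQL: one mechanism from this era of Entity Framework (reusing db and the query from above; this assumes the LINQ query is an ObjectQuery underneath, which it is for EF v1/v4 object contexts) is ObjectQuery.ToTraceString():

        using System;
        using System.Data.Objects;

        var q = (from p in db.post
                 orderby p.addedOn descending
                 select p).Take(50);

        // Print the SQL that would be sent, without executing it; you can then
        // run EXPLAIN on it in the MySQL client to check index usage.
        Console.WriteLine(((ObjectQuery)q).ToTraceString());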


  • Why is the main method not covered?

    - by Mike.Huang
    main method:

        public static void main(String[] args) throws Exception {
            if (args.length != EXPECTED_NUMBER_OF_ARGUMENTS) {
                System.err.println("Usage - java XFRCompiler ConfigXML PackageXML XFR");
            }

            String configXML = args[0];
            String packageXML = args[1];
            String xfr = args[2];

            AutoConfigCompiler compiler = new AutoConfigCompiler();
            compiler.setConfigDocument(loadDocument(configXML));
            compiler.setPackageInfoDoc(loadDocument(packageXML));
            // compiler.setVisiblityDoc(loadDocument("VisibilityFilter.xml"));
            compiler.compileModel(xfr);
        }

        private static Document loadDocument(String fileName) throws Exception {
            TXDOMParser parser = (TXDOMParser) ParserFactory.makeParser(TXDOMParser.class.getName());
            InputSource source = new InputSource(new FileInputStream(fileName));
            parser.parse(source);
            return parser.getDocument();
        }

    testcase:

        @Test
        public void testCompileModel() throws Exception {
            // construct parameters
            URL configFile = Thread.currentThread().getContextClassLoader()
                    .getResource("Ford_2008_Mustang_Config.xml");
            URL packageFile = Thread.currentThread().getContextClassLoader()
                    .getResource("Ford_2008_Mustang_Package.xml");
            File tmpFile = new File("Ford_2008_Mustang_tmp.xfr");
            if (!tmpFile.exists()) {
                tmpFile.createNewFile();
            }
            String[] args = new String[] { configFile.getPath(), packageFile.getPath(), tmpFile.getPath() };
            try {
                // test main method
                XFRCompiler.main(args);
            } catch (Exception e) {
                assertTrue(true);
            }
            try {
                // test args length is less than 3
                XFRCompiler.main(new String[] { "", "" });
            } catch (Exception e) {
                assertTrue(true);
            }
            tmpFile.delete();
        }

    The coverage output shows the lines in the main method, from String configXML = args[0]; onwards, as not covered.


  • Best practices for encrypting continuous/small UDP data

    - by temp
    Hello everyone, I have an application where I have to send several small pieces of data per second through the network using UDP. The application needs to send the data in real time (no waiting). I want to encrypt these data and ensure that what I am doing is as secure as possible. Since I am using UDP, there is no way to use SSL/TLS, so I have to encrypt each packet alone, since the protocol is connectionless/unreliable/unregulated. Right now, I am using a 128-bit key derived from a passphrase from the user, and AES in CBC mode (PBE using AES-CBC). I decided to use a random salt with the passphrase to derive the 128-bit key (to prevent dictionary attacks on the passphrase), and of course to use IVs (to prevent statistical analysis of packets). However, I am concerned about a few things:

    - Each packet contains a small amount of data (like a couple of integer values per packet), which will make the encrypted packets vulnerable to known-plaintext attacks (making it easier to crack the key).
    - Since the encryption key is derived from a passphrase, the key space is much smaller (I know the salt will help, but I have to send the salt through the network once, and anyone can get it).

    Given these two things, anyone can sniff and store the sent data and try to crack the key. Although this process might take some time, once the key is cracked all the stored data will be decrypted, which would be a real problem for my application. So my question is: what are the best practices for sending/encrypting continuous small data using a connectionless protocol (UDP)? Is my way the best way to do it? ...flawed? ...overkill? ... Please note that I am not asking for a 100% secure solution, as there is no such thing. Cheers
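    For what it's worth, a sketch of my own (not from this thread) of the shape modern practice takes: an AEAD mode such as AES-GCM with a per-packet counter nonce, keyed by a randomly generated session key rather than a passphrase-derived one (and note that DTLS exists precisely to run TLS-style handshakes over datagrams). Using Python's cryptography package:

        import struct
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        # Random session key: known plaintext doesn't help recover an AES key,
        # and the passphrase/key-space worry disappears entirely.
        key = AESGCM.generate_key(bit_length=128)
        aead = AESGCM(key)

        def seal(seq, payload):
            nonce = struct.pack(">IQ", 0, seq)   # 12 bytes; never reused per key
            return struct.pack(">Q", seq) + aead.encrypt(nonce, payload, None)

        def open_packet(packet):
            seq, = struct.unpack(">Q", packet[:8])
            nonce = struct.pack(">IQ", 0, seq)
            return aead.decrypt(nonce, packet[8:], None)  # raises if tampered

        print(open_packet(seal(1, b"\x00\x01\x02\x03")))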


  • Question about counting sort

    - by davit-datuashvili
    Hi, I have written the following code, which prints elements in sorted order. The one big problem is that it uses two additional arrays. Here is my code:

        public class occurance {
            public static final int n = 5;

            public static void main(String[] args) {
                // n is the maximum possible value in the array; suppose n = 5,
                // then the array may be:
                int a[] = new int[] { 3, 4, 4, 2, 1, 3, 5 }; // all elements are <= n
                // create arrays of size a.length * n
                int b[] = new int[a.length * n];
                int c[] = new int[b.length];
                for (int i = 0; i < b.length; i++) {
                    b[i] = 0;
                    c[i] = 0;
                }
                for (int i = 0; i < a.length; i++) {
                    if (b[a[i]] == 1) {
                        c[a[i]] = 1;
                    } else {
                        b[a[i]] = 1;
                    }
                }
                for (int i = 0; i < b.length; i++) {
                    if (b[i] == 1) {
                        System.out.println(i);
                    }
                    if (c[i] == 1) {
                        System.out.println(i);
                    }
                }
            }
        }

        // output: 1 2 3 3 4 4 5

    I have two questions:
    1. What is the complexity (running time) of this algorithm?
    2. How do I put the elements into another array in sorted order? Thanks.
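    To sketch an answer to the second question (classic counting sort, my own illustration rather than code from the thread): one count array of size n+1 handles any number of duplicates, and a second pass writes the elements out in sorted order, for O(a.length + n) time overall:

        public class CountingSort {
            public static void main(String[] args) {
                final int n = 5;                     // maximum possible value
                int[] a = { 3, 4, 4, 2, 1, 3, 5 };

                int[] count = new int[n + 1];        // count[v] = occurrences of v
                for (int v : a) count[v]++;

                int[] sorted = new int[a.length];
                int pos = 0;
                for (int v = 0; v <= n; v++)         // emit each value count[v] times
                    while (count[v]-- > 0) sorted[pos++] = v;

                for (int v : sorted) System.out.println(v);  // 1 2 3 3 4 4 5
            }
        }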


  • Releasing Autoreleasepool crashes on iOS 4.0 (and only on 4.0)

    - by samsam
    Hi there. I'm wondering what could cause this. I have several methods in my code that I call using performSelectorInBackground. Within each of these methods I have an autorelease pool that is alloced/initialized at the beginning and released at the end of the method. This works perfectly on iOS 3.1.3 / 3.2 / 4.2 / 4.2.1, but it fatally crashes on iOS 4.0 with an EXC_BAD_ACCESS exception raised after calling [myPool release]. After I noticed this strange behaviour, I thought about rewriting portions of my code to make my app "less parallel" when the client OS is 4.0. After I did that, the next place the app crashed was within the ReachabilityCallback method from Apple's Reachability "framework". Well, now I'm not quite sure what to do. What I do within my threaded methods is pretty simple XML parsing (no Cocoa calls or stuff that would affect the UI). After each method finishes, it posts a notification that the coordinating thread listens to, and once all the parallelized methods have finished, the coordinating thread calls view controllers etc... I have absolutely no clue what could cause this weird behaviour, especially since Apple's code fails as well. Any help is greatly appreciated! Thanks, sam


  • Host WCF in MVC2 Site

    - by Basiclife
    Hi, we've got a very large, complex MVC2 website. We want to add an API for some internal tools and decided to use WCF. Ideally, we want MVC itself to host the WCF service. Reasons include:

    - Although there are multiple tiers to the application, some functionality we'd like in the API requires the website itself (e.g. formatting emails).
    - We use TFS to auto-build (continuous integration) and deploy; the less we need to modify the build and release mechanism, the better.
    - We use the Unity container and inversion of control throughout the application. Being part of the website would allow us to re-use configuration classes and other helper methods.

    I've written a custom ServiceBehavior, which in turn has a custom InstanceProvider; this allows me to instantiate and configure a container which is then used to service all requests for class instances from WCF. So my question is: is it possible to host a WCF service from within MVC itself? I've only had experience with services / standard ASP.NET websites before, and didn't realise MVC2 might be different until I actually tried to wire it into the config and nothing happened. After some googling, there don't seem to be many references to doing this, so I thought I'd ask here.


  • How to speed-up python nested loop?

    - by erich
    I'm performing a nested loop in Python, included below. It serves as a basic way of searching through existing financial time series and looking for periods that match certain characteristics. In this case there are two separate, equally sized arrays representing the 'close' (i.e. the price of an asset) and the 'volume' (i.e. the amount of the asset exchanged over the period). For each period in time, I would like to look forward at all future intervals with lengths between 1 and INTERVAL_LENGTH and see if any of those intervals have characteristics that match my search (in this case, the ratio of the close values is greater than 1.0001 and less than 1.5, and the summed volume is greater than 100). My understanding is that one of the major reasons for the speedup when using NumPy is that the interpreter doesn't need to type-check the operands each time it evaluates something, as long as you operate on the array as a whole (e.g. numpy_array * 2), but obviously the code below is not taking advantage of that. Is there a way to replace the internal loop with some kind of window function that could result in a speedup, or any other way to use numpy/scipy to speed this up substantially in native Python? Alternatively, is there a better way to do this in general (e.g. would it be much faster to write the loop in C++ and use weave)?

        ARRAY_LENGTH = 500000
        INTERVAL_LENGTH = 15

        close = np.array( xrange(ARRAY_LENGTH) )
        volume = np.array( xrange(ARRAY_LENGTH) )
        close, volume = close.astype('float64'), volume.astype('float64')

        results = []
        for i in xrange(len(close) - INTERVAL_LENGTH):
            for j in xrange(i+1, i+INTERVAL_LENGTH):
                ret = close[j] / close[i]
                vol = sum( volume[i+1:j+1] )
                if ret > 1.0001 and ret < 1.5 and vol > 100:
                    results.append( [i, j, ret, vol] )
        print results
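    One vectorized sketch of my own (not from the thread; note it sidesteps the original's division by close[0] == 0 by shifting the demo data): precompute a cumulative volume sum, then loop only over the 14 window sizes, comparing whole shifted arrays at once:

        import numpy as np

        ARRAY_LENGTH, INTERVAL_LENGTH = 500000, 15
        close = np.arange(ARRAY_LENGTH, dtype='float64') + 1.0
        volume = np.arange(ARRAY_LENGTH, dtype='float64')

        # cumvol[k] == volume[:k].sum(), so any slice sum is one subtraction
        cumvol = np.concatenate(([0.0], np.cumsum(volume)))

        results = []
        i = np.arange(ARRAY_LENGTH - INTERVAL_LENGTH)
        for d in range(1, INTERVAL_LENGTH):          # one pass per window size
            j = i + d
            ret = close[j] / close[i]
            vol = cumvol[j + 1] - cumvol[i + 1]      # sum(volume[i+1:j+1])
            hit = (ret > 1.0001) & (ret < 1.5) & (vol > 100)
            results.extend(zip(i[hit], j[hit], ret[hit], vol[hit]))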


  • If I use Unicode on a ISO-8859-1 site, how will that be interpreted by a browser?

    - by grg-n-sox
    So I have a site that uses ISO-8859-1 encoding, and I can't change that. I want to be sure that the content I enter into the web app on the site gets parsed correctly. The parser works on a character-by-character basis. I also cannot change the parser; I am just writing files for it to handle. The content in my file, which I am telling the app to display after parsing, contains Unicode characters (or at least I assume so, even if they were produced by Windows Alt codes mapped to CP437). Using entities is not an option due to the character-by-character operation of the parser. The only characters the parser escapes on output are markup-sensitive ones like ampersands, less-than, and greater-than symbols. I would just go ahead and put this through to see what it looks like, but output can only be seen once published, and publishing takes a couple of days to get approved, which is asking too much for a test case. So, long story short: if I tell the site to output ?ÇÑ¥?? on a page whose meta tag states it is supposed to use ISO-8859-1, will a browser auto-detect the Unicode and display it, or will it literally interpret the bytes as ISO-8859-1 and show a different set of characters?


  • How I May Have Taken A Wrong Path in Programming

    - by Ygam
    I am really stuck right now. I am a BSIT graduate, but I only started actual programming less than a year ago. I have observed that I have the following attitudes in programming:

    - I tend to be a purist, scorning inelegant approaches to solving problems with code.
    - I tend to look at everything on a large scale, planning everything before I start coding, either in simple flowcharts or complex UML charts.
    - I have a really strong impulse to refactor my code, even if I miss deadlines or prolong development times.
    - I am obsessed with good directory structures; file naming conventions; and class, method, and variable naming conventions.
    - I tend to always want to study something new, even, as I said, at the cost of missing deadlines.
    - I tend to see software development as something to engineer, to architect - that is, seeing how things relate to each other and how blocks of code can interact (I am a huge fan of loose coupling), i.e. OOP thinking.
    - I tend to combine OOP and procedural coding whenever I see fit.
    - I want my code to execute fast (thus the elegant approaches and refactoring).

    This bothers me because I see my colleagues doing much better the other way around (aside from the fact that they have been programming since our first year in college). By the other way around I mean: they fire up coding and get the job done much faster, because they don't have to consider how clean their code is or how elegant their algorithms are. They don't bother with OOP however big their projects are; they mostly use web APIs, piece them together, and voila: working code! Clients are happy, and they get paid fast, at the expense of really unmaintainable or hard-to-read code that lacks structure and conventions, or slow execution of certain actions (the common counter-argument being that internet connections are much faster these days and hardware is more powerful). The excuse I often receive is that clients don't care how you write the code, but they do care how soon you deliver it; if it works, all is good. Now, may my "purist" approach have been the wrong way to start programming? Should I just dump these purist concepts and code like hell, because, as I have seen, clients don't really care how beautifully coded it is?


  • Commenting out portions of code in Scala

    - by akauppi
    I am looking for a C(++) #if 0-like way of commenting out whole pieces of Scala source code, to keep experimental or expired code around for a while. I have tried a couple of alternatives and would like to hear what you use, and whether you have come up with something better.

    Simply block-marking N lines with '//' is one way:

        // <tags> """ anything

    My editor makes this easy, but it's not really The Thing, and it gets easily mixed up with actual one-line comments. Then I figured there's native XML support, so:

        <!-- ... did not work -->

    Wrapping in XML works, unless you have <tags> within the block:

        class none {
          val a = <ignore>
            ... cannot have //<tags> <here> (not even in end-of-line comments!)
          </ignore>
        }

    The same trick with multi-line strings seems kind of best, but there's an awful lot of boilerplate (not fashionable in Scala) to please the compiler (less if you're doing this within a class or an object):

        object none {
          val ignore = """
            This seems like ... <truly> <anything goes>
            but three "'s of course
          """
        }

    The 'right' way to do this might be:

        /***
        /* ... works but not properly syntax highlighted in SubEthaEdit (or Stack Overflow) */
        ***/

    ...but that matches /* and */ only, not e.g. /*** to ***/. This means the comments within the block need to be balanced. And the current Scala syntax highlighting mode for SubEthaEdit fails miserably on this. As a comparison, Lua has --[==[ matching ]==] and so forth. I think I'm spoilt? So - is there some useful trick I'm overlooking?


  • Anyone NOT using a Web Framework? Why?

    - by tom
    I'm well aware of the many reasons to use a web framework. I'm just wondering whether anyone out there is using absolutely no web framework whatsoever to develop their web projects. I would really love to know the reason(s) why you're not using a web framework. For the sake of this discussion, your programming language of choice does not matter. Some possibilities for discussion:

    - You don't hide behind an ORM.
    - You don't rely on any sort of templating system.
    - You think MVC is a really nice TLA but lacks an essential vowel or two.
    - No need for any additional JavaScript framework tomfoolery.
    - You just write as much code as possible in your native programming language(s).

    Summary of reasons thus far:

    - Language learning opportunities.
    - Specific performance reasons (write-intensive transaction processing).
    - Seeking more nuanced control over your data and applications (less abstraction).
    - You're building your own framework! Prove to yourself that you can succeed (or fail) just like the big framework-building gurus.
    - Integration issues with unpopular/legacy technologies (exotic databases or protocols come to mind).
    - Big company, lots of code, no talent nor buy-in present to move to a web framework.
    - Some frameworks really lock you in and cannot perpetually grow along with your needs. These few black sheep don't make it easy to jump outside of the framework, write some custom code, and easily jump back in. When you finally escape the asylum, you'll never look back.


  • How to improve Visual C++ compilation times?

    - by dtrosset
    I am compiling two C++ projects in a buildbot, on each commit. Both are around 1000 files; one is 100 kloc, the other 170 kloc. Compilation times differ greatly between gcc (4.4) and Visual C++ (2008). Visual C++ compilations for one project take on the order of 20 minutes. They cannot take advantage of the multiple cores, because one project depends on the other. In the end, a full compilation of both projects in Debug and Release, in 32 and 64 bits, takes more than 2 1/2 hours. gcc compilations for one project take around 4 minutes. They can be parallelized across the 4 cores and take about 1 min 10 s. All 8 builds for the 4 variants (Debug/Release, 32/64 bit) of the 2 projects compile in less than 10 minutes. What is happening with Visual C++ compilation times? They are basically 5 times slower. What is the average time one can expect for compiling a kloc of C++? Mine are 7 s/kloc with VC++ and 1.4 s/kloc with gcc. Can anything be done to speed up compilation times on Visual C++?
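    Two levers that commonly help MSVC wall-clock times (my own notes, not from the thread; verify that your cl version supports them): precompiled headers (/Yc to create, /Yu to consume) and per-project parallel compilation with /MP, which parallelizes across translation units even when the two projects themselves must build serially:

        rem Sketch: build the precompiled header once, then reuse it while cl
        rem compiles the remaining .cpp files of the project in parallel.
        cl /c /EHsc /Ycstdafx.h stdafx.cpp
        cl /c /EHsc /Yustdafx.h /MP src\*.cpp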


  • PHP/MySQL time zone migration

    - by El Yobo
    I have an application that currently stores timestamps in MySQL DATETIME and TIMESTAMP values. However, the application needs to accept data from users in multiple time zones and show timestamps in the time zone of other users. This is how I plan to amend the application; I would appreciate any suggestions to improve the approach.

    Database modifications:
    - All TIMESTAMPs will be converted to DATETIME values. This is to ensure consistency of approach and to avoid having MySQL try to do clever things and convert time zones (I want to keep the conversion in PHP, as it involves less modification to the application and will be more portable when I eventually manage to escape from MySQL).
    - All DATETIME values will be adjusted to UTC (currently they are all in Australian EST).

    Query modifications:
    - All usage of NOW() will be replaced with UTC_TIMESTAMP() in queries, triggers, functions, etc.

    Application modifications:
    - The application must store each user's time zone and preferred date format (e.g. US vs. the rest of the world).
    - All timestamps will be converted according to the user's settings before being displayed.
    - All input timestamps will be converted to UTC according to the user's settings before being stored.

    Additional notes: converting formats will be done at the application level for several main reasons:
    - The approach to converting time zones varies from DB to DB, so handling it there would be non-portable (and I really hope to migrate away from MySQL some time in the not-too-distant future).
    - MySQL TIMESTAMPs are limited in the dates they permit (~1970 to ~2038).
    - MySQL TIMESTAMPs have other undesirable attributes, including bizarre auto-update behaviour (if not carefully disabled) and sensitivity to the server zone settings (and I suspect I might screw these up when I migrate to Amazon later in the year).

    Is there anything I'm missing here, or does anyone have better suggestions for the approach?
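    A sketch of the PHP side of this plan (my own illustration; the function names are hypothetical): DATETIME columns hold UTC, and conversion happens at the application's edges using the user's stored time zone and format:

        <?php
        function toUtc($local, $userTz) {
            $dt = new DateTime($local, new DateTimeZone($userTz));
            $dt->setTimezone(new DateTimeZone('UTC'));
            return $dt->format('Y-m-d H:i:s');     // value written to MySQL
        }

        function fromUtc($stored, $userTz, $format) {
            $dt = new DateTime($stored, new DateTimeZone('UTC'));
            $dt->setTimezone(new DateTimeZone($userTz));
            return $dt->format($format);           // user's preferred display format
        }

        echo toUtc('2010-05-29 10:00:00', 'Australia/Sydney'), "\n";  // 2010-05-29 00:00:00
        echo fromUtc('2010-05-29 00:00:00', 'America/New_York', 'm/d/Y g:ia'), "\n";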


  • gevent urllib is slow

    - by djay
    I've created a set of demos of a TCP server; however, my gevent examples are noticeably slower. I'm sure it must be how I compiled gevent, but I can't work out the problem. I'm on OS X Leopard, using fink-compiled Python 2.6 and 2.7. I've tried both the stable gevent and gevent 1.0b1, and it acts the same. The echo takes 5 seconds to respond, where the other examples take under 1 second. If I remove the urllib call, the problem goes away. I put all the code in https://github.com/djay/geventechodemo. To run the examples I'm using zc.buildout, so to build:

        $ python2.7 bootstrap.py
        $ bin/buildout

    To run the gevent example:

        $ bin/py geventecho3.py &
        [1] 80790
        waiting for connection...
        $ telnet localhost 8080
        Trying 127.0.0.1...
        ...connected from: ('127.0.0.1', 56588)
        Connected to localhost.
        Escape character is '^]'.
        hello
        echo: avast

    This takes 3-4 seconds to respond on my system. However, the threaded example

        $ bin/py threadecho2.py

    and the twisted example

        $ bin/py twistedecho2.py

    respond in less than 1 s. Any idea what I'm doing wrong?
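    One usual suspect worth ruling out (my own guess, not a confirmed diagnosis from the thread): blocking stdlib calls - urllib's DNS lookups and socket reads in particular - stall the gevent hub unless the standard library is monkey-patched before those modules are used:

        # Make stdlib sockets and DNS cooperative before anything imports them,
        # so a urllib call inside a handler cannot block the whole event loop.
        from gevent import monkey
        monkey.patch_all()

        import urllib2  # now backed by gevent-friendly sockets

        def fetch(url):
            return urllib2.urlopen(url).read()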


  • How does Cocoa compare to Microsoft's frameworks and Qt?

    - by Paperflyer
    I have done a few months of development with Qt (built GUIs programmatically only) and am now starting to work with Cocoa. I have to say, I love Cocoa. A lot of the things that seemed hard in Qt are easy with Cocoa, and Obj-C seems to be far less complex than C++. This is probably just me, so:

    - How do you feel about this?
    - How does Cocoa compare to WPF (is that the right framework?) and to Qt?
    - How does Obj-C compare to C# and to C++?
    - How does Xcode/Interface Builder compare to Visual Studio and to Qt Creator?
    - How do the documentations compare?

    For example, I find Cocoa's outlets/actions far more useful than Qt's signals and slots, because they actually seem to cover most GUI interactions, while I had to work around signals/slots half the time. (Did I just use them wrong?) Also, the standard templates of Xcode give me copy/paste, undo/redo, save/open and a lot of other stuff practically for free, while these were rather complex tasks in Qt. Please only answer if you have actual knowledge of at least two of these development environments/frameworks/languages.


  • Java Executor: Small tasks or big ones?

    - by Arash Shahkar
    Consider one big task which can be broken into hundreds of small, independently runnable tasks. To be more specific, each small task sends a light network request and decides upon the answer received from the server. These small tasks are not expected to take longer than a second, and involve a few servers in total. I have in mind two approaches to implementing this using the Executor framework, and I want to know which one is better and why:

    1. Create a few, say 5 to 10, tasks, each doing a bunch of sends and receives.
    2. Create a single task (Callable or Runnable) for each send & receive, and schedule all of them (hundreds) to be run by the executor.

    I'm sorry if my question shows that I'm too lazy to test these and see for myself what's better (at least performance-wise). My question, while looking for an answer to this specific case, has a more general aspect: in situations like these, when you want an executor to do all the scheduling and other work, is it better to create lots of small tasks or to group them into a smaller number of bigger tasks?
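    A sketch of the fine-grained variant (my own illustration; the pool size and placeholder task body are assumptions): with short I/O-bound tasks, per-task overhead is tiny next to network latency, so hundreds of small Callables usually keep the pool busier than a handful of big batches, where one slow request holds up its whole batch:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class SmallTasks {
            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(16);
                List<Callable<Boolean>> tasks = new ArrayList<Callable<Boolean>>();
                for (int i = 0; i < 500; i++) {
                    final int id = i;
                    tasks.add(new Callable<Boolean>() {
                        public Boolean call() {
                            // placeholder for: send one request, judge the answer
                            return (id % 7) != 0;
                        }
                    });
                }
                for (Future<Boolean> f : pool.invokeAll(tasks))
                    f.get();                  // inspect each small result
                pool.shutdown();
            }
        }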

