Search Results

Search found 6030 results on 242 pages for 'quick poll'.

Page 198/242 | < Previous Page | 194 195 196 197 198 199 200 201 202 203 204 205  | Next Page >

  • Web development: Haskell or Scheme

    - by Robert E. Lester
    I would like to choose one of these languages for building web applications. I'm not interested in frameworks per se, but have the following needs:

    - Rapid development.
    - Easy to scale.
    - Strong community for the web.
    - Quick and easy to deploy.

    I'm very familiar with Haskell, and have some familiarity with Scheme (in particular PLT). Scheme appeals to me as a good candidate for web development due to its simple syntax, which is homogeneous across libraries. I say this despite my subjective opinion that Haskell is a 'cleaner' language. Haskell web apps seem to require learning and building a patchwork of different combinator libraries. On the plus side, I realise this can be quite expressive, although I'd prefer to eliminate impedance mismatches where possible. While PLT Scheme looks to be a good fit, I can find only one example of it being used in the "real world". Haskell doesn't seem to fare much better here, but there seems to be a bigger community behind the web side. Please help me make up my mind. For the most part I'm interested in real-world use cases.


  • Improving the performance of XSL

    - by Rachel
    I am using the XSLT 2.0 code below to find the IDs of the text nodes that contain the list of indices I give as input. The code works correctly, but in terms of performance it takes a long time for huge files. Even for huge files, if the index values are small then the result comes back in a few ms. I am using the Saxon 9 HE Java processor to execute the XSL.

        <xsl:variable name="insert-data" as="element(data)*">
          <xsl:for-each-group select="doc($insert-file)/insert-data/data"
                              group-by="xsd:integer(@index)">
            <xsl:sort select="current-grouping-key()"/>
            <data index="{current-grouping-key()}"
                  text-id="{generate-id(
                      $main-root/descendant::text()[
                          sum((preceding::text(), .)/string-length(.)) ge current-grouping-key()
                      ][1])}">
              <xsl:copy-of select="current-group()/node()"/>
            </data>
          </xsl:for-each-group>
        </xsl:variable>

    In the above solution, if the index value is large, say 270962, then the XSL takes 83427 ms to execute. In huge files, if the index values are large, say 4605415 or 4605431, it takes several minutes. It seems the computation of the variable "insert-data" takes the time, even though it is a global variable and computed only once. Should the XSL be addressed, or the processor? How can I improve the performance of the XSL?
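
    The likely bottleneck is the predicate: for every candidate text node it recomputes sum((preceding::text(), .)/string-length(.)), which is quadratic in the number of text nodes and then repeated per index. One possible restructuring (an untested sketch, not a drop-in fix: the my: namespace is hypothetical and must be declared, and it leans on Saxon's tail-call handling to keep the recursion cheap) computes the node lengths once and walks the running total:

        <xsl:variable name="all-texts" select="$main-root/descendant::text()"/>
        <xsl:variable name="lens" as="xsd:integer*"
            select="for $t in $all-texts return string-length($t)"/>

        <!-- position of the first text node whose running total reaches $target -->
        <xsl:function name="my:pos-for" as="xsd:integer?">
          <xsl:param name="lens" as="xsd:integer*"/>
          <xsl:param name="target" as="xsd:integer"/>
          <xsl:param name="pos" as="xsd:integer"/>
          <xsl:param name="running" as="xsd:integer"/>
          <xsl:choose>
            <xsl:when test="empty($lens)"/>
            <xsl:when test="$running + $lens[1] ge $target">
              <xsl:sequence select="$pos"/>
            </xsl:when>
            <xsl:otherwise>
              <xsl:sequence select="my:pos-for(subsequence($lens, 2), $target,
                                               $pos + 1, $running + $lens[1])"/>
            </xsl:otherwise>
          </xsl:choose>
        </xsl:function>

        <!-- then, inside the data element:
             text-id="{generate-id($all-texts[my:pos-for($lens, current-grouping-key(), 1, 0)])}" -->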


  • If I do "jquery sortable" on a contenteditable item(s), I then can't focus mouse anywhere inside con

    - by Matej
    Strangely, this is broken only in Firefox and Opera (IE, Chrome and Safari work as they should). Any suggestion for a quick fix?

        <html>
        <head>
          <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js" type="text/javascript"></script>
          <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.2/jquery-ui.min.js" type="text/javascript"></script>
          <script>
            $(document).ready(function(){
              $('#sortable').sortable();
            });
          </script>
        </head>
        <body>
          <span id="sortable">
            <p contenteditable="true">One apple</p>
            <p>Two pears</p>
            <p>Three oranges</p>
          </span>
        </body>
        </html>
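
    One workaround worth trying (a sketch, untested against those browsers): jQuery UI's sortable has a standard cancel option that stops drags from starting on matched elements, so a click inside the editable paragraph can place the caret instead of being swallowed by the drag handler:

        $(document).ready(function () {
            // drags can start from the other items, but not from inside
            // the contenteditable one, so Firefox/Opera can focus it
            $('#sortable').sortable({
                cancel: '[contenteditable="true"]'
            });
        });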


  • Capturing Drupal7 DOM content before page load for comparison

    - by ehime
    We have an MU (multisite) installation of Drupal 7 here at work, and are trying to temporarily hold back the swarm of bots we receive until we get a chance to load our content. I wrote a quick and dirty script to send 503 headers if we find certain criteria via XPath (this can also be done as a strpos/preg_match if the DOM is not well formed). To get the ball rolling, though, I need to figure out how to either A) hijack the Drupal 7 bootstrap and pull all content through the filter below, or B) ob_flush the content through the filter before it is sent. The issue I am having is figuring out exactly where I can catch the content. I thought that index.php in Drupal 7 would be the place, but I'm a little confused as to where or how I should capture the contents. Here's the script; hopefully someone can point me in the right direction.

        //error_reporting(-1);

        /* start query */
        $dom = new DOMDocument;
        $dom->preserveWhiteSpace = false;
        $dom->Load($_SERVER['PHP_SELF']);
        $xpath = new DOMXPath($dom);

        // if this exists we aren't ready to be read by bots
        $query = $xpath->query(".//*[@id='block-views-about-this-site-block']/div/div/div");
        //or $query = 'klat-badge'; // if this is a string, not DOM
        /* end query */

        if (strpos($query) !== false) {
            // require banlist
            require('botlist.php');
            $str = strtolower('/'.implode('|', array_unique($list)).'/i');
            if (preg_match($str, strtolower($_SERVER['HTTP_USER_AGENT']))) {
                // so tell bots we're broken
                header('HTTP/1.1 503 Service Temporarily Unavailable');
                header('Status: 503 Service Temporarily Unavailable');
                exit;
            }
        }
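
    For option B, one minimal sketch (untested; it assumes Drupal 7's stock index.php, and uses a marker-string check as a stand-in for the XPath above) is to buffer the page in index.php and inspect the finished HTML before anything is sent, which leaves header() still usable:

        define('DRUPAL_ROOT', getcwd());
        require_once DRUPAL_ROOT . '/includes/bootstrap.inc';
        drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);

        ob_start();
        menu_execute_active_handler();
        $html = ob_get_clean();

        require('botlist.php'); // same banlist as above
        $pattern = '/' . implode('|', array_unique($list)) . '/i';

        // marker present => content not ready for bots yet
        if (strpos($html, 'block-views-about-this-site-block') !== false
                && preg_match($pattern, $_SERVER['HTTP_USER_AGENT'])) {
            header('HTTP/1.1 503 Service Temporarily Unavailable');
            exit;
        }
        echo $html;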


  • Appropriate SQL Server Permissions for Developers

    - by BJ Safdie
    After a couple of Google searches and a quick look at questions here, I cannot seem to find what I thought would be a cookbook answer for SQL Server permissions. As I often see in small shops, most developers here were using an admin account for SQL Server while developing. I want to set up roles and permissions that I can assign to developers so that we can get our jobs done, but also do so with the minimum permissions required. Can anyone offer advice on what SQL Server permissions to assign?

    Components:
    - SQL Server 2008
    - SQL Server Reporting Services (SSRS) 2008
    - SQL Server Integration Services (SSIS) 2008

    Platforms:
    - Production
    - Staging/QA
    - Development/Integration

    We are running "Mixed Mode" security because of some legacy apps and networks, but are moving to Windows Auth. I am not sure if that really affects the role setup. I plan to set up access for developers to Prod and Staging/QA DBs as read-only; however, I still want developers to retain the ability to run profiling. We need deployment accounts with higher privilege levels, and we are currently trying to figure out exactly what privileges we need for SSIS package deployments. Within the development server, developers need broad privileges, but I am not sure that just making them all admins is really the best choice. It's hard to believe that no one has published a decent example script that sets up these kinds of roles with a good set of appropriate permissions for developers and deployers. We can probably figure this all out by locking things down and then adding permissions as we discover the need, but that will be way too big a PITA for everyone. Can anyone point me to, or provide, a good exemplar for permissions for these kinds of roles on these kinds of platforms?
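
    As a starting point for the read-only-plus-Profiler piece, a sketch (the group and database names are placeholders, and this is not a vetted security policy): Profiler needs the server-level ALTER TRACE permission, while data access stays per-database:

        -- server level: Profiler requires ALTER TRACE
        USE master;
        CREATE LOGIN [DOMAIN\Developers] FROM WINDOWS;
        GRANT ALTER TRACE TO [DOMAIN\Developers];

        -- per production/QA database: read-only, plus object definitions
        USE ProdDb;
        CREATE USER [DOMAIN\Developers] FOR LOGIN [DOMAIN\Developers];
        EXEC sp_addrolemember N'db_datareader', N'DOMAIN\Developers';
        GRANT VIEW DEFINITION TO [DOMAIN\Developers];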


  • Sync Framework Considerations for Smart Client app

    - by DarkwingDuck
    Microsoft Sync Framework with SQL 2005? Is it possible? The documentation seems to hint that the OOTB providers use SQL 2008 functionality. I'm looking for some quick wins in relation to a sync project.

    The client app will be offline for a number of days. There will be a central server that MUST be SQL Server 2005. I can use .NET 3.5. Basically the client app could go offline for a week; when it comes back online, it needs to sync its data. The good thing is that the data only needs to push to the server. The stuff that syncs back to the client is just lookup data which the client never changes, so I don't care about sync collisions.

    To simplify the scenario: this smart client goes offline and the user surveys data about some observations, entering it into the system. When the laptop is reconnected to the network, it syncs all that data back to the server. There will be other clients doing the same thing, but no one ever touches anyone else's data. Then there are some reports on the server for viewing the data that has been pushed up. This also needs to use ClickOnce.

    My biggest concern is an interim release while a client is offline. This release might require a new field in the database, and a new field to fill in on the survey. Obviously that new field will be nullable, because we can't update old data; that's fine to set as an assumption. But when the client connects up and its local data schema and the server schema don't match, will Sync Framework be able to handle this? After the data is pushed to the server it is discarded locally. Hope my problem makes sense.


  • PHP: Next Available Value in an Array, starting with a non-indexed value

    - by Erik Smith
    I've been stumped on this PHP issue for about a day now. Basically, we have an array of hours in 24-hour format, and an arbitrary value ($hour, also 24-hour). We need to take $hour and get the next available value in the array, starting with the value that immediately follows $hour. The array might look something like:

        $goodHours = array(8, 9, 10, 11, 12, 19, 20, 21);

    Then the hour value might be:

        $hour = 14;

    So, we need some way to know that 19 is the next best time. Additionally, we might also need to get the second, third, or fourth (etc.) available value. The issue seems to be that because 14 isn't a value in the array, there is no index to reference that would let us increment to the next value. To make things simpler, I've taken $goodHours and repeated the values several times just so I don't have to deal with going back to the start (maybe not the best way to do it, but a quick fix). I have a feeling this is something simple I'm missing, but I would be so appreciative if anyone could shed some light. Erik
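
    A sketch of the wrap-around without duplicating the array (the helper name is hypothetical; PHP 5.3+ for the closure):

        function nextGoodHour(array $goodHours, $hour, $n = 1) {
            sort($goodHours);
            // hours strictly after $hour today...
            $later = array_values(array_filter($goodHours, function ($h) use ($hour) {
                return $h > $hour;
            }));
            // ...then wrap around to the early hours of the next day
            $ordered = array_merge($later, array_values(array_diff($goodHours, $later)));
            return $ordered[($n - 1) % count($ordered)];
        }

        $goodHours = array(8, 9, 10, 11, 12, 19, 20, 21);
        echo nextGoodHour($goodHours, 14);    // 19
        echo nextGoodHour($goodHours, 14, 2); // 20
        echo nextGoodHour($goodHours, 22);    // 8 (wraps past midnight)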


  • Should the code being tested compile to a DLL or an executable file?

    - by uriDium
    I have a solution with two projects: one for the production code and another for the unit tests. I did this as per the suggestions I got here on SO. I noticed that the Debug folder includes the production code in executable form. I used NUnit to run the tests after removing the executable, and they all fail trying to find it, so it definitely is trying to load it. I then did a quick read to find out which is better, a DLL or an executable. It seems that a DLL is much faster, as the code shares the host's memory space, whereas communication between separate executables is slower. Unfortunately our production code needs to be an executable, so the unit tests will be slightly slower. I am not too worried about that. But the project also relies on code written in another library, which is also in executable format at the moment. Should the projects that expose some sort of SDK rather be compiled to a DLL, and the projects that use the SDK be compiled to executables?


  • jQuery modal dialog on ajaxStart event

    - by bdl
    I'm trying to use a jQuery UI modal dialog as a loading indicator via the ajaxStart and ajaxStop / ajaxComplete events. When the page fires, an Ajax handler loads some data, and the modal dialog shows just fine. However, it never hides or closes the dialog when the Ajax event is complete. It's a very small bit of data returned from the local server, so the actual Ajax event is very quick. Here's my actual code for the modal div:

        $("#modalwindow").dialog({
            modal: true,
            height: 50,
            width: 200,
            zIndex: 999,
            resizable: false,
            title: "Please wait..."
        })
        .bind("ajaxStart", function(){ $(this).show(); })
        .bind("ajaxStop",  function(){ $(this).hide(); });

    The Ajax event is just a plain vanilla $.ajax({}) GET call. Based on some searching here and on Google, I've tried altering the ajaxStop handler to use $("#modalwindow").close(), $("#modalwindow").destroy(), etc. (#modalwindow spelled out here to give explicit context). I've also tried using the standard $("#modalwindow").dialog({}).ajaxStart(... as well. Should I be binding the events to a different object? Or calling them from within the $.ajax() complete event? I should mention I'm testing on the latest IE8, FF 3.6 and Chrome. All have the same effect.
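
    One likely culprit (a sketch of the fix, not verified against this page): .show()/.hide() act on the inner div rather than the dialog widget, so the overlay never goes away. The dialog API has its own open/close calls, and autoOpen keeps it hidden until the first Ajax request starts:

        $("#modalwindow").dialog({
            modal: true, autoOpen: false, height: 50, width: 200,
            resizable: false, title: "Please wait..."
        })
        .bind("ajaxStart", function () { $(this).dialog("open");  })
        .bind("ajaxStop",  function () { $(this).dialog("close"); });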


  • Is there a better tool than postcat for viewing postfix mail queue files?

    - by Geekman
    So I got a call early this morning about a client needing to see what email they have waiting to be delivered, sitting in our secondary mail server. The link to their main server had been down for two days (still is) and they needed to see their email. So I wrote up a quick Perl script to use mailq in combination with postcat to dump each email for their address into a separate file, tar'd it up and sent it off. Horrible code, I know, but it was urgent. My solution works OK in that it at least gives a raw view, but I thought tonight it would be nice if I had a solution where I could provide their email attachments and maybe remove some "garbage" header text as well. Most of the important emails seem to have a PDF or similar attached. I've been looking around, but the only method of viewing queue files I can see is the postcat command, and I really don't want to write my own parser - so I was wondering if any of you have already done so, or know of a better command to use? Here's the code for my current solution:

        #!/usr/bin/perl
        $qCmd = "mailq | grep -B 2 \"someemailaddress@isp\" | cut -d \" \" -f 1";
        @data = split(/\n/, `$qCmd`);
        $i = 0;
        foreach $line (@data) {
            $i++;
            $remainder = $i % 2;
            if ($remainder == 0) { next; }
            if ($line =~ /\(/ || $line =~ /\n/ || $line eq "") { next; }
            print "Processing: " . $line . "\n";
            `postcat -q $line > $line.email.txt`;
            $subject = `cat $line.email.txt | grep "Subject:"`;
            #print "SUB" . $subject;
            #`cat $line.email.txt > \"$subject.$line.email.txt\"`;
        }

    Any advice appreciated.
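
    Rather than a hand-rolled parser, the message postcat prints can be handed to CPAN's MIME-tools to pull out the attachments. A sketch (untested; postcat's own banner/annotation lines may need stripping before parsing):

        use MIME::Parser;

        my $queue_id = shift @ARGV;
        my $raw      = `postcat -q $queue_id`;

        my $parser = MIME::Parser->new;
        $parser->output_dir('./attachments');   # PDFs etc. land here

        my $entity = $parser->parse_data($raw);
        for my $part ($entity->parts) {
            print $part->mime_type, "\n";        # e.g. application/pdf
        }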


  • Large flags enumerations in C#

    - by LorenVS
    Hey everyone, got a quick question that I can't seem to find anything about. I'm working on a project that requires flag enumerations with a large number of flags (up to 40-ish), and I don't really feel like typing in the exact mask for each enumeration value:

        public enum MyEnumeration : ulong
        {
            Flag1  = 1,
            Flag2  = 2,
            Flag3  = 4,
            Flag4  = 8,
            Flag5  = 16,
            // ...
            Flag16 = 65536,
            Flag17 = 65536 * 2,
            Flag18 = 65536 * 4,
            Flag19 = 65536 * 8,
            // ...
            Flag32 = 65536 * 65536,
            Flag33 = 65536 * 65536 * 2 // right about here I start to get really pissed off
        }

    Moreover, I'm also hoping there is an easy(ier) way for me to control the actual arrangement of bits on different endian machines, since these values will eventually be serialized over a network:

        public enum MyEnumeration : uint
        {
            Flag1  = 1,    // BIG: 0x00000001, LITTLE: 0x01000000
            Flag2  = 2,    // BIG: 0x00000002, LITTLE: 0x02000000
            Flag3  = 4,    // BIG: 0x00000004, LITTLE: 0x03000000
            // ...
            Flag9  = 256,  // BIG: 0x00000010, LITTLE: 0x10000000
            Flag10 = 512,  // BIG: 0x00000011, LITTLE: 0x11000000
            Flag11 = 1024  // BIG: 0x00000012, LITTLE: 0x12000000
        }

    So, I'm kind of wondering if there is some cool way I can set my enumerations up like:

        public enum MyEnumeration : uint
        {
            Flag1 = flag(1), // BOTH: 0x80000000
            Flag2 = flag(2), // BOTH: 0x40000000
            Flag3 = flag(3), // BOTH: 0x20000000
            // ...
            Flag9 = flag(9), // BOTH: 0x00800000
        }

    What I've tried:

        // this won't work because Math.Pow returns double
        // and because C# requires constants for enum values
        public enum MyEnumeration : uint
        {
            Flag1 = Math.Pow(2, 0),
            Flag2 = Math.Pow(2, 1)
        }

        // this won't work because C# requires constants for enum values
        public enum MyEnumeration : uint
        {
            Flag1 = Masks.MyCustomerBitmaskGeneratingFunction(0)
        }

        // this is my best solution so far, but is definitely quite clunky
        public struct EnumWrapper<TEnum> where TEnum
        {
            private BitVector32 vector;
            public bool this[TEnum index]
            {
                // returns whether the index-th bit is set in vector
            }
            // all sorts of overriding using TEnum as args
        }

    Just wondering if anyone has any cool ideas, thanks!
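
    One standard trick, as a sketch: shift expressions are compile-time constants in C#, so every mask can be written without any hand arithmetic:

        using System;

        [Flags]
        public enum MyEnumeration : ulong
        {
            None   = 0,
            Flag1  = 1UL << 0,
            Flag2  = 1UL << 1,
            Flag3  = 1UL << 2,
            // ...
            Flag33 = 1UL << 32,
            Flag40 = 1UL << 39
        }

    On the endianness point: an enum value is just a number, and byte order only exists once the value is serialized, so rearranging the bits in the declaration would not change what goes on the wire; that is the serializer's job.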


  • Returning large collections from WCF Service

    - by Nate Bross
    I'm trying to determine the best approach for building a WCF service, and the area I'm struggling with most is returning lists of objects. The built-in maxMessageSize of 64K seems pretty high, and I really don't want to bump it up (quick googling finds hundreds of places bumping maxMessageSize up to the multi-gigabyte range, which seems foolish). But when I'm returning a collection of objects (~150 items) I am exceeding the default 64K. I'm almost to the point of returning my own class which implements IEnumerable and has properties for hasNext, hasPrevious and PageSize so that I can implement paging on the client side -- but this seems like a lot of code. The other option is to jack up maxMessageSize and hope for the best, but that feels wrong. All other aspects of my service are working great; it's just returning large collections where I'm having issues. For background, there are two types of consumers of this service: UI applications, which will be primarily web and/or WPF applications, and data-processing applications -- .NET console apps and maybe some other non-UI apps. For the UI applications I would like to keep them responsive and keep the message size low; for the console apps it doesn't matter as much, as they are just pulling data down to do processing and pushing it back up to the service.
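
    As a sketch of the paging route (the names are hypothetical, and WCF serializes closed generic data contracts fine, so it is less code than it sounds):

        using System.Collections.Generic;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class Item
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public string Name { get; set; }
        }

        [DataContract]
        public class PagedResult<T>
        {
            [DataMember] public List<T> Items { get; set; }
            [DataMember] public int PageIndex { get; set; }
            [DataMember] public int PageSize { get; set; }
            [DataMember] public int TotalCount { get; set; }
        }

        [ServiceContract]
        public interface IItemService
        {
            // each message stays well under the 64K default
            [OperationContract]
            PagedResult<Item> GetItems(int pageIndex, int pageSize);
        }

    Console apps that want everything can simply loop until PageIndex * PageSize + Items.Count reaches TotalCount, while the UI clients fetch one page at a time.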


  • Rails: Getting rid of "X is invalid" validation errors

    - by DJTripleThreat
    I have a sign-up form that has nested associations/attributes, whatever you want to call them. My hierarchy is this:

        class User < ActiveRecord::Base
          acts_as_authentic
          belongs_to :user_role, :polymorphic => true
        end

        class Customer < ActiveRecord::Base
          has_one :user, :as => :user_role, :dependent => :destroy
          accepts_nested_attributes_for :user, :allow_destroy => true
          validates_associated :user
        end

        class Employee < ActiveRecord::Base
          has_one :user, :as => :user_role, :dependent => :destroy
          accepts_nested_attributes_for :user, :allow_destroy => true
          validates_associated :user
        end

    I have some validation stuff in these classes as well. My problem is that if I try to create a Customer (or Employee, etc.) with a blank form, I get all of the validation errors I should get, plus some generic ones like "User is invalid" and "Customer is invalid". If I iterate through the errors I get something like:

        user.login can't be blank
        User is invalid
        customer.whatever is blah blah blah...etc
        customer.some_other_error etc etc

    Since there is at least one invalid field in the nested User model, an extra "X is invalid" message is added to the list of errors. This gets confusing for my client, so I'm wondering if there is a quick way to suppress these instead of having to filter through the errors myself.
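
    Two things worth checking (behavior differs across Rails versions, so treat this as a sketch): accepts_nested_attributes_for already validates the nested record through autosave, so the explicit validates_associated :user can be the source of the duplicate generic message; and on Rails 3, the generic entry can be dropped after validation via the ActiveModel::Errors API:

        class Customer < ActiveRecord::Base
          has_one :user, :as => :user_role, :dependent => :destroy
          accepts_nested_attributes_for :user, :allow_destroy => true
          # validates_associated :user   # often redundant with nested attributes

          after_validation :drop_generic_user_error

          private

          # removes "User is invalid" but keeps the specific user.* messages
          def drop_generic_user_error
            errors.delete(:user)
          end
        end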


  • Accelerometer gravity components

    - by Dvd
    Hi, I know this question has definitely been solved somewhere many times already; please enlighten me if you know of those solutions. Thanks.

    Quick rundown: I want to compute, from a 3-axis accelerometer, the gravity component on each of its 3 axes. I have used 2-axis free-body diagrams to work out the accelerometer's gravity component in the world X-Z, Y-Z and X-Y planes. But the solution seems slightly off. It's acceptable for the extreme cases when only one accelerometer axis is exposed to gravity, but for a pitch and roll of both 45 degrees, the combined total magnitude is greater than gravity (obtained from Xa^2 + Ya^2 + Za^2 = g^2, where Xa, Ya and Za are the accelerometer readings on its X, Y and Z axes).

    More detail: the device is a Nexus One, and it has a magnetic field sensor for azimuth, pitch and roll in addition to the 3-axis accelerometer. In the world's axes (with Z in the same direction as gravity, and either X or Y pointing to the north pole -- I don't think this matters much?), I assumed my device has a pitch (P) in the Y-Z plane and a roll (R) in the X-Z plane. With that I used simple trig to get:

        Sin(R) = Ax / Gxz
        Cos(R) = Az / Gxz
        Tan(R) = Ax / Az

    There is another such set for pitch, P. Now I defined gravity to have 3 components in the world's axes: a Gxz that is measurable only in the X-Z plane, a Gyz for Y-Z, and a Gxy for X-Y, with

        Gxz^2 + Gyz^2 + Gxy^2 = 2*G^2

    (the 2G is because gravity is effectively counted twice in this definition). Oh, and the X-Y plane produces something more exotic... I'll explain if required later. From these equations I obtained a formula for Az, and removed the tan operations because I don't know how to handle tan(90) calculations (it's infinity?). So my question is: does anyone know whether I did this right or wrong, or can you point me in the right direction? Thanks! Dvd
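
    For comparison, the usual decomposition keeps gravity as a single vector and rotates it, rather than splitting it across the three planes. With pitch P and roll R, one common convention (a sketch; exact signs depend on the platform's axis conventions) is:

        Ax = -G * sin(P)
        Ay =  G * cos(P) * sin(R)
        Az =  G * cos(P) * cos(R)

    which satisfies Ax^2 + Ay^2 + Az^2 = G^2 identically, including at P = R = 45 degrees, whereas the Gxz/Gyz/Gxy construction double-counts gravity (hence the 2*G^2 above).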


  • PHP File Upload, using the FILES superglobal array

    - by jnkrois
    Hello everybody, just a quick question. Let's say I'm setting up an upload feature for a project. I'm using PHP 5, and I was wondering: through the $_FILES superglobal array, do I have access to the "tmp_name" array key, which is the temporary path and name of the file being uploaded, so I can display a preview of the image before moving it to the "uploads" folder on the server? What I had in mind is something like this: use jQuery to detect when the "file" input field changes (when the user selects an image file), then make an Ajax call to upload the file, which will move it to the temp folder even before I use "move_uploaded_file". If I can somehow access the "tmp_name" array key, I could probably put it into an "img" tag to display a preview, and later move the file to its final location when the submit button is clicked. I'm not asking for any code examples or anything, since I'd be thrilled to accomplish this on my own; I just want to find out if there is access to that array key in any way. Thanks in advance.
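
    One caveat worth knowing: tmp_name is readable, but only during the request that received the upload; PHP deletes the temp file when that request ends, so the preview has to be produced inside the Ajax request itself. A sketch of such an endpoint (the file and field names are hypothetical):

        <?php
        // preview.php -- the Ajax upload posts the file here; we read
        // tmp_name while it still exists and return a data URI that the
        // page can drop straight into an <img> src attribute.
        if (isset($_FILES['image']) && is_uploaded_file($_FILES['image']['tmp_name'])) {
            $tmp  = $_FILES['image']['tmp_name'];
            $mime = $_FILES['image']['type']; // client-supplied; re-check with getimagesize() in real code
            header('Content-Type: text/plain');
            echo 'data:' . $mime . ';base64,' . base64_encode(file_get_contents($tmp));
        }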


  • Possible to capture all events in a web browser?

    - by David
    I am working on a pet project and am at the research stage.

    Quick summary: I am trying to intercept all form submits, onclicks, and every single keydown. My library of choice is either jQuery, or jQuery + Prototype.js. I figure I can batch the events up into a queue/stack and send them back to the server in timed batches, to keep performance relatively stable.

    Concerns: form submits and changes would be relatively simple, something like

        $("form :input").bind("change", function() { ... record event ... });

    But how do I ensure I get precedence over the application's handlers? I have a habit of putting return false in a lot of my form handlers when there is a validation issue, and as I understand it, that effectively stops the event in its tracks.

    My project: for my smaller remote clients I will put their products onto a VPS or run them in my home data center. Currently I use a basic authentication system: given a username/password they see the website and then hopefully send me somewhat sane notes on what is broken or should be tweaked. As a better solution, I've written a simple proxy web server that does the above but allows me to have one DNS entry; then, depending on credentials, it makes an internal request, relaying headers and rewriting URLs as needed. Every single text/html or application/* request is compressed and shoved into a sqlite table so I can partially replay what they've done. Now I am shifting to the frontend and would like to capture clicks, keydowns, and submits on everything on the page.
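
    On the precedence concern: handlers registered for the capture phase run before the bubbling handlers that jQuery uses, and a "return false" in an app handler stops propagation and the default action, not the capture phase that already ran. A sketch (modern browsers; IE8 would need an attachEvent fallback, and the /record endpoint is made up):

        var eventQueue = [];

        var types = ['submit', 'click', 'keydown', 'change'];
        for (var i = 0; i < types.length; i++) {
            (function (type) {
                document.addEventListener(type, function (e) {
                    eventQueue.push({ type: type, tag: e.target.tagName, t: +new Date() });
                }, true); // true => capture phase, ahead of the page's handlers
            })(types[i]);
        }

        // flush the batch every few seconds
        setInterval(function () {
            if (eventQueue.length) {
                $.post('/record', { events: JSON.stringify(eventQueue.splice(0)) });
            }
        }, 5000);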


  • Any merit to a lazy-ish juxt function?

    - by NielsK
    In answering a question about a function that maps over multiple functions with the same arguments (A: juxt), I came up with a function that basically took the same form as juxt, but used map:

        (defn could-be-lazy-juxt [& funs]
          (fn [& args]
            (map #(apply %1 %2) funs (repeat args))))

        => ((juxt inc dec str) 1)
        [2 0 "1"]
        => ((could-be-lazy-juxt inc dec str) 1)
        (2 0 "1")
        => ((juxt * / -) 6 2)
        [12 3 4]
        => ((could-be-lazy-juxt * / -) 6 2)
        (12 3 4)

    As posted in the original question, I have little clue about its laziness or performance, but timing in the REPL does suggest something lazy-ish is going on:

        => (time (apply (juxt + -) (range 1 100)))
        "Elapsed time: 0.097198 msecs"
        [4950 -4948]
        => (time (apply (could-be-lazy-juxt + -) (range 1 100)))
        "Elapsed time: 0.074558 msecs"
        (4950 -4948)
        => (time (apply (juxt + -) (range 10000000)))
        "Elapsed time: 1019.317913 msecs"
        [49999995000000 -49999995000000]
        => (time (apply (could-be-lazy-juxt + -) (range 10000000)))
        "Elapsed time: 0.070332 msecs"
        (49999995000000 -49999995000000)

    I'm sure this function is not really that quick (printing the outcome 'feels' about as long in both). Doing a 'take x' on the function only limits the number of functions evaluated, which probably limits its applicability, and limiting the other parameters with 'take' should be just as lazy in normal juxt.

    Is this juxt really lazy? Would a lazy juxt bring anything useful to the table, for instance as a compositing step between other lazy functions? What are the performance (mem / cpu / object count / compilation) implications? Is that why the Clojure juxt implementation is done with a reduce and returns a vector?

    Edit: Somehow things can always be done simpler in Clojure:

        (defn could-be-lazy-juxt [& funs]
          (fn [& args]
            (map #(apply % args) funs)))
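
    Note that the 0.07 ms timings mostly confirm the laziness rather than any speedup: map returns an unrealized lazy seq, so (time ...) stops before + or - is ever applied. Forcing the result gives a fairer comparison (a sketch):

        ;; without doall, only the creation of the lazy seq is timed
        => (time (doall (apply (could-be-lazy-juxt + -) (range 10000000))))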


  • How do I set up FirePHP version 1.0?

    - by jay
    I love FirePHP and I've been using it for a while, but they've put out this massive upgrade and I'm completely flummoxed trying to get it to work. I think I'm copying the "Quick Start" code (kind of guessing at whatever changes are necessary for my server configuration), but for some reason FirePHP's "primary" function, FirePHP::to(), isn't doing anything. Can anyone please help me figure out what I'm doing wrong? Thanks.

        <?php
        define('INSIGHT_IPS', '*');
        define('INSIGHT_AUTHKEYS', '290AA9215205F24E5104F48D61B60FFC');
        define('INSIGHT_PATHS', __DIR__);
        define('INSIGHT_SERVER_PATH', '/doc_root/hello_firephp2.php');
        set_include_path(get_include_path . ":/home8/jayharri/php/FirePHP/lib"); // path to FirePHP library
        require_once('FirePHP/Init.php');
        $inpector = FirePHP::to('page');
        var_dump($inspector);
        $console = $inspector->console();
        $console->log('hello firephp');
        ?>

    Output:

        NULL
        Fatal error: Call to a member function console() on a non-object in /home8/jayharri/public_html/if/doc_root/hello_firephp2.php on line 14


  • Nginx, Apache, MySQL and Memcached on a server with 4 GB RAM. How do I optimize to have enough memory?

    - by TomSawyer
    I have a dedicated server with Nginx proxying for Apache, plus Memcached and MySQL, and 4 GB of RAM. Lately the number of visitors to my site hasn't increased, but the server always becomes overloaded at certain times of day (9 AM - 3 PM). RAM in use climbs second by second until it is full; at that moment the server is overloaded, and I have to kill all the Apache and MySQL services and reboot to get free memory. Then it fills up again -- a terrible circle.

    Here is the RAM in use at the moment: 160 (nginx), 220 (apache), 512 (memcache), 924 (mysql). Process counts: 4 (nginx), 14 (apache), 5 (memcache), 20 (mysql).

    And here's my my.cnf config. Can someone help me optimize it?

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        skip-locking
        skip-networking
        skip-name-resolve
        # enable log-slow-queries
        log-slow-queries = /var/log/mysql-slow-queries.log
        long_query_time=3
        max_connections=200
        wait_timeout=64
        connect_timeout = 10
        interactive_timeout = 25
        thread_stack = 512K
        max_allowed_packet=16M
        table_cache=1500
        read_buffer_size=4M
        join_buffer_size=4M
        sort_buffer_size=4M
        read_rnd_buffer_size = 4M
        max_heap_table_size=256M
        tmp_table_size=256M
        thread_cache=256
        query_cache_type=1
        query_cache_limit=4M
        query_cache_size=16M
        thread_concurrency=8
        myisam_sort_buffer_size=128M
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0

        [mysqldump]
        quick
        max_allowed_packet=16M

        [mysql]
        no-auto-rehash

        [isamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [myisamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [mysqlhotcopy]
        interactive-timeout

        [mysql.server]
        user=mysql
        basedir=/var/lib

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid


  • Developing on both Windows & Linux machines simultaneously

    - by Jamie
    Sorry for the bad title (couldn't think of a better way to describe it). I have a Windows machine which I do development on. However, I have a new project which needs to interact with a Linux system (executing Linux commands, etc.). So obviously I can't do development on my Windows machine... and I don't wish to code on the dev machine, svn commit, and then svn update on the Linux machine. Is there a way where any changes I make on my dev machine will be quickly mirrored to the Linux machine? SVN is not a very quick alternative, and of course some changes will be very minor. Any ideas? A network share, I guess... but that's not very pretty (a bit slow too). As fellow developers, I would like to know if you've been in a similar situation and how you've resolved it. On a further note, I can't just install Ubuntu as my development machine and mirror the commands, applications etc. from the Linux machine, because it's a cluster 'master' machine and therefore has quite a special configuration. Thanks guys! EDIT: I've also thought about having web services on the Linux machine and then just calling them from code, thus separating the platform dependency. What do you think about that too? Thanks.


  • Agile language for 2d game prototypes?

    - by instanceofTom
    Occasionally ( read: when my fiancé allows ) I like to prototype different game or game-like ideas I have. Usually I use Java or C# (not xna yet) because they are the languages I have the most practice with. However I would like to learn something more suited to agile development; a language in which it would be easier to knock out quick prototypes. At my job I have recently been working with looser (weak/dynamically typed) languages, specifically python and groovy, and I think something similar would fit what I am looking for. So, my question is: What languages (and framework/engine) would be good for rapidly developing prototypes of 2d game concepts? A few notes: I don't need blazing fast bitcrunching performance. In this case I would strongly prefer ease of development over performance. I'd like to use a language with a healthy community, which to me means a fair amount of maintained 3rd party, libraries. I'd like the language to be cross-platform friendly, I work on a variety of different operating systems and would like something that is portable with minimum effort. I can't imagine myself using a language with out decent options for debugging and editor syntax highlighting support. Note: If you are aware of a Java or C# library/framework that you think streamlines producing game prototypes I open to learning something new for those languages too


  • How frequently IP packets are fragmented at the source host?

    - by Methos
    I know that if the IP payload exceeds the MTU, then routers usually fragment the IP packet, and all the fragments are finally reassembled at the destination using the IP-ID field, the IP fragment offsets, and the fragmentation flags. The max length of an IP payload is 64K, so it is quite plausible for L4 to hand over a payload close to 64K. If the L2 protocol is Ethernet, which it often is, then the MTU will be about 1500 bytes; hence the IP packet would be fragmented at the source host itself. However, a quick search through the IP implementation in Linux tells me that in recent kernels, L4 protocols are fragmentation-friendly, i.e. they try to save IP the fragmentation work by handing over buffers whose size is close to the MTU. Considering these two facts, I am wondering how frequently IP packets get fragmented at the source host itself. Does it occur sometimes, rarely, or never? Does anyone know if there are exceptions to this rule in the Linux kernel (i.e., are there situations where L4 protocols are not fragmentation-friendly)? How is this handled in other common OSes like Windows? In general, how frequently are IP packets fragmented?


  • Selecting data in clustered index order without ORDER BY

    - by kcrumley
    I know there is no guarantee without an ORDER BY clause, but are there any techniques to tune SQL Server tables so they're more likely to return rows in clustered-index order, without having to specify ORDER BY every single time I want to run a super-quick ad hoc query? For example, would rebuilding my clustered index or updating statistics help? I'm aware that I can't count on a query like:

        select * from AuditLog where UserId = 992

    to return records in the order of the clustered index, so I would never build code into an application based on this assumption. But for simple ad hoc queries, on almost all of my tables, the data consistently comes out in clustered-index order, and I've gotten used to being able to expect the most recent results at the bottom. Out of all the many tables we use, I've only noticed two ever giving me results in an unpredicted order. This is really just an annoyance, but it would be nice to be able to minimize it. In case this is relevant because of page-boundary issues or something like that, I should mention that one of the tables with inconsistent ordering, the AuditLog table, is the longest table we have with a clustered index on an identity column. Also, this database has recently been moved from SQL 2005 to SQL 2008, and we've seen no noticeable change in this behavior.
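
    For what it's worth, the reliable fix is cheap here: with the clustered key on the identity column, an explicit ORDER BY on that column is typically satisfied by the index order itself rather than an extra sort. A sketch (the identity column name is assumed, not from the question):

        SELECT *
        FROM   AuditLog
        WHERE  UserId = 992
        ORDER  BY AuditLogId;  -- hypothetical clustered identity column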


  • finding ALL cycles in a huge sparse matrix

    - by Andy
    Hi there. First of all, I'm quite a Java beginner, so I'm not sure if this is even possible! Basically I have a huge (3+ million node) data source of relational data (i.e. A is friends with B+C+D, B is friends with D+G+Z (but not A - i.e. unmutual), etc.) and I want to find every cycle within this (not necessarily connected) directed graph. I've found this thread (http://stackoverflow.com/questions/546655/finding-all-cycles-in-graph/549402#549402), which pointed me to Donald Johnson's (elementary) cycle-finding algorithm which, superficially at least, looks like it'll do what I'm after (I'm going to try when I'm back at work on Tuesday - thought it wouldn't hurt to ask in the meanwhile!). I had a quick scan through the code of the Java implementation of Johnson's algorithm (in that thread) and it looks like a matrix of relations is the first step, so I guess my questions are:

    a) Is Java capable of handling a 3-million by 3-million matrix? (I was planning on representing A-friends-with-B by a binary sparse matrix.)
    b) Do I need to find every connected subgraph as my first problem, or will cycle-finding algorithms handle disjoint data?
    c) Is this actually an appropriate solution for the problem? My understanding of "elementary" cycles is that in the graph below, rather than picking out A-B-C-D-E-F it'll pick out A-B-F, B-C-D, etc., but that's not the end of the world given the task.

              E
             / \
            D---F
           / \ / \
          C---B---A

    d) If necessary, I can simplify the problem by enforcing mutuality in relations - i.e. A-friends-with-B <== B-friends-with-A - and if really necessary I can maybe cut down the data size, but realistically it is always going to be around the 1 million mark.
    z) Is this a P or NP task?! Am I biting off more than I can chew?

    Thanks all, any help appreciated! Andy
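
    On (a): a dense 3-million-square bit matrix is on the order of a terabyte, far beyond any JVM heap, so whatever implementation is used, the input wants to be an adjacency list sized to the edge count; roughly like this sketch (era-appropriate Java, hypothetical names):

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        class FriendGraph {
            // node id -> ids it points at ("A is friends with B", directed)
            private final Map<Integer, List<Integer>> adj =
                    new HashMap<Integer, List<Integer>>();

            void addEdge(int from, int to) {
                List<Integer> out = adj.get(from);
                if (out == null) {
                    out = new ArrayList<Integer>();
                    adj.put(from, out);
                }
                out.add(to);
            }

            List<Integer> neighbours(int node) {
                List<Integer> out = adj.get(node);
                return out == null ? new ArrayList<Integer>() : out;
            }
        }

    On (b): Johnson's algorithm already begins by splitting the graph into strongly connected components, and every elementary cycle lives entirely inside one component, so disjoint data is handled along the way.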


  • Content Management Systems for Adaptive Content [closed]

    - by andrewap
    Content management systems (CMSs) allow us to easily maintain blogs, news sites, general websites, and so on. Many of them are designed to manage pages of content, and provide tools to organize and customize how that content is displayed on the web. However, as explained by Mark Boulton in his Adaptive Content Management article, and by Karen McGrane in her talk on Adapting Ourselves to Adaptive Content, we are increasingly delivering content not just to the web, but also to other platforms and channels. We need tools to manage pieces of content with meaningful metadata attached. Create once, publish everywhere. The main idea is to store content cleanly, without intertwining it with presentation markup specific to the web. Because pieces of content are compartmentalized semantically, they can easily adapt to fit different platforms and channels. Hence, it's called adaptive content. Let's look at a quick example to compare. Say I manage news articles and events.

    - To create a news article, I would tell the CMS the type of content I'm creating, and be asked to fill in a form with individual fields tailored to news articles (e.g. headline, subtitle, full text, short snippet, and images) -- i.e. pieces of content.
    - With a traditional web publishing tool, I would probably have had to create a new page under News, and then type in and format the news article in a blank WYSIWYG text editor -- i.e. pages of content.

    As you can see, the first design allows me to specify content individually in its smallest semantic unit. When I want to display or consume it, the system can easily provide the pieces I need. So here's my question: is there a CMS that is designed specifically with adaptive content in mind, and that is decoupled from the presentation layer?

    Note: this is not a discussion about the best CMS, or which CMS I should use. I am asking whether a very specific type of tool -- a CMS designed for adaptive content -- exists for developers to use.

