Search Results

Search found 8367 results on 335 pages for 'temporal difference'.

  • Online architecture guide

    - by hunterman
    I am a newbie in gamedev, and I don't know what problems programmers typically run into during development. So, can you advise me on some best practices for starting to build a new online multi-player game backend? I just saw the reddraft server, and I think the Spring library can also do some of what it does. What is the big difference? Do I need to learn more Spring, or should I use servers like reddraft, or write these tools myself? I know that I have a lot of hard studying ahead - the question is, what should I learn now, at the beginning? Thanks.

  • Share on Facebook does not show thumbnail images

    - by matt74tm
    I have a PHP application which has a "Share on Facebook" button that, on the development server, shows the thumbnail images correctly and allows the user to select between them; on the live server, it does NOT show the thumbnail images at all. The relevant portion of the .htaccess file is:

        # Set up caching on media files for 2 days
        <FilesMatch "\.(gif|jpg|jpeg|png|flv)$">
            ExpiresDefault A172800
            Header append Cache-Control "public"
        </FilesMatch>

    I'm using the exact same set of PHP files and .htaccess, but the server configuration is different. What could be causing this? Note that the text appears fine. Edit1: We are also doing some URL rewriting related to images in the .htaccess (on both servers):

        ...
        RewriteRule ^.*/content/image/(.*)$ content/image/$1 [L]
        ...
        RewriteRule ^.*/images/(.*)$ images/$1 [L]
        ...

    Would that be somehow making a difference? Images appear fine all throughout the site. (I posted this question earlier as http://stackoverflow.com/questions/4142597/share-on-facebook-does-not-show-thumbnail-images)

  • Can't load any applications

    - by oshirowanen
    After installing Eclipse, I wanted to enable the global Unity menu for it. Following a guide on the web, I made a change to /usr/lib/gtk-2.0/2.10.0/menuproxies/libappmenu.so (opened with sudo nano), changing "Eclipse" to "Xclipse" inside the file. After making that change, running sudo ldconfig, and even restarting, I can no longer open any application: each application appears on the screen for a split second and then disappears. So far I have noticed that I can no longer load Eclipse or Chrome. Does anyone know how to fix this? I have tried changing "Xclipse" back to "Eclipse" and running sudo ldconfig again, but it makes no difference.

  • NoSQL: JSON documents, distributed indexing and geo-replication arrive in Couchbase, the MongoDB competitor

    NoSQL databases: JSON documents, distributed indexing and geo-replication arrive in Couchbase, the MongoDB competitor. Couchbase Server, the NoSQL database management system, has just received a fairly significant update. Version 2.0 of Couchbase introduces a document storage model and a key-value store, letting the tool take a big step forward in its support for Big Data (large volumes of data). As a reminder, Couchbase is a project originally based on the Apache CouchDB NoSQL system, with the difference that CouchDB's Erlang code has been entirely rewritten in C++, with adjustments and additions taking advantage of the ...

  • Ubuntu 12.10 live-usb won't boot

    - by user109175
    I own an Aleutia "Tango" low power PC. It uses an Intel Atom CPU N2800 @ 1.86GHz. I know there are Linux issues with the "Cedar Trail" graphics on this hardware, but I can happily run Ubuntu 12.04 using the VESA graphics driver. I wanted to try out Ubuntu 12.10, so I created a live USB under 12.04, but I am unable to get it to boot. It gets as far as the GRUB screen, but after that the monitor shuts down. I have tried the F6 "nomodeset" option, but that doesn't make any difference. Does anyone have any knowledge of this problem? Thanks in advance.

  • How to write a network game?

    - by TomWij
    Based on Why is it so hard to develop an MMO?: "Networked game development is not trivial; there are large obstacles to overcome in not only latency, but cheat prevention, state management and load balancing. If you're not experienced with writing a networked game, this is going to be a difficult learning exercise." I know the theory about sockets, servers, clients, protocols, connections and such things. Now I wonder how one can learn to write a network game:

    - How to balance load problems?
    - How to manage the game state?
    - How to keep things synchronized?
    - How to protect the communication and the client from reverse engineering?
    - How to work around latency problems?
    - Which things should be computed locally and which on the server?
    - ...

    Are there any good books, tutorials, sites, interesting articles or other questions regarding this? I'm looking for broad answers, but specific ones are fine too, to learn the difference.
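    Several of those questions meet in one standard pattern: the authoritative server. Here is a minimal, runnable C# sketch of that idea - every name and number in it is invented for illustration, not taken from the quoted question or any particular engine. Clients only submit inputs; the server advances the simulation at a fixed tick rate and remains the single source of truth, which is the usual starting point for cheat prevention, state management and synchronization.

        using System;
        using System.Collections.Concurrent;

        class AuthoritativeServer
        {
            const double TickSeconds = 1.0 / 30.0;   // fixed timestep: 30 simulation ticks per second
            const double MoveSpeed = 5.0;            // world units per second

            // inputs arrive from network thread(s); the tick loop drains them
            readonly ConcurrentQueue<(int PlayerId, double MoveX)> inputs =
                new ConcurrentQueue<(int, double)>();

            double playerX = 0;   // the entire authoritative game state in this toy

            public void EnqueueInput(int playerId, double moveX) => inputs.Enqueue((playerId, moveX));

            public void RunOneTick()
            {
                // the server validates and applies inputs; clients never write
                // state directly, which is the basic defence against cheating
                while (inputs.TryDequeue(out var input))
                    playerX += Math.Max(-1, Math.Min(1, input.MoveX)) * MoveSpeed * TickSeconds;

                // a real server would now serialize a snapshot and broadcast it
                Console.WriteLine($"tick: playerX = {playerX:F3}");
            }

            static void Main()
            {
                var server = new AuthoritativeServer();
                server.EnqueueInput(1, 1.0);                     // pretend a client pressed 'right'
                for (int t = 0; t < 3; t++) server.RunOneTick(); // the fixed-rate loop
            }
        }

    Latency hiding (client-side prediction, interpolation) and load balancing then layer on top of this loop rather than replacing it.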

  • Data Warehouse Workshop

    - by Davide Mauri
    I’m really, really pleased to announce that it’s possible to register for the Data Warehouse Workshop that Thomas Kejser and I developed together.  Several months ago we decided to join forces to create a workshop that would contain not only the theoretical stuff, but also the experience we both have, and all the best practices and lessons learned that can make the difference between a success and a failure when building a Data Warehouse. The first scheduled date is 7 February in Kista (Sweden): http://www.eventzilla.net/web/event?eventid=2138965081 and until 30th November there is the Super Early Bird to save more than 100€ ($150). The workshop will be very similar to the one I delivered at the PASS Summit, with some extra technical stuff since it’s one hour longer. In addition, for this first version both Thomas and I will be present, so it’s a great chance to make sure you super-charge your DW/BI project with insights that aren’t available anywhere else! If you’re in the BI field and you live in Europe, don’t miss this opportunity!

  • jpegexiforient does not seem to work in 12.04

    - by Pointy
    In Ubuntu 12.04, the jpegexiforient command doesn't seem to work. I've got a .jpg file for which exif clearly shows (correct) EXIF orientation information, but running jpegexiforient on the file returns nothing. I don't particularly care about that command in itself, but it makes exifautotran not work. I could (and might) rewrite that script to use exif instead, but I'm wondering if there's something dumb I'm doing wrong. I found a somewhat old bug report, but its suggestion of running exif to update the setting didn't make any difference. Does jpegexiforient work for anybody?

  • Gnome 3 - Old fashioned buttons and menus

    - by vigs1990
    I've upgraded to Gnome 3, and the problem I'm facing is that when I restart, sometimes the menus and buttons look old-fashioned, whereas sometimes they look modern and neat. Here are a few differences between the two snapshots: the menu bar (notice the difference in fonts, and the dark grey colour in Snapshot 1 vs. the light grey in Snapshot 2 in the background); the file navigation bar below the menu bar (notice the 'Home' button there and the left arrow button); and the left-hand side navigation bar (font, background colour and colour of the selected folder). The old-style look affects the GTK aspects of the interface, such as the menus, buttons, mouse pointer etc. Another observation: changing GTK themes using gnome-tweak-tool does NOT work when the old-style look is loaded, but it does work when the regular look is loaded. How can I ensure that the old-fashioned look does not load on boot?

  • How do you show the desktop in a blink in Ubuntu?

    - by e-satis
    We know you can either click on the show desktop icon or use CTRL+ALT+D to ask Ubuntu to show the desktop. Unfortunately, this does not always show the desktop in one action. Sometimes, and this has been true for at least the last 4 versions of the OS, it first brings all the windows to the front, THEN, with a second click, shows you the desktop. This is very annoying, as when you show the desktop it is generally to quickly click on a shortcut. To understand what I'm talking about, open 7 windows, minimize some, bring some to the front, maximize one, then show the desktop. Then do the same on Windows. You'll see the difference.

  • Applications: The Mathematics of Movement, Part 3

    - by TechTwaddle
    Previously: Part 1, Part 2

    As promised in the previous post, this post will cover two variations of the marble move program. The first one, Infinite Move, keeps the marble moving towards the click point, rebounding it off the screen edges and changing its direction when the user clicks again. The second version, Finite Move, is the same as the first except that the marble does not move forever. It moves towards the click point, rebounds off the screen edges and slowly comes to rest. The amount of time that it moves depends on the distance between the click point and the marble.

    Infinite Move

    This case is simple (actually both cases are simple). All we need is the direction information, which is exactly what the unit vector stores. So when the user clicks, you calculate the unit vector towards the click point and then keep updating the marble's position like crazy. And, of course, there is no stop condition. There is a little additional code in the bounds checking conditions: whenever the marble goes off the screen boundaries, we need to reverse its direction. Here is the code for the mouse up event and the UpdatePosition() method:

        //stores the unit vector
        double unitX = 0, unitY = 0;
        double speed = 6;
        //speed times the unit vector
        double incrX = 0, incrY = 0;

        private void Form1_MouseUp(object sender, MouseEventArgs e)
        {
            double x = e.X - marble1.x;
            double y = e.Y - marble1.y;
            //calculate distance between click point and current marble position
            double lenSqrd = x * x + y * y;
            double len = Math.Sqrt(lenSqrd);
            //unit vector along the same direction (from marble towards click point)
            unitX = x / len;
            unitY = y / len;
            timer1.Enabled = true;
        }

        private void UpdatePosition()
        {
            //amount by which to increment marble position
            incrX = speed * unitX;
            incrY = speed * unitY;
            marble1.x += incrX;
            marble1.y += incrY;

            //check for bounds
            if ((int)marble1.x < MinX + marbleWidth / 2)
            {
                marble1.x = MinX + marbleWidth / 2;
                unitX *= -1;
            }
            else if ((int)marble1.x > (MaxX - marbleWidth / 2))
            {
                marble1.x = MaxX - marbleWidth / 2;
                unitX *= -1;
            }

            if ((int)marble1.y < MinY + marbleHeight / 2)
            {
                marble1.y = MinY + marbleHeight / 2;
                unitY *= -1;
            }
            else if ((int)marble1.y > (MaxY - marbleHeight / 2))
            {
                marble1.y = MaxY - marbleHeight / 2;
                unitY *= -1;
            }
        }

    So whenever the user clicks, we calculate the unit vector along that direction and also the amount by which the marble position needs to be incremented. The speed in this case is fixed at 6; you can experiment with different values. Under bounds checking, whenever the marble position goes out of bounds along the x or y direction, we reverse the unit vector along that direction. There is a video of it running in the original post.

    Finite Move

    The code for Finite Move is almost exactly the same as that of Infinite Move, except that the speed is not fixed and there is an end condition, so the marble comes to rest after a while. Code follows:

        //unit vector along the direction of click point
        double unitX = 0, unitY = 0;
        //speed of the marble
        double speed = 0;

        private void Form1_MouseUp(object sender, MouseEventArgs e)
        {
            double x = 0, y = 0;
            double lengthSqrd = 0, length = 0;
            x = e.X - marble1.x;
            y = e.Y - marble1.y;
            lengthSqrd = x * x + y * y;
            //length in pixels (between click point and current marble pos)
            length = Math.Sqrt(lengthSqrd);
            //unit vector along the same direction as vector(x, y)
            unitX = x / length;
            unitY = y / length;
            speed = length / 12;
            timer1.Enabled = true;
        }

        private void UpdatePosition()
        {
            marble1.x += speed * unitX;
            marble1.y += speed * unitY;

            //check for bounds
            if ((int)marble1.x < MinX + marbleWidth / 2)
            {
                marble1.x = MinX + marbleWidth / 2;
                unitX *= -1;
            }
            else if ((int)marble1.x > (MaxX - marbleWidth / 2))
            {
                marble1.x = MaxX - marbleWidth / 2;
                unitX *= -1;
            }

            if ((int)marble1.y < MinY + marbleHeight / 2)
            {
                marble1.y = MinY + marbleHeight / 2;
                unitY *= -1;
            }
            else if ((int)marble1.y > (MaxY - marbleHeight / 2))
            {
                marble1.y = MaxY - marbleHeight / 2;
                unitY *= -1;
            }

            //reduce speed by 3% in every loop
            speed = speed * 0.97f;
            if ((int)speed <= 0)
            {
                timer1.Enabled = false;
            }
        }

    So the only difference is that the speed is calculated as a function of length when the mouse up event occurs. Again, this can be experimented with. Bounds checking is the same as before. In the update and draw cycle, we reduce the speed by 3% in every cycle. Since the starting speed is a function of length (speed = length / 12) and the decay is multiplicative, the time it takes the speed to reach zero grows with the distance between the click point and the marble. Note that the speed is in 'pixels per 40ms' because the timeout value of the timer is 40ms. The readability can be improved by representing speed in 'pixels per second', which would require some more calculations that I leave out as an exercise. There is a video of this second version in the original post.

  • Using "prevent execution of method" flags

    - by tpaksu
    First of all I want to point out my concern with some pseudocode (I think you'll understand better). Assume you have a global debug flag, or a class variable named "debug":

        class a:
            var debug = FALSE

    and you use it to enable debug methods. There are two ways of using it, as far as I know. First, in the calling method:

        method a:
            if debug then call method b

        method b:
            ...

    Second, in the called method itself:

        method a:
            call method b

        method b:
            if not debug then exit
            ...

    And I want to know: is there any file IO or stack-pointer-wise difference between these two approaches? Which usage is better and safer, and why?
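    For what it's worth, the two placements can be made concrete in a short C# sketch (all names invented). Functionally they are equivalent; neither involves file IO. The practical difference is that variant 1 skips the call entirely when debugging is off, while variant 2 checks the flag in exactly one place, at the cost of one extra stack frame that most compilers or JITs can inline away:

        using System;

        class DebugFlagDemo
        {
            static readonly bool debug = false;

            // Variant 1: the caller guards the call, so DumpState is never
            // entered when debugging is off - no call overhead at all.
            static void VariantOne()
            {
                if (debug) DumpState();
            }

            // Variant 2: the callee guards itself, so every caller stays clean
            // and the flag is checked in exactly one place.
            static void VariantTwo()
            {
                DumpState();
            }

            static void DumpState()
            {
                if (!debug) return;   // variant 2's early exit
                Console.WriteLine("debug dump: ...");
            }

            static void Main()
            {
                VariantOne();
                VariantTwo();
                Console.WriteLine("done (no debug output, since debug = false)");
            }
        }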

  • C++: Checking if an object faces a point (within a certain range)

    - by bojoradarial
    I have been working on a shooter game in C++, and am trying to add a feature whereby missiles shot must be within 90 degrees (PI/2 radians) of the direction the ship is facing. The missiles will be shot towards the mouse. My idea is that the ship's angle of rotation is compared with the angle between the ship and the mouse (std::atan2(mouseY - shipY, mouseX - shipX)), and if the difference is less than PI/4 (45 degrees) then the missile can be fired. However, I can't seem to get this to work. The ship's angle of rotation is increased and decreased with the A and D keys, so it is possible that it isn't between 0 and 2*PI, hence the use of fmod() below. Code:

        float userRotation = std::fmod(user->Angle(), 6.28318f);
        if (std::abs(userRotation - missileAngle) > 0.78f)
            return;

    Any help would be appreciated. Thanks!
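    One common cause of "can't get this to work" with comparisons like this is wrap-around: near the 0/2*PI seam, the raw difference between two nearly identical headings can come out close to 2*PI. A sketch of the usual fix - normalizing the signed difference into [-PI, PI] before comparing - written in C# for consistency with the other snippets here (the asker's code is C++, but atan2/sin/cos behave identically there):

        using System;

        class AngleCheck
        {
            // returns the signed difference a - b, normalized into [-PI, PI]
            static double NormalizedDiff(double a, double b)
            {
                double d = a - b;
                return Math.Atan2(Math.Sin(d), Math.Cos(d));
            }

            static void Main()
            {
                // two headings 0.2 rad apart, sitting on either side of the seam
                double ship = 0.1;
                double missile = 2 * Math.PI - 0.1;

                Console.WriteLine(Math.Abs(ship - missile));                // ~6.08: naive diff fails
                Console.WriteLine(Math.Abs(NormalizedDiff(ship, missile))); // ~0.20: correct
            }
        }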

  • Is there any research about daily differences in productivity by the same programmer?

    - by Rice Flour Cookies
    There has been a flurry of activity on the internet discussing a huge difference between the productivity of the best programmers versus the productivity of the worst. Here's a typical Google result when researching this topic: http://www.devtopics.com/programmer-productivity-the-tenfinity-factor/ I've been wondering if there has been any research or serious discussion about differences in day-to-day productivity by the same programmer. I think that personally, there is a huge variance in how much I can get done on a day by day basis, so I was wondering if anyone else feels the same way or has done any research.

  • Severity and relation to occurrence - priority?

    - by user970696
    I have been browsing through some webpages related to testing and found one dealing with testing metrics. It says: "The severity level of a defect indicates the potential business impact for the end user (business impact = effect on the end user x frequency of occurrence)." I do not think this is correct - or what am I missing? Usually it is the priority which is the result of such a calculation (a severe bug that occurs rarely is still severe, but does not have to be fixed immediately). Also, from this description, what is the difference between the effect on the end user and the business impact?
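    To make the asker's point concrete, here is a toy calculation (all numbers invented) using the quoted formula. The product behaves like a priority rather than a severity: a severe-but-rare defect can rank below a trivial-but-constant one.

        using System;

        class DefectImpact
        {
            static void Main()
            {
                // (severity of effect on the end user, occurrences per week)
                var rareCrash = (Severity: 9, PerWeek: 0.1);   // severe, almost never seen
                var loginTypo = (Severity: 1, PerWeek: 500.0); // trivial, seen constantly

                // business impact = effect on the end user x frequency of occurrence
                Console.WriteLine($"rare crash: {rareCrash.Severity * rareCrash.PerWeek}"); // 0.9
                Console.WriteLine($"login typo: {loginTypo.Severity * loginTypo.PerWeek}"); // 500
            }
        }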

  • What is the ideal laptop for creative coding applications?

    - by Jason
    Hi, I am a creative coder using C++ (Cinder and openFrameworks). I am looking to upgrade from my MacBook, which slowed down to about 3fps this morning. My project involves particle systems and fluids reacting to audio analysis data and computer vision data in real-time. SD or HD? No biggie. I have asked many people what computer I need. Ideally, I want a MacBook Pro, but is that enough power? I've been told that I need a desktop for what I am doing, though I'd rather stay portable. I've been told that I should go PC/Linux to get the most power, but I'd rather stay Mac. I've been told that RAM is more of a bottleneck than processor speed, that the graphics card is more important than the CPU, and that code optimizations such as using trees over lists, proper threading, and sending tasks to the GPU make a bigger difference than the hardware! What's true? What do I need? Any suggestions are greatly appreciated.

  • Is imposing the same code format for all developers a good idea?

    - by Stijn Geukens
    We are considering imposing a single standard code format on our project (auto-format with save actions in Eclipse). The reason is that currently there is a big difference in the code formats used by several (10) developers, which makes it harder for one developer to work on the code of another. The same Java file sometimes uses 3 different formats. So I believe the advantage is clear (readability = productivity), but would it be a good idea to impose this? And if not, why not? UPDATE: We all use Eclipse and everyone is aware of the plan. There already is a code format used by most, but it is not enforced, since some prefer to stick to their own format. For the above reasons, some would prefer to enforce it.

  • Steam, launching Team Fortress 2: libGL.so.1: wrong ELF class: ELFCLASS64

    - by stefanhgm
    After I got Steam running with the workaround mentioned here, I've got nearly the same problem when launching Team Fortress 2. After starting it from Steam, the "Launcher" pops up, and after a few seconds it disappears with the following error in the terminal:

        /home/user/Steam/SteamApps/steamuser/Team Fortress 2/hl2_linux: error while loading shared libraries: libGL.so.1: wrong ELF class: ELFCLASS64
        Game removed: AppID 440 "Team Fortress 2", ProcID 5299
        saving roaming config store to 'sharedconfig.vdf'
        roaming config store 2 saved successfully

    Because of the similarity to the workaround I used before, I tried executing export LD_LIBRARY_PATH=/usr/lib32:$LD_LIBRARY_PATH directly before launching the game, but it makes no difference.

  • How to organize unit/integration tests in BDD

    - by whatf
    So finally, after reading a lot, I have understood that the difference between BDD and TDD lies in the T and the B. But coming from a basic TDD background, what I was used to was: first, write unit tests for database models; then write tests for views (at this point, start adding integration tests as well, alongside the unit tests); then write more integration tests for the UI stuff. What would be the correct way to approach BDD? Say I have a simple blog application. Given: when a user logs in, he should be shown a list of all his posts. But for this, I need a user model and another model for blog posts. So how do we go about writing the tests? When do we create fixtures? When do we write the integration (Selenium) tests?
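    A framework-free C# sketch of how that scenario can drive the design (all names are invented): the Given/When/Then steps are written first, and the model ends up containing only what the steps force into existence - which also answers when the fixture appears, namely in the Given step. Selenium-style UI tests would then automate only the few scenarios that genuinely need a browser.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Blog
        {
            readonly List<(string Author, string Title)> posts = new List<(string, string)>();

            public void AddPost(string author, string title) => posts.Add((author, title));

            public IEnumerable<string> PostsBy(string author) =>
                posts.Where(p => p.Author == author).Select(p => p.Title);
        }

        class UserSeesOwnPostsScenario
        {
            static void Main()
            {
                // Given a user with two posts (the fixture lives here)
                var blog = new Blog();
                blog.AddPost("alice", "First post");
                blog.AddPost("alice", "Second post");
                blog.AddPost("bob", "Someone else's post");

                // When the user logs in and views her posts
                var shown = blog.PostsBy("alice").ToList();

                // Then she is shown a list of all (and only) her posts
                if (shown.Count != 2) throw new Exception("scenario failed");
                Console.WriteLine("scenario passed: " + string.Join(", ", shown));
            }
        }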

  • How to determine if a class meets the single responsibility principle?

    - by user1483278
    The Single Responsibility Principle is based on the principle of high cohesion. The difference between the two is that highly cohesive classes feature a set of responsibilities that are strongly related, while classes adhering to SRP have just one responsibility. But how do we determine whether a particular class features a set of responsibilities (and is thus just highly cohesive), or whether it has only one responsibility (and is thus adhering to SRP)? Namely, isn't it more or less subjective, since some may find a class very granular (and as such will consider it as adhering to SRP), while others may find it not granular enough?
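    One practical heuristic is to name each reason the class could be forced to change; if two reasons answer to different concerns, split. A small illustrative C# sketch (invented example, not from the question): a report class that both formats and persists has two reasons to change, and splitting it leaves each piece with one.

        using System;
        using System.IO;

        // Reason to change #1: the presentation of the report
        class ReportFormatter
        {
            public string Format(string[] lines) => string.Join(Environment.NewLine, lines);
        }

        // Reason to change #2: where and how reports are stored
        class ReportWriter
        {
            public void Save(string path, string content) => File.WriteAllText(path, content);
        }

        class Demo
        {
            static void Main()
            {
                var text = new ReportFormatter().Format(new[] { "header", "row 1", "row 2" });
                var path = Path.Combine(Path.GetTempPath(), "report.txt");
                new ReportWriter().Save(path, text);
                Console.WriteLine($"wrote {text.Length} chars to {path}");
            }
        }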

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time.

    When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required.

    As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:

        | Id  | Operation             | Name                   | Bytes | Cost |
        ------------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        | 108K  |  939 |
        |   1 | SORT ORDER BY         |                        | 108K  |  939 |
        |   2 | NESTED LOOPS OUTER    |                        | 108K  |  938 |
        |*  3 | HASH JOIN RIGHT OUTER |                        | 103K  |  762 |
        |   4 | VIEW                  | ALL_EXTERNAL_LOCATIONS | 2058  |    3 |
        |* 20 | HASH JOIN RIGHT OUTER |                        | 73472 |  759 |
        |  21 | VIEW                  | ALL_EXTERNAL_TABLES    | 2097  |    3 |
        |* 34 | HASH JOIN RIGHT OUTER |                        | 39920 |  755 |
        |  35 | VIEW                  | ALL_MVIEWS             | 51    |    7 |
        |  58 | NESTED LOOPS OUTER    |                        | 39104 |  748 |
        |  59 | VIEW                  | ALL_TABLES             | 6704  |  668 |
        |  89 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       | 2025  |    5 |
        | 106 | VIEW                  | ALL_PART_TABLES        | 277   |   11 |

    And the same query on 9i:

        | Id  | Operation             | Name                   | Bytes | Cost |
        ------------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        | 16P   | 55G  |
        |   1 | SORT ORDER BY         |                        | 16P   | 55G  |
        |   2 | NESTED LOOPS OUTER    |                        | 16P   | 862M |
        |   3 | NESTED LOOPS OUTER    |                        | 5251G | 992K |
        |   4 | NESTED LOOPS OUTER    |                        | 4243M | 2578 |
        |   5 | NESTED LOOPS OUTER    |                        | 2669K | 1440 |
        |*  6 | HASH JOIN OUTER       |                        | 398K  | 302  |
        |   7 | VIEW                  | ALL_TABLES             | 342K  | 276  |
        |  29 | VIEW                  | ALL_MVIEWS             | 51    | 20   |
        |* 50 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       | 2043  |      |
        |* 66 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_TABLES    | 1777K |      |
        |* 80 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_LOCATIONS | 1744K |      |
        |* 96 | VIEW                  | ALL_PART_TABLES        | 852K  |      |

    Have a look at the cost column. 10g's overall query cost is 939, and 9i is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan.

    On 9i, no statistics are present on the system tables, so Oracle has to use the Rule Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:

        | Id  | Operation        | Name                   | Bytes | Cost |
        ------------------------------------------------------------------
        |   0 | SELECT STATEMENT |                        | 7587K | 3704 |
        |   1 | SORT ORDER BY    |                        | 7587K | 3704 |
        |*  2 | HASH JOIN OUTER  |                        | 7587K |  822 |
        |*  3 | HASH JOIN OUTER  |                        | 5262K |  616 |
        |*  4 | HASH JOIN OUTER  |                        | 2980K |  465 |
        |*  5 | HASH JOIN OUTER  |                        | 710K  |  432 |
        |*  6 | HASH JOIN OUTER  |                        | 398K  |  302 |
        |   7 | VIEW             | ALL_TABLES             | 342K  |  276 |
        |  29 | VIEW             | ALL_MVIEWS             | 51    |   20 |
        |  50 | VIEW             | ALL_PART_TABLES        | 852K  |  104 |
        |  78 | VIEW             | ALL_TAB_COMMENTS       | 2043  |   14 |
        |  93 | VIEW             | ALL_EXTERNAL_LOCATIONS | 1744K |   31 |
        | 106 | VIEW             | ALL_EXTERNAL_TABLES    | 1777K |   28 |

    That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that this will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input. To try and get some ideas, we asked some Oracle performance specialists to see if they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as a lot of new infrastructure and a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing it to get it working on their system before they could use the product. Another approach was needed.

    All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner & name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be done for the specific query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data we're not using in the hash key on disk, we use very specific memory-efficient data structures to store all the information we need.
    This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, with a memory overhead of roughly 150 bytes per row of data in the result set (for schemas with 10,000 tables, that means an extra 1.4MB of memory being used during population). Next: fun with the 9i dictionary views.
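    To illustrate the shape of that client-side join (an invented C# sketch of the pattern described above, not Red Gate's actual code): each subsidiary view is hashed once into a lookup keyed on the shared join columns, and every base-table row probes those lookups, so nothing is re-hashed per join the way the forced 9i server plan had to.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class ClientSideLeftJoin
        {
            static void Main()
            {
                // stand-ins for rows read from ALL_TABLES and ALL_PART_TABLES
                var allTables = new[] { (Owner: "HR", Name: "EMP"), (Owner: "HR", Name: "DEPT") };
                var allPartTables = new[] { (Owner: "HR", Name: "EMP", PartType: "RANGE") };

                // hash the subsidiary result once, keyed on owner and name
                var partLookup = allPartTables.ToDictionary(
                    p => (p.Owner, p.Name), p => p.PartType);

                // one probe per base row gives a LEFT JOIN with no re-hashing
                foreach (var t in allTables)
                {
                    partLookup.TryGetValue((t.Owner, t.Name), out var partType);
                    Console.WriteLine($"{t.Owner}.{t.Name} -> partitioned: {partType ?? "no"}");
                }
            }
        }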

  • Programmer logbook application?

    - by jsoldi
    I've just released my application to the public, and I'm working on an updated version, but I really think I should keep track of ALL the code changes. If some functionality suddenly starts failing, a history of all the changes I made would make it a lot easier to figure out where I messed it up, in case the problem wasn't already there. The ideal would be a super fast computer with a huge hard drive and an application that automatically saves a backup of the whole project every time I change a line of code, with some file comparison tool that would show me every difference between any two backed-up projects, but that's not really possible for now. So, do you know of any application that makes it easy for a programmer to keep track of the changes made to the source code?

  • My Oracle Support 6.3 - Knowledge Highlights

    - by JanSyss
    My Oracle Support 6.3 was released over the weekend (13-Oct-2012), and with it we released 30+ enhancements and 60+ bug fixes. The most important changes:

    - Search Suggestions now auto-correct spelling errors, offer more suggestions for 'how to' type questions, and make it easier to see the suggested additional terms.
    - Improved Knowledge Base region on the My Oracle Support dashboard: recent searches from this region now retain the search attributions (e.g. pre-selected products or release). Search tip: if the Knowledge Base region doesn't show up as the first region in the right column on the My Oracle Support dashboard, consider personalizing your dashboard to put it first, so that it's right there for searching.
    - Specifying the product you are researching an issue for, optionally with version and task as well, makes searches more precise in the majority of cases; for more information, see my comments in my previous blog on the topic: https://blogs.oracle.com/supportportal/entry/mos_6_2_release
    - Better support for searches on ORA-600 & ORA-700: there is no longer a difference in results between searching on 'ORA-600 [Arg1]' and 'Ora-00600: Internal Error Code, Arguments: [Arg1]'.

  • Callbacks: when to return value, and when to modify parameter?

    - by MarkN
    When writing a callback, when is it best to have the callback return a value, and when is it best to have it modify a parameter? Is there a difference? For example, if we wanted to grab a list of dependencies, when would you do this:

        function GetDependencies() {
            return [{"Dep1": 1.1}, {"Dep2": 1.2}, {"Dep3": 1.3}];
        }

    And when would you do this?

        function RegisterDependencies(register) {
            register.add("Dep1", 1.1);
            register.add("Dep2", 1.2);
            register.add("Dep3", 1.3);
        }
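    The same contrast in a typed language makes the trade-off easier to see. In this invented C# sketch, the first style returns a fresh result that the caller owns outright; the second mutates a collector the caller passed in, which lets several providers accumulate into one register but couples the callee to the parameter's type and lifetime:

        using System;
        using System.Collections.Generic;

        class CallbackStyles
        {
            // Style 1: return a value - no shared state, trivially composable
            static Dictionary<string, double> GetDependencies() =>
                new Dictionary<string, double> { ["Dep1"] = 1.1, ["Dep2"] = 1.2, ["Dep3"] = 1.3 };

            // Style 2: modify a parameter - several callees can fill one register
            static void RegisterDependencies(IDictionary<string, double> register)
            {
                register["Dep1"] = 1.1;
                register["Dep2"] = 1.2;
                register["Dep3"] = 1.3;
            }

            static void Main()
            {
                var returned = GetDependencies();

                var register = new Dictionary<string, double>();
                RegisterDependencies(register);

                Console.WriteLine($"returned: {returned.Count}, registered: {register.Count}");
            }
        }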

  • Strange 401 (Unauthorized) when calling a WCF Service

    - by mipsen
    A WCF service we call from BizTalk using WCF BasicHTTP usually works fine, but all of a sudden it started returning 401 errors for some calls while others continued working as expected - so it could not have been a "real" 401. The difference was the size of the message. One parameter of the service is a rather complex object, and in the cases where we got a 401 it got quite big (containing a lot of customer data), say 5 MB. So we turned on tracking. The messages we traced out were about 20 MB. Not too big for WCF, one should suppose... A bit of research led us to increasing maxItemsInObjectGraph in the behaviours, but that did not help. The service we call is in the same network as we are, and it is a WCF service. So we tried changing from BasicHTTP to net.tcp, and Bingo! OK, we had to use CustomBinding in BizTalk to set all the quotas, etc., but it worked in the end.
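    For reference, the quotas the post alludes to look like this when set in code on the client side (a sketch, assuming a plain .NET client rather than BizTalk's CustomBinding UI; the property names are standard WCF, while the sizes and the commented-out endpoint are invented):

        using System;
        using System.ServiceModel;

        class LargeMessageBinding
        {
            static void Main()
            {
                var binding = new NetTcpBinding(SecurityMode.None);

                // the default is 64 KB, which a 20 MB message blows straight past
                binding.MaxReceivedMessageSize = 64 * 1024 * 1024;
                binding.ReaderQuotas.MaxArrayLength = 64 * 1024 * 1024;
                binding.ReaderQuotas.MaxStringContentLength = 64 * 1024 * 1024;

                Console.WriteLine($"max message size: {binding.MaxReceivedMessageSize} bytes");

                // a real client would continue with something like:
                // var factory = new ChannelFactory<IMyService>(binding,
                //     new EndpointAddress("net.tcp://somehost:8000/MyService"));
            }
        }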
