Search Results

Search found 2693 results on 108 pages for 'keeping up'.

Page 16/108 | < Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >

  • SQL Compare-Like tool for Oracle?

    - by Hitchhiker
    We're a .NET team which uses an Oracle DB for a lot of reasons that I won't get into, but deployment has been a bitch. We are manually keeping track of all the changes to the schema in each version by keeping a record of all the scripts that we run during development. Now, if a developer forgets to check in his script to source control after he has run it - which is not that rare - we get a great big headache at the end of the iteration. I hear that SQL Compare by Red Gate might solve these kinds of issues, but it only supports SQL Server. Does anybody know of a similar tool for Oracle? I've been unable to find one.

    Read the article

  • How do I attach an event handler to the document or window in GWT?

    - by Raph Levien
    In a GWT app centered around a canvas, I'm having trouble keeping focus directed in the right place - particularly for keyboard shortcuts. For now, I've wrapped the canvas in a FocusPanel, but that causes the canvas to not respond to the RequiresResize protocol, because FocusPanel does not plumb that through. A second (related, I think) problem is that the FocusPanel is not getting Ctrl-A keypress events at all (tested in Chrome on Mac). I can get Ctrl-Z and other keys (such as arrows) just fine. In a pure JavaScript world, I think the best answer would be to attach mouse and key handlers to the document or window object (I'm not positive which is better). However, I don't see an obvious way to do this in GWT - in particular, the Document and Window classes seem to lack methods for attaching these kinds of event handlers. Does anyone know how to do it, or, perhaps, how to solve the more general problem of keeping focus on an appropriate widget that can handle keyboard shortcuts?

    Read the article

  • I'm getting reports that my app is draining the battery, but only on HTC Incredible.

    - by BenTobin
    Has anyone else noticed behavior specific to the HTC Incredible that might result in an app keeping the device awake and busy? I've received reports that my app is responsible for keeping the device awake and draining the battery, but only from HTC Incredible users. My app responds to a number of Intents that might be related. It has a service that starts on boot, and it also responds to changes in network connectivity. I do use a WakeLock, but I'm careful to release it when I'm done processing, and the WakeLock times out after a couple minutes whether I'm done or not. I've heard that the Incredible may have some connectivity problems, or that some of them do. Could it be that the network is going up and down frequently, triggering my app? Any other ideas? I've asked a user to send me their logs with aLogCat. If they're willing, that may help narrow down the cause.

    Read the article

  • Detecting and reloading updated application parameters at runtime

    - by VeeKayBee
    I am working on an ASP.NET web application (using .NET 4.5 and C#). The application deals with a lot of units of measure (kg, litre, km, etc.), and based on the selected unit we have to enforce an allowed range. These values should be configurable without much effort. We identified two solutions for this. First, keeping a configuration XML file: if we change the XML file to change some validation, does that require an IIS reset or anything else that can take the site down for some time? Second, keeping the values in the DB and using SQL dependency caching, so that an update to the DB refreshes the cached values; I believe that if we change the values, it will update the cache. How complex is this, and does it affect performance? It would be a great help if there is some other method to achieve this. Thanks in advance.
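
    As an illustration of the second option (values in the DB plus SQL dependency caching), a minimal ASP.NET sketch might look like the following. The dbo.UnitRange table, its columns, and the AppDb connection string name are invented for the example; Service Broker has to be enabled on the database, and SqlDependency.Start may also need to be called once at application startup for the notifications to fire.

        using System.Configuration;
        using System.Data;
        using System.Data.SqlClient;
        using System.Web;
        using System.Web.Caching;

        public static class UnitRangeCache
        {
            // Hypothetical table and columns; adjust to the real schema.
            private const string Query =
                "SELECT UnitCode, MinValue, MaxValue FROM dbo.UnitRange";

            public static DataTable GetRanges()
            {
                var cached = HttpRuntime.Cache["UnitRanges"] as DataTable;
                if (cached != null)
                    return cached;

                string connStr = ConfigurationManager
                    .ConnectionStrings["AppDb"].ConnectionString;   // hypothetical name

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(Query, conn))
                {
                    // The dependency must be created before the command executes.
                    var dependency = new SqlCacheDependency(cmd);

                    conn.Open();
                    var table = new DataTable();
                    table.Load(cmd.ExecuteReader());

                    // The cache entry is evicted automatically when the underlying rows
                    // change, so the next request re-reads the new limits from the DB.
                    HttpRuntime.Cache.Insert("UnitRanges", table, dependency);
                    return table;
                }
            }
        }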

    Read the article

  • Is this plain stupid: GIT Sharing Via DropBox?

    - by yar
    I realize that there are similar questions, but my question is slightly different. I'm wondering whether sharing a bare repository via a synchronized Dropbox folder on multiple computers would work for sharing code via Git. Really what I want to know is: is sharing a Git repo via Dropbox (the repo gets updated on each person's local drive) the same as sharing it from one centralized location, e.g., via SSH, the git protocol, or HTTP? Is this the same as or different from sharing a Git repo via a shared network drive? Note: this is not an empirical question; it seems to work fine. I'm asking whether the way a Git repo is structured is compatible with this way of sharing. EDIT: To clarify/repeat, I'm talking about keeping the Git repository on Dropbox as a bare repository. I'm not talking about keeping the actual files that are under source control in Dropbox.

    Read the article

  • 10 Useful CSS Tips And Tutorials

    - by Jyoti
    CSS is a technology that web designers use every day, yet it is also something that most struggle with. Whether it’s keeping stylesheets for large sites manageable or creating image effects that are cross-browser compatible, there are plenty of things to cause frustration. This article is an attempt to provide you with a [...]

    Read the article

  • How to keep up to date with Programming Blog Aggregators

    - by landal79
    Last week I read a great post by Jeff Atwood, Keeping Up and "Just In Time" Learning, about how to keep up to date. The post refers to Kathy Sierra's list, and the first item, 'Find the best aggregators', caught my attention. I usually look at DZone, IMHO a good aggregator; DZone has voting and tagging. More recently I discovered Java Code Geeks. Are there any other good programming blog aggregators?

    Read the article

  • Windows and SQL Azure Best Practices: Affinity Groups

    - by BuckWoody
    When you create a Windows Azure application, you’ll pick a subscription to put it under. This is a billing container - underneath that, you’ll deploy a Hosted Service. That holds the Web and Worker Roles that you’ll deploy for your applications. Alongside that, you use a Storage Account to create storage for the application. (In some cases, you might choose to use only storage or only Roles - the info here applies anyway.) As you are setting up your environment, you’re asked to pick a “region” where your application will run - you’re given choices like Asia, North America and so on. This is where the hardware that physically runs your code lives. We have lots of fault domains, power considerations and so on to keep that set of datacenters running, but keep in mind that this is where the application lives. You also get this selection for Storage Accounts. When you make new storage, it’s a best practice to put it where your computing is. This makes the shortest path from the code to the data, and then back out to the user. One of the selections for the location is “Anywhere U.S.”. This selection might be interpreted to mean that we will bias towards keeping the data and the code together, but that may not be the case. There is a specific abstraction we created for just that purpose: Affinity Groups. An Affinity Group is simply a name you can use to tie resources together. You can do this in two places - when you’re creating the Hosted Service (shown above) and on its own tree item on the left, called “Affinity Groups”. When you take either of those actions, you’re presented with a dialog box that allows you to specify a name, and then the Region that the name ties the resources to. Now you can select that Affinity Group just as if it were a Region, and your code and data will stay together. That helps keep performance high. Official Documentation: http://msdn.microsoft.com/en-us/library/windowsazure/hh531560.aspx

    Read the article

  • T-SQL Tuesday #5: My First Cube

    - by Kalen Delaney
    It's time for the fifth T-SQL Tuesday, managed this time by Aaron Nelson of SQLVariations. Once again, the deadline came up just too quickly, and I'm on the road this week, so my entry will not be too long. Aaron's topic is reporting and, in keeping with my past posts, this contribution will include a history lesson. Since I first learned SQL, I've always thought of aggregation as a way of producing simple reports. Summary information was frequently all that was needed on an ongoing basis to see...(read more)

    Read the article

  • Visual Studio 2010 and SQLCLR: Some Good, Some Bad

    - by Adam Machanic
    This past week I've been trying out Visual Studio 2010 for SQLCLR development. Verdict: a couple of nice things, a couple not so nice. In the interest of keeping things somewhat positive around here, we'll start with the good stuff: Pre-deployment and post-deployment scripts are built in. This is great, especially if you're working with features such as ordered TVFs, which Visual Studio 2008 never properly supported. In 2010 you can stick the ALTER FUNCTION in a post-deployment script and you'll...(read more)

    Read the article

  • What tools do you use to stay focused?

    - by Peter Turner
    This is related, but I'm thinking about something more like a chastity belt for keeping me from checking programmers.SE or my email every time I compile. Rather than advice like "go take a walk and you'll feel more like coding", I just need something to augment my weak constitution - a net nanny for my geek fetish, I guess. I'll take my answer off the air, and I promise not to check programmers.SE for at least a day.

    Read the article

  • ID number for sites [closed]

    - by Jonathan
    Possible Duplicate: please add a key fields to stackauth results
    It would be easier if each site had an ID; it would help with keeping track of them numerically (which is generally easier and more compact than using name strings), and it would also cope with changes of site names (such as when a site graduates from beta, or decides its name is not quite right during beta). Everything else has IDs, so why not sites?

    Read the article

  • ntpd server always in 'INIT' mode

    - by Deepak
    I'm running an ntpd server on my Ubuntu (10.04) machine, but it always stays in the 'INIT' state, as shown below.

        lyra@ws07475:~$ ntpq -p
             remote           refid      st t  when poll reach   delay   offset  jitter
        ==============================================================================
         europium.canoni .INIT.          16 u     - 1024     0    0.000    0.000   0.000
        lyra@ws07475:~$

    Of course, this means that it is not keeping time. How can I start the ntpd server properly? Please help.

    Read the article

  • How to Create a Realistic Timeline for your Projects

    - by Aditi
    Developing a realistic project timeline is one of the biggest and most challenging tasks for any team. We at JustSkins have learned over time that developing and adhering to a timeline isn't easy, but it is not impossible. From technical glitches to human resource issues, unexpected complications can come up at any time during the project life cycle; however, there are many things you can do to keep the project from going off track. A specific timeline is a very important instrument for time-management planning and for keeping your client informed of progress. Rigid time tracking assures the client that you are committed to achieving specific project milestones on time. The more you work on varied IT projects, the more you learn about the different aspects of a project, and the better your future estimates and timelines become. Make a structure: when estimating the time required to accomplish each task, consider which team members will be involved, and assign the amount of time each individual must put into the project. Define scope and dependencies and set deadlines for accomplishing them. Sometimes working in phases or modules helps you do more in less time. Use a project management tool to systematize the collaboration between team members. Realistic goal setting: one approach is to keep a buffer of a few days to deal with the delays, errors and incorrect-coding issues you are likely to have along the way. It is realistic to keep the delivery date given to the client different from the internal delivery timeline; if a resource is having a hard time finishing a task in the time specified, leave some room to give him a day or two extra to accomplish it. This does not upset client delivery and is the safe way of doing projects. Keep an insightful approach: identify potential problems before they delay your project. To be a great IT manager you have to be honest and diplomatic at the same time, and it is essential to give early notice of potential delays or scope changes to your clients. In situations where delay is inevitable, you should be in a position to provide immediate, on-demand progress reports. Learning from past experience is very important: keep track of the actual time spent on all aspects of your projects, and this will help you create better future estimates and timelines.

    Read the article

  • Good abbreviations for XML ... things

    - by Peter Turner
    I've never been very good at maintaining a coherent set of variable names for interfacing with XML files, because I never name the variables in my interfaces the same way across my source. There are Elements, Attributes, Documents, NodeLists, Nodes, DocumentFragments and other stuff. What's a good scheme for keeping track of all of this in variable names? Is there a standard with regard to Hungarian notation? Do you even put anything in the name signifying that the data is actually XML, or is that bad practice?
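
    For illustration only, a minimal sketch of one possible suffix-based convention using System.Xml might look like this; the settings.xml file name and the server and name element/attribute names are invented for the example, and this is just one scheme rather than a standard.

        using System;
        using System.Xml;

        static class XmlNamingExample
        {
            static void Main()
            {
                // Suffix each variable with the XML type it holds: Doc, Elem, Node(s), Attr, Frag.
                XmlDocument configDoc = new XmlDocument();
                configDoc.Load("settings.xml");                           // hypothetical file

                XmlElement rootElem = configDoc.DocumentElement;
                XmlNodeList serverNodes = rootElem.SelectNodes("server"); // hypothetical element name

                foreach (XmlNode serverNode in serverNodes)
                {
                    XmlAttribute nameAttr = serverNode.Attributes["name"]; // hypothetical attribute
                    Console.WriteLine(nameAttr != null ? nameAttr.Value : "(unnamed)");
                }

                // A fragment built up before being inserted into the document.
                XmlDocumentFragment pendingFrag = configDoc.CreateDocumentFragment();
                pendingFrag.InnerXml = "<server name=\"staging\" />";
                rootElem.AppendChild(pendingFrag);
            }
        }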

    Read the article

  • Rule of thumb for enemy art design in 2D platformer

    - by Terrance
    I'm at the early stages of developing a 2D side-scrolling, open-ended platformer (think Metroidvania) and am having a bit of difficulty finding enemy-design inspiration for something of a sci-fi/nature/fantasy setting that isn't overly familiar or obvious. I haven't seen too many articles, blogs or books that talk about the subject at great length. Is there a fair rule of thumb for coming up with enemy art that keeps your player engaged?

    Read the article

  • T-SQL Tuesday 24: Ode to Composable Code

    - by merrillaldrich
    I love the T-SQL Tuesday tradition, started by Adam Machanic and hosted this month by Brad Shulz. I am a little pressed for time this month, so today’s post is a short ode to how I love saving time with Composable Code in SQL. Composability is one of the very best features of SQL, but sometimes gets picked on due to both real and imaginary performance worries. I like to pick composable solutions when I can, while keeping the perf issues in mind, because they are just so handy and eliminate so much...(read more)

    Read the article

  • Comparing Table Variables with Temporary Tables

    This article presents a comparison of temporary tables with table variables from SQL Server author Wayne Sheffield, including an in-depth look at the differences between them. SQL Server monitoring made easy: "Keeping an eye on our many SQL Server instances is much easier with SQL Response." - Mike Lile. Download a free trial of SQL Response now.

    Read the article

  • C# Performance Pitfall – Interop Scenarios Change the Rules

    - by Reed
    C# and .NET, overall, really do have fantastic performance in my opinion.  That being said, the performance characteristics differ dramatically from native programming, and take some relearning if you’re used to doing performance optimization in most other languages, especially C, C++, and similar.  However, there are times when revisiting tricks learned in native code plays a critical role in performance optimization in C#. I recently ran across a nasty scenario that illustrated to me how dangerous following any fixed rules for optimization can be… The rules in C# when optimizing code are very different from C or C++.  Often, they’re exactly backwards.  For example, in C and C++, lifting a variable out of a loop in order to avoid memory allocations can often have huge advantages.  If some function within a call graph is allocating memory dynamically, and that gets called in a loop, it can dramatically slow down a routine. This can be a tricky bottleneck to track down, even with a profiler.  Looking at the memory allocation graph is usually the key for spotting this routine, as it’s often “hidden” deep in the call graph.  For example, while optimizing some of my scientific routines, I ran into a situation where I had a loop similar to:

        for (int i = 0; i < numberToProcess; ++i)
        {
            // Do some work
            ProcessElement(element[i]);
        }

    This loop was at a fairly high level in the call graph, and often could take many hours to complete, depending on the input data.  As such, any performance optimization we could achieve would be greatly appreciated by our users. After a fair bit of profiling, I noticed that a couple of function calls down the call graph (inside of ProcessElement), there was some code that effectively was doing:

        // Allocate some data required
        DataStructure* data = new DataStructure(num);

        // Call into a subroutine that passed around and manipulated this data highly
        CallSubroutine(data);

        // Read and use some values from here
        double values = data->Foo;

        // Cleanup
        delete data;
        // ...
        return bar;

    Normally, if “DataStructure” was a simple data type, I could just allocate it on the stack.  However, its constructor, internally, allocated its own memory using new, so this wouldn’t eliminate the problem.  In this case, however, I could change the call signatures to allow the pointer to the data structure to be passed into ProcessElement and through the call graph, allowing the inner routine to reuse the same “data” memory instead of allocating.  At the highest level, my code effectively changed to something like:

        DataStructure* data = new DataStructure(numberToProcess);
        for (int i = 0; i < numberToProcess; ++i)
        {
            // Do some work
            ProcessElement(element[i], data);
        }
        delete data;

    Granted, this dramatically reduced the maintainability of the code, so it wasn’t something I wanted to do unless there was a significant benefit.  In this case, after profiling the new version, I found that it increased the overall performance dramatically – my main test case went from 35 minutes of runtime down to 21 minutes.  This was such a significant improvement that I felt it was worth the reduction in maintainability.

    In C and C++, it’s generally a good idea (for performance) to:

    - Reduce the number of memory allocations as much as possible,
    - Use fewer, larger memory allocations instead of many smaller ones, and
    - Allocate as high up the call stack as possible, and reuse memory.

    I’ve seen many people try to make similar optimizations in C# code.  For good or bad, this is typically not a good idea.  The garbage collector in .NET completely changes the rules here. In C#, reallocating memory in a loop is not always a bad idea.  In this scenario, for example, I may have been much better off leaving the original code alone.  The reason for this is the garbage collector.  The GC in .NET is incredibly effective, and leaving the allocation deep inside the call stack has some huge advantages.  First and foremost, it tends to make the code more maintainable – passing around object references tends to couple the methods together more than necessary, and overall increases the complexity of the code.  This is something that should be avoided unless there is a significant reason.  Second, (unlike C and C++) memory allocation of a single object in C# is normally cheap and fast.  Finally, and most critically, there is a large advantage to having short-lived objects.  If you lift a variable out of the loop and reuse the memory, it’s much more likely that the object will get promoted to Gen1 (or worse, Gen2).  This can cause expensive compaction operations to be required, and also lead to (at least temporary) memory fragmentation as well as more costly collections later. As such, I’ve found that it’s often (though not always) faster to leave memory allocations where you’d naturally place them – deep inside of the call graph, inside of the loops.  This causes the objects to stay very short-lived, which in turn increases the efficiency of the garbage collector, and can dramatically improve the overall performance of the routine as a whole.

    In C#, I tend to:

    - Keep variable declarations in the tightest scope possible
    - Declare and allocate objects at usage

    While this tends to serve some of the same goals (reducing unnecessary allocations, etc.), the goal here is a bit different – it’s about keeping the objects rooted for as little time as possible in order to (attempt to) keep them completely in Gen0, or worst case, Gen1.  It also has the huge advantage of keeping the code very maintainable – objects are used and “released” as soon as possible, which keeps the code very clean.  It does, however, often have the side effect of causing more allocations to occur, while keeping the objects rooted for a much shorter time.

    Now – nowhere here am I suggesting that these rules are hard, fast rules that are always true.  That being said, my time spent optimizing over the years encourages me to naturally write code that follows the above guidelines, then profile and adjust as necessary.  In my current project, however, I ran across one of those nasty little pitfalls that’s something to keep in mind – interop changes the rules. In this case, I was dealing with an API that, internally, used some COM objects.  These COM objects were leading to native allocations (most likely C++) occurring in a loop deep in my call graph. Even though I was writing nice, clean managed code, the normal managed-code rules for performance no longer applied.  After profiling to find the bottleneck in my code, I realized that my inner loop, an innocuous-looking block of C# code, was effectively causing a set of native memory allocations in every iteration.  This required going back to a “native programming” mindset for optimization.  Lifting these variables and reusing them took a 1:10 routine down to 0:20 – again, a very worthwhile improvement. Overall, the lessons here are:

    - Always profile if you suspect a performance problem – don’t assume any rule is correct, or that any code is efficient just because it looks like it should be
    - Remember to check memory allocations when profiling, not just CPU cycles
    - Interop scenarios often cause managed code to act very differently from “normal” managed code
    - Native code can be hidden very cleverly inside of managed wrappers
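
    A minimal C# sketch of the two allocation styles the article contrasts is below; the Buffer type, DoWork, and the element list are invented stand-ins for the article's DataStructure and ProcessElement, not the author's actual code. Allocating at usage keeps each scratch object short-lived and Gen0-friendly, while lifting and reusing one instance is the native-style optimization that only paid off in the article because interop was forcing a native allocation on every iteration.

        using System;
        using System.Collections.Generic;

        static class AllocationStyles
        {
            // Hypothetical short-lived helper type standing in for "DataStructure".
            sealed class Buffer
            {
                public double[] Values;
                public Buffer(int size) { Values = new double[size]; }
            }

            // Idiomatic C#: allocate at usage; each object dies young and stays in Gen0.
            static double ProcessAllInline(IReadOnlyList<int> elements)
            {
                double total = 0;
                foreach (int element in elements)
                {
                    var scratch = new Buffer(64);   // short-lived; unreachable after this iteration
                    total += DoWork(element, scratch);
                }
                return total;
            }

            // "Native-style" lifting: reuse one allocation across iterations.
            // Mainly worth it when the allocation is expensive (e.g. hides native/COM allocations).
            static double ProcessWithReuse(IReadOnlyList<int> elements)
            {
                double total = 0;
                var scratch = new Buffer(64);       // long-lived; likely promoted past Gen0
                foreach (int element in elements)
                {
                    Array.Clear(scratch.Values, 0, scratch.Values.Length);
                    total += DoWork(element, scratch);
                }
                return total;
            }

            static double DoWork(int element, Buffer scratch)
            {
                scratch.Values[0] = element;
                return scratch.Values[0] * 2;
            }
        }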

    Read the article
