Search Results

Search found 34513 results on 1381 pages for 'end task'.

Page 653/1381 | < Previous Page | 649 650 651 652 653 654 655 656 657 658 659 660  | Next Page >

  • Windows Azure: follow the Dev Camp live stream on Developpez.com

    The Azure Dev Camp takes place on June 20. The day is an opportunity to discover all of Azure's cloud services (SQL Azure, storage with Windows Azure Storage, back end, etc.) and to learn how to build projects and host applications - or websites - on the platform. The Azure Dev Camp will also cover multi-tier applications and how to "migrate, integrate and extend your existing code and applications with Windows Azure". The day will also address building Web APIs to enrich iOS, Android and, of course, Windows Phone mobile applications. Finally, the event...

    Read the article

  • What sort of leaderboard for my game?

    - by Martin
    I recently published a word game for Windows Phone and I am really happy to have some players. The game is entirely offline, and at the end of a game the player's score is published to a server. I'm collecting the scores to build a leaderboard. Right now, I don't believe the leaderboard I offer to my users is appropriate. I essentially accumulate the scores of all of a user's games for a given day, and that becomes their score. So if Player 1 plays 3 games and gets 100, 150 and 200 points, their score for the day is 450 points. I would like to get your ideas and opinions. How do I keep my game challenging and engaging with a good leaderboard? Should I continue accumulating the score for a day? Should I just keep the best score? Thanks!
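
    For comparison, here is a minimal sketch of the two scoring schemes being weighed (accumulate every game of the day vs. keep only the best game), using made-up player data on the server that collects the scores; none of this is the game's actual code:

        <?php
        // Scores posted today, keyed by player; Player 1's three games from the example.
        $scoresToday = [
            'Player 1' => [100, 150, 200],
            'Player 2' => [300],
            'Player 3' => [120, 110],
        ];

        // Scheme A: accumulate every game played that day (rewards playing a lot).
        $accumulated = array_map('array_sum', $scoresToday);   // Player 1 => 450

        // Scheme B: keep only the best single game (rewards the best performance).
        $bestGame = array_map('max', $scoresToday);            // Player 1 => 200

        // Sort descending by score to produce the day's leaderboard for each scheme.
        arsort($accumulated);
        arsort($bestGame);

    The accumulated scheme favours whoever plays the most games in a day, while the best-game scheme keeps the board reachable for players with less time to play; publishing both boards and letting the player switch between them is also an option.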

    Read the article

  • Is there a way to check if redistributed code has been altered?

    - by onlineapplab.com
    I would like to redistribute my app (PHP) in a way that the user gets the front-end (presentation) layer, which uses the API on my server through a web service. I want the user to be able to alter his part of the app, but at the same time exclude such an altered app from normal support and offer support on a pay-by-the-hour basis. Is there a way to check if the source code was altered? The only solution I can think of is to compute checksums of all the files, send them through my API, and compare them with the original app. Is there any more secure way to do it, so it would be harder for the user to break such protection?
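
    A minimal sketch of that checksum idea, assuming the shipped front end lives under one directory; the paths are illustrative and the helper below is not part of any real API:

        <?php
        // Build a manifest of SHA-256 hashes for every file in the shipped front end.
        // Run the same routine server-side against the pristine release and compare
        // the two manifests to spot files the user has altered.
        function buildManifest(string $rootDir): array
        {
            $manifest = [];
            $files = new RecursiveIteratorIterator(
                new RecursiveDirectoryIterator($rootDir, FilesystemIterator::SKIP_DOTS)
            );
            foreach ($files as $file) {
                if ($file->isFile()) {
                    $relativePath = substr($file->getPathname(), strlen($rootDir) + 1);
                    $manifest[$relativePath] = hash_file('sha256', $file->getPathname());
                }
            }
            ksort($manifest);
            return $manifest;
        }

        // Hypothetical comparison between the original release and a client install.
        $original = buildManifest('/path/to/original-release');
        $client   = buildManifest('/path/to/client-install');
        $altered  = array_keys(array_diff_assoc($client, $original));

    As the question already suspects, this is a tripwire rather than protection: a user who alters the code can also alter whatever computes or reports the hashes, so the comparison mainly serves to decide whether a support request falls under normal support or the pay-by-the-hour option.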

    Read the article

  • Is it possible to remove a particular host key from SSH's known_hosts file?

    - by Kaustubh P
    I usually end up deleting the entire known_hosts file, which causes me no problems. But just out of curiosity, is it possible to remove just a single entry? I opened the known_hosts file, and other than understanding that the file contains fingerprints for a given machine, I don't understand anything. Below is the message I got, which led me to ask this question. Add correct host key in /home/wissen16/.ssh/known_hosts to get rid of this message. Offending key in /home/wissen16/.ssh/known_hosts:1 RSA host key for foo.com has changed and you have requested strict checking. Host key verification failed. Thanks.

    Read the article

  • Ideas for time-keeping in a web-based RPG?

    - by ashy_32bit
    I'm assigned the task of doing the preliminary research for a web-based MMO RPG. My biggest problem here is "web based" vs "MMO RPG". I did some research about time-keeping systems and I'm totally confused as to how something as real-time as an MMO RPG can work on a pull-only (unidirectional) platform like HTTP. I know there is also a turn-based alternative to time keeping, but can it work in an MMO setting? EDIT: Take a battle for example: player A (human) wants to attack player B (also human) in the open. How does it work when player A issues the "attack" command on player B? How do I inform player B that he is being attacked? And then how exactly does the battle go on between the two over an HTTP-based communication channel? To my knowledge this is impossible unless you resort to another technology (HTTP is one-way: you can ask the server and get a response, but the server can't update you unless asked to; this is very well known and simply explained). So I thought maybe I could change the whole time-keeping model from real-time to a more non-real-time model (towards a turn-based RPG, for example) and somehow work around the whole problem of "interactivity". EDIT2: It is not that I don't want to use any server-side technologies. For sure it is not going to work client-side-only, even for the most trivial of multi-player games, let alone an RPG. So sure, there will be a (probably complex) server-side component to it (the so-called Game Engine, I suppose). The problem is not the technology that implements the logic (game mechanics) but the communication technology and how it limits what the game mechanics can do (like how real-time or turn-based the game can be). HTTP is a request-response protocol, meaning you get served only if you ask for it (explicitly send a GET or POST request to the server). An HTTP server cannot inform you when anything of interest happens in the game world unless you refresh the page (as some suggested) or you use some bi-directional tech (totally different animals) like Flash, WebSockets, HTML5, etc. So maybe the question is: Is it possible to implement an MMORPG using only HTML5/PHP and no periodic page refreshes? If so, what would the rules be to make it an MMO RPG? Can't explain it any clearer. Sorry :D
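
    One pattern that fits the "HTML5/PHP, no page refresh" constraint is long polling: the browser keeps a request open that the server only answers when an event (such as "you are being attacked") is waiting, then immediately opens a new one. A minimal sketch under that assumption; fetchPendingEvent() is a hypothetical helper that would check a shared store (database, Redis, etc.) for the player's unread events:

        <?php
        // events.php - long-poll endpoint: hold the request until an event arrives
        // for this player, or give up after ~25 seconds and let the client retry.
        $playerId = (int) $_GET['player'];
        $deadline = time() + 25;

        while (time() < $deadline) {
            $event = fetchPendingEvent($playerId);   // hypothetical helper
            if ($event !== null) {
                header('Content-Type: application/json');
                echo json_encode($event);            // e.g. {"type":"attacked","by":42}
                exit;
            }
            usleep(500000);  // check again in half a second
        }

        http_response_code(204);  // no event this time; the client re-polls right away

    This gives server-push behaviour over plain request-response HTTP, at the cost of one held connection per online player; WebSockets remove that overhead, but they are exactly the kind of bi-directional tech the question is trying to avoid.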

    Read the article

  • Getting into C# and MVC4 coming from Javascript

    - by Stefan V.
    Let me know if this is the wrong place to ask this, but I am trying to get into a back-end/server language coming from a front-end JavaScript background (vanilla, Angular, jQuery and a bit of Node and MongoDB, plus some experience with PHP and MySQL). Why C#? My company's entire server side is MVC4. Occasionally I go through the commits of the back-end guys and have asked them all sorts of questions. A lot of what I have heard and seen just seems appealing. Anyway, I'd rather start with C# first and gradually adopt ASP.NET MVC. Does anybody have advice, tips, recommended books, etc. for somebody trying to learn C# coming from a JS background?

    Read the article

  • Is it bad practice to output from within a function?

    - by Nick
    For example, should I be doing something like:

        <?php
        function output_message($message, $type = 'success') { ?>
            <p class="<?php echo $type; ?>"><?php echo $message; ?></p>
        <?php }
        output_message('There were some errors processing your request', 'error');
        ?>

    or

        <?php
        function output_message($message, $type = 'success') {
            ob_start(); ?>
            <p class="<?php echo $type; ?>"><?php echo $message; ?></p>
        <?php
            return ob_get_clean();
        }
        echo output_message('There were some errors processing your request', 'error');
        ?>

    I understand they both achieve the same end result, but are there benefits to doing it one way over the other? Or does it not even matter?

    Read the article

  • How to add a sound that an enemy AI can hear?

    - by Chris
    Given: a 2D top-down game; tiles are stored in a 2D array; every tile has a dampen property (so bricks might be -50 dB, air might be -1). From this I want to make it so a sound is generated at point x1, y1 and "ripples out". The image below kind of outlines it better. Obviously the end goal is that the AI enemy can "hear" the sound - but if a wall is blocking it, the sound doesn't travel as far. Red is the wall, which has a dampen value of 50 dB. I think by the 3rd game tick I am confusing my maths. What would be the best way of implementing this?
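
    A common way to implement this is a breadth-first flood fill from the sound source: each tile keeps the loudest level that reaches it, reduced by the dampening of the tiles the sound passes through, and propagation stops wherever the level drops to zero. A minimal sketch under those assumptions (the grid and the dampening values here are illustrative, with air costing 1 and a brick wall costing 50 per tile):

        <?php
        // $dampen[$y][$x] is how much a tile absorbs; propagateSound() returns a grid
        // of loudness values. An enemy "hears" the sound if the loudness at its tile
        // is greater than zero.
        function propagateSound(array $dampen, int $srcX, int $srcY, float $volume): array
        {
            $h = count($dampen);
            $w = count($dampen[0]);
            $loudness = array_fill(0, $h, array_fill(0, $w, 0.0));
            $loudness[$srcY][$srcX] = $volume;

            $queue = [[$srcX, $srcY]];
            while ($queue) {
                [$x, $y] = array_shift($queue);
                foreach ([[1, 0], [-1, 0], [0, 1], [0, -1]] as [$dx, $dy]) {
                    $nx = $x + $dx;
                    $ny = $y + $dy;
                    if ($nx < 0 || $ny < 0 || $nx >= $w || $ny >= $h) {
                        continue;
                    }
                    // Entering the neighbour costs that neighbour's dampening value.
                    $level = $loudness[$y][$x] - $dampen[$ny][$nx];
                    if ($level > $loudness[$ny][$nx]) {
                        $loudness[$ny][$nx] = $level;   // louder path found, keep rippling
                        $queue[] = [$nx, $ny];
                    }
                }
            }
            return $loudness;
        }

        // Example: a 5x3 room split by a wall column with a gap on the bottom row.
        $dampen = [
            [1, 1, 50, 1, 1],
            [1, 1, 50, 1, 1],
            [1, 1,  1, 1, 1],
        ];
        $levels = propagateSound($dampen, 0, 0, 60.0);

    Because the wall's dampening is much larger than air's, sound only reaches the far side through the gap (taking the longer, quieter path), which is exactly the "doesn't travel as far through walls" behaviour described above.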

    Read the article

  • DDDSouthWest 4.0 26th May 2012 - Async 20/20 presentation

    - by Liam Westley
    As I wasn't voted in with my nominated sessions, I presented a 20/20 talk on the new async functionality coming with the .NET Framework. This was based on the PechaKucha presentation format, where you have only 20 slides with 20 seconds per slide, and the deck advances automatically. It was the first one I'd attempted, so thanks to the organisers for allowing me to have a go. Although creating the slide deck was definitely easier than for a one-hour presentation, giving the talk was much more stressful by the end of the 6m 40s. I'm not going to upload the slide deck (it won't make much sense), but I did record the audio and used the excellent Camtasia to create a video of the slide deck with that audio, which you can watch over here: https://vimeo.com/42957952

    Read the article

  • SEI Turns Software Architecture into a Game

    - by Bob Rhubart-Oracle
    "Architecture is the decisions that you wish you could get right early in a project." -- Ralph E. Johnson Unless you can see into the future, getting those decisions right comes down to a collection of hard choices. But the Software Engineering Institute (SEI) of Carnegie Mellon University has turned those hard choices into a game. Literally. According to the SEI website: The Hard Choices game is a simulation of the software development cycle meant to communicate the concepts of uncertainty, risk, options, and technical debt. In the quest to become market leader, players race to release a quality product to the marketplace. By the end of the game, everyone has experienced the implications of investing effort to gain an advantage or of paying a price to take shortcuts, as they employ design strategies in the face of uncertainty.   Check it out for yourself: Download the Hard Choices Board Game Download the companion white paper: The Hard Choices Game Explained

    Read the article

  • Using HBase or Cassandra for a token server

    - by crippy
    I've been trying to figure out how to use HBase/Cassandra for a token system we're re-implementing. I can probably squeeze quite a lot more from MySQL, but it just seems we are clinging to the wrong tool for the task because we know it well, and eventually we will hit a wall (as happened to us in other areas). Naturally I started looking into possible NoSQL solutions. The prominent ones (at least in terms of buzz) are HBase and Cassandra. The story is more or less like this:
    - A user can send a gift to other users. Each gift has a list of recipients, or is public, in which case it is limited by count or expiration date.
    - For each gift sent we generate a token that uniquely identifies that gift.
    - For each gift we track the list of potential recipients and their current status relating to that gift (accepted, declined, etc.).
    - A user can request to see all his currently pending gifts.
    - A user can request a list of users he has sent a gift to today (used to limit the number of gifts sent).
    - We need the ability to "dump" or "ignore" expired gifts (x-day-old gifts are considered expired).
    There are some other requirements, but I believe the above covers the essentials. How would I go about modeling that using HBase or Cassandra? Well, the wall was performance: a few tens of millions of records per day over 2 tables, kept for 2 weeks (I wish I could have kept it for more, but there was no way). The response times kept getting slower and slower until eventually we had to start cutting down the number of days we kept data. Caching helps here, but it's not an ideal solution since a big part of the ops are updates. Also, as I hinted in my original post, we use MySQL extensively. We know exactly what it can and can't do, both in naive implementations, followed by native partitioning, and finally by horizontally sharding our dataset at the application level to reside on multiple DB nodes. It can be done, but that's not really what I'm trying to get from this. I asked a very specific question about designing a solution using NoSQL, since it's very hard to find examples of such designs out there. Brainlag, I'm not trying to come off as rude; I actually appreciate a lot that you are the only one who even bothered to respond. But I see it over and over again: people ask questions and others assume they have no idea what they're talking about and give an irrelevant answer. Ignore RDBMS please; the question is about NoSQL.
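
    For the modelling question itself, one common approach is to denormalize around the queries listed above, with one table (or column family) per access path. The sketch below is deliberately conceptual: it only uses PHP arrays to illustrate the row-key and column layout, the names are made up, and no actual HBase or Cassandra API is shown:

        <?php
        // 1. gift_by_token: everything about one gift, keyed by its unique token.
        $giftByToken = [
            'tok-a1b2c3' => [
                'sender'     => 'user:17',
                'created_on' => '2012-06-20',
                'recipients' => ['user:42' => 'pending', 'user:99' => 'accepted'],
            ],
        ];

        // 2. pending_gifts_by_user: answers "show me all my pending gifts".
        //    Rows (or columns) can carry a TTL so expired gifts vanish on their own.
        $pendingGiftsByUser = [
            'user:42' => ['tok-a1b2c3' => 'pending'],
        ];

        // 3. gifts_sent_by_user_day: answers "who did I send gifts to today?" and
        //    backs the daily sending limit; also a natural fit for a TTL.
        $giftsSentByUserDay = [
            'user:17|2012-06-20' => ['tok-a1b2c3' => ['user:42', 'user:99']],
        ];

    Sending a gift writes to all three structures; reads then never need a join or a scan, and expiry can lean on TTLs instead of a cleanup job, which covers the "dump or ignore expired gifts" requirement.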

    Read the article

  • Enterprise Trade Compliance: Changing Trade Operations around the World

    - by John Murphy
    We live in a world of incredible bounty and speed where any product can be delivered anywhere on earth. However, our world is also filled with challenges for business – where volatility, uncertainty, risk, and chaos are our daily companions. To prosper amid the realities of this new world, organizations cannot rely on old strategies; they need new business models. Key trends within the global economy are mandating that companies fully integrate global trade management best practices within broader supply chain management strategies, rather than simply leaving it as a discrete event at the end of the order or procurement cycle. To explain, many companies face a complicated and changing compliance environment. This is directly linked to the speed and configuration of the supply chain, particularly with the explosion of new markets, shorter service cycles and ship times, accelerating rates of globalization and outsourcing, and increasing product complexity and regulation. Read More...

    Read the article

  • Google's process for publishing/modifying pages [closed]

    - by Glenn Dayton
    I'm assuming that a group of people at Google have control of certain sections of google.com, but how does Google make sure that employees don't accidentally or intentionally sabotage the website? Does Google use Adobe Contribute or some similar product for sharing/publishing the website? Do employees use WebDAV, FTP, SFTP, or SSH to publish the site? Since Google has hundreds of thousands of servers, it probably takes some time for them all to update. Do they transmit the new copy of the website to all servers before publishing it at once? This question does not apply to Google editing a database and having a page reflect the database's changes. It applies to employees editing the source code and/or back end of the site.

    Read the article

  • Visual Studio 2010 Launch Events

    - by Jim Duffy
    Don’t miss out on the opportunity to learn about the new features in Visual Studio 2010. Check out the MSDN Events page and find out when the talented folks of the Developer & Evangelism group will be visiting your city to prove to you that /*Life Runs On Code*/. I’ll be attending the Raleigh event June 2, 2010 from 1:00 - 5:00 PM. North Carolina State University, Jane S. McKimmon Conference Center 1101 Gorman St Raleigh North Carolina 27606 United States From the Raleigh Event page: Event Overview Learn about the rich application platforms that Microsoft® Visual Studio® 2010 supports, including Windows® 7, the Web, SharePoint®, Windows Azure™, SQL®, and Windows® Phone 7 Series. From tighter tester and dev collaboration to new ALM tools, there’s a lot that’s new. Here’s what you can expect: Windows Development with Visual Studio 2010 Visual Studio has always been the best way to build compelling visual solutions for Windows. Visual Studio 2010 continues this trend with great new tooling support for Silverlight 4, WPF, and native development. In this demo heavy session, you’ll see how you can build rich Windows applications with Silverlight 4 using new trusted application features including out-of-browser execution, saving to the file system, and even COM Automation. You’ll also see how you can use the new Task Parallel Library from within a WPF application to take advantage of all those cores in today’s modern computers. Web and Cloud Development with Visual Studio 2010 If you build solutions for the web, then this session is for you. Come see how your existing skills move forward with Visual Studio 2010 both for in-house ASP.NET development and the new frontier of the Cloud. In this session, you’ll see improved designers, new HTML and JavaScript snippets, Web Forms enhancements, and how you can quickly build great web sites using Dynamic Data. You’ll see the changes made to testable web sites with MVC 2.0 and how we’ve integrated JQuery support into the platform. You’ll then see how easy it is to leverage your existing code and move to the cloud with Windows Azure. Windows Phone 7 Developer Tools and Platform Overview This session provides an overview of Visual Studio® 2010 for Windows Phone. Learn about the powerful capabilities of this new application platform and the developer tools experience including basic IDE usage, debugging, packaging, and deployment. This session also shows how you can use Microsoft Expression® Blend™ for Windows Phone to build great Silverlight applications. Have a day. :-|

    Read the article

  • Why is multithreading often preferred for improving performance?

    - by user1849534
    I have a question; it's about why programmers seem to love concurrency and multi-threaded programs in general. I'm considering 2 main approaches here: an async approach basically based on signals - or just "async" as many papers and languages such as the new C# 5.0 call it - with a "companion thread" that manages the policy of your pipeline; and a concurrent or multi-threading approach. I will just say that I'm thinking about the hardware here and the worst-case scenario, and I have tested these 2 paradigms myself. The async paradigm is such a clear winner that I don't get why people talk about multi-threading 90% of the time when they want to speed things up or make good use of their resources. I have tested multi-threaded programs and async programs on an old machine with an Intel quad-core that doesn't have a memory controller inside the CPU; the memory is managed entirely by the motherboard. In this case performance is horrible with a multi-threaded application: even a relatively low number of threads like 3-4-5 can be a problem, and the application is unresponsive, slow and unpleasant. A good async approach is, on the other hand, probably not faster, but it's not worse either; my application just waits for the result and doesn't hang, it's responsive, and it scales much better. I have also discovered that a context switch in the threading world is not that cheap in real-world scenarios; it's in fact quite expensive, especially when you have more than 2 threads that need to cycle and swap among each other to be computed. On modern CPUs the situation is not really that different: the memory controller is integrated, but my point is that an x86 CPU is basically a serial machine and the memory controller works the same way as on the old machine with an external memory controller on the motherboard. The context switch is still a relevant cost in my application, and the fact that the memory controller is integrated, or that the newer CPUs have more than 2 cores, is no bargain for me. From what I have experienced, the concurrent approach is good in theory but not that good in practice; with the memory model imposed by the hardware it's hard to make good use of this paradigm, and it also introduces a lot of issues, ranging from the use of my data structures to the joining of multiple threads. Also, both paradigms do not offer any guarantee about when the task or the job will be done at a certain point in time, which makes them really similar from a functional point of view. Given the x86 memory model, why do the majority of people suggest using concurrency with C++ and not just an async approach? Also, why not consider the worst-case scenario of a computer where the context switch is probably more expensive than the computation itself?

    Read the article

  • Using BizTalk to bridge SQL Job and Human Intervention (Requesting Permission)

    - by Kevin Shyr
    I start off the process with either a BizTalk Scheduler (http://biztalkscheduledtask.codeplex.com/releases/view/50363) or a manual file drop of the XML message. The manual file drop is there to allow the SQL Job to call a "File Copy" SSIS step to copy the trigger file for the next process, and it allows the SQL Job to be linked back into BizTalk processing. The Process Trigger XML looks like the following. It is basically the configuration hub of the business process:

        <ns0:MsgSchedulerTriggerSQLJobReceive xmlns:ns0="urn:com:something something">
          <ns0:IsProcessAsync>YES</ns0:IsProcessAsync>
          <ns0:IsPermissionRequired>YES</ns0:IsPermissionRequired>
          <ns0:BusinessProcessName>Data Push</ns0:BusinessProcessName>
          <ns0:EmailFrom>[email protected]</ns0:EmailFrom>
          <ns0:EmailRecipientToList>[email protected]</ns0:EmailRecipientToList>
          <ns0:EmailRecipientCCList>[email protected]</ns0:EmailRecipientCCList>
          <ns0:EmailMessageBodyForPermissionRequest>This message was sent to request permission to start the Data Push process.  The SQL Job to be run is WeeklyProcessing_DataPush</ns0:EmailMessageBodyForPermissionRequest>
          <ns0:SQLJobName>WeeklyProcessing_DataPush</ns0:SQLJobName>
          <ns0:SQLJobStepName>Push_To_Production</ns0:SQLJobStepName>
          <ns0:SQLJobMinToWait>1</ns0:SQLJobMinToWait>
          <ns0:PermissionRequestTriggerPath>\\server\ETL-BizTalk\Automation\TriggerCreatedByBizTalk\</ns0:PermissionRequestTriggerPath>
          <ns0:PermissionRequestApprovedPath>\\server\ETL-BizTalk\Automation\Approved\</ns0:PermissionRequestApprovedPath>
          <ns0:PermissionRequestNotApprovedPath>\\server\ETL-BizTalk\Automation\NotApproved\</ns0:PermissionRequestNotApprovedPath>
        </ns0:MsgSchedulerTriggerSQLJobReceive>

    Every node of this schema was promoted to a distinguished field so that the values can be used for decision making in the orchestration. The first decision made is on the "IsPermissionRequired" field. If permission is required (IsPermissionRequired=="YES"), BizTalk will use the configuration info in the XML trigger to format the email message. Here is the snippet of how the email message is constructed:

        SQLJobEmailMessage.EmailBody =
            new Eai.OrchestrationHelpers.XlangCustomFormatters.RawString(
                MsgSchedulerTriggerSQLJobReceive.EmailMessageBodyForPermissionRequest +
                "<br><br>" +
                "By moving the file, you are either giving permission to the process, or disapprove of the process." +
                "<br>" +
                "This is the file to move: \"" + PermissionTriggerToBeGenereatedHere + "\"<br>" +
                "(You may find it easier to open the destination folder first, then navigate to the sibling folder to get to this file)" +
                "<br><br>" +
                "To approve, move(NOT copy) the file here: " + MsgSchedulerTriggerSQLJobReceive.PermissionRequestApprovedPath +
                "<br><br>" +
                "To disapprove, move(NOT copy) the file here: " + MsgSchedulerTriggerSQLJobReceive.PermissionRequestNotApprovedPath +
                "<br><br>" +
                "The file will be IMMEDIATELY picked up by the automated process.  This is normal.  You should receive a message soon that the file is processed." +
                "<br>" +
                "Thank you!"
            );
        SQLJobSendNotification(Microsoft.XLANGs.BaseTypes.Address) = "mailto:" + MsgSchedulerTriggerSQLJobReceive.EmailRecipientToList;
        SQLJobEmailMessage.EmailBody(Microsoft.XLANGs.BaseTypes.ContentType) = "text/html";
        SQLJobEmailMessage(SMTP.Subject) = "Requesting Permission to Start the " + MsgSchedulerTriggerSQLJobReceive.BusinessProcessName;
        SQLJobEmailMessage(SMTP.From) = MsgSchedulerTriggerSQLJobReceive.EmailFrom;
        SQLJobEmailMessage(SMTP.CC) = MsgSchedulerTriggerSQLJobReceive.EmailRecipientCCList;
        SQLJobEmailMessage(SMTP.EmailBodyFileCharset) = "UTF-8";
        SQLJobEmailMessage(SMTP.SMTPHost) = "localhost";
        SQLJobEmailMessage(SMTP.MessagePartsAttachments) = 2;

    After the permission request email is sent, the next step is to generate the actual Permission Trigger file. A correlation set is used here on SQLJobName and a newly generated GUID field:

        <?xml version="1.0" encoding="utf-8"?>
        <ns0:SQLJobAuthorizationTrigger xmlns:ns0="somethingsomething">
          <SQLJobName>Data Push</SQLJobName>
          <CorrelationGuid>9f7c6b46-0e62-46a7-b3a0-b5327ab03753</CorrelationGuid>
        </ns0:SQLJobAuthorizationTrigger>

    The end user (the human intervention piece) will either grant permission for this process or deny it by moving the Permission Trigger file to either the "Approved" folder or the "NotApproved" folder. A parallel Listen shape is waiting for either response. The next set of steps decides how the SQL Job is to be called, or whether it is called at all. If permission is denied, the orchestration simply sends out a notification. If permission is granted, then the flag (IsProcessAsync) in the original Process Trigger is used. The synchronous part is not really synchronous, but a loop timer that checks the status within the calling stored procedure (for more information, check out my previous post: http://geekswithblogs.net/LifeLongTechie/archive/2010/11/01/execute-sql-job-synchronously-for-biztalk-via-a-stored-procedure.aspx). If it's async, then the sp starts the job and BizTalk sends out an email. And of course, some error notification. Footnote: The next version of this orchestration will have an additional parallel line near the Listen shape with a Delay built in and a Loop to send out a daily reminder if no response has been received from the end user. The synchronous part is used to gather results and execute a data clean-up process so that the SQL Job can be re-tried. There are many possibilities here.

    Read the article

  • Why would more CPU cores on virtual machine slow compile times?

    - by Sid
    [edit#2] If anyone from VMware can hit me up with a copy of VMware Fusion, I'd be more than happy to do the same as a VirtualBox vs VMware comparison. Somehow I suspect the VMware hypervisor will be better tuned for hyperthreading (see my answer too). I'm seeing something curious. As I increase the number of cores on my Windows 7 x64 virtual machine, the overall compile time increases instead of decreasing. Compiling is usually very well suited to parallel processing, as in the middle part (post dependency mapping) you can simply call a compiler instance on each of your .c/.cpp/.cs/whatever files to build partial objects for the linker to take over. So I would have imagined that compiling would scale very well with the number of cores. But what I'm seeing is:
    - 8 cores: 1.89 sec
    - 4 cores: 1.33 sec
    - 2 cores: 1.24 sec
    - 1 core: 1.15 sec
    Is this simply a design artifact of a particular vendor's hypervisor implementation (type 2: VirtualBox in my case), or something more pervasive across VMs that keeps hypervisor implementations simpler? With so many factors, I seem to be able to make arguments both for and against this behavior - so if someone knows more about this than me, I'd be curious to read your answer. Thanks, Sid. [edit: addressing comments] @MartinBeckett: Cold compiles were discarded. @MonsterTruck: Couldn't find an open source project to compile directly. Would be great, but I can't screw up my dev env right now. @Mr Lister, @philosodad: I have 8 hw threads, using VirtualBox, so it should be a 1:1 mapping without emulation. @Thorbjorn: I have 6.5 GB for the VM and a smallish VS2012 project - it's quite unlikely that I'm swapping in/out and thrashing the page file. @All: If someone can point to an open source VS2010/VS2012 project, that might be a better community reference than my (proprietary) VS2012 project. Orchard and DNN seem to need environment tweaking to compile in VS2012. I really would like to see if someone with VMware Fusion also sees this (to compare VMware vs VirtualBox). Test details:
    - Hardware: MacBook Pro Retina; CPU: Core i7 @ 2.3 GHz (quad core, hyper-threaded = 8 cores in Windows Task Manager); Memory: 16 GB; Disk: 256 GB SSD
    - Host OS: Mac OS X 10.8
    - VM type: VirtualBox 4.1.18 (type 2 hypervisor)
    - Guest OS: Windows 7 x64 SP1
    - Compiler: VS2012 compiling a solution with 3 C# Azure projects
    - Compile times measured by a VS2012 plugin called 'VSCommands'
    - All tests run 5 times, first 2 runs discarded, last 3 averaged

    Read the article

  • Headaches using distributed version control for traditional teams?

    - by J Cooper
    Though I use and like DVCS for my personal projects, and can totally see how it makes managing contributions to your project from others easier (e.g. your typical Github scenario), it seems like for a "traditional" team there could be some problems over the centralized approach employed by solutions like TFS, Perforce, etc. (By "traditional" I mean a team of developers in an office working on one project that no one person "owns", with potentially everyone touching the same code.) A couple of these problems I've foreseen on my own, but please chime in with other considerations. In a traditional system, when you try to check your change in to the server, if someone else has previously checked in a conflicting change then you are forced to merge before you can check yours in. In the DVCS model, each developer checks in their changes locally and at some point pushes to some other repo. That repo then has a branch of that file that 2 people changed. It seems that now someone must be put in charge of dealing with that situation. A designated person on the team might not have sufficient knowledge of the entire codebase to be able to handle merging all conflicts. So now an extra step has been added where someone has to approach one of those developers, tell him to pull and do the merge and then push again (or you have to build an infrastructure that automates that task). Furthermore, since DVCS tends to make working locally so convenient, it is probable that developers could accumulate a few changes in their local repos before pushing, making such conflicts more common and more complicated. Obviously if everyone on the team only works on different areas of the code, this isn't an issue. But I'm curious about the case where everyone is working on the same code. It seems like the centralized model forces conflicts to be dealt with quickly and frequently, minimizing the need to do large, painful merges or have anyone "police" the main repo. So for those of you who do use a DVCS with your team in your office, how do you handle such cases? Do you find your daily (or more likely, weekly) workflow affected negatively? Are there any other considerations I should be aware of before recommending a DVCS at my workplace?

    Read the article

  • Make the Taskbar Buttons Switch to the Last Active Window in Windows 7

    - by The Geek
    The new Windows 7 taskbar’s Aero Peek feature, with the live thumbnails of every window, is awesome… but sometimes you just want to be able to click the taskbar button and have the last open window show up instead. Here’s a quick hack to make it work better. To better understand the problem, imagine having nine windows of the same type open on your screen, but you are primarily working in just one of the windows at a time. So every time you want to switch back, you have to click the taskbar button, and then choose the one you are using from the list, which can be pretty annoying… Now if you know your Windows 7 shortcuts, you’d know that you can simply hold down the Ctrl key while clicking on the taskbar button, and the last window will show up. In fact, you can keep holding down the Ctrl key and keep clicking, and Windows will cycle through the open windows. It’s a useful shortcut, but hardly something you want to do every single time. Instead, we’ll use a quick registry hack to make the normal click switch to the last open window—if you still want to see the thumbnail list, just hover your mouse over the button for half a second to see the full list. Manual Registry Hack for Last Active Window Open up regedit.exe through the start menu search or run box, and then head down to the following registry key: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced Once you’re there, create a new 32-bit DWORD value on the right hand side, give it the name LastActiveClick, and set the value to 1. Once you are done, it should look something like this: Once you are done, you’ll have to log off and back on, or you can kill Explorer.exe through Task Manager and re-open it. Download the Registry Hack Instead Since you probably don’t feel like registry hacking, we’ve provided you an easy downloadable version. You can simply download the file, extract it, and then double-click on the LastActiveClick.reg file. Once you are done, you’ll have to log off and back on, just like with the manual registry hack. Download LastActiveClick Registry Hack from howtogeek.com

    Read the article

  • Scheme of work contract

    - by Tommy
    I'm in the process of setting up a (one-man) company and got to an item on my list: "contracts and insurance". I will primarily be offering custom software development, but may also offer some "open source solutions" - install, configure/manage, etc. I'm not sure how to approach a contract with a potential customer. I want to be flexible and offer the customer rights over the product (when not using open source, of course), but I obviously want/need to be able to reuse code already written and any future work. Is this possible, or is it just something that people do but strictly speaking shouldn't? Is there a standard freelancing/contracting developer agreement? Going to a lawyer I'm sure is an answer, but a very expensive one! If not, do you end up with a fresh contract for each job/client and lots of trips to solicitors?

    Read the article

  • At $20/month Windows Azure hosts my website with 99.97% uptime

    - by Gopinath
    A couple of years ago, reliable and decently performing Windows hosting was not affordable for many enthusiastic developers who wanted to try a startup idea or build a hobby site. I tried to start an ASP.NET website a few years ago to provide services like mobile tracing and vehicle tracing, but due to the high cost of Windows hosting I developed those services in PHP (not an easy task for a .NET developer) and hosted them on Linux servers. But with the recent evolution of Windows Azure, hosting ASP.NET websites on highly reliable servers is affordable. Today anyone can host a responsive and highly available ASP.NET website for just $20/month using Windows Azure. My website coziie.com runs on Windows Azure and serves close to a quarter million visitors a month with 99.97% uptime, and most page load times are under 3 seconds. All I spend to run this website is around $20; translated to Indian rupees, that's roughly Rs. 1000. The web server of coziie.com is powered by a single Extra Small Web role instance and the back end is powered by a SQL Azure instance. Azure is quite impressive to provide 99.97% uptime. Response times during peaks are around 3 seconds, and under normal load around 1.5 seconds. Here is the uptime report provided by Royal Pingdom over the last year. For just $20/month, Windows Azure takes care of the following apart from hosting:
    - Patches the Windows OS to the latest version
    - Upgrades ASP.NET to the latest version - coziie.com is running on ASP.NET MVC 3 and soon I'll upgrade it to ASP.NET MVC 4
    - Hosts data on the latest and best version of the SQL Server database
    - SQL Azure maintains 3 copies of the database and automatically recovers in case of server failures and disasters; I never worry about database backups/restores
    - Provides a staging environment for deploying applications for testing and moving them to production - I upgrade twice a month on average
    With Windows Azure I no longer focus on server maintenance or data backups. They are taken care of by the Microsoft team and I just focus on building my website. I wish there were a low-cost Linux version of Windows Azure so that I could stop worrying about server maintenance for this blog! If you are looking for Windows hosting, look no further than Windows Azure. If you find $20/month a bit expensive to start with, you may explore Azure Websites (a sort of shared hosting environment), which is free to start with; as your traffic grows you can move to paid hosting.

    Read the article

  • Web development for people who mainly do client side..

    - by kamziro
    Okay, I'm sure there are a lot of us that have plenty of experience developing C++/OpenGL/Objective-C on the iPhone, Java on Android, Python games, etc. (any client-side stuff) while having little to no experience with web-based development. So what skill set should one learn in order to be able to work on web projects - say, to make a Facebook clone (I kid), or maybe a startup that specializes in connecting random fashionistas with pics, etc.? I actually do have some experience with C#/VB.NET back-end development from a while back, but as part of a team, and I had a lot of support from the senior devs. Is C# considered a decent web development language?

    Read the article

  • Using RegEx's in Multi-Channel Funnels in Google Analytics

    - by Rob H
    For some reason, I can't get my multi-channel funnel, which uses RegExes in the path steps, to work - it keeps coming back with no data. There are a few variables which may be holding things up, but I can't figure out the origin of the problem, nor a solution. Here's the situation:
    - The funnel is tracking conversions, defined as when a user completes 4 steps to sign up
    - Steps are not "required"
    - The default URL is set to https://example.com
    - There is a 302 redirect set up on our site that leads from http://example.com to https://example.com
    - Within the funnel, steps switch from non-secure pages (unless the browser is set to secure browsing) to secure pages once the user moves from the landing page to the second page of the sign-up process (an account placeholder has been created)
    - The URL at that point contains the publisher number variable within (but not at the end of) the URL
    - My RegExes are all properly written, as tested on rubular.com
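
    For illustration only (the actual step URLs aren't shown in the question), here is the kind of pattern that matches a step with the publisher number embedded in the middle of the path, tested the way rubular would test it but in PHP; the path structure is made up:

        <?php
        // Hypothetical step URL with the publisher number (12345) mid-path.
        $stepUrl = '/signup/publisher/12345/confirm';

        // \d+ matches any publisher number in that position.
        $pattern = '~^/signup/publisher/\d+/confirm$~';

        var_dump(preg_match($pattern, $stepUrl) === 1);  // bool(true)

    One thing worth double-checking in this setup: rubular.com evaluates Ruby regexes, while Google Analytics uses RE2-style matching, so a pattern that relies on features like lookarounds can pass on rubular and still fail or behave differently inside GA.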

    Read the article

  • How to disable OSD notifications for specific email addresses?

    - by Zuul
    I receive several logs per day via email, either from testing operations, server status, or backup summaries. Can I somehow disable the notification balloon for a specific email address, email subject, or any other possible filter, so as not to be constantly notified about successful operations? I've looked within the Notify OSD Configuration, but couldn't find any option to this end. Since the email comes with different information when errors occur, I already have filters within Thunderbird to move error logs to a folder that I keep tabs on... If relevant, I'm using Ubuntu 12.04, fully updated, and Thunderbird 15.0.1.

    Read the article

  • Emaroo 1.4.0 Released

    - by WeigeltRo
    Emaroo is a free utility for browsing the most recently used (MRU) lists of various applications. Quickly open files, jump to their folder in Windows Explorer, copy their path - all with just a few keystrokes or mouse clicks. tl;dr: Emaroo 1.4.0 is out, go download it at www.roland-weigelt.de/emaroo. Why Emaroo? Let me give you a few examples. Let's assume you have pinned Emaroo to the first spot on the taskbar so you can start it by hitting Win+1. To start one of the most recently used Visual Studio solutions you type Win+1, [maybe arrow key down a few times], Enter. This means that you can start the most recent solution simply with Win+1, Enter. What else? If you want to open an Explorer window at the file location of the solution, you type Ctrl+E instead of Enter. If you know that the solution contains "foo" in its name, you can type "foo" to filter the list. Because this is not a general-purpose search like e.g. the Search charm, but instead operates only on the MRU list of a single application, you usually have to type only a few characters until you can press Enter or Ctrl+E. Ctrl+C copies the file path of the selected MRU item, Ctrl+Shift+C copies the directory. If you have several versions of Visual Studio installed, the context menu lets you open a solution in a higher version. Using the context menu, you can also open a Visual Studio solution in Blend. So far I have only mentioned Visual Studio, but Emaroo knows about other applications, too. It remembers the last application you used, and you can change between applications with the left/right arrow or accelerator keys. Press F1 or click the Emaroo icon (the tab to the right) for a quick reference. Which applications does Emaroo know about? Emaroo knows the MRU lists of:
    - Visual Studio 2008/2010/2012/2013
    - Expression Blend 4, Blend for Visual Studio 2012, Blend for Visual Studio 2013
    - Microsoft Word 2007/2010/2013
    - Microsoft Excel 2007/2010/2013
    - Microsoft PowerPoint 2007/2010/2013
    - Photoshop CS6
    - IrfanView (most recently used directories)
    - Windows Explorer (directories most recently typed into the address bar)
    Applications that are not installed aren't shown, of course. Where can I download it? On the Emaroo website: www.roland-weigelt.de/emaroo. Have fun!

    Read the article
