Search Results

Search found 20553 results on 823 pages for 'developer tools'.


  • Recent ALM Rangers Summary Posts

    - by Enrique Lima
    Willy-Peter Schaub has been a machine, producing and posting content, and the information in those posts is a gem. He has created summary posts on the specific topics the ALM Rangers are working on. Here is the list of quick-access TOC posts:
    TOC: “Tags” a la acronyms … what do they all mean?
    TOC: TFS Integration Tools Blog Posts and Reference Sites
    TOC: TFS Iteration Automation Blog Posts and Reference Sites
    TOC: Virtual Machine (VM) Factory
    TOC: Build Customization Guide Blog Posts and Reference Sites
    TOC: Lab Management Guide Blog Posts and Reference Sites

    Read the article

  • Oracle: Addressing Information Overload in Factory Automation

    - by [email protected]
    ORACLE's Stephen Slade has written about addressing information overload on the factory floor. According to Slade, today's automated processes create large amounts of valuable data, but only a small percentage remains actionable. Oracle claims information overload can cost financially, as companies struggle to store and collect reams of data needed to identify embedded trends, while producing manual reports to meet quality standards, regulatory requirements and general reporting goals. Increasing scrutiny of new requirements and standards adds to the need to find new ways to process data. Many companies are now using analytical engines to contextualise data into 'actionable information'. Oracle claims factories need to seriously address their data collection, audit trail and records retention processes. By organising their data, factories can maximise outcomes from excellence and continuous improvement programs, and gain visibility into costs in the supply chain. Analytics tools and technologies such as Business Intelligence (BI), Enterprise Manufacturing Intelligence (EMI) and Manufacturing Operations Centers (MOC) can help consolidate, contextualise and distribute information. FULL ARTICLE: http://www.myfen.com.au/news/oracle--addressing-information-overload-in-factory

    Read the article

  • Application Lifecycle Management with Visual Studio 2010 – Wrox Book

    - by Guy Harwood
    After running with a somewhat disconnected set of tools (VS 2008, OnTime, SharePoint 2007) for managing our projects, we decided to make the move to Team Foundation Server 2010. With limited coverage of the product available online, I went in search of a book and found this… View this book on the Wrox website. I must point out that I have only read 10 of the 26 chapters so far, mainly the ones that cover source code control, work item tracking and database projects. This enables our dev team to get familiar with it before switching project management over at a future date. Needless to say, I am very impressed with the detail it provides, answering pretty much every question I have had about TFS so far. I'm looking forward to digging into the sections on testing, code analysis and architecture. Highly recommended.

    Read the article

  • GitHub Integration in Windows Azure Web Site

    - by Shaun
    Microsoft has just announced an update for Windows Azure Web Site (a.k.a. WAWS). Four major features were added to WAWS: free scaling mode, GitHub integration, custom domains and multiple branches. Since I'm working in Node.js and would like to have my code in GitHub and deployed automatically to my Windows Azure Web Site whenever I sync my code, this feature is great news for me.

    It's very simple to establish the GitHub integration in WAWS. First we need a clean WAWS. In its dashboard page click “Set up Git publishing”. Currently WAWS doesn't support changing the publish setting, so if you have an existing WAWS that publishes via TFS or local Git you have to create a new WAWS and set up Git publishing. Then in the deployment page we can see that WAWS now supports three Git publishing modes:
    - Push my local files to Windows Azure: in this mode we create a new Git repository on the local machine and commit and publish our code to Windows Azure through Git commands or a GUI.
    - Deploy from my GitHub project: in this mode we have a Git repository created on GitHub. Once we publish our code to GitHub, Windows Azure downloads the code and triggers a new deployment.
    - Deploy from my CodePlex project: similar to the previous one, but our code lives in a CodePlex repository.

    Now let's go back to GitHub and create a new publish repository. Currently the WAWS GitHub integration only supports public repositories; private repository support will be available in several weeks. We can manage our repositories on the GitHub website, but as a Windows geek I prefer the GUI tool. So I opened GitHub for Windows, logged in with my GitHub account, selected the “github” category, and clicked the “add” button to create a new repository on GitHub. You can download GitHub for Windows here. I specified the repository name, description and local repository, and did not check “Keep this code private”. After a few seconds it created a new repository on GitHub and associated it with that folder on my local machine. We can find this new repository on the GitHub website, and in GitHub for Windows we can also find the local repository by selecting the “local” category.

    Next, we need to associate this repository with our WAWS. Back in the Windows Azure developer portal, open “Deploy from my GitHub project” in the deployment page and click the “Authorize Windows Azure” link. It brings up a new window on GitHub that asks me to allow the Windows Azure application to access my repositories. After we click “Allow”, Windows Azure retrieves all my public GitHub repositories and lets me select the one I want to integrate with this WAWS. I selected the one I had just created in GitHub for Windows. And that's all; we have completed the GitHub integration configuration.

    Now let's have a try. In GitHub for Windows, right-click on this local repository and click “open in explorer”. Then I added a simple HTML file:

        <html>
          <head>
          </head>
          <body>
            <h1>
              I came from GitHub, WOW!
            </h1>
          </body>
        </html>

    Save it, go back to GitHub for Windows, commit this change and publish. This uploads our changes to GitHub, and Windows Azure detects the update and triggers a new deployment. If we go back to the Azure developer portal we can find the new deployment, and our commit message is shown as the deployment description as well. And here is the page deployed to WAWS.

    Hope this helps, Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind.
Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Survey: how do you unit test your T-SQL?

    - by Alexander Kuznetsov
    How do you unit test your T-SQL? Which libraries/tools do you use? What percentage of your code is covered by unit tests and how do you measure it? Do you think the time and effort which you invested in your unit testing harness has paid off or not?

    Read the article

  • JavaScript Intellisense Improvements with VS 2010

    - by ScottGu
    This is the twentieth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release.  Today’s blog post covers some of the nice improvements coming to JavaScript intellisense with VS 2010 and the free Visual Web Developer 2010 Express.  You’ll find with VS 2010 that JavaScript Intellisense loads much faster for large script files and with large libraries, and that it now provides statement completion support for more advanced scenarios compared to previous versions of Visual Studio. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Improved JavaScript Intellisense

    Providing Intellisense for a dynamic language like JavaScript is more involved than doing so for a statically typed language like VB or C#.  Correctly inferring the shape and structure of variables, methods, etc. is pretty much impossible without pseudo-executing the actual code itself – since JavaScript as a language is flexible enough to dynamically modify and morph these things at runtime.  VS 2010’s JavaScript code editor now has the smarts to perform this type of pseudo-code execution as you type – which is how its intellisense completion is kept accurate and complete.  Below is a simple walkthrough that shows off how rich and flexible it is with the final release.

    Scenario 1: Basic Type Inference

    When you declare a variable in JavaScript you do not have to declare its type.  Instead, the type of the variable is based on the value assigned to it.  Because VS 2010 pseudo-executes the code within the editor, it can dynamically infer the type of a variable, and provide the appropriate code intellisense based on the value assigned to it.  For example, notice below how VS 2010 provides statement completion for a string (because we assigned a string to the “foo” variable).  If we later assign a numeric value to “foo”, the statement completion (after this assignment) automatically changes to provide intellisense for a number.

    Scenario 2: Intellisense When Manipulating Browser Objects

    It is pretty common with JavaScript to manipulate the DOM of a page, as well as work against browser objects available on the client.  Previous versions of Visual Studio would provide JavaScript statement completion against the standard browser objects – but didn’t provide much help with more advanced scenarios (like creating dynamic variables and methods).  VS 2010’s pseudo-execution of code within the editor now allows us to provide rich intellisense for a much broader set of scenarios.  For example, below we are using the browser’s window object to create a global variable named “bar”.  Notice how we can now get intellisense (with correct type inference for a string) with VS 2010 when we later try and use it.  When we assign the “bar” variable as a number (instead of as a string), the VS 2010 intellisense engine correctly infers its type and modifies statement completion appropriately to be that of a number instead.

    Scenario 3: Showing Off

    Because VS 2010 is pseudo-executing code within the editor, it is able to handle a bunch of scenarios (both practical and wacky) that you throw at it – and is still able to provide accurate type inference and intellisense.  For example, below we are using a for-loop and the browser’s window object to dynamically create and name multiple dynamic variables (bar1, bar2, bar3…bar9).
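    In code form, the loop being described (together with the even/odd variation discussed below) would look something like this sketch – the assigned values are illustrative, not from the original post:

        // dynamically create nine global variables via the window object
        for (var i = 1; i <= 9; i++) {
            window["bar" + i] = "value" + i;
        }

        // the even/odd variation described below: a string for even-numbered
        // variables, a number for odd-numbered ones
        for (var i = 1; i <= 9; i++) {
            window["bar" + i] = (i % 2 === 0) ? "even" : i;
        }

        // VS 2010 pseudo-executes the loop, so bar2 offers string completion
        // (e.g. charAt, substring) while bar1 offers number completion (e.g. toFixed)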
    Notice how the editor’s intellisense engine identifies and provides statement completion for them.  Because variables added via the browser’s window object are also global variables, they also now show up in the global variable intellisense drop-down as well.  Better yet – type inference is still fully supported.  So if we assign a string to a dynamically named variable we will get type inference for a string.  If we assign a number we’ll get type inference for a number.  Just for fun (and to show off!) we could adjust our for-loop to assign a string for even-numbered variables (bar2, bar4, bar6, etc.) and assign a number for odd-numbered variables (bar1, bar3, bar5, etc.).  Notice how we get statement completion for a string for the “bar2” variable, and for “bar1” we get statement completion for a number.

    This isn’t just a cool pet trick

    While the above example is a bit contrived, the approach of dynamically creating variables, methods and event handlers on the fly is pretty common with many JavaScript libraries.  Many of the more popular libraries use these techniques to keep the size of script library downloads as small as possible.  VS 2010’s support for parsing and pseudo-executing libraries that use these techniques ensures that you get better code Intellisense out of the box when programming against them.

    Summary

    Visual Studio 2010 (and the free Visual Web Developer 2010 Express) now provide much richer JavaScript intellisense support.  This support works with pretty much all popular JavaScript libraries.  It should help provide a much better development experience when coding client-side JavaScript and enabling AJAX scenarios within your ASP.NET applications.

    Hope this helps,

    Scott

    P.S. You can read my previous blog post on VS 2008’s JavaScript Intellisense to learn more about our previous JavaScript intellisense (and some of the scenarios it supported).  VS 2010 obviously supports all of the scenarios previously enabled with VS 2008.

    Read the article

  • BDD IS Different to TDD

    - by Liam McLennan
    One of this morning’s sessions at Alt.NET 2010 discussed BDD. Charlie Poole expressed the opinion, which I have heard many times, that BDD is just a description of TDD done properly. For me, the core principles of BDD are:
    - expressing behaviour in terms that show the value to the system actors
    - expressing behaviours / scenarios in a format that clearly separates the context, the action and the observations.
    If we go back to Kent Beck’s TDD book, neither of these elements is mentioned as being core to TDD. BDD is an evolution of TDD. It is a specialisation of TDD, but it is not the same as TDD. Discussing BDD, and building specialised tools for BDD, is valuable even though the difference between BDD and TDD is subtle. Further, the existence of BDD does not mean that TDD is obsolete or invalidated.
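    As a rough illustration of the second principle, a minimal BDD-style test in C# might look like this sketch (NUnit syntax; the banking domain and all type names are invented for the example):

        using NUnit.Framework;

        public class Account
        {
            public int Balance { get; private set; }
            public Account(int balance) { Balance = balance; }
            public void TransferTo(Account target, int amount)
            {
                Balance -= amount;
                target.Balance += amount;
            }
        }

        [TestFixture]
        public class TransferringFunds
        {
            [Test]
            public void Account_holder_sees_both_balances_updated()
            {
                // Context (Given): an account holding 100 and an empty target account
                var source = new Account(100);
                var target = new Account(0);

                // Action (When): the holder transfers 30
                source.TransferTo(target, 30);

                // Observations (Then): the money has moved
                Assert.AreEqual(70, source.Balance);
                Assert.AreEqual(30, target.Balance);
            }
        }

    The test name describes value to an actor (the account holder), and the Given/When/Then comments keep the context, the action and the observations clearly separated – the two elements that distinguish BDD from plain TDD.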

    Read the article

  • links for 2010-04-01

    - by Bob Rhubart
    Jason Williamson: Oracle Releases New Mainframe Re-Hosting in Oracle Tuxedo 11g
    Jason Williamson's update on new features in the latest release of Oracle Tuxedo 11g. (tags: otn oracle entarch)
    Jeanne Waldman: Using Oracle ADF Data Visualization Tools (DVT) Line Graphs to Display Weather Information
    Jeanne Waldman illustrates the nuts and bolts of modifications she made to a simple JDeveloper Fusion application that retrieves weather data. (tags: oracle otn virtualization jdeveloper ADF)
    Brian Harrison: Oracle WebCenter Interaction - New Release Overview, Part 2
    Brian Harrison continues his discussion of the next release of Oracle WebCenter Interaction with a look at a few other new features. (tags: oracle otn enterprise2.0 webcenter)

    Read the article

  • How to create a "shutdown user" or "shutdown account"

    - by pcapademic
    Red Hat had a feature that would be useful to me right now. There was an account, generally called "shutdown", and when you logged in with that account, the system shut down. In my specific case, I have Ubuntu Server running in a VM on my local system. The VM is running a web app, and when I'm done doing work, I want to shut down the VM. Unfortunately, I can't install VMware Tools to get the "power button" based shutdown. Currently I log in, run sudo shutdown -h now, type my password again, and the system shuts down. All that waiting around and typing is getting annoying. How do I replicate the "shutdown account" functionality in Ubuntu? A related question: were there any security gotchas that motivated people to stop using this kind of account?

    Read the article

  • End user query syntax?

    - by weberc2
    I'm making a command line tool that allows end users to query a statically-schemed database; however, I want users to be able to specify boolean matchers in their query (effectively things like "get rows where (field1=abcd && field2=efgh) || field3=1234"). I Googled for a solution, but I couldn't find anything suitable for end users – still, this seems like it would be a very common problem, so I suspect there is a standard solution. So:
    What (if any) standard query "languages" are there that might be appropriate for end users?
    What (if any) de facto standards are there (for example, Unix tools that solve similar problems)?
    Failing the previous two options, can you suggest a syntax that would be simple, concise, and easy to validate?

    Read the article

  • Successful Hosted TFS Event at VISUG by Hassan Fadili at Microsoft Belgium

    - by hassanfadili
    On Tuesday, November 22nd, the VISUG user group hosted an event at Microsoft Belgium about Hosted TFS, presented by Hassan Fadili (see http://www.visug.be/Eventdetails/tabid/95/EventId/48/Default.aspx). The event was very interactive, and as many as 60 people took part. The topic was Build, Release and Deploy with TFS2011 and MS Deploy. A combination of slides and demos was perfect for explaining this common mechanism to developers. To learn more about this topic, check the earlier article published by Hassan Fadili for the Software Developer Network community at: http://www.sdn.nl/SDN/Artikelen/tabid/58/view/View/ArticleID/3199/Build-Release-and-Deploy-BRD-using-TFS2010-MS-Web-Deploy-and-WIX3X.aspx If you have questions, suggestions or thoughts about this topic, feel free to contact me by e-mail: [email protected] and/or via Twitter: @HassanFad

    Read the article

  • Sending Big Files with WCF

    - by Sean Feldman
    I had to look into a project that submits large files to a WCF service. The implementation is based on data chunking. This is a good approach when your client and server are not both based on WCF, but on different technologies. The problem with something like this is that chunking (whether you like it or not) complicates the overall solution. The alternative would be streaming. In a WCF-to-WCF scenario, this is a piece of cake. When the client is Java, it becomes a bit more challenging (has anyone implemented a Java client streaming data to a WCF service?). What I really liked about the .NET implementation with WCF is that sending header info along with the stream was dead simple, and from the developer's point of view it looks like it's all part of the DTO passed into the service.

        [ServiceContract]
        public interface IFileUpload
        {
            [OperationContract]
            void UploadFile(SendFileMessage message);
        }

    Where SendFileMessage is:

        [MessageContract]
        public class SendFileMessage
        {
            [MessageBodyMember(Order = 1)]
            public Stream FileData;

            [MessageHeader(MustUnderstand = true)]
            public FileTransferInfo FileTransferInfo;
        }
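    A service implementation consuming this contract might look roughly like the following sketch (the FileName member on FileTransferInfo and the target folder are assumptions for illustration; true streaming also requires transferMode="Streamed" on the binding):

        using System.IO;

        public class FileUploadService : IFileUpload
        {
            public void UploadFile(SendFileMessage message)
            {
                // The header arrives as an ordinary property alongside the stream.
                // NOTE: FileTransferInfo.FileName is a hypothetical member.
                var path = Path.Combine(@"C:\Uploads", message.FileTransferInfo.FileName);

                using (var target = File.Create(path))
                {
                    // CopyTo reads the incoming stream in small chunks, so the
                    // whole file is never buffered in memory at once.
                    message.FileData.CopyTo(target);
                }
            }
        }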

    Read the article

  • Any Good Cocos2d Pause Menu Library

    - by Mahbubur R Aaman
    Background: From http://code.google.com/p/cocos2d-iphone/issues/detail?id=173 – Scenes/Nodes don't support the CocosNodeOpacity protocol. From http://playsnackgames.com/blog/2011/09/cocos2d-tutorial-creating-a-reusable-pause-layer/ – Cocos2d offers a simple method to pause and resume itself, but these methods stop the CCDirector (the class that manages most aspects of a Cocos2d app's lifetime) from running actions and lower the fps to 5 to conserve battery life.
    Related issues:
    http://www.cocos2d-iphone.org/forum/topic/4368
    http://www.cocos2d-iphone.org/forum/topic/151
    http://stackoverflow.com/questions/5852354/cocos2d-engine-pause-resume
    http://stackoverflow.com/questions/11878450/how-to-pause-a-layer-in-cocos2d-2-0
    Question: Is there any good Cocos2d pause menu library that solves these tricky issues? It would save many hours of game developers' lives.

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle.

    If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent of running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical.

    1. My CI environment isn't an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted.

    2. It's not just about the storage requirements, it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control I'm just not going to get the feedback quickly enough to react.

    So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore effectively mounts the backup using its HyperBac technology. It creates data and log files, .vmdf and .vldf, that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no 'duplicate' storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database.

    It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool.
    RESTORE DATABASE WidgetProduction_virtual
    FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
    WITH
        MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
        MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
        NORECOVERY, STATS=1, REPLACE
    GO

    RESTORE DATABASE WidgetProduction_virtual WITH RECOVERY
    GO

    Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the 'virtual' restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly.

    For illustration, here is my 8GB production database and its corresponding backup file. The .vldf and .vmdf files represent the only additional storage used for the new database following the virtual restore.

    The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial.

    Technorati Tags: SQL Server

    Read the article

  • Screencasts introducing C++ AMP

    - by Daniel Moth
    It has been almost 2.5 years since I last recorded a screencast, and I had forgotten how time consuming they are to plan/record/edit/produce/publish, but at the same time so much fun to see the end result! So below are links to 4 screencasts that teach you C++ AMP basics from scratch (even if you class yourself as a .NET developer you'll be able to follow):
    - Setup code - part 1
    - array_view, extent, index - part 2
    - parallel_for_each - part 3
    - accelerator - part 4
    If you have comments/questions about what is shown in each video, please leave them at each video recording. If you have generic questions about C++ AMP, please ask in the C++ AMP MSDN forum. Comments about this post by Daniel Moth welcome at the original blog.
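    For orientation, the four concepts covered by the screencasts fit together roughly like this sketch – a trivial element-wise increment, not taken from the videos, assuming a C++ AMP capable compiler:

        #include <amp.h>
        #include <vector>
        using namespace concurrency;

        int main()
        {
            std::vector<int> data(1024, 1);

            // an array_view wraps host memory so the accelerator can see it
            array_view<int, 1> av(1024, data);

            // the extent describes the compute domain; each thread gets an index
            parallel_for_each(av.extent, [=](index<1> idx) restrict(amp)
            {
                av[idx] += 1;   // runs on the default accelerator (typically the GPU)
            });

            av.synchronize();   // copy the results back to the vector
        }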

    Read the article

  • Best practices when creating/modeling databases?

    - by Oscar Mederos
    Hello, at university I learned some steps to model a database:
    1. Model the problem using the Extended Entity-Relationship Model.
    2. Extract the functional dependencies.
    3. Apply some algorithms to normalize the database (3NF or Boyce-Codd).
    4. Create the database.
    I'm studying Computer Science, and since I took that course I've been wondering if I always need to do those steps when creating a complex database for a specific problem. For example, do PHP / .NET / .. programmers always do that? Or are there tools to simplify that process, maybe using another way of representing the problem instead of the EERM?

    Read the article

  • Should we rename overloaded methods?

    - by Mik378
    Assume an interface containing these methods:

        Car find(long id);
        List<Car> find(String model);

    Is it better to rename them like this?

        Car findById(long id);
        List<Car> findByModel(String model);

    Indeed, any developer who uses this API won't need to look at the interface to know the possible arguments of the initial find() methods. So my question is more general: what is the benefit of using overloaded methods in code, since it reduces readability?

    Read the article

  • References about Game Engine Architecture in AAA Games

    - by sharethis
    In recent weeks I have focused on game engine architecture and learned a lot about different approaches, such as component-based and data-driven designs. I used them in test applications and understand their intentions, but none of them looks like the holy grail. So I wonder how major games in the industry ("AAA games") solve different architecture problems. But I have noticed that there are barely any references about game engine architecture out there. Do you know any resources on the game engine architecture of major game titles like Battlefield, Call of Duty, Crysis, or Skyrim? It doesn't matter if it is an article by a game developer, a wiki page, or an entire book. I read this related popular question: Good resources for learning about game architecture? But it is focused on learning books rather than approaches used in the industry. Hopefully the breadth of our community can bring together some useful information! Thanks a lot! Edit: This question is focused on, but not restricted to, first-person games.

    Read the article

  • Why do browsers leak memory?

    - by Dane Balia
    A colleague and I were speaking about browsers (we use a browser control object in a project), and it appears as plain as day that all browsers (Firefox, Chrome, IE, Opera) exhibit the same side effect from extended use: leaking memory. Can someone explain why that is the case? Surely, as with any code, there should be proper garbage collection? PS. I've read about some defensive patterns, from a developer's perspective, on why this can happen. I am aware of an article Crockford wrote on IE; but why is the problem symptomatic of every browser? Thanks
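    For context, the IE problem Crockford described came from circular references between the DOM's reference-counted COM objects and JScript closures; a cycle spanning the two memory managers was never reclaimed. A minimal sketch of the classic leaking pattern (illustrative only):

        function attachHandler(element) {
            // The handler closes over 'element', and the DOM node holds the
            // handler: a DOM -> JS -> DOM cycle that old IE's reference
            // counting could not collect, even after the page was torn down.
            element.onclick = function () {
                alert(element.id);
            };
        }

    Modern engines use tracing garbage collectors that handle such cycles, so lingering memory growth in today's browsers tends to come from other causes (caches, detached DOM trees still referenced by scripts, and so on).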

    Read the article

  • Flex SDK the right tool for this project

    - by RWAC
    A client wants a site similar to this one (but different purpose): http://www.spokeo.com/search?q=Samantha+Dawes,+&s7=t30 where the user searches by name and a map is displayed with the count over each state. When the user clicks the count the list is displayed. I am a PHP developer (and have experience with C, C++, etc). Would Flex SDK, Flash Builder 4.5 for PHP, or Flash be the best tool? The Flex SDK http://www.adobe.com/products/flex.html looks promising and it looks like I can download it free without having to purchase Flash or Flex. Is that correct? Do you think this kind of project can be done with the Flex SDK? Without purchasing Flex or Flash? Thank you for taking the time to read this.

    Read the article

  • HTG Explains: Why You Shouldn’t Disable UAC

    - by Chris Hoffman
    User Account Control is an important security feature in the latest versions of Windows. While we’ve explained how to disable UAC in the past, you shouldn’t disable it – it helps keep your computer secure. If you reflexively disable UAC when setting up a computer, you should give it another try – UAC and the Windows software ecosystem have come a long way from when UAC was introduced with Windows Vista.

    Read the article

  • App Store: Profitability for Game Developers

    - by Bunkai.Satori
    In recent days I've been spending significant time exploring the chances of profitability on the App Store for developers. I have found many articles. Some of them are highly optimistic, while others are extremely skeptical. This article is extremely skeptical; it even claims to have backed its conclusions with objective sales numbers. This is another pessimistic article, saying that games developed by single individuals get 20 downloads a day. Can I kindly ask for clarification, from a business viewpoint, on whether average developers publishing games and software on the App Store can cover their living expenses, or even become profitable? Is it achievable for a single developer to generate revenues of 50,000 USD yearly on the App Store? I would like to stay as realistic as possible. Although the question might look subjective, a good businessman will be able to estimate the chances for profitability and prosperity within the App Store.

    Read the article

  • Lync Server 2010

    - by ManojDhobale
    Microsoft Lync Server 2010 communications software and its client software, such as Microsoft Lync 2010, enable your users to connect in new ways and to stay connected, regardless of their physical location. Lync 2010 and Lync Server 2010 bring together the different ways that people communicate in a single client interface, are deployed as a unified platform, and are administered through a single management infrastructure.

    IM and presence: Instant messaging (IM) and presence help your users find and communicate with one another efficiently and effectively. IM provides an instant messaging platform with conversation history, and supports public IM connectivity with users of public IM networks such as MSN/Windows Live, Yahoo!, and AOL. Presence establishes and displays a user’s personal availability and willingness to communicate through the use of common states such as Available or Busy. This rich presence information enables other users to immediately make effective communication choices.

    Conferencing: Lync Server includes support for IM conferencing, audio conferencing, web conferencing, video conferencing, and application sharing, for both scheduled and impromptu meetings. All these meeting types are supported with a single client. Lync Server also supports dial-in conferencing so that users of public switched telephone network (PSTN) phones can participate in the audio portion of conferences. Conferences can seamlessly change and grow in real time. For example, a single conference can start as just instant messages between a few users, and escalate to an audio conference with desktop sharing and a larger audience instantly, easily, and without interrupting the conversation flow.

    Enterprise Voice: Enterprise Voice is the Voice over Internet Protocol (VoIP) offering in Lync Server 2010. It delivers a voice option to enhance or replace traditional private branch exchange (PBX) systems. In addition to the complete telephony capabilities of an IP PBX, Enterprise Voice is integrated with rich presence, IM, collaboration, and meetings. Features such as call answer, hold, resume, transfer, forward and divert are supported directly, while personalized speed dialing keys are replaced by Contacts lists, and automatic intercom is replaced with IM. Enterprise Voice supports high availability through call admission control (CAC), branch office survivability, and extended options for data resiliency.

    Support for remote users: You can provide full Lync Server functionality for users who are currently outside your organization’s firewalls by deploying servers called Edge Servers to provide a connection for these remote users. These remote users can connect to conferences by using a personal computer with Lync 2010 installed, the phone, or a web interface. Deploying Edge Servers also enables you to federate with partner or vendor organizations. A federated relationship enables your users to put federated users on their Contacts lists, exchange presence information and instant messages with these users, and invite them to audio calls, video calls, and conferences.

    Integration with other products: Lync Server integrates with several other products to provide additional benefits to your users and administrators. Meeting tools are integrated into Outlook 2010 to enable organizers to schedule a meeting or start an impromptu conference with a single click and make it just as easy for attendees to join. Presence information is integrated into Outlook 2010 and SharePoint 2010. Exchange Unified Messaging (UM) provides several integration features. Users can see if they have new voice mail within Lync 2010. They can click a play button in the Outlook message to hear the audio voice mail, or view a transcription of the voice mail in the notification message.

    Simple deployment: To help you plan and deploy your servers and clients, Lync Server provides the Microsoft Lync Server 2010, Planning Tool and the Topology Builder. Lync Server 2010, Planning Tool is a wizard that interactively asks you a series of questions about your organization, the Lync Server features you want to enable, and your capacity planning needs. Then, it creates a recommended deployment topology based on your answers, and produces several forms of output to aid your planning and installation. Topology Builder is an installation component of Lync Server 2010. You use Topology Builder to create, adjust and publish your planned topology. It also validates your topology before you begin server installations. When you install Lync Server on individual servers, the installation program deploys the server as directed in the topology.

    Simple management: After you deploy Lync Server, it offers the following powerful and streamlined management tools:
    - Active Directory for its user information, which eliminates the need for separate user and policy databases.
    - Microsoft Lync Server 2010 Control Panel, a new web-based graphical user interface for administrators. With this web-based UI, Lync Server administrators can manage their systems from anywhere on the corporate network, without needing specialized management software installed on their computers.
    - Lync Server Management Shell command-line management tool, which is based on the Windows PowerShell command-line interface. It provides a rich command set for administration of all aspects of the product, and enables Lync Server administrators to automate repetitive tasks using a familiar tool.

    While the IM and presence features are automatically installed in every Lync Server deployment, you can choose whether to deploy conferencing, Enterprise Voice, and remote user access, to tailor your deployment to your organization’s needs.

    Read the article
