Search Results

Search found 32141 results on 1286 pages for 'development hardware'.


  • Click Once Deployment Process and Issue Resolution

    - by Geordie
    Introduction
    We are adopting Click Once as a deployment standard for thick .Net application clients. The latest version of this tool has matured to a point where it can be used in an enterprise environment. This guide identifies how to use Click Once deployment and promote code through the dev, test and production environments.
    Why Use Click Once over SCCM
    If we already use SCCM, why add Click Once to the deployment options? The advantage of Click Once is its ability to update the code in a single location and have the update flow automatically down to the user community. There have been challenges in the past with getting configuration updates to download, but these can now be achieved. With SCCM you can do the same thing, but it then needs to be packaged and pushed out to users. Each time a new user is added to an application, an administrator needs to spend time pushing out any required application packages. With Click Once the user goes to a web link and the application and prerequisites are installed automatically.
    New Deployment Steps Overview
    The deployment in an enterprise environment includes several steps as the solution moves through the development life cycle before being released into production. To mitigate risk during the release phase, it is important to ensure the solution is not deployed directly into production from the development tools. Although this is the easiest path, it can introduce untested code into production and result in unexpected behaviour.
    1. Deploy the client application to a development web server using the Visual Studio 2008 Click Once deployment tools. Once potential production versions of the solution are being generated, ensure the production install URL is specified when deploying code from Visual Studio. (For details see ‘Deploying Click Once Code from Visual Studio’)
    2. xCopy the code to the test server. Run the MageUI tool to update the URLs, signing and version numbers to match the test server. (For details see ‘Moving Click Once Code to a new Server without using Visual Studio’)
    3. xCopy the code to the production server. Run the MageUI tool to update the URLs, signing and version numbers to match the production server. The certificate used to sign the code should be provided by a certificate authority that will be trusted by the client machines. Finally, make sure the setup.exe contains the production install URL. If not, redeploy the solution from Visual Studio to the dev environment specifying the production install URL, then xcopy the setup.exe file from dev to production. (For details see ‘Moving Click Once Code to a new Server without using Visual Studio’)
    Detailed Deployment Steps
    Deploying Click Once Code From Visual Studio
    Open Visual Studio and create a new WinForms or WPF project. In the Solution Explorer, right-click on the project and select ‘Publish’ in the context menu. The ‘Publish Wizard’ will start. Enter the development deployment path. This could be a local directory or web site. When first publishing the solution, set this to a development web site and Visual Studio will create a site with an install.htm page. Click Next. Select whether the application will be available both online and offline, then click Finish. Once the initial deployment is completed, republish the solution, this time mapping to the directory that holds the code that was just published. This time the Publish Wizard contains an additional option.
    The setup.exe file that is created has the install URL hardcoded in it. It is this screen that allows you to specify the URL to use. At some point a setup.exe file must be generated for production. Enter the production URL and deploy the solution to the dev folder. This file can then be saved for later use in deployment to production. During development this URL should point to the development site to avoid accidentally installing the production application. Visual Studio will publish the application to the desired location; in the process it will create an anonymous ‘pfx’ certificate to sign the deployment configuration files. A production certificate should be acquired in preparation for deployment to production.
    Directory structure created by Visual Studio
    Application files created by Visual Studio
    Development web site (install.htm) created by Visual Studio
    Migrating Click Once Code to a new Server without using Visual Studio
    To migrate the Click Once application code to a new server, a tool called MageUI is needed to modify the .application and .manifest files. The MageUI tool is usually located in the ‘C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bin’ folder, or it can be downloaded from the web. When deploying to a new environment, copy all files in the project folder to the new server – in this case the ‘ClickOnceSample’ folder and contents. The old application versions can be deleted, in this case ‘ClickOnceSample_1_0_0_0’ and ‘ClickOnceSample_1_0_0_1’. Open IIS Manager and create a virtual directory that points to the project folder. Also make publish.htm the default web page. Run the MageUI tool and then open the .application file in the root project folder (in this case in the ‘ClickOnceSample’ folder). Click on the Deployment Options in the left-hand list, update the URL to the new server URL and save the changes. When MageUI tries to save the file it will prompt for the file to be signed. This step cannot be bypassed if you want the Click Once deployment to work from a web site. The easiest solution to this for test is to use the auto-generated certificate that Visual Studio created for the project. This certificate can be found with the project source code. To save time, go to File > Preferences and configure the ‘Use default signing certificate’ fields. Future deployments will only require application files to be transferred to the new server. The only difference is that in the .application file the ‘Version’ must be updated to match the new version and the ‘Application Reference’ has to be updated to point to the new .manifest file.
    Updating the Configuration File of a Click Once Deployment Package without using Visual Studio
    When an update to the configuration file is required, modifying the ClickOnceSample.exe.config.deploy file will not result in current users getting the new configuration. We do not want to go back to Visual Studio and generate a new version, as this might introduce unexpected code changes. A new version of the application can be created by copying the latest version folder (in this case ClickOnceSample_1_0_0_2) and pasting it into the Application Files directory. Rename the copied directory ‘ClickOnceSample_1_0_0_3’. In the new folder, open the configuration file in Notepad and make the configuration changes. Run MageUI and open the manifest file in the newly copied directory (ClickOnceSample_1_0_0_3). Edit the manifest version to reflect the newly copied files (in this case 1.0.0.3), then save the file.
    Open the .application file in the root folder. Again update the version to 1.0.0.3. Since the file has not changed, the Deployment Options/Start Location URL should still be correct. The Application Reference needs to be updated to point to the new version’s .manifest file. Save the file. The next time a user runs the application, the new version of the configuration file will be downloaded. It is worth noting that there are two different types of configuration parameter: application and user. With Click Once deployment the difference is significant. When an application is downloaded, the configuration file is also brought down to the client machine. The developer may have written code to update the user parameters in the application. As a result, each time a new version of the application is downloaded the user parameters are at risk of being overwritten. With Click Once deployment the system knows whether the user parameters are still the default values. If they are, they will be overwritten with the new default values in the configuration file. If they have been updated by the user, they will not be overwritten.
    Settings configuration view in Visual Studio
    Production Deployment
    When deploying the code to production it is prudent to disable the development and test deployment sites. This will allow errors such as an incorrect URL to be quickly identified in the initial testing after deployment. If those sites are active there is no way to know whether the application was downloaded from the production deployment and not redirected to test or dev.
    Troubleshooting
    Clicking the install button on the install.htm page fails. Error: URLDownloadToCacheFile failed with HRESULT '-2146697210' Error: An error occurred trying to download <file>
    This is due to the setup.exe file pointing to the wrong location. ‘The setup.exe file that is created has the install URL hardcoded in it. It is this screen that allows you to specify the URL to use. At some point a setup.exe file must be generated for production. Enter the production URL and deploy the solution to the dev folder. This file can then be saved for later use in deployment to production. During development this URL should point to the development site to avoid accidentally installing the production application.’

    Read the article

  • JavaScript fixed timestep game loop with requestAnimationFrame

    - by coffeecup
    Hello, I just started to read through several articles, including http://gafferongames.com/game-physics/fix-your-timestep/ ...://gamedev.stackexchange.com/questions/1589/fixed-time-step-vs-variable-time-step/ ...//dewitters.koonsolo.com/gameloop.html ...://nokarma.org/2011/02/02/javascript-game-development-the-game-loop/index.html My understanding of this is that I need the currentTime and the timeStep size and integrate all states to the next state; the time which is left is then passed into the render function to do interpolation. I tried to implement Glenn Fiedler's "the final touch". What's troubling me is that each frameTime is about 15 ms and the update loop runs at about 1500 fps, which seems a little bit off? Here's my code:
    this.t = 0
    this.dt = 0.01
    this.currTime = new Date().getTime()
    this.accumulator = 0.0
    this.animate()

    animate: function(){
        var newTime = new Date().getTime()
          , frameTime = newTime - this.currTime
          , alpha
        if ( frameTime > 0.25 ) frameTime = 0.25
        this.currTime = newTime
        this.accumulator += frameTime
        while (this.accumulator >= this.dt ) {
            this.prev_state = this.curr_state
            this.update(this.t,this.dt)
            this.t += this.dt
            this.accumulator -= this.dt
        }
        alpha = this.accumulator / this.dt
        this.render( this.t, this.dt, alpha)
        requestAnimationFrame( this.animate )
    }
    Also I would like to know: are there differences between Glenn Fiedler's implementation and the last solution presented here? gameloop1 gameloop2 [ sorry, couldn't post more than 2 links.. ]
    Edit: I looked into it again and adjusted the values
    this.currTime = new Date().getTime()
    this.accumulator = 0
    this.p_t = 0
    this.p_step = 1000/100
    this.animate()

    animate: function(){
        var newTime = new Date().getTime()
          , frameTime = newTime - this.currTime
          , alpha
        if(frameTime > 25) frameTime = 25
        this.currTime = newTime
        this.accumulator += frameTime
        while(this.accumulator >= this.p_step){
            // prevstate = currState
            this.update()
            this.p_t += this.p_step
            this.accumulator -= this.p_step
        }
        alpha = this.accumulator / this.p_step
        this.render(alpha)
        requestAnimationFrame( this.animate )
    }
    Now I can set the physics update rate; render runs at 60 fps and physics update at 100 fps. Maybe someone could confirm this, because it's the first time I'm playing around with game development :-)
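    A likely explanation for the ~1500 updates per second in the first snippet is a unit mismatch: frameTime is measured in milliseconds (from Date().getTime()) while dt and the clamp value of 0.25 are in seconds, so every frame is clamped to 0.25 and pays for 25 fixed steps, which at 60 rendered frames per second comes to roughly 1500 updates per second. The edited version avoids this by keeping everything in milliseconds. Below is a minimal sketch of the same fixed-timestep pattern with consistent units, a bound callback and performance.now(); the names and numbers are illustrative rather than taken from the post.

        function GameLoop(update, render) {
            // Fixed physics step of 10 ms (100 Hz); rendering runs at whatever
            // rate requestAnimationFrame provides (typically 60 fps).
            this.step = 1000 / 100;
            this.accumulator = 0;
            this.last = performance.now();
            this.update = update;
            this.render = render;
            // Bind once so `this` survives being passed to requestAnimationFrame.
            this.animate = this.animate.bind(this);
        }

        GameLoop.prototype.animate = function (now) {
            var frameTime = now - this.last;          // milliseconds, same unit as step
            if (frameTime > 250) frameTime = 250;     // clamp long pauses (tab switch, etc.)
            this.last = now;

            this.accumulator += frameTime;
            while (this.accumulator >= this.step) {   // integrate in fixed steps
                this.update(this.step);
                this.accumulator -= this.step;
            }

            var alpha = this.accumulator / this.step; // leftover fraction for interpolation
            this.render(alpha);
            requestAnimationFrame(this.animate);
        };

        // Usage: var loop = new GameLoop(updateFn, renderFn);
        //        requestAnimationFrame(loop.animate);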

    Read the article

  • Curing the Database-Application mismatch

    - by Phil Factor
    If an application requires access to a database, then you have to be able to deploy it so as to be version-compatible with the database, in phase. If you can deploy both together, then the application and database must normally be deployed at the same version in which they, together, passed integration and functional testing.  When a single database supports more than one application, then the problem gets more interesting. I’ll need to be more precise here. It is actually the application-interface definition of the database that needs to be in a compatible ‘version’.  Most databases that get into production have no separate application-interface; in other words they are ‘close-coupled’.  For this vast majority, the whole database is the application-interface, and applications are free to wander through the bowels of the database scot-free.  If you’ve spurned the perceived wisdom of application architects to have a defined application-interface within the database that is based on views and stored procedures, any version-mismatch will be as sensitive as a kitten.  A team that creates an application that makes direct access to base tables in a database will have to put a lot of energy into keeping Database and Application in sync, to say nothing of having to tackle issues such as security and audit. It is not the obvious route to development nirvana. I’ve been in countless tense meetings with application developers who initially bridle instinctively at the apparent restrictions of being ‘banned’ from the base tables or routines of a database.  There is no good technical reason for needing that sort of access that I’ve ever come across.  Everything that the application wants can be delivered via a set of views and procedures, and with far less pain for all concerned: This is the application-interface.  If more than zero developers are creating a database-driven application, then the project will benefit from the loose-coupling that an application interface brings. What is important here is that the database development role is separated from the application development role, even if it is the same developer performing both roles. The idea of an application-interface with a database is as old as I can remember. The big corporate or government databases generally supported several applications, and there was little option. When a new application wanted access to an existing corporate database, the developers, and myself as technical architect, would have to meet with hatchet-faced DBAs and production staff to work out an interface. Sure, they would talk up the effort involved for budgetary reasons, but it was routine work, because it decoupled the database from its supporting applications. We’d be given our own stored procedures. One of them, I still remember, had ninety-two parameters. All database access was encapsulated in one application-module. If you have a stable defined application-interface with the database (Yes, one for each application usually) you need to keep the external definitions of the components of this interface in version control, linked with the application source,  and carefully track and negotiate any changes between database developers and application developers.  Essentially, the application development team owns the interface definition, and the onus is on the Database developers to implement it and maintain it, in conformance.  Internally, the database can then make all sorts of changes and refactoring, as long as source control is maintained.  
    If the application interface passes all the comprehensive integration and functional tests for the particular version they were designed for, nothing is broken. Your performance-testing can ‘hang’ on the same interface, since databases are judged on the performance of the application, not an ‘internal’ database process. The database developers have responsibility for maintaining the application-interface, but not its definition, as they refactor the database. This is easily tested on a daily basis since the tests are normally automated. In this setting, the deployment can proceed if the more stable application-interface, rather than the continuously-changing database, passes all tests for the version of the application. Normally, if all goes well, a database with a well-designed application interface can evolve gracefully without changing the external appearance of the interface, and this is confirmed by integration tests that check the interface, and which hopefully don’t need to be altered at all often. If the application is rapidly changing its ‘domain model’ in the light of an increased understanding of the application domain, then it can change the interface definitions and the database developers need only implement the interface rather than refactor the underlying database. The test team will also have to redo the functional and integration tests which are, of course, ‘written to’ the definition. The database developers will find it easier if these tests are done before their re-wiring job to implement the new interface. If, at the other extreme, an application receives no further development work but survives unchanged, the database can continue to change and develop to keep pace with the requirements of the other applications it supports, and needs only to take care that the application interface is never broken. Testing is easy since your automated scripts to test the interface do not need to change. The database developers will, of course, maintain their own source control for the database, and will be likely to maintain versions for all major releases. However, this will not need to be shared with the applications that the database serves. On the other hand, the definition of the application interfaces should be within the application source. Changes in it have to be subject to change-control procedures, as they will require a chain of tests. Once you allow, instead of an application-interface, an intimate relationship between application and database, we are in the realms of impedance mismatch, over and above the obvious security problems. Part of this impedance problem is a difference in development practices. Whereas the application has to be regularly built and integrated, this isn’t necessarily the case with the database. An RDBMS is inherently multi-user and self-integrating. If the developers work together on the database, then a subsequent integration of the database on a staging server doesn’t often bring nasty surprises. A separate database-integration process is only needed if the database is deliberately built in a way that mimics the application development process, but which hampers the normal database-development techniques. This process is like demanding an official walking with a red flag in front of a motor car. In order to closely coordinate databases with applications, entire databases have to be ‘versioned’, so that an application version can be matched with a database version to produce a working build without errors.
    There is no natural process to ‘version’ databases. Each development project will have to define a system for maintaining the version level. A curious paradox occurs in development when there is no formal application-interface. When the strains and cracks happen, the extra meetings, bureaucracy, and activity required to maintain accurate deployments look to IT management like work. They see activity, and it looks good. Work means progress. Management then smile on the design choices made. In IT, good design work doesn’t necessarily look good, and vice versa.

    Read the article

  • Game software design

    - by L. De Leo
    I have been working on a simple implementation of a card game in object-oriented Python/HTML/Javascript, building on top of Django. At this point the game is in its final stage of development but, while spotting a big issue with how I was keeping the application state (basically using a global variable), I reached the point where I'm stuck. The thing is that, ignoring the design flaw, in a single-threaded environment such as under the Django development server, the game works perfectly. While I tried to design classes cleanly and keep methods short, I now have in front of me an issue that has been keeping me busy for the last 2 days and that countless print statements and visual debugging haven't helped me spot. The reason, I think, has to do with some side effects of functions, and to solve it I've been wondering if maybe refactoring the code entirely with static classes that keep no state and just passing the state around might be a good option to keep side effects under control. Or maybe trying to program it in a functional programming style (although I'm not sure Python allows for a purely functional style). I feel that there are already so many layers that the software (which I plan to make incredibly more complex by adding non-trivial features) has become unmanageable. How would you suggest I re-take control of my code-base, which (despite still being only at < 1000 LOC) seems to have taken on a life of its own?
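    The "pass the state around" idea mentioned above can be shown in miniature: every rule is a function that takes the current game state and returns a new one, so nothing reaches for a global and side effects stay at the edges (load state, apply a rule, store the result). The sketch below is in JavaScript only for consistency with the other examples on this page; the shape is identical in Python, and all names are illustrative, not from the project in question.

        // One game rule as a pure function: current state in, new state out.
        function playCard(state, playerId, card) {
            var hand = state.hands[playerId].filter(function (c) { return c !== card; });
            var hands = Object.assign({}, state.hands);
            hands[playerId] = hand;
            return {
                hands: hands,
                pile: state.pile.concat([card]),
                turn: (state.turn + 1) % state.playerCount,
                playerCount: state.playerCount
            };
        }
        // The web layer then only loads a state, applies one of these functions,
        // and persists the result - no module-level game state is ever mutated.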

    Read the article

  • Alternatives to a leveling system

    - by Bane
    I'm currently designing a rough prototype of a mecha fighting game. These are the basics I came up with:
    - Multiplayer (matchmaking for up to 10 people, for now)
    - Browser based (HTML5)
    - 2D (<canvas>)
    - Persistent (as in, players have accounts and don't have to use a new mech each time they start a match)
    - Players earn money upon destroying another mech, which is used to buy parts (guns, armor, boosters, etc)
    - Simplicity (both of the game itself, and of the development of said game)
    - No "leveling" (as in, players don't get awarded with XP)
    The last part is bothering me. At first, I wanted to have players gain experience points (XP) when destroying other mechs, but gaining two things at once (money and XP) seemed to be in conflict with my last point, which is simplicity. If I were to have a leveling system, that would require additional development. But the biggest problem is that I simply couldn't fit it anywhere! Adding levels would require adding meaning to these levels, and most of the things that I hoped to achieve could already be achieved with the money mechanic I introduced. So I decided to drop leveling completely. That, in turn, removed a fairly popular and robust means of progression from my game (not that I would use it well anyway). Is there another way of progression in games, aside from leveling and XP points, that wouldn't be rendered redundant by my money mechanic, would be somehow meaningful (even on a symbolic level), and wouldn't be in conflict with my last point, which is simplicity?

    Read the article

  • How to implement a bird's eye view of a 2D grid map using Android

    - by IM_Adan
    I'm a true beginner with the Android platform and I'm having difficulties implementing a 2D grid system for a tower defense type game, where I can place towers on a specific tile and enemies will be able to traverse through tiles, etc. What I would like is a practical explanation of how I could tackle this – a step-by-step guide for dummies. These are what I believe to be the necessary steps; I might be wrong, but I hope someone can help me out:
    - Calculate the width and height of the view I'm working with.
    - Based on that, determine the number of tiles required and their dimensions (still not sure how I would do this).
    - Create each tile as a Rectangle object and draw these rectangles on a canvas.
    I would really be grateful if someone could steer me in the right direction on how to implement a 2D grid map using Android. I hope the answer to this question helps the TRUE beginners out there like me. I have looked at the following links below, yet I still feel that I don't truly understand what's going on. For XNA: 2D Grid based game - how should I draw grid lines? How to Create a Grid for a 2D Game? Also a quick note: all my previous game development has been in Java, mostly using Java SE and Swing. I also have a good understanding of the game development process; it is only Android that's confusing me :S
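    The arithmetic behind steps 1–3 is language-agnostic; the sketch below is in JavaScript purely for consistency with the other examples on this page, and the same divisions and floors apply in Android/Java once you have the view's width and height. All names and sizes are illustrative.

        function makeGrid(viewWidth, viewHeight, cols, rows) {
            var tileW = Math.floor(viewWidth / cols);   // pixel width of one tile
            var tileH = Math.floor(viewHeight / rows);  // pixel height of one tile
            return {
                tileW: tileW, tileH: tileH, cols: cols, rows: rows,
                // screen/touch coordinates -> tile indices (e.g. "which tile was tapped?")
                toTile: function (x, y) {
                    return { col: Math.floor(x / tileW), row: Math.floor(y / tileH) };
                },
                // tile indices -> the rectangle to draw for that tile
                toRect: function (col, row) {
                    return { left: col * tileW, top: row * tileH,
                             right: (col + 1) * tileW, bottom: (row + 1) * tileH };
                }
            };
        }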

    Read the article

  • Multiple render targets and gamma correctness in Direct3D9

    - by Mario
    Let's say in a deferred renderer when building your G-Buffer you're going to render texture color, normals, depth and whatever else to your multiple render targets at once. Now if you want to have a gamma-correct rendering pipeline and you use regular sRGB textures as well as rendertargets, you'll need to apply some conversions along the way, because your filtering, sampling and calculations should happen in linear space, not sRGB space. Of course, you could store linear color in your textures and rendertargets, but this might very well introduce bad precision and banding issues. Reading from sRGB textures is easy: just set SRGBTexture = true; in your texture sampler in your HLSL effect code and the hardware does the conversion sRGB-linear for you. Writing to an sRGB rendertarget is theoretically easy, too: just set SRGBWriteEnable = true; in your effect pass in HLSL and your linear colors will be converted to sRGB space automatically. But how does this work with multiple rendertargets? I only want to do these corrections to the color textures and rendertarget, not to the normals, depth, specularity or whatever else I'll be rendering to my G-Buffer. Ok, so I just don't apply SRGBTexture = true; to my non-color textures, but when using SRGBWriteEnable = true; I'll do a gamma correction to all the values I write out to my rendertargets, no matter what I actually store there. I found some info on gamma over at Microsoft: http://msdn.microsoft.com/en-us/library/windows/desktop/bb173460%28v=vs.85%29.aspx For hardware that supports Multiple Render Targets (Direct3D 9) or Multiple-element Textures (Direct3D 9), only the first render target or element is written. If I understand correctly, SRGBWriteEnable should only be applied to the first rendertarget, but according to my tests it doesn't and is used for all rendertargets instead. Now the only alternative seems to be to handle these corrections manually in my shader and only correct the actual color output, but I'm not totally sure, that this'll not have any negative impact on color correctness. E.g. if the GPU does any blending or filtering or multisampling after the Linear-sRGB conversion... Do I even need gamma correction in this case, if I'm just writing texture color without lighting to my rendertarget? As far as I know, I DO need it because of the texture filtering and mip sampling happening in sRGB space instead, if I don't correct for it. Anyway, it'd be interesting to hear other people's solutions or thoughts about this.

    Read the article

  • Debugging/Logging Techniques for End Users

    - by James Burgess
    I searched a bit, but didn't find anything particularly pertinent to my problem - so please do excuse me if I missed something! A few months back I inherited the source to a fairly-popular indie game project and have been working, along with another developer, on the code-base. We recently made our first release since taking over the development but we're a little stuck. A few users are experiencing slowdowns/lagging in the current version, as compared to the previous version, and we are not able to reproduce these issues in any of our various development environments (debug, release, different OSes, different machines, etc.). What I'd like to know is how can we go about implementing some form of logging/debugging mechanism into the game, that users can enable and send the reports to us for examination? We're not able to distribute debug binaries using the MSVS 2010 runtimes, due to the licensing - and wouldn't want to, for a variety of reasons. We'd really like to get to the bottom of this issue, even if just to find out it's nothing to do with our code base but everything to do with their system configuration. At the moment, we just have no leads - and the community isn't a very technically-savvy one, so we're unable to rely on 'expert' bug reports or investigations. I've seen the debug logging mechanism used in other applications and games for everything from logging simple errors to crash dumps. We're really at a loss at this stage as to how to address these issues, having been over every commit to the repository from the previous to the current version and not finding any real issues.

    Read the article

  • Is there a portal dedicated to HTML5 games?

    - by Bane
    Just to get something straight: by "portal", I mean a website that frequently publishes a certain type of games, has a blog, some articles, maybe some tutorials and so on. All of these things are not required (except the game publishing part, of course); for example, I consider Miniclip to be a flash game portal. The reason for defining this term is that I'm not sure if other people use it in this context. I recently (less than a year ago) got into HTML5 game development, nothing serious, just my own small projects that I didn't really show to a lot of people, and that certainly didn't end up somewhere on the web (although I am planning to make a website for my next game). I am interested in the existence of an online portal where indie devs (or non-indie ones, doesn't really matter that much) can publish their own games, sort of like "by devs for devs", also a place where you can find some simple tutorials on basic HTML5 game development and so on... I doubt something like this exists, for several reasons:
    - You can't really commercialize an HTML5 game without a strong server-side and microtransactions
    - The code can be easily copied
    - HTML5 is simply new, and things need time to get their own portals somewhere...
    If a thing like this does not exist, I think I might get into making one some day...

    Read the article

  • Does use of simple shaders improve performance/battery life?

    - by Miro
    I'm making an OpenGL game for Android. Till now I've used only the fixed-function pipeline, but I'm rendering simple things. The fixed-function pipeline includes a lot of stuff I don't need, so I'm thinking about implementing shaders in my game to simplify the OpenGL pipeline, if that can give better performance. Better performance = better battery life, unless fps is limited by a software cap rather than hardware power.

    Read the article

  • Porting Ruby/NCurses Rogue-Like to .NET and FlatRedBall

    - by ashes999
    I created an awesome rogue-like game in Ruby. For the GUI, I used NCurses. Since I'm using FlatRedBall as my engine of choice for Silverlight game development, I want to port this game over. What is the best way of efficiently doing this, and what are the pitfalls I should expect? For example, Ruby is object-oriented, like C#, and I should be able to just convert (rewrite) classes one by one. However, I will run into issues like: NCurses API. I need to possibly create my own notion of a "Window", or else rewrite the GUI code. It's one class, but it's BIG. Mix-ins. These are essentially aspect-oriented development. There are a couple of solutions in .NET, like dynamic classes. What else? Also, I should mention that I want to create a C# application out of this. As much as possible, I'll dump reusable and helper code, algorithms, etc. into separate projects and generate reusable DLLs.

    Read the article

  • 2D/Isometric map algorithm

    - by Icarus Cocksson
    First of all, I don't have much experience in game development, but I do have experience in development. I do know how to make a map, but I don't know if my solution is a normal one or a hacky one. I don't want to waste my time coding things and then realise they're utterly crap and lose my motivation. Let's imagine the following map. (2D - top view - A square) X: 0 to 500 Y: 0 to 500 My character currently stands at X:250, Y:400, somewhere near the center, 100px above the bottom, and I can control him with my keyboard buttons. The LEFT button does X--, the UP button does Y--, etc. This one is kid's play. I'm asking this because I know there are some engines that automate this task. For example, games like Diablo 3 use an engine. You can pretty much drag and drop a rock onto the map, and it is automatically placed there – making the player unable to pass through (the collision is detected). But what exactly does the engine do in the background? Does it generate a map like mine, place a rock at the center, and check it like this?
    unmovableObjects = array('50,50'); // we placed a rock at the 50,50 location
    if(Map.hasUnmovableObject(CurrentPlayerX, CurrentPlayerY)) {
        // unable to move
    } else {
        // able to move
    }
    My question is: is this how 2D/isometric maps are generated, or is there a different and more complex logic behind them?
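    A very common variant of the approach sketched above is to store the map as a two-dimensional array of tile flags and convert pixel coordinates to tile indices before checking, rather than keeping a list of blocked coordinates. A minimal illustrative sketch (the tile size, map dimensions and names here are made up, not taken from any engine):

        var TILE = 50;                         // pixel size of one tile
        var blocked = [];                      // blocked[row][col] === true means "can't walk here"
        for (var r = 0; r < 10; r++) blocked.push(new Array(10).fill(false));
        blocked[1][1] = true;                  // a rock occupying the tile at col 1, row 1

        function canMoveTo(x, y) {             // x, y in pixels
            var col = Math.floor(x / TILE);
            var row = Math.floor(y / TILE);
            if (row < 0 || row >= blocked.length ||
                col < 0 || col >= blocked[0].length) return false;   // off the map
            return !blocked[row][col];
        }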

    Read the article

  • Is there a maximum delay a UDP packet can have?

    - by Jens Nolte
    I am currently implementing a real-time network protocol for a multiplayer game using UDP. I am not having any technical difficulties, but as I always have to care about late UDP packets, I am wondering just how late they can arrive. I have researched the topic and have not found any mention of it, so I assume there is no technical limitation, but I wonder if common network/internet architecture (or hardware) imposes an effective limit on how late a UDP packet can be delivered.
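    Whatever the practical upper bound turns out to be, real-time protocols usually avoid relying on one: each packet carries a sequence number (or timestamp), and anything that arrives older than the newest state already applied is simply discarded. A minimal sketch of that idea, with illustrative names only:

        var newestSeq = -1;                      // highest sequence number applied so far

        function onPacket(packet) {              // packet = { seq: <number>, state: <game state> }
            if (packet.seq <= newestSeq) {
                return;                          // late duplicate or out-of-date packet: drop it
            }
            newestSeq = packet.seq;
            applyState(packet.state);            // hypothetical game-side handler
        }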

    Read the article

  • Jump handling and gravity

    - by sprawl
    I'm new to game development and am looking for some help on improving my jump handling for a simple side scrolling game I've made. I would like to make the jump last longer if the key is held down for the full length of the jump; otherwise, if the key is tapped, the jump should not be as long. Currently, how I'm handling the jumping is the following:
    Player.prototype.jump = function () {
        // Player pressed jump key
        if (this.isJumping === true) {
            // Set sprite to jump state
            this.settings.slice = 250;
            if (this.isFalling === true) {
                // Player let go of jump key, increase rate of fall
                this.settings.y -= this.velocity;
                this.velocity -= this.settings.gravity * 2;
            } else {
                // Player is holding down jump key
                this.settings.y -= this.velocity;
                this.velocity -= this.settings.gravity;
            }
        }
        if (this.settings.y >= 240) {
            // Player is on the ground
            this.isJumping = false;
            this.isFalling = false;
            this.velocity = this.settings.maxVelocity;
            this.settings.y = 240;
        }
    }
    I'm setting isJumping on keydown and isFalling on keyup. While it works okay for simple use, I'm looking for a better way to handle jumping and gravity. It's a bit buggy on keyup if the gravity is increased (which is why I had to put the last y assignment in the last if condition), so I'd like to know a better way to do it. Where are some resources I could look at that would help me better understand how to handle jumping and gravity? What's a better approach to handling this? Like I said, I'm new to game development so I could be doing it completely wrong. Any help would be appreciated.
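    One common alternative, sketched below, is to drive the jump purely from a vertical velocity: pressing the key sets an initial upward velocity, gravity is applied every frame regardless of input, and releasing the key early simply damps whatever upward velocity is left, which shortens the jump without needing a separate "falling" flag. All names and numbers here are illustrative, not taken from the post.

        // Illustrative convention: y grows downwards, the ground is at y = 240.
        var player = {
            y: 240, velocity: 0, onGround: true,
            jumpVelocity: -12,   // initial upward speed when the key is pressed
            gravity: 0.6,        // added to velocity every frame
            groundY: 240
        };

        function jumpKeyDown() {
            if (player.onGround) {
                player.velocity = player.jumpVelocity;
                player.onGround = false;
            }
        }

        function jumpKeyUp() {
            // Still on the way up? Damp the velocity so the jump ends early.
            if (player.velocity < player.jumpVelocity * 0.5) {
                player.velocity = player.jumpVelocity * 0.5;
            }
        }

        function step() {
            player.velocity += player.gravity;   // gravity always applies
            player.y += player.velocity;
            if (player.y >= player.groundY) {    // landed
                player.y = player.groundY;
                player.velocity = 0;
                player.onGround = true;
            }
        }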

    Read the article

  • Which software to keep track of my project?

    - by Exa
    I'm about to start the first real phase of my game development, which will consist of the acquisition of information and resources and the definition of where I want to go and what I will need for that. I just want to make sure that I'm prepared as well as possible before I actually start development. I don't like the thought of using Microsoft Word or Excel for my project management... I already worked with MS Project but I don't think it fits my needs. I need software where I can easily maintain project steps, milestones, important issues, information about technologies and engines I use, as well as simple notes and thoughts I just want to write down. I usually prefer a whiteboard for stuff like that, but unfortunately it's not a persistent way of storing things. ;) Also writing it down the old-school way is something I can think of, but only for quick notes... Which software do you use for that? Are there commonly used programs? Is there any free software at all?

    Read the article

  • Is there an alternative to SDL 1.3 for a C++ game that should run on iOS and Android?

    - by futlib
    I've used SDL for many desktop games, always as the cross-platform glue for: Creating a window Processing input Rendering images Rendering fonts Playing sounds/music It has never disappointed me at those tasks. But when it comes to graphics, I prefer to work with the OpenGL API directly, even though all of our games are 2D. In the project I'm currently working on, I've made sure to only use the API subset supported by both OpenGL 1.3 and OpenGL 1.0, so making the thing run on Android should be easy, I thought. Turns out there is no official Android or iOS port of SDL yet. However, there's one in SDL 1.3, which is still in development. SDL 1.3 doesn't seem very appealing to me for three reasons: It's been in development for at least 4 years, and I have no idea when it will be done, not to mention stable. It's not ported to as many platforms as SDL 1.2. From what I've seen, it uses OpenGL for drawing, so I suppose the community will move away from directly using OpenGL. So I'm wondering if I should use a different library for our current project - it doesn't matter much if I need to port my existing code from SDL 1.2 to SDL 1.3 or to some other library. We're planning to release on Windows, Mac OS X, Linux, iOS and Android, so good support for these platforms is essential. Is there anything stable that does what I want?

    Read the article

  • Clutter for game GUI

    - by tjameson
    I'm pretty new to game development, having only written a simple 3D game for a class project, but I'd like to get started on a bigger project. I'm writing an MMORPG to run in both the browser (WebGL) and natively (OpenGL ES 2). In choosing a GUI toolkit, I'm trying to find a style that would work natively and would be simple to emulate in WebGL. I am considering using D or Go for writing my game, so interfacing with C++ libraries will be difficult, if not impossible. Of course, the language isn't the end goal here, so if using C++ will save considerable time, I'll bite the bullet and use that. In order to reduce the amount of code I'll have to write for the browser, I'm considering using something simple like Clutter for basic abstractions, which I think will be pretty easy to emulate (layered canvases maybe?). Does anyone have experience using Clutter for a 3D game? Note: I haven't used any game development libraries, and I only have limited experience with GUI libraries. I do have HTML+CSS experience, so maybe librocket is a viable solution?

    Read the article

  • OpenXDK Questions

    - by iamcreasy
    I was strolling around XBox development. Apart from buying a DevKit from Microsoft, another thing that got my attention is called OpenXDK, which stands for Open XBox Development Kit. From their main site it's pretty obvious that there hasn't been any update since 2005, but digging a little deeper, I found that their project repository was being updated. The last timestamp was 2009-02-15. A quick Google search suggested it's not actually in a very good state to poke around in. Many, MANY features are absent. Being a hobby project, I perfectly understand. But those results are quite old. The question is: is there anybody who has any experience with OpenXDK? If so, is it possible to shed some light on this? What about its limitations? Is this a mature project? How's the latest version and what's it capable of doing? Or should I just stay away from it?

    Read the article

  • What helped YOU learn C++? [on hold]

    - by Tips48
    So here's my attempt to not get this question closed for being too subjective :P I'm a young programmer, specifically interested in game development. I've written my first couple of games in Java, which I would consider myself intermediate-to-advanced in. As I start to prepare myself for college and (hopefully) internships, I've noticed that learning C/C++ is essential to the industry. I've decided to start with C++, and so I read a couple of books that I saw were suggested. Anyway, now I have a decent understanding of the basics, but I really want to enhance my language knowledge. Instead of just asking for things to do, I was wondering: what were some exercises that you did that really helped you understand the language? Preferably they would be near the beginner level. I understand that they obviously won't be directly related to game development, but it'd be nice if there were some things that I could transfer over eventually. (Specifically, I struggle with memory (pointers, etc.) since there is no such concept in Java.) Thanks! - Tips P.S.: Here's to hoping this isn't too subjective :P

    Read the article

  • Getting current time in milliseconds

    - by user90293423
    How do I get the current time in milliseconds? I'm working on a hacking simulation game, and whenever someone connects to another computer/NPC, a login screen pops up with a button on the side called BruteForce. When BruteForce is clicked, what I want the program to do is calculate how many seconds cracking the password is going to take based on the player's CPU speed, but that's the easy part. The hard part is that I want to enter a character in the password box every X milliseconds, based on a TimeToCrack divided by PasswordLength formula. But since I don't know how to find how many milliseconds have elapsed within the current second, the program waits until the CurrentTime is higher than TimeBeforeTheLoopStarted + HowLongItTakesToTypeaCharacter, which is always going to be a second. How would you handle my problems? I've commented the game-breaking part.
    std::vector<QString> hardware = user.getHardware();
    QString CPU = hardware[0];
    unsigned short Speed = 0;
    if(CPU == "OMG SingleCore 1.8GHZ"){
        Speed = 2;
    }
    const short passwordLength = password.length(); /* It's equal to 16 */
    int Time = passwordLength / Speed;
    double TypeSpeed = Time / passwordLength;
    time_t t = time(0);
    struct tm * now = localtime(&t);
    unsigned short EndTime = (now->tm_sec + Time) % 60;
    unsigned short CurrentTime = 0;
    short i = passwordLength - 1;
    do{
        t = time(0);
        now = localtime(&t);
        CurrentTime = now->tm_sec;
        do{
            t = time(0);
            now = localtime(&t);
        }while(now->tm_sec < CurrentTime + TypeSpeed); /* Highly flawed */ /* Do this while your integer value is under this double value */
        QString tempPass = password;
        tempPass.chop(i);
        ui->lineEdit_2->setText(tempPass);
        i--;
    }while(CurrentTime != EndTime);

    Read the article

  • How to change cpufreq settings in Kubuntu

    - by Mr Woody
    I have been using Kubuntu, and I would like to change the cpufreq settings. My understanding is that there is no applet for that, and I would have to do it with a script. So I run a command like this: sudo cpufreq-set -g userspace -c 0 -d 800Mhz -u 1200Mhz and when I type cpufreq-info, I get cpufrequtils 007: cpufreq-info (C) Dominik Brodowski 2004-2009 Report errors and bugs to [email protected], please. analyzing CPU 0: driver: acpi-cpufreq CPUs which run at the same hardware frequency: 0 1 CPUs which need to have their frequency coordinated by software: 0 maximum transition latency: 10.0 us. hardware limits: 800 MHz - 2.50 GHz available frequency steps: 2.50 GHz, 2.50 GHz, 2.00 GHz, 1.60 GHz, 1.20 GHz, 800 MHz available cpufreq governors: conservative, ondemand, userspace, powersave, performance current policy: frequency should be within 800 MHz and 1.20 GHz. The governor "userspace" may decide which speed to use within this range. current CPU frequency is 1.20 GHz. cpufreq stats: 2.50 GHz:70.06%, 2.50 GHz:0.97%, 2.00 GHz:4.85%, 1.60 GHz:0.35%, 1.20 GHz:2.89%, 800 MHz:20.88% (193873) analyzing CPU 1: driver: acpi-cpufreq CPUs which run at the same hardware frequency: 0 1 CPUs which need to have their frequency coordinated by software: 1 maximum transition latency: 10.0 us. hardware limits: 800 MHz - 2.50 GHz available frequency steps: 2.50 GHz, 2.50 GHz, 2.00 GHz, 1.60 GHz, 1.20 GHz, 800 MHz available cpufreq governors: conservative, ondemand, userspace, powersave, performance current policy: frequency should be within 2.00 GHz and 2.00 GHz. The governor "performance" may decide which speed to use within this range. current CPU frequency is 2.00 GHz. cpufreq stats: 2.50 GHz:83.43%, 2.50 GHz:1.03%, 2.00 GHz:4.28%, 1.60 GHz:0.01%, 1.20 GHz:1.74%, 800 MHz:9.50% (3208) which shows that everything worked well (on cpu 0). The problem is that if I run cpufreq-info again after few minutes I get cpufrequtils 007: cpufreq-info (C) Dominik Brodowski 2004-2009 Report errors and bugs to [email protected], please. analyzing CPU 0: driver: acpi-cpufreq CPUs which run at the same hardware frequency: 0 1 CPUs which need to have their frequency coordinated by software: 0 maximum transition latency: 10.0 us. hardware limits: 800 MHz - 2.50 GHz available frequency steps: 2.50 GHz, 2.50 GHz, 2.00 GHz, 1.60 GHz, 1.20 GHz, 800 MHz available cpufreq governors: conservative, ondemand, userspace, powersave, performance current policy: frequency should be within 800 MHz and 800 MHz. The governor "performance" may decide which speed to use within this range. current CPU frequency is 800 MHz. cpufreq stats: 2.50 GHz:69.73%, 2.50 GHz:0.97%, 2.00 GHz:4.83%, 1.60 GHz:0.35%, 1.20 GHz:2.92%, 800 MHz:21.20% (193880) analyzing CPU 1: driver: acpi-cpufreq CPUs which run at the same hardware frequency: 0 1 CPUs which need to have their frequency coordinated by software: 1 maximum transition latency: 10.0 us. hardware limits: 800 MHz - 2.50 GHz available frequency steps: 2.50 GHz, 2.50 GHz, 2.00 GHz, 1.60 GHz, 1.20 GHz, 800 MHz available cpufreq governors: conservative, ondemand, userspace, powersave, performance current policy: frequency should be within 800 MHz and 800 MHz. The governor "performance" may decide which speed to use within this range. current CPU frequency is 800 MHz. cpufreq stats: 2.50 GHz:82.94%, 2.50 GHz:1.03%, 2.00 GHz:4.33%, 1.60 GHz:0.01%, 1.20 GHz:1.73%, 800 MHz:9.96% (3215) so it looks like some other process changed the settings. Does anyone know how to fix this? 
I also tried many different settings, but I get similar behavior.

    Read the article

  • Writing an OS for Motorola 68K processor. Can I emulate it? And can I test-drive OS development?

    - by ulver
    Next term, I'll need to write a basic operating system for Motorola 68K processor as part of a course lab material. Is there a Linux emulator of a basic hardware setup with that processor? So my partners and I can debug quicker on our computers instead of physically restarting the board and stuff. Is it possible to apply test-driven development technique to OS development? Code will be mostly assembly and C. What will be the main difficulties with trying to test-drive this? Any advice on how to do it?

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices – STAGE 4: AUTOMATED DEPLOYMENT
    If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear. Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration, making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users” There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and hey presto, I’m ready! If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way.
    It’s also worth pointing out that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we’ll look at the following areas for your checklist:
    - You and Your Team
    - Environments
    - The Deployment Process
    - Rollback and Recovery
    - Development Practices
    You and Your Team
    It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:
    - 2008 to present: overall development costs reduced by 40%
    - Number of programs under development increased by 140%
    - Development costs per program down 78%
    - Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)
    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well.
    There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress. Actions:
    - Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans,
    - Get your boss involved too and make sure he/she is bought in to the investment.
    Environments
    Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are:
    - What we want: Each developer with their own dedicated database environment. What we’ve got: A single shared “development” environment, used by everyone at once.
    - What we want: An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine. What we’ve got: In fact, if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question…
    - What we want: A separate QA environment used explicitly for manual testing prior to release. What we’ve got: “We just test on the dev environments, or maybe pre-production.”
    - What we want: A proper pre-production (or “staging”) box that matches production as closely as possible. What we’ve got: Hopefully a pre-production box of some sort. But does it match production closely!?
    - What we want: A production environment reproducible from source control. What we’ve got: A production box which has drifted significantly from anything in source control.
    The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application.
There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:

- Dedicated development databases,
- An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action],
- QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing,
- Pre-production. The environment you use to test the production release process,
- Production.

* A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

Actions:

- Get your environments set up and ready,
- Set access permissions appropriately,
- Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development).

The Deployment Process

As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”

1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it in to pre-prod,
2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing) and pre-production. This generates a script,
3. A user (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar,
4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped),
5. If all working, run the script on production.*

* This assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions.

There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script to confirm that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options.

At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend. At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. A rough sketch of what such a gated step could look like is shown below.
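Purely as an illustration, a gated step of that kind might be scripted along the following lines. This is a minimal sketch under stated assumptions, not a prescribed recipe: compare_schema.sh, generate_upgrade.sh and run_script.sh are hypothetical placeholders for whichever schema-compare and deployment tooling you actually use.

#!/bin/bash
# Sketch of a semi-automated, gated deployment step.
# The three helper scripts below are hypothetical placeholders.
set -e

# 1. Fail fast if pre-production has drifted away from production
if ! ./compare_schema.sh --source PREPROD --target PROD --quiet; then
    echo "Pre-production no longer matches production - aborting." >&2
    exit 1
fi

# 2. Generate the upgrade script from what is in source control
./generate_upgrade.sh --from PREPROD --to-latest --out upgrade.sql

# 3. Manual gate: the DBA reviews upgrade.sql before anything goes further
read -r -p "upgrade.sql is ready for review. Deploy to production? (yes/no) " answer
if [ "$answer" != "yes" ]; then
    echo "Deployment cancelled by reviewer."
    exit 1
fi

# 4. Apply the reviewed script to pre-production first, then to production
./run_script.sh --target PREPROD upgrade.sql
./run_script.sh --target PROD upgrade.sql

The important part is the shape – an automated drift check, one reviewed artifact, and a single human approval – rather than the particular commands.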
And anything in between of course – and other variations. But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?”

NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

Actions:

- Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?”
- Repeat for earlier environments (QA and so on).

Rollback and Recovery

If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move in to a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for.

NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem.

There are various options, which we’ll explore in subsequent articles, things like:

- Immediately restore from backup (a rough sketch of this follows at the end of this section),
- Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!),
- Have fallback environments – for example, using a blue-green deployment pattern.

Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups is the loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.

Actions:

- Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process.
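For the first of those options, a pre-deployment backup with an automatic restore on failure might look roughly like the sketch below, assuming a SQL Server database driven via sqlcmd. The server name, database name and file path are invented for illustration, and the interim-data caveat mentioned above still applies.

#!/bin/bash
# Sketch: back up immediately before deploying, restore if the upgrade fails.
# Server name, database name and backup path are made-up examples.
set -e

BACKUP=/var/backups/MyApp_predeploy.bak

# Take a backup right before the deployment
sqlcmd -S prod-sql01 -Q "BACKUP DATABASE MyApp TO DISK = '$BACKUP' WITH INIT"

# Run the reviewed upgrade script; -b makes sqlcmd exit with an error code on failure
if ! sqlcmd -S prod-sql01 -d MyApp -i upgrade.sql -b; then
    echo "Deployment failed - restoring the pre-deployment backup." >&2
    sqlcmd -S prod-sql01 -Q "ALTER DATABASE MyApp SET SINGLE_USER WITH ROLLBACK IMMEDIATE"
    sqlcmd -S prod-sql01 -Q "RESTORE DATABASE MyApp FROM DISK = '$BACKUP' WITH REPLACE"
    sqlcmd -S prod-sql01 -Q "ALTER DATABASE MyApp SET MULTI_USER"
    exit 1
fi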
Development Practices

This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application. So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier.

A good example is the pattern “Branch by abstraction”. Explained nicely here, by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace

As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily).

There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read, if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515

But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start.

Actions:

- For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes).
- Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so it’s good to introduce these ideas and the rationale behind them early.

Summary

The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning out your pipeline, environments, patterns and practices for development”, though there will be more detail, depending on where you’re coming from – and where you want to get to.

This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.


  • Using a portable USB monitor in Ubuntu 13.04 (AOC e1649Fwu - DisplayLink)

Having access to a little bit of IT hardware extravaganza isn't that easy here in Mauritius for exactly two reasons - either it is simply not available, or it is expensive like nowhere else. Well, by chance I came across an advert by a local hardware supplier and their offer of the week caught my attention - a portable USB monitor. Sounds cool, and the specs are okay as well. It's completely driven via USB 2.0, it is lightweight, the dimensions would fit into my laptop bag and the resolution of 1366 x 768 pixels is okay for a second screen.

Long story, short ending: I called them and only got to understand that they are out of stock - how convenient! Well, as usual I left some contact details and got the regular 'We call you back' answer. Surprisingly, I didn't receive a phone call as promised and after starting to complain via social media networks they finally came back to me with new units available - and *drum-roll* still the same price tag as promoted (and free delivery on top, as one of their employees lives in Flic en Flac). Guess it was a no-brainer to get at least one unit to fool around with. In the worst case it might end up as an image frame on the shelf or so...

The usual suspects... Ubuntu first!

Of course, the packaging mentions only Windows or Mac OS as supported operating systems and, without any hesitation, I hooked up the device on my main machine running Ubuntu 13.04. Result: blackout... Hm, not exactly the situation I was looking for, but okay, it can't be too difficult to get this piece of hardware up and running. Following the output of syslogd (or dmesg if you prefer), the device has been recognised successfully but we got stuck in the initialisation phase.

Oct 12 08:17:23 iospc2 kernel: [69818.689137] usb 2-4: new high-speed USB device number 5 using ehci-pci
Oct 12 08:17:23 iospc2 kernel: [69818.800306] usb 2-4: device descriptor read/64, error -32
Oct 12 08:17:24 iospc2 kernel: [69819.043620] usb 2-4: New USB device found, idVendor=17e9, idProduct=4107
Oct 12 08:17:24 iospc2 kernel: [69819.043630] usb 2-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
Oct 12 08:17:24 iospc2 kernel: [69819.043636] usb 2-4: Product: e1649Fwu
Oct 12 08:17:24 iospc2 kernel: [69819.043642] usb 2-4: Manufacturer: DisplayLink
Oct 12 08:17:24 iospc2 kernel: [69819.043647] usb 2-4: SerialNumber: FJBD7HA000778
Oct 12 08:17:24 iospc2 kernel: [69819.046073] hid-generic 0003:17E9:4107.0008: hiddev0,hidraw5: USB HID v1.10 Device [DisplayLink e1649Fwu] on usb-0000:00:1d.7-4/input1
Oct 12 08:17:24 iospc2 mtp-probe: checking bus 2, device 5: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-4"
Oct 12 08:17:24 iospc2 mtp-probe: bus: 2, device: 5 was not an MTP device
Oct 12 08:17:30 iospc2 kernel: [69825.411220] [drm] vendor descriptor length:17 data:17 5f 01 00 15 05 00 01 03 00 04
Oct 12 08:17:30 iospc2 kernel: [69825.498778] udl 2-4:1.0: fb1: udldrmfb frame buffer device
Oct 12 08:17:30 iospc2 kernel: [69825.498786] [drm] Initialized udl 0.0.1 20120220 on minor 1
Oct 12 08:17:30 iospc2 kernel: [69825.498909] usbcore: registered new interface driver udl

The device has been recognised as a USB device without any question and it is listed properly:

# lsusb
...
Bus 002 Device 005: ID 17e9:4107 DisplayLink 
...

A quick and dirty research on the net gave me some hints towards the udlfb framebuffer driver for USB DisplayLink devices. By default this kernel module is blacklisted

$ less /etc/modprobe.d/blacklist-framebuffer.conf | grep udl
#blacklist udl
blacklist udlfb

and it is recommended to load it manually.
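For reference, the manual swap can be tried with nothing more than modprobe - a sketch only, using the module names reported in the log above:

# Unload the udl DRM driver that claimed the device, then load udlfb instead
$ sudo modprobe -r udl
$ sudo modprobe udlfb
# Check whether udlfb picks the monitor up
$ dmesg | tail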
So, unloading the whole udl stack and giving udlfb a shot:

Oct 12 08:22:31 iospc2 kernel: [70126.642809] usbcore: registered new interface driver udlfb

But still no reaction on the external display, which supposedly should have been on and green.

Display okay? Test run on Windows

Just to be on the safe side and to exclude any hardware-related defects - you never know what happened during delivery - I moved the display to a new position on the opposite side of my laptop, installed the display drivers first in Windows Vista (I know, I know...) as recommended in the manual, and then finally hooked it up on that machine. Tada! The display has been recognised correctly and I have a proper choice between cloning and extending my desktop.

Testing whether the display is working properly - using Windows Vista

Okay, good to know that there is nothing wrong on the hardware side, just software...

Back to Ubuntu - Kernel too old

Some more research on Google and various hits recommend that the original DisplayLink driver has been merged into recent kernel development and one should manually upgrade the kernel image (and both header) packages for Ubuntu. At least kernel 3.9 or higher would be necessary, so I went out to this URL: http://kernel.ubuntu.com/~kernel-ppa/mainline/ and downloaded all the good stuff from the v3.9-raring directory. The installation itself is easy going via dpkg:

$ sudo dpkg -i linux-image-3.9.0-030900-generic_3.9.0-030900.201304291257_amd64.deb
$ sudo dpkg -i linux-headers-3.9.0-030900_3.9.0-030900.201304291257_all.deb
$ sudo dpkg -i linux-headers-3.9.0-030900-generic_3.9.0-030900.201304291257_amd64.deb

As with any kernel upgrade it is necessary to restart the system in order to use the new one. Said and done:

$ uname -r
3.9.0-030900-generic

And now connecting the external display gives me the following output in /var/log/syslog:

Oct 12 17:51:36 iospc2 kernel: [ 2314.984293] usb 2-4: new high-speed USB device number 6 using ehci-pci
Oct 12 17:51:36 iospc2 kernel: [ 2315.096257] usb 2-4: device descriptor read/64, error -32
Oct 12 17:51:36 iospc2 kernel: [ 2315.337105] usb 2-4: New USB device found, idVendor=17e9, idProduct=4107
Oct 12 17:51:36 iospc2 kernel: [ 2315.337115] usb 2-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
Oct 12 17:51:36 iospc2 kernel: [ 2315.337122] usb 2-4: Product: e1649Fwu
Oct 12 17:51:36 iospc2 kernel: [ 2315.337127] usb 2-4: Manufacturer: DisplayLink
Oct 12 17:51:36 iospc2 kernel: [ 2315.337132] usb 2-4: SerialNumber: FJBD7HA000778
Oct 12 17:51:36 iospc2 kernel: [ 2315.338292] udlfb: DisplayLink e1649Fwu - serial #FJBD7HA000778
Oct 12 17:51:36 iospc2 kernel: [ 2315.338299] udlfb: vid_17e9&pid_4107&rev_0129 driver's dlfb_data struct at ffff880117e59000
Oct 12 17:51:36 iospc2 kernel: [ 2315.338303] udlfb: console enable=1
Oct 12 17:51:36 iospc2 kernel: [ 2315.338306] udlfb: fb_defio enable=1
Oct 12 17:51:36 iospc2 kernel: [ 2315.338309] udlfb: shadow enable=1
Oct 12 17:51:36 iospc2 kernel: [ 2315.338468] udlfb: vendor descriptor length:17 data:17 5f 01 00 15 05 00 01 03 00 04
Oct 12 17:51:36 iospc2 kernel: [ 2315.338473] udlfb: DL chip limited to 1500000 pixel modes
Oct 12 17:51:36 iospc2 kernel: [ 2315.338565] udlfb: allocated 4 65024 byte urbs
Oct 12 17:51:36 iospc2 kernel: [ 2315.343592] hid-generic 0003:17E9:4107.0009: hiddev0,hidraw5: USB HID v1.10 Device [DisplayLink e1649Fwu] on usb-0000:00:1d.7-4/input1
Oct 12 17:51:36 iospc2 mtp-probe: checking bus 2, device 6: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-4"
Oct 12 17:51:36 iospc2 mtp-probe: bus: 2, device: 6 was not an MTP device
Oct 12 17:51:36 iospc2 kernel: [ 2315.426583] udlfb: 1366x768 @ 59 Hz valid mode
Oct 12 17:51:36 iospc2 kernel: [ 2315.426589] udlfb: Reallocating framebuffer. Addresses will change!
Oct 12 17:51:36 iospc2 kernel: [ 2315.428338] udlfb: 1366x768 @ 59 Hz valid mode
Oct 12 17:51:36 iospc2 kernel: [ 2315.428343] udlfb: set_par mode 1366x768
Oct 12 17:51:36 iospc2 kernel: [ 2315.430620] udlfb: DisplayLink USB device /dev/fb1 attached. 1366x768 resolution. Using 4104K framebuffer memory

Okay, that looks more promising but still only blackout on the external screen... And yes, due to my previous modifications I had swapped the blacklisted kernel modules:

$ less /etc/modprobe.d/blacklist-framebuffer.conf | grep udl
blacklist udl
#blacklist udlfb

Silly me! Okay, back to the original situation in which udl is allowed and udlfb blacklisted. Now the logging looks similar to this, and the screen shows those maroon-brown and azure-blue horizontal bars as described on other online resources.

Oct 15 21:27:23 iospc2 kernel: [80934.308238] usb 2-4: new high-speed USB device number 5 using ehci-pci
Oct 15 21:27:23 iospc2 kernel: [80934.420244] usb 2-4: device descriptor read/64, error -32
Oct 15 21:27:24 iospc2 kernel: [80934.660822] usb 2-4: New USB device found, idVendor=17e9, idProduct=4107
Oct 15 21:27:24 iospc2 kernel: [80934.660832] usb 2-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
Oct 15 21:27:24 iospc2 kernel: [80934.660838] usb 2-4: Product: e1649Fwu
Oct 15 21:27:24 iospc2 kernel: [80934.660844] usb 2-4: Manufacturer: DisplayLink
Oct 15 21:27:24 iospc2 kernel: [80934.660850] usb 2-4: SerialNumber: FJBD7HA000778
Oct 15 21:27:24 iospc2 kernel: [80934.663391] hid-generic 0003:17E9:4107.0008: hiddev0,hidraw5: USB HID v1.10 Device [DisplayLink e1649Fwu] on usb-0000:00:1d.7-4/input1
Oct 15 21:27:24 iospc2 mtp-probe: checking bus 2, device 5: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-4"
Oct 15 21:27:24 iospc2 mtp-probe: bus: 2, device: 5 was not an MTP device
Oct 15 21:27:25 iospc2 kernel: [80935.742407] [drm] vendor descriptor length:17 data:17 5f 01 00 15 05 00 01 03 00 04
Oct 15 21:27:25 iospc2 kernel: [80935.834403] udl 2-4:1.0: fb1: udldrmfb frame buffer device
Oct 15 21:27:25 iospc2 kernel: [80935.834416] [drm] Initialized udl 0.0.1 20120220 on minor 1
Oct 15 21:27:25 iospc2 kernel: [80935.836389] usbcore: registered new interface driver udl
Oct 15 21:27:25 iospc2 kernel: [80936.021458] [drm] write mode info 153

Next, it's time to enable the display for our needs... This can be done either via UI or console, just as you prefer it.

Adding the external USB display under Linux isn't an issue after all... Settings Manager => Display

Personally, I like the console. With the help of xrandr we get the screen identifier first

$ xrandr
Screen 0: minimum 320 x 200, current 3200 x 1080, maximum 32767 x 32767
LVDS1 connected 1280x800+0+0 (normal left inverted right x axis y axis) 331mm x 207mm
...
DVI-0 connected 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 193mm
   1366x768       60.0*+

and then give it the usual shot with auto-configuration. Let the system decide what's best for your hardware...

$ xrandr --output DVI-0 --off
$ xrandr --output DVI-0 --auto

And there we go... Cloned output of main display:

New kernel, new display...

The external USB display works out-of-the-box with a Linux kernel of 3.9 or higher. Despite a good number of resources claiming otherwise, it is absolutely not necessary to create a Device or Screen section in one of the Xorg.conf files. This information belongs to the past and is not valid on kernel 3.9 or higher.
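If you would rather extend the desktop than clone it, xrandr can place the USB panel next to the internal screen as well - a sketch using the output names reported above (LVDS1, DVI-0), which may differ on other machines:

# Put the USB monitor to the right of the internal panel instead of mirroring it
$ xrandr --output DVI-0 --mode 1366x768 --right-of LVDS1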
Same hardware but Windows 8

Of course, I wanted to know how the latest incarnation from Redmond would handle the new hardware... Flawless! The most interesting aspect here: I did not use the driver installation medium on purpose. And I was right... not too long afterwards a dialog with the EULA of DisplayLink appeared on the main screen, and after confirming it, it took some more seconds and the external USB monitor was ready to rumble. Well, and not only that one... but see for yourself. This time Windows 8 was the easiest solution after all.

Resume

I can highly recommend this type of hardware to anyone asking me. Although its screen measures 15.6", it is actually lighter than my Samsung Galaxy Tab 10.1 and it still fits into my laptop bag without any issues. From now on... no more single screen while developing software on the road!


  • Synergy - easy share of keyboard and mouse between multiple computers

Did you ever have the urge to share one set of keyboard and mouse between multiple machines? If so, please read on...

Using multiple machines

Honestly, as a software craftsman it is my daily business to run multiple machines - either physical or virtual - to be able to solve my customers' requirements. Recent hardware equipment allows this very easily. For laptops it's a no-brainer to attach a second or even a third screen in order to extend your native display. This works quite handily and in my case I used to attach two additional screens - one via the HD15 connector, the other via HDMI. But... as it's a laptop and therefore a mobile unit there are slight restrictions. Detaching and re-attaching all cables when changing locations is one of them, but there are hardware limitations, too. After all, it's a laptop and not a workstation.

I guess that anyone working in IT (or ICT) has more than one machine at their workplace or their home office, and at least I find it quite annoying to have multiple sets of keyboard and mouse conquering the remaining space on my desk. Despite the ugly looks of all those cables and the whole 'chaos of distraction', I prefer a cleaner solution and working environment. This allows me to actually focus on my work and tasks to do rather than to worry about choosing the right combination of keyboard/mouse.

My current workplace is a patchwork of various pieces of hardware (approx. 2-3 years old):

- DIY desktop on Ubuntu 12.04 64-bit, Core2 Duo (E7400, 2.8GHz), 4GB RAM, 2x 250GB HDD, nVidia GPU 512MB
- Dell Inspiron 1525 on Windows 8 64-bit, 4GB RAM, 200GB HDD
- HP Compaq 6720s on Windows Vista 32-bit, Core2 Duo (T5670, 1.8GHz), 2GB RAM, 160GB HDD
- Mac mini on Mac OS X 10.7, Core i5 (2.3 GHz), 2GB RAM, 500GB HDD

I know... Not the latest and greatest, but a decent combination to work with. New system(s) is/are already on the shopping list but I live in the 'wrong' country to buy computer hardware. So, the next trip abroad will provide me with some new stuff.

Using multiple operating systems

The list of hardware above already names different operating systems, and actually I have only one preference: Linux. But still my job as a software craftsman for Visual FoxPro and .NET development requires other OSes, too. Not a big deal, it's just like this. In addition to those physical machines, there are a bunch of virtual machines around, most of them running either Windows XP or Windows 7. For years I have had the practice that development for each customer is isolated in its own virtual machine and environment. This keeps it clean and version-safe.

But as you can easily imagine, with that setup there are a couple of constraints regarding keyboard and mouse. Usually, those systems require their own pieces of hardware attached. As stated, I don't like clutter on my desk's surface, so a cross-platform solution has to come in here. In the past, I tried various applications, hardware and network protocols like X11, RDP, NX, TeamViewer, RAdmin, a KVM switch, etc. but the problem in this case is that they either allow you to remotely connect to the other system or exclusively 'bind' your peripherals to the active system. Not optimal after all.

Synergy to the rescue

Quote from their website: "Synergy lets you easily share your mouse and keyboard between multiple computers on your desk, and it's Free and Open Source. Just move your mouse off the edge of one computer's screen on to another. You can even share all of your clipboards. All you need is a network connection. Synergy is cross-platform (works on Windows, Mac OS X and Linux)."
Yep, that's it! All I need for my setup here... Actually, I couldn't believe it myself that I didn't stumble over Synergy earlier, but 'get over it' and there we go. And besides being Open Source, it's also free of charge. Donations for the developers are very welcome, and recently they introduced Synergy Premium - a possibility to buy so-called premium votes that can be used to put more weight / importance on specific issues or bugs that you would like the developers to look into.

Installation and configuration

Simply download the installation packages for your systems of choice, run the installer and enter some minor information about your network setup. I chose my desktop machine for the role of the Synergy server and configured my screen setup as follows:

The screen setup currently allows you to build or connect up to 15 machines. The number of screens can be higher, as those machines might have multiple screens physically attached. Synergy takes this into the overall calculations and simply works as expected. I tried it for fun with a second monitor connected to each of the two laptops, giving a total of 6 active screens. No flaws after all - stunning!

All the other machines are configured as clients like so:

Side note: The screenshot was taken on Windows 8 and pasted via clipboard into Gimp running on Ubuntu.

Resume

Synergy is now definitely in my box of tools for my daily work, and amongst the first pieces of software I install after the operating system. It just simplifies my life and cleans my desk. Never again without Synergy! Now, I'm only waiting for an Android version to integrate my Galaxy Tab 10.1, too. ;-)

Please, check out that superb product and enjoy sharing one keyboard, one mouse and one clipboard between your various machines and operating systems.
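As a footnote to the configuration section above: the screen layout that the GUI builds can also be expressed as a plain-text configuration file and driven from the command line. The snippet below is only a sketch - the screen names are invented, and the exact options and file locations may differ between Synergy versions:

# synergy.conf - two screens, desktop on the left, laptop on the right
section: screens
    desktop-ubuntu:
    laptop-win8:
end

section: links
    desktop-ubuntu:
        right = laptop-win8
    laptop-win8:
        left = desktop-ubuntu
end

# On the server (the machine that owns keyboard and mouse):
$ synergys -c synergy.conf
# On each client, pointing at the server's hostname or IP:
$ synergyc desktop-ubuntu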

