Search Results

Search found 48159 results on 1927 pages for 'event based programming'.

Page 675/1927

  • Is there a way to disable Windows automatically choosing folder templates?

    - by Scott Leis
    Windows Vista (and I guess Win 7 though I haven't used it) sometimes automatically applies templates to folders opened in Explorer based on their content. E.g. a folder with photos automatically gets the columns "Date taken", "Tags", and "Rating". Is there a way to disable the automatic application of this feature while still allowing manual customisation? I really want to apply the "All Items" template to all folders on all drives, and have it stay that way except on a few folders that I manually customise. The reason I want to disable the automatic behaviour is that it's often just wrong. I have folders with over 100 files where Windows has automatically applied a template based on the types of one or two of those files, and the template is wrong for everything else in the same folder.

    Read the article

  • Open Source Software Development Center at University of Belgrade

    - by Tori Wieldt
    A new Open Source Software Development Center is open at University of Belgrade, Serbia. It centers around using Java & NetBeans as open source projects to learn from and contribute to. Assistant Professor Zoran Sevarac says that not only does the center allow him to teach software development using open source projects, but also "we are improving our University courses based on the experience we get from working on open source code."  Some of the projects underway are a NetBeans UML plugin; Neuroph (a Java neural network framework, with a NetBeans Platform-based UI); a NetBeans DOAP Plugin; WorkieTalkie (NetBeans chat plugin); and 2D and 3D visualization plugins for NetBeans. University of Belgrade also has an official university course about open source development, where students learn to use development tools, work in teams, participate in open source projects and learn from real world software development projects. Students, teachers, and researchers at the University of Belgrade, and any member of the open source community are welcome to come to learn software development from successful open source projects. For more information, you can contact Zoran Sevarac (@neuroph on Twitter).

    Read the article

  • Procedural terrains in 3D: what has been done? Are there common algorithms and/or theories about it?

    - by jokoon
    Besides programming, modeling an environment takes a great deal of time. I don't know how much work goes into, for example, a WoW dungeon level, or other beautiful city-like, futuristic, jungle or fantasy environments, but this kind of work is done from scratch by artists. What techniques are involved in the TorchLight level randomizer, and do other titles share similarities with it? Is there a family name for such techniques?
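
    One widely used family of techniques, for what it's worth, is fractal noise applied to a heightmap (Perlin/value noise summed over several octaves), often combined with stitching hand-authored chunks together at random, which is how action-RPG level randomizers are commonly described. The sketch below is a minimal value-noise heightmap in C++; it only illustrates the noise approach, it is not the actual TorchLight algorithm, and all names are made up for the example.

        #include <cmath>
        #include <cstdint>
        #include <vector>

        // Minimal value-noise heightmap: hash lattice points, interpolate between them,
        // then sum several octaves ("fractal Brownian motion"). Purely illustrative.
        static float hash2(int x, int y) {
            uint32_t h = static_cast<uint32_t>(x) * 374761393u + static_cast<uint32_t>(y) * 668265263u;
            h = (h ^ (h >> 13)) * 1274126177u;
            return (h & 0xFFFFFF) / static_cast<float>(0xFFFFFF);   // value in [0,1]
        }

        static float valueNoise(float x, float y) {
            int x0 = static_cast<int>(std::floor(x)), y0 = static_cast<int>(std::floor(y));
            float tx = x - x0, ty = y - y0;
            // Smoothstep the interpolation weights to avoid visible grid artefacts.
            tx = tx * tx * (3 - 2 * tx);
            ty = ty * ty * (3 - 2 * ty);
            float a = hash2(x0, y0),     b = hash2(x0 + 1, y0);
            float c = hash2(x0, y0 + 1), d = hash2(x0 + 1, y0 + 1);
            float top = a + (b - a) * tx, bottom = c + (d - c) * tx;
            return top + (bottom - top) * ty;
        }

        // Heights in [0,1]; feed the result to a mesh builder or a tile classifier.
        std::vector<float> makeHeightmap(int w, int h, int octaves = 4) {
            std::vector<float> height(w * h, 0.0f);
            for (int y = 0; y < h; ++y)
                for (int x = 0; x < w; ++x) {
                    float amplitude = 1.0f, frequency = 1.0f / 64.0f, sum = 0.0f, norm = 0.0f;
                    for (int o = 0; o < octaves; ++o) {
                        sum  += amplitude * valueNoise(x * frequency, y * frequency);
                        norm += amplitude;
                        amplitude *= 0.5f;   // each octave contributes half as much...
                        frequency *= 2.0f;   // ...at twice the level of detail
                    }
                    height[y * w + x] = sum / norm;
                }
            return height;
        }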

    Read the article

  • What should come first: testing or code review?

    - by Silver Light
    Hello! I'm quite new to programming design patterns and life cycles, and I was wondering: what should come first, code review or testing, given that they are done by separate people? On the one hand, why bother reviewing code if nobody has checked that it even works? On the other hand, some errors can be found early if you do the review before testing. Which approach is recommended, and why? Thank you!

    Read the article

  • Authorization design-pattern / practice?

    - by Lawtonfogle
    On one end, you have users. On the other end, you have activities. I was wondering if there is a best practice for relating the two. The simplest way I can think of is to have every activity require a role, and to assign every user every role they need. The problem is that this gets really messy in practice as soon as you go beyond a trivial system. An approach I recently designed was to have users who have roles, roles that have privileges, and activities that require some combination of privileges. For the trivial case this is more complex, but I think it will scale better. After I implemented it, though, I felt it was overkill for the system I had. Another option would be to have users who have roles, and activities that require a certain role to perform, with many activities sharing roles. A more complex variant of this would give activities many possible roles, of which you only need one. And an even more complex variant would be to allow logical statements of role ownership for using an activity (e.g. must have A and (B exclusive-or C) and must not have D). I could continue to list more, but I think this already gives the picture, and many of these have trade-offs. But in software design there are often solutions that, while perhaps not perfect in every possible case, are so clearly top of the pack that it isn't even considered a matter of opinion (e.g. how to store passwords: plain text is worst, hashing is better, hashing with a salt is better still, despite the increased complexity of each level; or: Smart UI designs for applications are bad, even if it is subjective what the best design is). So, is there a best practice for authorization design that is not purely opinion-based/subjective?
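
    For reference, here is a minimal C++ sketch of the users → roles → privileges variant described above; the structure and names are illustrative only, not taken from any particular framework.

        #include <set>
        #include <string>
        #include <unordered_map>

        // Users are granted roles, roles bundle privileges, and an activity lists the
        // privileges it requires. Checking access is then a set-containment test.
        struct AccessModel {
            std::unordered_map<std::string, std::set<std::string>> userRoles;       // user     -> roles
            std::unordered_map<std::string, std::set<std::string>> rolePrivileges;  // role     -> privileges
            std::unordered_map<std::string, std::set<std::string>> activityNeeds;   // activity -> required privileges

            bool canPerform(const std::string& user, const std::string& activity) const {
                // Collect every privilege the user holds through any of their roles.
                std::set<std::string> held;
                auto roles = userRoles.find(user);
                if (roles != userRoles.end())
                    for (const auto& role : roles->second) {
                        auto privs = rolePrivileges.find(role);
                        if (privs != rolePrivileges.end())
                            held.insert(privs->second.begin(), privs->second.end());
                    }
                // The activity is allowed only if all of its required privileges are held.
                auto needs = activityNeeds.find(activity);
                if (needs == activityNeeds.end()) return false;   // unknown activity: deny by default
                for (const auto& p : needs->second)
                    if (held.count(p) == 0) return false;
                return true;
            }
        };

    The logic-statement variant mentioned above (must have A and (B exclusive-or C) and must not have D) would replace the final loop with an expression tree evaluated against the held set.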

    Read the article

  • How to scale out OpenStreetMap data efficiently

    - by Pierre
    For over a year now, I've been running an in-house PostGIS server filled with OSM data, used for both Mapnik-based tile generation and Nominatim-based geocoding, updated with daily replicates. This works pretty well. However, as usage is growing exponentially, I would like to achieve better reliability and performance by adding additional PostgreSQL servers, and I'm kind of lost. Since PostgreSQL doesn't seem to handle replication by itself, I would think about using a piece of middleware like PgPool-II to keep the servers in sync. But I'm afraid it would be more than necessary for this usage: a very high read-to-write ratio, where all writes are done at the same exact time every day. My questions are simple: what would you do to keep these servers in sync? And what is done for this at the OpenStreetMap Foundation, MapQuest, Mapbox or CloudMade? Thanks.

    Read the article

  • Observable Adapter

    - by Roman Schindlauer
    .NET 4.0 introduced a pair of interfaces, IObservable<T> and IObserver<T>, supporting subscriptions to and notifications for push-based sequences. In combination with Reactive Extensions (Rx), these interfaces provide a convenient and uniform way of describing event sources and sinks in .NET. The StreamInsight CTP refresh in November 2009 included an Observable adapter supporting “reactive” event inputs and outputs. While we continue to believe it enables an important programming model, the Observable adapter was not included in the final (RTM) release of Microsoft StreamInsight 1.0. The release takes a dependency on .NET 3.5 but for timing reasons could not take a dependency on .NET 4.0. Shipping a separate copy of the observable interfaces in StreamInsight – as we did in the CTP refresh – was not a viable option in the RTM release. Within the next months, we will be shipping another preview of the Observable adapter that targets .NET 4.0. We look forward to gathering your feedback on the new adapter design! We plan to include the Observable adapter implementation into the product in a future release of Microsoft StreamInsight.

    Read the article

  • Is there a framework for describing object oriented communication standards/protocols?

    - by martin
    Currently I'm dealing with the development of specifications for communication standards/protocols for B2B integration based on object-oriented models. For example, if you take a look at the healthcare domain, there is HL7v3 with its HDF. Now I'm asking whether there is a more generic framework that describes how a specification for a communication standard should be developed. For B2B integration I want to describe a communication standard based on UML models for a broad domain. My thought was to divide the domain into subdomains and derive message types from the resulting model. There is already a given framework, but I want to compare it to another framework. My idea is to compare them using a generic framework; it should describe several levels. Does anybody know such a framework? I have searched for a while on Google Scholar, but haven't found an appropriate framework yet. The only thing I have found is ebXML, but I think it is not exactly what I need.

    Read the article

  • Backup Exec backup-to-disk folder creation - Access denied

    - by ewwhite
    I'm having a difficult time creating a backup-to-disk folder in Symantec Backup Exec 12.5 and Backup Exec 2010. The backend storage is a Nexenta/ZFS-based NAS filer sharing the volume via CIFS. I've also seen the issue on other *nix-based NAS devices. I've attempted mapping the drive, providing the full paths to the folder, etc. I can browse to the share just fine from within Windows, but Backup Exec fails to create the B2D folder with different variants of an "Unable to create new backup folder. Access denied" error. I've attempted creating service accounts in Backup Exec to handle the authentication, but nothing seems to work. What's the key to making this work?

    Read the article

  • Best way to block a country by IP address?

    - by George Edison
    I have a website that needs to block a particular country based on IP address. I am more than aware that IP-based blocking is not a foolproof method for blocking visitors, but it is a necessary step in the right direction. Since I'm using PHP, what I would do is use a GeoIP database like geoplugin.net. However, I'm curious to know if there's a better way of doing this. The website is on a shared web server (I don't have root access) and it is running Apache on CentOS. I guess my question is: "Can an .htaccess file be configured to block by IP, using an external source to look up IP addresses?"

    Read the article

  • Drawing a sprite or text causes the OpenGL rendering to 'disappear' in SFML

    - by Ken
    I'm using some of SFML's built-in functions to draw sprites and text as an overlay on top of some OpenGL rendering in an SFML RenderWindow. The OpenGL rendering appears fine until I add the code to draw the sprites or text; drawing a sprite or text causes the OpenGL output to disappear. The following code shows what I'm trying to do:

        sf::RenderWindow window(sf::VideoMode(viewport.width, viewport.height, 32), "SFML Window");

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, viewport.width, 0, viewport.height, 0, 1);

        while (window.pollEvent(Event))
        {
            // event handling...

            // begin drawing
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            glBegin(GL_TRIANGLES);
            glColor3f(col.x, col.y, col.z);
            for (int i = 0; i < 3; i++)
                glVertex2f(pos.x + verts[i].x, pos.y + verts[i].y);
            glEnd();

            // adding this line causes all the previous OpenGL triangles not to appear
            window.draw(text);   // 'text' is an sf::Text ("Sometext"); drawing any sprite does the same

            window.display();
        }
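
    In case it helps readers of this excerpt: SFML 2.x provides sf::RenderWindow::pushGLStates() and popGLStates() for exactly this kind of mixing of raw OpenGL with SFML's 2D drawing. A minimal sketch of the idea, assuming <SFML/Graphics.hpp> is included; the font file and variable names are assumptions, not taken from the question:

        // Save SFML's OpenGL state, let SFML draw its 2D content, then restore the
        // state so the raw OpenGL code keeps working on the next frame.
        sf::Font font;
        font.loadFromFile("arial.ttf");           // assumed font file
        sf::Text label("Sometext", font, 16);

        // inside the render loop, after the raw OpenGL drawing:
        window.pushGLStates();                    // save the current GL state
        window.draw(label);                       // SFML sprites / text
        window.popGLStates();                     // restore the state the raw GL code expects
        window.display();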

    Read the article

  • Join the Customer Experience Revolution

    - by Divya Malik
    By Suzy Meriwhether
    Customers want simple, consistent, and relevant experiences across all touchpoints and devices. Creating a great customer experience means delivering these qualities consistently over time, across the entire customer lifecycle, enabling businesses to attract more, retain more and sell more. Exceptional customer experiences create the loyalty, advocacy, and repeat business that drive success. Most successful companies would say that they try to create a good customer experience and have already invested in the systems, people, and training to develop it. So what's missing? Why is it so much more difficult to meet customer expectations every day, in every way? How can you learn more? Join Oracle for a Live Event: Customer Experience Online Forum. Participate in the Customer Experience Online Forum to hear from Bruce Temkin, a leading expert in customer experience, Anthony Lye, SVP of Oracle CRM, Marriott International, Nikon and other thought leaders to learn about the ROI of customer experience, what strategies leading brands use to win over customers, and how Oracle solutions can help you succeed in the Experience Revolution. I encourage you to register now for the half-day live event.

    Read the article

  • Commercial NAS RAID1 disks moved to Software Raid system?

    - by Rolnik
    I've got a couple of commercial NAS boxes and I'm wondering if they (a ReadyNAS Duo and a D-Link DNS-323), or any other NAS, are suitable for having their RAIDed disks moved to a software-based NAS. To be specific, I'm a big fan of the (largely) Debian-based Ubuntu. Can the aforementioned NAS drives be migrated to Ubuntu (e.g. using the mdadm Linux command)? Secondly, is there any commercial NAS that can be migrated over? Incidentally, here is a link to somebody who succeeded in such a migration: http://www.linuxquestions.org/questions/slackware-14/moving-raid1-drives-into-computer-with-same-md-numbers-862312/ The specific scenario I'd like to prepare for is the eventual (sudden) death of one of the NAS motherboards.

    Read the article

  • vpn/Openvpn as a cloud service

    - by 8pipe
    I am working on creating a small cloud (any number of EC2 instances that can be deployed based on load) implementing VPN as a service for the company I'm working for. This is basically a project gathering various VPN resources together under one aegis as a cloud-based service. As a user of OpenVPN I'm somewhat familiar with connecting, but I'm looking for resources to start this project. Essentially I need to be able to: run a certificate authority and manage keys to distribute to coworkers; build an AMI that handles OpenVPN as a service; and balance the load among machine instances as needed. Any suggestions for tutorials, things to avoid, roadblocks I might not be seeing from a novice perspective, etc., or just help in visualizing this, would be appreciated.

    Read the article

  • Microeconomical simulation: coordination/planning between self-interested trading agents

    - by Milton Manfried
    In a typical perfect-information strategy game like chess, an agent can calculate its best move by searching the state tree for the best possible move, while assuming that the opponent will also make the best possible move (i.e. minimax). I would like to use this approach in a "game" modeling economic activity, where the possible "moves" would be to buy or sell for a given price, and the goal, rather than a specific class of states (e.g. checkmate), would be to maximize some function F of the agent's state (e.g. F(money, widget) = 10*money + widget). How should I handle buy/sell actions that require coordination between both parties, at the very least agreement upon a price? The cheap way out would be to set the price beforehand, maybe based upon the current supply, but the idea of this simulation is to examine how prices emerge when freely determined by "perfectly rational" agents. A great example of what I do not want is the trading algorithm in SugarScape. Paraphrasing from Growing Artificial Societies, p. 101-102: when a pair of agents interact to trade, they each compute their internal valuations of the goods, then a bargaining process is conducted and a price is agreed to; if this price makes both agents better off, they complete the transaction. The protocol itself is beautiful, but what it cannot capture (as far as I can tell) is the ability for an agent to pay more than it otherwise might for a good, because it knows that it can sell it for even more at a later date, which appears to be called "strategic thinking" in this paper at Google Books: Multi-Agent-Based Simulation III: 4th International Workshop, MABS 2003. To get realistic behavior like that, it seems one would either (1) have to build an outrageously complex internal valuation system which could at best only cover situations that were planned for at compile time, or otherwise (2) have some mechanism to search the state tree... which would require some way of planning future trades. Note: the chess analogy only works as far as the state-space search goes; the simulation isn't intended to be "zero sum", so a literal minimax search wouldn't be appropriate, and ideally it should work with more than two agents.
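
    As a baseline for the "cheap way out" mentioned above (a price fixed beforehand), here is a minimal depth-limited state-tree search in C++ that maximizes the F(money, widget) = 10*money + widget objective from the question over buy/sell/hold actions. The state layout, depth limit and fixed price are assumptions for illustration; emergent pricing and multi-agent coordination are exactly what this sketch does not capture.

        #include <algorithm>

        // Agent state and the objective from the question: F(money, widget) = 10*money + widget.
        struct State {
            double money;
            int widgets;
        };

        static double F(const State& s) { return 10.0 * s.money + s.widgets; }

        // Depth-limited search over three actions (buy one, sell one, hold) at a fixed,
        // pre-set price: the "cheap way out" baseline, not the emergent-price version.
        double bestValue(State s, int depth, double price) {
            if (depth == 0) return F(s);

            double best = bestValue(s, depth - 1, price);                 // hold
            if (s.money >= price)                                         // buy one widget
                best = std::max(best, bestValue({s.money - price, s.widgets + 1}, depth - 1, price));
            if (s.widgets > 0)                                            // sell one widget
                best = std::max(best, bestValue({s.money + price, s.widgets - 1}, depth - 1, price));
            return best;
        }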

    Read the article

  • Today's Links (6/28/2011)

    - by Bob Rhubart
    • Connecting People, Processes, and Content: An Online Event | Brian Dirking
      Dirking shares information on an Oracle Online Forum coming up on July 19.

    • Social Relationships don't count until they count | Steve Jones
      "It's actually the interactions that matter to back up the social experience rather than the existence of a social link," says Jones.

    • ORACLENERD: KScope 11: Cary Millsap
      Commenting on Cary Millsap's KScope presentation on Agile, Oracle ACE Chet Justice says, "I fight with methodology on a daily basis, mostly resulting in me hitting my head against the closest wall."

    • The Sage Kings of Antiquity | Richard Veryard
      "Given that the empirical evidence for enterprise architecture is fairly weak, anecdotal and inconclusive, we are still more dependent than we might like on the authority of experts," says Veryard, "whether this be semi-anonymous committees (such as TOGAF) or famous consultants (such as Zachman)."

    • Oracle Business Intelligence Blog: New BI Mobile Demos
      "These are short videos that showcase some of the capabilities in our mobile app," says Abhinav Agarwal. "One focuses on the Oracle BI platform, while the other showcases what is possible with the mobile app accessing Oracle Business Intelligence Applications, like Financial Analytics."

    • MySQL HA Events in the UK, Germany & France | Oracle's MySQL Blog
      Oracle is running MySQL High Availability breakfast seminars in London (June 29), Düsseldorf (July 13) and Paris (September 7). "During these free seminars, we will review the various options and technologies at your disposal to implement highly available and highly scalable MySQL infrastructures, as well as best practices in terms of architectures," says Bertrand Matthelié.

    • VENNSTER BLOG: User Experience in Fusion apps
      "When I heard about the Fusion Applications User Experience efforts, I was skeptical," says Oracle ACE Director Lonneke Dikmans of Vennster. "My view of Oracle and User Experience has changed drastically today."

    • Power Your Cloud with Oracle Fusion Middleware
      Running in over 50 cities across the globe, this event is aimed at Architects, IT Managers, and technical leaders like you who are using Fusion Middleware or trying to learn more about middleware in the context of Cloud computing.

    Read the article

  • Any way to back up nginx before recompiling

    - by JM4
    I am looking to install the HttpGeoipModule for NGINX, but I'm learning that I have to recompile the entire thing from source in order to do so. I have a new Media Temple DV 4.0 server that comes with nginx 1.3.0 stock, and I have never had to recompile from source before; I'm a bit nervous about making changes without being able to revert to a previous state in the event something messes up (that, and the fact that it affects a live server, so I have no idea what the downtime would be). My plan was to copy all the existing modules used (nginx -V to list them all, and copy the modules already compiled), then rebuild from source with the copied info above, including the ./configure --with-http_geoip_module reference. Is it possible to back up the existing nginx configuration in the event something goes wrong?

    Read the article

  • Create SQL Server Analysis Services Partitions using AMO

    When you have SSAS cubes with millions of rows of data, it is very helpful to create partitions. If you have a few cubes you could probably do this manually, but if there are many or if you want to automate this process you should look for smarter solutions such as programming the creation of partitions dynamically.

    Read the article

  • Stacks in C++

    - by MarkPearl
    So some more basics… One of the things you will be taught at any college after conquering arrays is the different derivatives of collections. The stack is one of the simplest of these, and very useful. A stack is a LIFO (last in, first out) data structure and has at least two basic method calls: push and pop. Push "pushes" an item onto the top of the stack; pop removes the topmost item from the stack. Because all elements on a stack are of the same type, one can implement a stack with either an array or a linked list. With the array-based approach, the first element on the stack would be the first element in the array, the second on the stack would be the second in the array, and so on. One limitation of an array implementation is that unless the array is dynamic, you have to set a maximum stack size in advance (based on the bounds of the array). A linked list is another approach that gets past this boundary by allowing you to dynamically grow or shrink a collection of data. Stacks have many applications; a typical computer science example would be a postfix expression calculator, where the LIFO principle is maintained.
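
    A minimal array-based sketch of the push/pop interface described above (fixed capacity, as noted, unless the array is made dynamic); the class and names are just for illustration:

        #include <stdexcept>

        // Array-based stack with a fixed maximum size, illustrating the LIFO push/pop
        // behaviour described above. A linked-list version would remove the size bound.
        template <typename T, int MaxSize = 100>
        class Stack {
        public:
            void push(const T& item) {
                if (top_ == MaxSize) throw std::overflow_error("stack is full");
                data_[top_++] = item;                 // place the item on top
            }
            T pop() {
                if (top_ == 0) throw std::underflow_error("stack is empty");
                return data_[--top_];                 // remove and return the top item
            }
            bool empty() const { return top_ == 0; }

        private:
            T data_[MaxSize];
            int top_ = 0;                             // index of the next free slot
        };

    The standard library's std::stack offers the same LIFO interface on top of a dynamically growing container.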

    Read the article

  • Blogging locally and globally–my experience

    - by DigiMortal
    At the Baltic MVP Summit 2011 there was a discussion about having two blogs, one for a local and another for a global audience, and how to publish content you have written once to both blogs. There are many ways to optimize your blogging activities if you have more than one audience; here you can find my experiences, best practices and advice on this topic.

    My two blogs

    I have two working blogs: this one here, and a technology and programming blog for the local market. My local blog is almost five years old, which makes it one of the oldest company blogs in Estonia. It is still active and I write there as much as I have time for. This blog here has been active since September 2007, so it is about 3.5 years old right now. Both of these blogs are my major hits in my MVP career and they have very good web statistics too.

    My local blog

    My local blog is about programming, the web and technology. It has a far wider target audience than this blog has. For example, in my local blog I also blog about local events, cool new concept phones, different web services and so on. But local readers can also find postings there about how to solve one or another programming problem, and postings about Microsoft technologies I am playing with. So far my local blog has a lot of readers for a country as small as Estonia. This blog has brought me a lot of cool contacts and I have had a lot of interesting discussions there about different technical topics.

    Why I started this blog

    Living in a small country is different from living in a big country. In a small country you have fewer people and therefore a smaller audience, so you have to target more than one technical topic to find enough readers. At the same time you are still interested in your main topics and you want to reach more people who share the same interests. Practically, one day you will grow out of the local market and go global. This is how this blog was born. Was it worth creating, promoting and messing with? Every second of my time I have put into this blog has been worth it. Thanks to this blog I have found good new friends, and without them I think it would be more boring to work on different problems and solutions.

    Defining target audiences

    One thing you should always do when you have more than one blog is define target audiences. If you are just a technomaniac interested in sharing your stuff, making some new friends and having something to write on your MVP nomination form, then you don't have to go through a complex targeting process; you can do it simply and just as effectively. Here is how I defined the target audiences of my blogs: the reader of my local blog is an IT professional, software developer, technology innovator or just someone who is interested in technology; the reader of this blog is an experienced professional software developer who works on Microsoft technologies, or a software developer who is open minded and open to new technologies and interesting solutions to development problems. You can see how the local blog, due to the small market with fewer people, has a wider definition of its audience, while this blog is heavily targeted at Microsoft technologies and especially at software development. On the practical side I think these decisions were made well, because it is very hard to build up a popular general IT blog. At the global level it is better to target a specific niche and find readers who are professionals in your favorite topics. Thanks to this blog I have found new friends who are professional developers and I am very happy about all the discussions I have had with them.

    Publishing content to different blogs

    My local blog and this blog have some overlapping topics like .NET, databases and SEO. Because of this overlap there is a question: when I write a posting for my local blog, should I publish the same thing on my global blog? And if I write something for my global blog, should I also publish the same thing on my local blog? Well, it really depends on the definition of your target audiences. If they match, then of course it is a good idea to translate your post and publish it on the other blog as well. But if you have different audiences then you may need to modify your posting before publishing it. The questions you have to answer are: is the target audience interested in this topic? Is the target audience expecting a more specific and deeper handling of this topic, or a more general one? Is the problem you are discussing relevant for the target audience or not? Answer these questions and then make your decision. If you need to modify your original posting, take some time and do it. Provide quality to all your readers, because they will respect you if you respect them.

    Cross-posting and referencing

    It is tempting to save the time that preparing a blog post takes: if you are done with a posting on one blog, it may seem like a good idea to make a short posting on the other blog and add a reference to the first one, where the topic is discussed at greater length. Well, don't do it. All your readers expect good quality content from you, and jumping from one blog post to another is disturbing for them. Of course, there is the problem of differences between target audiences. You may have a wider target audience and some people may be interested in a more specific handling of a topic; in this case feel free to refer to the blog you are writing in English. This does not work very well in the opposite direction, because almost all my global blog readers understand English but not Estonian. For example, Estonian is a complex language and online translation tools make very poor translations from it. This is why I don't even plan to publish postings here that refer to my local blog for more information. I am keeping these two blogs as two different worlds, and if there is a posting that fits both blogs well, I will write it for one blog and then answer the previous three questions before posting the same thing to the other blog.

    Conclusion

    Growing out of your local market is not anything mysterious if you live in a small country. As it is harder to find people there who are interested in the same topics as you, sooner or later you will start finding new contacts in the global audience. The global audience is bigger, and to be visible there you must provide high quality content. It is something you will learn over time, and you will learn something new every day when you post to your global blog. You may ask: if a global blog is a much more complex thing to do, is it worth doing at all? My answer is: yes, do it for sure. It is not an easy thing to do when you start, but if you work on your global blog and improve it over time you will get over all the obstacles pretty soon. Just don't forget one thing: content is king and your readers expect high quality from you.

    Read the article

  • Shift career path [on hold]

    - by Rnet
    I have been programming web services and applications in Java and Python for about 6 years. Over the past two years I have been gaining a steady interest in networking, and I find it very intriguing, but due to non-existent experience in that field I cannot pursue a serious career on that path. At this point in my career, would it be wise to make the shift? What would you do/work on/study (masters?) to make a successful transition?

    Read the article

  • How to make and restore incremental snapshots of hard disk

    - by brunopereira81
    I use VirtualBox a lot for distro/application testing purposes. One of the features I simply love about it is virtual machine snapshots: it saves the state of a virtual machine and can restore it to its former glory if something you did went wrong, without any problems and without consuming all your hard disk space. On my live systems I know how to create a 1:1 image of the file system, but all the solutions I've seen will create a new image of the complete file system. Are there any programs/file systems capable of taking a snapshot of the current file system and saving it in another location, but creating incremental backups instead of a complete new image each time? To describe simply what I want: it should be like dd images of a file system, but instead of only full backups it would also create incremental ones. I am not looking for Clonezilla, etc. It should run within the system itself with no (or almost no) intervention from the user, but contain all the data of the file systems. I am also not looking for a "duplicity backup of your whole system excluding some folders" script plus dd to save your MBR; I can do that myself, I'm looking for extra finesse. I'm looking for something I can run before making massive changes to a system, so that if something went wrong, or I burned my hard disk after spilling coffee on it, I could just boot from a live CD and restore a working snapshot to the hard disk. It does not need to be daily; it doesn't even need a schedule. Just run it once in a while and let it do its job, preferably raw-based rather than file-copy-based.

    Read the article

  • Oracle Fusion Applications Design Patterns Now Available For Developers

    - by ultan o'broin
    The Oracle Fusion Applications user experience design patterns are published! These new, reusable usability solutions and best practices, which will join the Oracle dashboard patterns and guidelines that are already available online, are used by Oracle to artfully bring to life a new standard in the user experience, or UX, of enterprise applications. Now the Oracle applications development community can benefit from the science behind the Oracle Fusion Applications user experience, too. The design patterns are based on Oracle ADF components and easily implemented in Oracle JDeveloper. These Oracle Fusion Applications UX design patterns, or blueprints, enable Oracle applications developers and system implementers everywhere to leverage professional usability insight when: tailoring an Oracle Fusion application; creating coexistence solutions that existing users will be delighted with, thus enabling graceful user transitions to Oracle Fusion Applications down the road; or designing exciting, new, highly usable applications in the cloud or on-premise. Based on the Oracle Application Development Framework (ADF) components, the Oracle Fusion Applications patterns and guidelines are proven with real users in the Applications UX usability labs, so you can get right to work coding productivity-enhancing designs that provide an advantage for your entire business. What's the best way to get started? We've made that easy, too. The Design Filter Tool (DeFT) selects the best pattern for your user type and task. Simply adapt your selection for your own task flow and content, and you're on your way to a really great applications user experience. More Oracle applications design patterns and training are coming your way in the future. To provide feedback on the sets that are currently available, let me know in the comments!

    Read the article

  • Frame Independent Movement

    - by ShrimpCrackers
    I've read two other threads here on movement: Time based movement Vs Frame rate based movement?, and Fixed time step vs Variable time step but I think I'm lacking a basic understanding of frame independent movement because I don't understand what either of those threads are talking about. I'm following along with lazyfoo's SDL tutorials and came upon the frame independent lesson. http://lazyfoo.net/SDL_tutorials/lesson32/index.php I'm not sure what the movement part of the code is trying to say but I think it's this (please correct me if I'm wrong): In order to have frame independent movement, we need to find out how far an object (ex. sprite) moves within a certain time frame, for example 1 second. If the dot moves at 200 pixels per second, then I need to calculate how much it moves within that second by multiplying 200 pps by 1/1000 of a second. Is that right? The lesson says: "velocity in pixels per second * time since last frame in seconds. So if the program runs at 200 frames per second: 200 pps * 1/200 seconds = 1 pixel" But...I thought we were multiplying 200 pps by 1/1000th of a second. What is this business with frames per second? I'd appreciate if someone could give me a little bit more detailed explanation as to how frame independent movement works. Thank you.
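
    For what it's worth, the usual shape of frame-independent movement is "position += velocity * seconds since last frame", with the delta time measured each frame rather than assumed from a target frame rate. A minimal sketch with SDL-style millisecond ticks; the names and loop structure are assumptions for illustration, not lazyfoo's code:

        #include <SDL.h>   // SDL 1.2 or 2.x; the timing call is the same in both

        // Move at 200 pixels per second regardless of frame rate: each frame, measure how
        // many seconds have passed since the previous frame and scale the velocity by it.
        const float VELOCITY = 200.0f;          // pixels per second

        void runLoop() {
            float x = 0.0f;
            Uint32 lastTicks = SDL_GetTicks();  // milliseconds since SDL was initialized

            bool running = true;
            while (running) {
                Uint32 now = SDL_GetTicks();
                float dt = (now - lastTicks) / 1000.0f;   // seconds since the last frame
                lastTicks = now;

                x += VELOCITY * dt;   // at 200 fps, dt ~= 1/200 s -> ~1 px per frame;
                                      // at 50 fps,  dt ~= 1/50 s  -> ~4 px per frame

                // ... event handling and rendering go here; set running = false to exit ...
            }
        }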

    Read the article

  • Almost time to hit the road again

    - by Chris Williams
    I’ve had a few months of not much traveling, but now that the weather is improving… conference season is starting up again. That means it’s time for me to start hitting the road. In June, I have Tech Ed 2010 in New Orleans, LA. I lived in New Orleans for several years, both as military and civilian and I have a few friends still down there. I haven’t been there since before Hurricane Katrina, so I have mixed feelings about returning… but I am still looking forward to it. Also in June, I have Codestock in Knoxville, TN. Codestock is one of my favorite events, primarily because of the excellent people that speak there and also attend sessions. It’s a great mix of people and technologies. Sometime in July or August, I’m headed to Austin, TX for a couple days. I don’t know the exact date yet, but if you have an event down there in that timeframe, let me know and maybe we can sort something out. In September, I’m heading to Seattle for my first PAX (Penny Arcade Expo.)  I’m going strictly as an attendee and it looks like a LOT of fun. Really excited to check it out. Also in September, I’m headed to Omaha for the Heartland Developers Conference. This is a FANTASTIC event, and certainly one of my local favorites. (I guess local is relative, it’s about a 6 hour drive.) In addition to speaking on WP7, I’ll be doing a series of hands on labs on XNA they day before the conference starts, so that should be a lot of fun as well.   In addition to all this stuff, I have my own XNA User Group to take care of. In August, Andy “The Z-Man” Dunn is coming to speak and check out the various food on a stick offerings at the Minnesota State Fair!

    Read the article
