Search Results

Search found 10556 results on 423 pages for 'practical approach'.

Page 223/423 | < Previous Page | 219 220 221 222 223 224 225 226 227 228 229 230  | Next Page >

  • Bending of track in a racing game

    - by caius
    I am trying to create a small racing game in which the track is modeled using a BSpline curve for the path's center line, plus directional vectors to define the 'bending' of the track at each point. My problem is that I don't know how to calculate the correct bending / slope of the curve so that it is optimal, or at least visually nice, for a car to 'bend in the corner'. My idea was to use the direction of the 2nd derivative of the curve; however, while this approach looks fine for most of the track, there are points where the 2nd derivative makes sharp 'twists' / very quick 180-degree flips. I also read about 'knots' of BSplines, but I don't know whether such a 'twist' in the 2nd derivative is a knot, or knots are something else. Can you tell me, using a BSpline:
    1. How could I calculate a visually nice banking of the track for a racing game?
    2. Is it possible to do this using some simple calculations of centripetal force / gravity?
    3. Is it possible to do this using the 1st, 2nd and 3rd derivatives of the BSpline curve?
    I am not looking for the 'physically correct' banking angle for the track; I would just like to create something visually pleasing in a simple game. I am using a framework that has a built-in BSpline class, including support for the 1st, 2nd and 3rd derivatives of the curve. (One possible banking calculation is sketched below.)
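    One way to make question 2's centripetal-force idea work with the derivatives the framework already exposes: compute the curve's curvature (which, unlike the raw 2nd-derivative direction, does not flip at inflection points) and derive a bank angle from it. This is only a sketch under assumed names (Vec3 and BankAngle are invented, and a Y-up world is assumed); speed is a chosen "design speed" for the track, not simulated physics.

    // Sketch only: assumes the framework can hand us the 1st and 2nd
    // derivatives of the BSpline at a parameter t. Vec3 is a stand-in type.
    using System;

    struct Vec3
    {
        public double X, Y, Z;
        public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }
        public static Vec3 Cross(Vec3 a, Vec3 b) =>
            new Vec3(a.Y * b.Z - a.Z * b.Y, a.Z * b.X - a.X * b.Z, a.X * b.Y - a.Y * b.X);
        public double Length() => Math.Sqrt(X * X + Y * Y + Z * Z);
    }

    static class TrackBanking
    {
        // Curvature of a parametric curve P(t): kappa = |P' x P''| / |P'|^3.
        // Unlike the raw 2nd derivative, kappa varies smoothly, so it does
        // not "twist" 180 degrees at inflection points; only its sign flips,
        // which we read off the vertical component of the cross product.
        public static double BankAngle(Vec3 d1, Vec3 d2, double speed, double gravity = 9.81)
        {
            Vec3 cross = Vec3.Cross(d1, d2);
            double denom = Math.Pow(d1.Length(), 3);
            if (denom < 1e-9) return 0.0;              // degenerate tangent
            double kappa = cross.Length() / denom;     // unsigned curvature
            double sign = Math.Sign(cross.Y);          // assumes Y is "up"
            // Centripetal balance: tan(theta) = v^2 * kappa / g.
            return sign * Math.Atan(speed * speed * kappa / gravity);
        }
    }

    Sampling BankAngle along the spline and smoothing the result (or clamping it near inflection points) tends to give the "visually pleasing" roll the question asks for, without any real physics.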

    Read the article

  • Test-Driven Development with plain C: manage multiple modules

    - by Angelo
    I am new to test-driven development, but I'm loving it. There is, however, one main problem that prevents me from using it effectively. I work on embedded medical applications, in plain C, with safety requirements. Suppose module A has a function A_function() that I want to test, and this function calls B_function(), implemented in module B. I want to decouple the modules, so, as James Grenning teaches, I create a mock module B that implements a mock version of B_function(). However, the day comes when I have to implement module B with the real version of B_function(). Of course the two B_function() implementations cannot live in the same executable, so I don't know how to have a single "launcher" that tests both modules. James Grenning's way out is to replace, in module A, the call to B_function() with a function pointer that can point at the mock or at the real function as needed. However, I work in a team, and I cannot justify a decision that would make no sense if it were not for the tests, and no one has explicitly asked me to use a test-driven approach. Maybe the only way out is to generate a different test executable for each module. Any smarter solution? Thank you.

    Read the article

  • 3D physics engine for accurate collision handling on desktop/laptop computers (non-console)

    - by Georges Oates Larsen
    What are your suggestions for a physics engine that satisfies the following criteria?
    - Capable of calculating collisions between multiple concave mesh-based colliders.
    - Handles many collisions going on at once (for instance one mesh being wedged between two others, which themselves may be wedged between two meshes).
    - Does not allow colliders to pass through each other, even at high speeds. For instance, if I apply force to a programmatically hinged object that makes it spin, I do not want it to pass through another rigidbody it collides with while spinning. I have this problem using PhysX.
    - As implied before, reacts well to hinged objects. Preferably it has its own hinge implementation, but I am willing to program my own; the important part is that it has some sort of interface that guarantees accurate collision tracking even when dealing with these things.
    - Platform independent: runs on Mac as well as PC, and is not tied to specific graphics cards.
    I think that's the best way to explain what I am looking for. Basically, I need SUPER reliable collisions, something that can't be accomplished with a simple ray-casting approach that sends a ray from the object's last position to its current position (as this object may be large and colliding with small objects via rotation). Bonus points for also suggesting an OPEN SOURCE engine.

    Read the article

  • Costs/profit of/when starting an indie company

    - by Jack
    In short, I want to start a game company. I do not have much coding experience (just a basic understanding and the ability to write simple programs), nor any graphics design experience, audio mixing experience, or whatever else is technical. However, I do have a lot of ideas, good analytical skills, and a very logical approach to life. I do not have any friends who are even remotely technical (or, for that matter, creative in regards to games). So now that we've cleared that up, my question is this: how much, minimally, would it cost me to start such a company? I know that a game could be developed in under half a year, which means the company would have to operate for half a year prior, and that's assuming that the people working on the first project do their jobs well, don't leave game-breaking bugs or a bunch of minor bugs, etc. So how much would it cost me, and what would be the likely profit in half a year? I'm looking at minimal costs here, since to do it I would have to sell my current apartment and buy a new, smaller one, pay taxes, and likely move to the US/CA/UK to be closer to technologically advanced people (and be able to speak the language, of course). EDIT: I'm looking at a small project for starters, not a huge AAA title.

    Read the article

  • Web Crawler for Learning Topics on Wikipedia

    - by Chris Okyen
    When I want to learn a vast topic on Wikipedia, I don't know where to start. For instance, say I want to learn about binary stars: I then have to know the other things linked on that page, and the links on all the linked pages, and so on, for a specified number of levels. I want to write a web crawler like HTTrack or something similar that will display a hierarchy of the links on a certain page and the links on those linked pages. I wish to use as much prewritten code as possible. Here is an example, bending the rules by grabbing links from only the first sentence of each page, archived and "processed" two levels deep. The page is Ternary operation.
    The first level: in mathematics a ternary operation is an N-ary operation.
    The second level:
    Under Mathematics: Mathematics (from Greek máthēma, "knowledge, study, learning") is the abstract study of topics encompassing quantity, structure, space, change and others; it has no generally accepted definition.
    Under N-ary: In logic, mathematics, and computer science, the arity /ˈærɪti/ of a function or operation is the number of arguments or operands that the function takes.
    Under Operation: In its simplest meaning in mathematics and logic, an operation is an action or procedure which produces a new value from one or more input values.
    I need some way to determine in what order to approach all these wiki pages to learn the concept (in this case ternary operations). Following along with this example, one way to show the reading path would be a printout like so: the first sentence of the Mathematics page doesn't link to the first sentences of the pages linked from the ternary page two levels deep. (Please tell me how I should explain this.) In other words, Mathematics, a child node of the top page's first sentence, does not have any child nodes that reference the top page's other children, N-ary and Operation; thus it is safe to read it first. Since N-ary has a link to Operation, we should read the Operation page second and finally read the N-ary page last. Again, I wish to use as much prewritten code as possible, and was wondering what language to use and what would be the simplest way to go about doing this, if there isn't already something out there? (A sketch of the ordering idea appears below.) Thank you!
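    To make the ordering idea concrete, here is a rough sketch of the two pieces: a bounded crawl that records each page's first-sentence links, and a depth-first topological sort that emits dependencies before the pages that link to them. GetFirstSentenceLinks is a hypothetical placeholder (in practice something like the MediaWiki API plus an HTML parser would fill it in), and real wiki links contain cycles, which this sketch simply breaks at the first repeated visit.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class WikiReadingOrder
    {
        // Hypothetical: returns the pages linked from the first sentence
        // of the given page. Plug in a real fetcher/parser here.
        static IEnumerable<string> GetFirstSentenceLinks(string page) =>
            throw new NotImplementedException("replace with a real crawler");

        // Breadth-first crawl to a fixed depth, recording "page -> links".
        public static Dictionary<string, List<string>> Crawl(string root, int depth)
        {
            var graph = new Dictionary<string, List<string>>();
            var frontier = new Queue<(string page, int level)>();
            frontier.Enqueue((root, 0));
            while (frontier.Count > 0)
            {
                var (page, level) = frontier.Dequeue();
                if (graph.ContainsKey(page) || level > depth) continue;
                graph[page] = GetFirstSentenceLinks(page).ToList();
                foreach (var link in graph[page])
                    frontier.Enqueue((link, level + 1));
            }
            return graph;
        }

        // Reading order = dependencies first: a page that links to another
        // is read after the page it links to. A repeated visit (cycle or
        // shared dependency) is simply skipped.
        public static List<string> ReadingOrder(Dictionary<string, List<string>> graph)
        {
            var order = new List<string>();
            var visited = new HashSet<string>();
            void Visit(string page)
            {
                if (!visited.Add(page)) return;
                foreach (var dep in graph.TryGetValue(page, out var deps) ? deps : new List<string>())
                    Visit(dep);
                order.Add(page);   // dependencies first, the page itself after
            }
            foreach (var page in graph.Keys) Visit(page);
            return order;
        }
    }

    On the worked example this yields Mathematics and Operation before N-ary, and N-ary before Ternary operation, which matches the reading order argued above.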

    Read the article

  • Mapping Your Customer Experience Journey

    - by Michael Hylton
    For those who attended today’s Oracle Customer Experience Summit keynote, you heard Brian Curran talk about the strategies and best practices for implementing customer experience (CX) in your organization. He spoke about how this evolving journey begins with understanding six steps to transform your business and put your customers front and center. Here are those key six steps:
    1. What are the strategic business objectives in your company?
    2. What are the operational objectives and KPIs necessary to measure a CX project? Build an income statement, create “what if” scenarios, and see how changes impact your business’ bottom line. Explore what keeps you from reaching your own goals for your business.
    3. Define the business objectives and opportunities you want to meet.
    4. Understand the trends and accelerators in the market. What factors going on in the market impact your business? Social? Mobile? Cloud? Just to name a few. Many of these trends may signal a change in the way people think about your business.
    5. What approach will you take to solve these issues? Understand who your customer is and how you need to adapt your business to build relevant, personalized customer experiences.
    6. What technologies can you implement to address CX? Does technology help you solve your problem?
    A great way to begin your customer experience journey is a concept called journey mapping, one of the most powerful and deceptively simple tools for unlocking CX innovation at your organization. Here is where you can learn more about how you can bring this concept into your business to drive great customer experiences.

    Read the article

  • What is the best way to exploit multicores when making multithreaded games?

    - by Keeper
    Many people suggest writing a program first and then starting to optimize it. But I think that when it comes to multithreading on multicore hardware, a little thinking ahead is required. I've read about using threads, and experienced them myself during some courses at university (still a student). The big question is simple, but a bit abstract: what thread-related steps in game design do I need to take before implementation? Now, trying to be more specific, let's say, as an example, that I'm making a small board game (like Monopoly) that I want to be multithreaded. My goal is for this multithreaded game to exploit the best of a multicore system, say 4-6 cores (as in i7 processors). My answer to this question at the moment is one thread for each of these four basic components:
    - GUI
    - User input / output
    - AI (computer rival)
    - Other game-related calculations (like shortest path from A to B, or level-up status change)
    I'm not an expert (yet!), and I'm sure there are better answers out there. Any suggestion, answer, or different approach would be helpful; one alternative is sketched below. Some thoughts: maybe splitting the main database is a good way... (or a total disaster...)
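    A common alternative to one-thread-per-subsystem is task (job) based parallelism: express each frame's work as tasks and let a pool sized to the machine spread them across however many cores exist, so a 4-core and a 6-core machine are both kept busy even when one subsystem idles. A minimal sketch using .NET's built-in thread pool; the subsystem names are invented placeholders.

    using System;
    using System.Threading.Tasks;

    static class GameFrame
    {
        // One frame's worth of independent work, expressed as tasks rather
        // than dedicated threads. The scheduler spreads the tasks across
        // available cores, so the same code scales from 2 to 8 cores.
        public static void RunFrame()
        {
            Task ai    = Task.Run(() => RunAi());             // computer rival
            Task path  = Task.Run(() => FindPaths());         // A-to-B searches
            Task rules = Task.Run(() => UpdateStatuses());    // level-up checks

            ProcessInput();           // input + GUI usually stay on the main thread
            Task.WaitAll(ai, path, rules);
            Render();                 // draw once every worker has finished
        }

        // Hypothetical placeholders for the game's actual subsystems.
        static void RunAi() { }
        static void FindPaths() { }
        static void UpdateStatuses() { }
        static void ProcessInput() { }
        static void Render() { }
    }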

    Read the article

  • PDF export printing in Internet Explorer [closed]

    - by user619804
    protected static byte[] exportReportToPdf(JasperPrint jasperPrint) throws JRException {
        JRPdfExporter exporter = new JRPdfExporter();
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        exporter.setParameter(JRExporterParameter.JASPER_PRINT, jasperPrint);
        exporter.setParameter(JRExporterParameter.OUTPUT_STREAM, baos);
        // Embed viewer JavaScript so the PDF opens its own print dialog.
        exporter.setParameter(JRPdfExporterParameter.PDF_JAVASCRIPT,
            "this.print({bUI: true,bSilent: false,bShrinkToFit: true});");
        exporter.exportReport();
        return baos.toByteArray();
    }
    We are using code like this to export a PDF document from a Jasper application. The PDF_JAVASCRIPT parameter adds JavaScript that sends the PDF document directly to the printer. The expected behavior is that a print dialog comes up with a preview of the PDF document. This works fine most of the time, but about one out of every 5-6 times in Internet Explorer 8 and Firefox the print preview dialog with the PDF document does not appear, or it appears with a blank document in the preview window.
    - I've tried a number of different JavaScripts (different params to this.print() via exporter.setParameter).
    - I've tried setting different response headers, such as:
      response.setContentType("application/pdf");
      response.setHeader("Content-disposition", "inline; filename=\"" + reportName + "\"");
      response.setContentLength(baos.size());
      These did not seem to help.
    This seems to be an IE and FF issue. Has anyone ever dealt with this problem? I need it to work across all browsers 100% of the time. Perhaps a different approach to sending the exported PDF directly to the printer, or a third-party library that works across browsers?

    Read the article

  • A better alternative to incompatible implementations for the same interface?

    - by glenatron
    I am working on a piece of code which performs a set task in several parallel environments where the behaviour of the different components in the task is similar but quite different. This means that my implementations are quite different, but they are all based on the relationships between the same interfaces, something like this: IDataReader -> ContinuousDataReader -> ChunkedDataReader; IDataProcessor -> ContinuousDataProcessor -> ChunkedDataProcessor; IDataWriter -> ContinuousDataWriter -> ChunkedDataWriter. So in either environment we have an IDataReader, IDataProcessor and IDataWriter, and we can use dependency injection to ensure that we get the correct one of each for the current environment: if we are working with data in chunks we use the ChunkedDataReader, ChunkedDataProcessor and ChunkedDataWriter, and if we have continuous data we use the continuous versions. However, the behaviour of these classes is quite different internally, and one could certainly not pair a ContinuousDataReader with the ChunkedDataProcessor even though both readers are IDataReaders. This feels to me as though it is incorrect (possibly an LSP violation?) and certainly not a theoretically correct way of working. It is almost as though the "real" interface here is the combination of all three classes. Unfortunately, with the deadlines on the project I am working on, we're pretty much stuck with this design; but if we had a little more elbow room, what would be a better design approach in this kind of scenario?
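    One way to make "the real interface is the combination of all three" explicit is an abstract factory per environment, so dependency injection hands out a matched family instead of three independently chosen parts. A sketch against the question's interface names; everything else (the factory name, the empty concrete classes) is invented for illustration.

    // Group the three cooperating roles behind one factory so a mismatched
    // trio (e.g. a ChunkedDataReader with a ContinuousDataWriter) cannot be
    // composed by accident.
    public interface IDataReader { }
    public interface IDataProcessor { }
    public interface IDataWriter { }

    public interface IDataPipelineFactory
    {
        IDataReader CreateReader();
        IDataProcessor CreateProcessor();
        IDataWriter CreateWriter();
    }

    public class ChunkedPipelineFactory : IDataPipelineFactory
    {
        public IDataReader CreateReader() => new ChunkedDataReader();
        public IDataProcessor CreateProcessor() => new ChunkedDataProcessor();
        public IDataWriter CreateWriter() => new ChunkedDataWriter();
    }

    public class ContinuousPipelineFactory : IDataPipelineFactory
    {
        public IDataReader CreateReader() => new ContinuousDataReader();
        public IDataProcessor CreateProcessor() => new ContinuousDataProcessor();
        public IDataWriter CreateWriter() => new ContinuousDataWriter();
    }

    // Concrete implementations elided; each implements its role's interface.
    public class ChunkedDataReader : IDataReader { }
    public class ChunkedDataProcessor : IDataProcessor { }
    public class ChunkedDataWriter : IDataWriter { }
    public class ContinuousDataReader : IDataReader { }
    public class ContinuousDataProcessor : IDataProcessor { }
    public class ContinuousDataWriter : IDataWriter { }

    The container then binds a single IDataPipelineFactory per environment, and the guarantee that the three parts belong together lives in exactly one place.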

    Read the article

  • Mechanics of reasoning during programming interviews

    - by user129506
    This is not the usual "I don't want to write code during an interview"; in this question the assumption is that I need to write code during an interview (think of rewriting quicksort or mergesort from scratch), and that I know how the algorithm works or have a basic idea of how to start, i.e. I don't remember the algorithm by heart. I've noticed that even on a whiteboard I always end up writing buggy code, or code that doesn't compile. If there's a typo, whatever, I can usually live with that; but when there's a crash due to some unhandled corner case, I end up losing confidence in my skills. I realize that interviewers might want to look at how I write code and/or how I solve problems rather than proof-compiling my whiteboard code, but I'd like to ask how I should approach the above problem in mental terms, i.e. what mental steps I should follow when writing code in an interview under the two assumptions above. There should be an agreed series of steps I can follow to avoid getting stuck on particular edge cases (limit cases) that might end up wasting my time and energy rather than letting me focus on the overall algorithm for the general case. I hope I made my point clear.

    Read the article

  • PASS Summit Preconference and Sessions

    - by Davide Mauri
    I’m very pleased to announce that I’ll be delivering a Pre-Conference at PASS Summit 2012. I’ll speak about Business Intelligence again (as I did in 2010), but this time I’ll focus only on the Data Warehouse, since it’s a big topic on its own. I’ll discuss not only what a Data Warehouse is and how it can be modeled and built, but also how its development can be approached using an Agile method, bringing the experience I have gathered in this field. Building the Agile Data Warehouse with SQL Server 2012 http://www.sqlpass.org/summit/2012/Sessions/SessionDetails.aspx?sid=2821 I’m sure you’ll like it, especially if you’re starting to create a BI solution and you’re wondering what a Data Warehouse is, whether it is still useful nowadays when everyone talks about Self-Service BI and in-memory databases, and what the correct path is to follow in order to have a successful project up and running. Besides this Preconference, I’ll also deliver a regular session, this time related to database administration, monitoring and tuning: DMVs: Power in Your Hands http://www.sqlpass.org/summit/2012/Sessions/SessionDetails.aspx?sid=3204 Here we’ll dive into the most useful DMVs, so that you’ll see how they can help in everyday management to discover, understand and optimize your SQL Server installation, from the server itself down to the single query. See you there!!!!!

    Read the article

  • Caching by in-memory dictionaries. Are we doing it all wrong?

    - by user73983
    This approach is pretty much the accepted way to do anything in our company. A simple example: when a piece of data for a customer is requested from a service, we fetch all the data for that customer (the part relevant to the service) and save it in an in-memory dictionary, then serve it from there on following requests (we run singleton services). Any update goes to the DB, then updates the in-memory dictionary. It all seems simple and harmless, but as we implement more complicated business rules the cache gets out of sync and we have to deal with hard-to-find bugs. Sometimes we defer writing to the database, keeping new data in the cache until then. There are cases where we store millions of rows in memory because the table has many relations to other tables and we need to show aggregate data quickly. All this cache handling is a big part of our codebase, and I sense this is not the right way to do it. All of this juggling adds too much noise to the code and makes it hard to understand the actual business logic. However, I don't think we can serve data in a reasonable amount of time if we have to hit the database every time. I am unhappy about the current situation but I don't have a better alternative. My only idea would be to use the NHibernate 2nd-level cache, but I have nearly no experience with it. I know many companies use Redis or Memcached heavily to gain performance, but I have no idea how I would integrate them into our system. I also don't know whether they can perform better than in-memory data structures and queries. Are there any alternative approaches that I should look into? (One small pattern is sketched below.)
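    Not a full answer to the synchronization problem, but one incremental improvement over a plain dictionary is cache-aside with expiry: every entry ages out, so a missed invalidation heals itself instead of living forever. A minimal sketch using the BCL's System.Runtime.Caching; LoadCustomerFromDb and SaveCustomerToDb are hypothetical placeholders for the real data access.

    using System;
    using System.Runtime.Caching;

    public class CustomerCache
    {
        private readonly MemoryCache _cache = MemoryCache.Default;

        public Customer GetCustomer(int id)
        {
            string key = "customer:" + id;
            var cached = (Customer)_cache.Get(key);
            if (cached != null) return cached;

            Customer fresh = LoadCustomerFromDb(id);           // hit the DB on a miss
            _cache.Set(key, fresh, DateTimeOffset.Now.AddMinutes(5));
            return fresh;
        }

        public void UpdateCustomer(Customer c)
        {
            SaveCustomerToDb(c);                               // DB stays the source of truth
            _cache.Remove("customer:" + c.Id);                 // invalidate, don't patch in place
        }

        // Hypothetical stand-ins for the real queries.
        private Customer LoadCustomerFromDb(int id) { return new Customer { Id = id }; }
        private void SaveCustomerToDb(Customer c) { }
    }

    public class Customer { public int Id { get; set; } }

    The same shape maps almost directly onto Redis or Memcached later: only Get/Set/Remove change, not the calling code.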

    Read the article

  • Commands in Task-It - Part 1

    Download Source Code NOTE: To run the source code provided you will need to update to the RC (release candidate) versions of Silverlight 4 and Visual Studio 2010. In recent blog posts, like my MVVM post, I used Commands to invoke actions, like saving a record. In this rather simplistic sample I will talk about the basics of Commands, and in my next post will get deeper into it. What is a Command? I remember the first time a UI designer used the word "command": I wasn't really sure what she was referring to. I later realized that it is just a term used to represent some UI control that can invoke an action, like a Button, HyperlinkButton, RadMenuItem, RadRadioButton, etc. Why should we use Commands? I'm sure you're familiar with the code-behind approach to handling events. For example, if you had a Button and a RadMenuItem that ...
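    For context, commands in Silverlight/WPF are objects implementing the ICommand contract from System.Windows.Input. Below is a minimal delegate-based implementation in the spirit of the common DelegateCommand/RelayCommand MVVM pattern; it is a generic sketch, not this post's actual sample code.

    using System;
    using System.Windows.Input;

    // A minimal delegate-based ICommand: the view model supplies what to do
    // and (optionally) when it is allowed.
    public class DelegateCommand : ICommand
    {
        private readonly Action<object> _execute;
        private readonly Func<object, bool> _canExecute;

        public DelegateCommand(Action<object> execute, Func<object, bool> canExecute = null)
        {
            _execute = execute ?? throw new ArgumentNullException(nameof(execute));
            _canExecute = canExecute;
        }

        public event EventHandler CanExecuteChanged;

        public bool CanExecute(object parameter) => _canExecute == null || _canExecute(parameter);

        public void Execute(object parameter) => _execute(parameter);

        // Lets the view model tell bound controls to re-query CanExecute.
        public void RaiseCanExecuteChanged() => CanExecuteChanged?.Invoke(this, EventArgs.Empty);
    }

    // Usage in a view model; the XAML then binds Command="{Binding SaveCommand}"
    // on a Button, RadMenuItem, or any other command source:
    // public ICommand SaveCommand { get; } = new DelegateCommand(_ => Save());

    The payoff is that a Button and a RadMenuItem can bind to the same command instead of duplicating a Click handler in code-behind.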

    Read the article

  • The clock problem - to if or not to if?

    - by trejder
    Let's say we have a simple digital clock. To "power" it, we use a routine executed every second, in which we update the seconds part. But what about the minutes and hours parts? What is better / more professional / offers better performance:
    1. Ignore all checking and redraw the hour, minute and seconds parts every second.
    2. Use an if plus a variable to check whether 60 (or 3600) seconds have passed, and update the minute / hour part only at those precise moments.
    This leads us to the question: what is better, unnecessary drawing (first approach) or extra ifs? I've just spotted a JavaScript digital clock, one of millions of similar ones on billions of pages, and I noticed that all three parts (hours, minutes and seconds) are updated every second, though the first changes its value only once per 3600 seconds and the second only once per 60 seconds. I'm not a very experienced developer, so I might be wrong. But everything I've learnt up until now tells me that ifs are far better than executing drawing / refreshing sequences only to draw the same content. (A sketch of the compare-before-redraw approach follows below.)
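    The "extra if" version amounts to caching the last drawn value per field and redrawing only on change. A minimal sketch; DrawHours/DrawMinutes/DrawSeconds are hypothetical stand-ins for whatever the UI toolkit's redraw calls are.

    using System;

    public class DigitalClock
    {
        private int _lastHour = -1, _lastMinute = -1, _lastSecond = -1;

        // Called once per second by a timer.
        public void Tick()
        {
            DateTime now = DateTime.Now;
            if (now.Hour != _lastHour)     { _lastHour = now.Hour;     DrawHours(now.Hour); }
            if (now.Minute != _lastMinute) { _lastMinute = now.Minute; DrawMinutes(now.Minute); }
            if (now.Second != _lastSecond) { _lastSecond = now.Second; DrawSeconds(now.Second); }
            // Three integer compares per second are effectively free; the
            // redraw each one guards is the expensive part.
        }

        // Hypothetical toolkit redraw calls.
        private void DrawHours(int h) { }
        private void DrawMinutes(int m) { }
        private void DrawSeconds(int s) { }
    }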

    Read the article

  • Video freezes when larger than a certain size

    - by Poyan
    I have a weird problem playing video files. I have a fairly new (a few years old) graphics card that should play, and has played, up to 1080p video files without much trouble. I seem to be having a strange issue where the video plays back fine if the window is below a certain size, but as soon as the window size increases, the picture freezes while the audio keeps playing. Usually this is mitigated by killing some applications. GTK (?) apps especially seem to cause this lag; having a Nautilus window open usually triggers the error for me. With hardware rendering I get the error regardless of application. I have tried enabling software rendering in VLC, and that eliminates the problem (but I get other problems using software rendering, so I've avoided it). I know this is pretty fuzzy, but I'm not sure how to approach this problem. Do you have any idea? EDIT: Some more information. I have the correct and latest drivers installed. My graphics card is an Nvidia 7600 GT.

    Read the article

  • Design Pattern for Skipping Steps in a Wizard

    - by Eric J.
    I'm designing a flexible wizard system that presents a number of screens to complete a task. Some screens may need to be skipped based on answers to prompts on one or more previous screens. The conditions to skip a given screen need to be editable by a non-technical user via a UI; multiple conditions need only be combined with "and". I have an initial design in mind, but it feels inelegant. I wonder if there's a better way to approach this class of problem.
    Initial design, UI: the first column allows the user to select a question from a previous screen; the second column allows the user to select an operator applicable to the type of question asked; the third column allows the user to enter one or more values depending on the selected operator.
    Object model:
    public enum Operations { ... }
    public class Condition
    {
        int QuestionId { get; set; }
        Operations Operation { get; set; }
        List<object> Parameters { get; private set; }
    }
    List<Condition> pageSkipConditions;
    Controller logic:
    bool allConditionsTrue = pageSkipConditions.Count > 0;
    foreach (Condition c in pageSkipConditions)
    {
        allConditionsTrue &= Evaluate(previousAnswers, c);
    }
    // ...
    private bool Evaluate(List<Answers> previousAnswers, Condition c)
    {
        switch (c.Operation)
        {
            case Operations.StartsWith:
                // logic for this operation
                // etc.
        }
    }
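    One common refinement, sketched below under the question's own types: replace the growing switch with a per-operator strategy map, so adding an operator means adding one dictionary entry rather than touching Evaluate. The evaluator map itself is an invented suggestion, not the asker's code, and the GreaterThan entry assumes the stored values implement IComparable.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public enum Operations { StartsWith, Equals, GreaterThan }

    public class Answers { public int QuestionId; public object Value; }

    public class Condition
    {
        public int QuestionId { get; set; }
        public Operations Operation { get; set; }
        public List<object> Parameters { get; } = new List<object>();
    }

    public static class ConditionEvaluator
    {
        // Each operator knows how to compare an answer against the
        // condition's parameters; no switch statement needed.
        private static readonly Dictionary<Operations, Func<object, List<object>, bool>> Ops =
            new Dictionary<Operations, Func<object, List<object>, bool>>
            {
                [Operations.StartsWith]  = (v, p) => v?.ToString().StartsWith(p[0].ToString()) == true,
                [Operations.Equals]      = (v, p) => Equals(v, p[0]),
                // Assumes comparable values (ints, strings, dates, ...).
                [Operations.GreaterThan] = (v, p) => Comparer<object>.Default.Compare(v, p[0]) > 0,
            };

        // "Multiple conditions need only be combined with and" maps to All().
        public static bool ShouldSkip(List<Answers> answers, List<Condition> conditions) =>
            conditions.Count > 0 && conditions.All(c => Ops[c.Operation](
                answers.First(a => a.QuestionId == c.QuestionId).Value, c.Parameters));
    }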

    Read the article

  • BUILD 2013 Sessions – Building Great Windows Phone UI in XAML

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/build-2013-sessionsndashbuilding-great-windows-phone-ui-in-xaml.aspx Giving even the simplest of smartphone apps a compelling UI can be a challenge, regardless of the platform, and Windows Phone with XAML is no exception. That is what got my interest in this session by Shawn Oster. He took a checklist-type approach to the subject, which is good considering that is about the only way many of us get things done. Shawn started out giving us a set of bad design / good design examples. They very effectively showed how good design gives a sense of professionalism to your app that could determine whether your wonderful idea actually makes money or is DOA. I won't go over all his points, since you will be able to watch the session online, but a few of his checklist items included designing from the beginning instead of as an afterthought, not being afraid to leave white space, and making sure your application elegantly supports both landscape and portrait modes. The many gems make this a must-watch for any developer who struggles with visual design. del.icio.us Tags: BUILD 2013,Windows Phone,XAML,Design

    Read the article

  • How to customize the initrd embedded in or coming with the kernel image

    - by STATUS_ACCESS_DENIED
    I would like to add some tools, and not just kernel modules, to the initrd (initramfs-based). I'm aware of how to unpack and repack the initrd with cpio, and I have even written a hook for /etc/initramfs-tools/hooks in the past to integrate a third-party kernel module. However, while the available script libraries seem to be geared towards the integration of modules, none of them seems to cover the integration of other entities (in particular programs and their dependencies). What options do I have to automate the integration of some useful recovery tools into the initrd? I'm talking about the "rescue" system that the machine drops into if it is unable to mount the root drive given to it by the boot loader. Please note that I don't want the SquashFS approach used by live CDs, because for the issue at hand it will be quite sufficient to include some relatively small tools that aid in recovery of the system (when it gets stuck in the initrd and can't boot further). Also, when the machines run into the issue we have had in the past, they tend to boot into the rescue system, but a few tools are missing there to kick the system back on track...

    Read the article

  • Integrated Reporting Is Getting Closer

    - by Evelyn Neumayr
    By John O’Rourke, Vice President, Product Marketing, Oracle Oracle recently sponsored a webcast on CFO.com titled:  The CFO Playbook on Integrated Reporting: Integrating Sustainability into Financial Disclosures which focused on why top companies in the U.S. and overseas are incorporating sustainability content into their annual reports and other financial disclosures.  The webcast speakers, James Margolis, partner with Environmental Resources Management (ERM), a global provider of environmental, health, safety, risk and sustainability consulting services (EHSS) and Mike Wallace, Director of the Global Reporting Initiative's Focal Point USA, discussed the benefits of integrating sustainability reporting with traditional financial reporting. They noted how investors, corporate directors, lenders and most recently, the Securities and Exchange Commission, use this information to better understand, benchmark and value companies. They also talked about the November 2012 release of an Integrated Reporting Framework by the International Integrated Reporting Council (IIRC).  Read the press release and link to the framework here.  The shift towards integrated financial and sustainability reporting is gaining momentum with a number of global stock exchanges endorsing this approach in 2012.  Visit these links to listen to the webcast and download the slides. You can also view a demonstration of Oracle's solution for integrated financial and sustainability reporting. If you’re interested in learning more about this and Oracle’s other sustainability reporting solutions, click here. If you have any questions or need additional information, please feel free to contact me at [email protected].

    Read the article

  • Partner Webcast - Oracle Data Integration Competency Center (DICC): A Niche Market for services

    - by Thanos Terentes Printzios
    Market success now depends on data integration speed. This is why we collected all the best practices from the most advanced IT leaders, simply to prove that a Data Integration competency center should be the primary new IT team you establish. This is a niche market with unlimited potential for partners to become the much-needed data integration services provider trusted by customers. We would like to elaborate with OPN Partners on the Business Value Assessment and Total Economic Impact of the Data Integration Platform for End Users, while justifying re-organizing your IT services teams. We are happy to share our research on:
    - The economic impact of a data integration platform / competency center
    - Justifying the strongest reasons and differentiators, using numeric analysis and best-practice customer case studies from specific industries
    - Utilizing diagnostics and health-check analysis in building a business case for your customers
    - What exactly is so special in the technology of Oracle Data Integration
    - The impact of growing data volume and number of data sources
    - Analysis of the usual solutions implemented so far, addressing key challenges and mistakes
    During this partner webcast we will balance business-case-centric content with extensive numerical ROI analysis. Join us to find out how to build a unified approach to moving/sharing/integrating data across the enterprise, and why this is an important new services opportunity for partners.
    Agenda:
    - Data Integration Competency Center
    - Oracle Data Integration Solution Overview
    - Services Niche Market For OPN
    - Summary
    - Q&A
    Delivery format: this FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend.
    Presenter: Milomir Vojvodic, EMEA Senior Business Development Manager for Oracle Data Integration Product Group
    Date: Thursday, September 4th, 10am CEST (8am UTC / 11am EEST)
    Duration: 1 hour
    Register Today
    For any questions please contact us at [email protected]

    Read the article

  • how to organize rendering

    - by Irbis
    I use deferred rendering. During the g-buffer stage, my rendering loop for a Sponza model (OBJ format) looks like this:
    int i = 0;
    int sum = 0;
    map<string, mtlItem *>::const_iterator itrEnd = mtl.getIteratorEnd();
    for (map<string, mtlItem *>::const_iterator itr = mtl.getIteratorBegin();
         itr != itrEnd; ++itr)
    {
        glActiveTexture(GL_TEXTURE0 + 0);
        glBindTexture(GL_TEXTURE_2D, itr->second->map_KdId);
        // one draw call per material; 'sum * 4' is the byte offset of the
        // first index (4 bytes per GL_UNSIGNED_INT)
        glDrawElements(GL_TRIANGLES, indicesCount[i], GL_UNSIGNED_INT, (GLvoid*)(sum * 4));
        sum += indicesCount[i];
        ++i;
        glBindTexture(GL_TEXTURE_2D, 0);
    }
    I sorted the faces based on materials. I switch only a diffuse texture here, but I could set more material properties in the same place. Is this a good approach? I also wonder how to handle different kinds of materials, for example when one material uses a normal map and another doesn't. Should I have different shaders for them?

    Read the article

  • Entity Framework and layer separation

    - by Thomas
    I'm trying to work a bit with Entity Framework and I have a question regarding the separation of layers. I usually use the UI - BLL - DAL approach and I'm wondering how to use EF here. My DAL would usually be something like:
    // DAL
    GetPerson(id)
    {
        // some sql
        return new Person(...)
    }
    // BLL
    GetPerson(id)
    {
        return personDL.GetPerson(id)
    }
    // UI
    Person p = personBL.GetPerson(id)
    My question now is: since EF creates my model and DAL, is it a good idea to wrap EF inside my own DAL, or is it just a waste of time? If I don't need to wrap EF, would I still place my Model.edmx inside its own class library, or would it be fine to just place it inside my BLL and work from there? I can't really see the reason to wrap EF inside my own DAL, but I want to know what other people are doing. So instead of having the above, I would leave out the DAL and just do:
    // BLL
    GetPerson(id)
    {
        using (TestEntities context = new TestEntities())
        {
            var result = from p in context.Persons.Where(p => p.Id == id)
                         select p;
        }
    }
    What to do?
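    If you do want a seam over EF, usually for testability more than for layering, a thin repository is the common middle ground. A minimal sketch against the question's TestEntities/Person names; the IPersonRepository interface is invented for illustration.

    using System.Linq;

    // Invented interface: the BLL depends on this instead of on EF directly,
    // so it can be faked in tests without a database.
    public interface IPersonRepository
    {
        Person GetPerson(int id);
    }

    // Thin EF-backed implementation; assumes the generated TestEntities
    // context and Person entity from the question.
    public class EfPersonRepository : IPersonRepository
    {
        public Person GetPerson(int id)
        {
            using (var context = new TestEntities())
            {
                // FirstOrDefault executes the query and lets the context be
                // disposed before the entity leaves this method.
                return context.Persons.FirstOrDefault(p => p.Id == id);
            }
        }
    }

    // The BLL stays one line, but is now testable:
    // public Person GetPerson(int id) => _repository.GetPerson(id);

    If that seam buys you nothing (no tests, single data source), using the context directly in the BLL, as in the second snippet above, is a defensible shortcut.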

    Read the article

  • Handling deleted users - separate or same table?

    - by Alan Beats
    The scenario is that I've got an expanding set of users, and as time goes by users will cancel their accounts, which we currently mark as 'deleted' (with a flag) in the same table. If users with the same email address (that's how users log in) wish to create a new account, they can sign up again, but a NEW account is created. (We have unique ids for every account, so email addresses can be duplicated among live and deleted ones.) What I've noticed is that all across our system, in the normal course of things, we constantly query the users table checking that the user is not deleted, whereas what I'm thinking is that we don't need to do that at all...! [Clarification 1: by 'constantly querying', I meant that we have queries like '... FROM users WHERE isdeleted="0" AND ...'. For example, we may need to fetch all users registered for all meetings on a particular date, so in THAT query we also have FROM users WHERE isdeleted="0" - does this make my point clearer?] The options:
    (1) continue keeping deleted users in the 'main' users table
    (2) keep deleted users in a separate table (mostly required for historical book-keeping)
    What are the pros and cons of either approach?

    Read the article

  • What do I need to learn to decide on renaming/recompiling source package names because of company rebranding?

    - by Roberto Linares
    My company is currently going through a rebranding process, and the old brand names have been used in the sources' package names. These names are only visible to the developers who maintain the code, so nobody in project management is really interested in changing them, especially considering that doing so would imply recompiling several old components. What factors do I need to consider when deciding on a change like that? I don't know whether I should worry about legal issues or not, and if so, how to address this with project management. More background details: I have all the sources and dependencies, but since the company rebranding, other development areas have adopted some of the code that needs package renaming, so I cannot make the decision by myself without my core components breaking everyone else's code, and I cannot change other areas' code without the permission of those areas' users. So yes, my concern is more political than technical. I am going to try to coordinate the involved IT areas to make the change anyway, since it seems to be the best approach. Unfortunately, in my company there's no continuous-integration build server, so we build our code manually on demand, and to get something into production I have to justify the change (even just the package name change) to QA with a user requirement and some other bureaucratic documentation; that's why I was hesitant about the change in the first place.

    Read the article

< Previous Page | 219 220 221 222 223 224 225 226 227 228 229 230  | Next Page >